Channel: You Had Me At EHLO

Released: Update Rollup 6 for Exchange 2010 Service Pack 3


The Exchange team is announcing today the availability of Update Rollup 6 for Exchange Server 2010 Service Pack 3. Update Rollup 6 is the latest rollup of customer fixes available for Exchange Server 2010 Service Pack 3. The release contains fixes for customer-reported issues and previously released security bulletins. Update Rollup 6 is not considered a security release, as it contains no new security bulletins. A complete list of issues resolved in Exchange Server 2010 Service Pack 3 Update Rollup 6 may be found in KB2936871. Customers running any Service Pack 3 Update Rollup for Exchange Server 2010 can move to Update Rollup 6 directly.

The release is now available on the Microsoft Download Center. Update Rollup 6 will be available on Microsoft Update in early July.

Note: KB articles may not be fully available at the time of publishing of this post.

The Exchange Team


Released: Update Rollup 7 for Exchange Server 2010 Service Pack 3


The Exchange team is announcing today the availability of Update Rollup 7 for Exchange Server 2010 Service Pack 3. Update Rollup 7 is the latest rollup of customer fixes available for Exchange Server 2010 Service Pack 3. The release contains fixes for customer-reported issues and previously released security bulletins. Update Rollup 7 is not considered a security release, as it contains no new security bulletins. A complete list of issues resolved in Exchange Server 2010 Service Pack 3 Update Rollup 7 may be found in KB2961522. Customers running any Service Pack 3 Update Rollup for Exchange Server 2010 can move to Update Rollup 7 directly.

The release is now available on the Microsoft Download Center. Update Rollup 7 will be available on Microsoft Update in September.

Note: The KB article may not be fully available at the time this post was published.

The Exchange Team

Keep your Federation Trust up-to-date


Microsoft periodically refreshes certificates in Office 365 as part of our effort to maintain a highly available and secure environment. On September 23, 2014, we are making a certificate change on our Microsoft Federation Gateway that could affect some customers as detailed in knowledge base article 2928514. The good news is, you can easily avoid any disruption.

Who is affected?

This certificate change can affect any customer that is using the Microsoft Federation Gateway. If you are in a hybrid configuration or if you are sharing free/busy information between two different on-premises organizations using the Microsoft Federation Gateway as a trust broker, you need to take action.

When will the change occur?

The change is scheduled to occur on September 23, 2014. You must take action before then to avoid any disruption.

What type of issues will you face if no action is taken?

If you don't take action, you won't be able to use services that rely on the Microsoft Federation Gateway. For example:

  • A cloud user won't be able to see free/busy information for an on-premises user and vice versa.
  • MailTips will not work in a Hybrid configuration.
  • Cross-premises free/busy will stop working between organizations that have organization relationships in place.

What action should you take?

If you're using Exchange Server 2013 SP1 or later, no action is required: beginning with Exchange 2013 SP1, the federation metadata is refreshed automatically. Installing the latest version of Exchange Server 2013 makes this an automated task for you.

If you are not running Exchange 2013 SP1 or later, you can create a scheduled task that keeps your Federation Trust up to date. The following command, run on your Exchange server, creates a scheduled task that runs the update process daily. This is how we recommend you keep your Federation Trust constantly updated, and it will prevent you from being negatively affected by future metadata changes.

Schtasks /create /sc Daily /tn FedRefresh /tr "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -version 2.0 -command Add-PSSnapIn Microsoft.Exchange.Management.PowerShell.E2010;$fedTrust = Get-FederationTrust;Set-FederationTrust -Identity $fedTrust.Name -RefreshMetadata" /ru System

If you prefer to not use a scheduled task, you can manually run the command at any time to refresh the metadata. If you choose a manual option, it is still best practice to update Federation information at least monthly.

Get-FederationTrust | Set-FederationTrust -RefreshMetadata
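Whichever option you choose, you can verify afterward that the trust is healthy. As a sketch, Test-FederationTrust validates the certificates and metadata against the Microsoft Federation Gateway (user@contoso.com below is a placeholder for any mailbox in your organization):

```powershell
# Validate the federation trust end-to-end, including certificate checks
# (user@contoso.com is a hypothetical mailbox in your organization)
Test-FederationTrust -UserIdentity user@contoso.com
```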

Jim Lucey

Protecting against Rogue Administrators


Occasionally I am asked the following question: how can I protect the messaging environment from a rogue administrator? There are essentially two concerns embedded in this question:

  1. How do I protect the data from being deleted by a rogue administrator?
  2. How do I protect the data from being accessed and/or altered by a rogue administrator?

Sometimes this leads to a discussion solely about the chosen backup architecture. The reality is that whether you implement Exchange Native Data Protection or a third-party backup solution, a backup by itself does not protect you from rogue administrators; it only mitigates the damage they can potentially cause. Any administrator who has privileged access to the messaging data (whether live data and/or backup data) can compromise the system. Therefore, some operational changes must be implemented within the organization to reduce the attack surface of an administrator who has gone rogue.

Important: This article is not intended to be a comprehensive set of instructions on how to restrict administrators. Instead, this article will highlight the principles and techniques that can be used.

Immutable Laws of Security

The Microsoft Security Response Center published the Ten Immutable Laws of Security. They are:

  • Law #1: If a bad guy can persuade you to run his program on your computer, it's not solely your computer anymore.
  • Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore.
  • Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore.
  • Law #4: If you allow a bad guy to run active content in your website, it's not your website any more.
  • Law #5: Weak passwords trump strong security.
  • Law #6: A computer is only as secure as the administrator is trustworthy.
  • Law #7: Encrypted data is only as secure as its decryption key.
  • Law #8: An out-of-date antimalware scanner is only marginally better than no scanner at all.
  • Law #9: Absolute anonymity isn't practically achievable, online or offline.
  • Law #10: Technology is not a panacea.

These guiding principles are a cornerstone in how we secure Office 365 and are equally applicable to on-premises deployments.

Active Directory

Exchange relies on Active Directory, storing configuration data within the configuration partition and recipient data within the domain partitions. The forest administrators, who are members of either the Enterprise Admins group or the root Domain Admins group, control all aspects of the directory and the data stored in it (though they may delegate narrowly scoped rights so that others can perform specific actions). Likewise, domain administrators in each domain partition own their respective recipient data. Companies normally restrict forest-wide and domain-wide actions exclusively to forest administrators because of the risk that configuration changes can have broad adverse impact.

Many organizations today operate Active Directory with a least-privilege administrative model. If you are not operating in this fashion, this should be a top priority for your organization. Remember, any member of Enterprise Admins or Domain Admins can alter messaging configuration settings on recipients and/or the Exchange configuration without using the Exchange Management Tools. For more information, see Best Practices for Securing Active Directory.

In addition to operating Active Directory in a least-privilege manner, it is important to implement background checks and other processes to determine the trustworthiness of your administrators, otherwise Law #6 cannot be mitigated. In addition, you may want to consider investing in an identity management solution, like Forefront Identity Manager, to manage and audit administrative permission requests and approvals.

Permissions

Exchange administrators have a wide variety of tools that they can potentially use to manage a messaging environment. The preferred method is Remote PowerShell, as Exchange can then authorize access and audit any actions performed. However, if an Exchange administrator is granted additional permissions in Active Directory (e.g., the ability to modify any attribute on a user), then the administrator can use other tools (e.g., LDP, ADSIEdit, local PowerShell, scripts, etc.) to modify recipient and/or configuration data, bypassing the authorization and auditing checks built into the system.

To effectively mitigate any possibility of modifications occurring outside of the Remote PowerShell framework, it is imperative to follow Best Practices for Securing Active Directory, as previously mentioned. Oftentimes, Exchange administrators are not evaluated with the same rigor as Active Directory administrators; therefore, Exchange administrators must not be granted permissions in Active Directory that allow them to circumvent Remote PowerShell (e.g., recipient modification).

If Active Directory privileges are accurately and narrowly scoped, a rogue administrator will have difficulty making unauthorized changes no matter which tool he/she tries to use.

PowerShell

Exchange leverages Remote PowerShell, a PowerShell feature that lets administrators run commands on remote systems. Remote PowerShell enables the functionality Exchange uses to audit cmdlet execution and enforce authorization through RBAC.

One way to ensure that administrators can only manage the messaging environment through Remote PowerShell is to not allow the installation of the Exchange Management Tools on administrator workstations. Administrators are then forced to use the Exchange Admin Center or to connect to Exchange using Remote PowerShell.
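Administration without locally installed tools can still be done from any workstation through an ordinary Remote PowerShell session. A minimal sketch, assuming a hypothetical server URL (mail.contoso.com):

```powershell
# Connect to Exchange through Remote PowerShell; imported cmdlets remain
# subject to RBAC authorization and administrator audit logging.
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://mail.contoso.com/PowerShell/ -Authentication Kerberos
Import-PSSession $Session

# ...run only the cmdlets your RBAC roles permit...

Remove-PSSession $Session
```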

In addition, PowerShell 3.0 and later provides robust audit capabilities when module logging is enabled. Module logging captures pipeline execution events for specified modules and records them in the Windows PowerShell event log. Among the events logged, Event ID 800 (Pipeline Execution Details) provides the command line and, if a script was used, the script name: the ScriptName value is populated with the file name of the executed script, and an event is logged for each line in the script, with each event including the command from that line. The data recorded in the event log can be collected and analyzed to determine what PowerShell operations are occurring in the environment.
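As a sketch, those Event ID 800 entries can be collected for analysis with Get-WinEvent (the export path is illustrative):

```powershell
# Collect Pipeline Execution Details (Event ID 800) from the
# Windows PowerShell event log for offline analysis
Get-WinEvent -FilterHashtable @{ LogName = 'Windows PowerShell'; Id = 800 } |
    Select-Object TimeCreated, Message |
    Export-Csv -Path .\PowerShellAudit.csv -NoTypeInformation
```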

Mitigating Data Destruction

An Exchange administrator has the necessary privilege to access and destroy Exchange data. The best way to protect the messaging data from an administrator is to:

  1. Shrink the window of opportunity for administrators to perform malicious activities. The mechanisms that can be implemented include removing local administrative access, deploying AppLocker, removing access to destructive cmdlets, and deploying lagged database copies.
  2. Ensure all administrative actions are logged and implement alerting and reporting based upon monitoring of logged activities.
  3. Implement a least-privilege access control model whereby the elevation process grants access only to perform the intended activities. Even more effective is an access control model in which most or all administrative activities go through a 'control plane': administrators request that actions be performed against Exchange, and the control plane applies business logic to determine whether the request will actually be executed. The business logic might require a second person to review and approve the action, or it might check an employee work-scheduling system to see whether the original requestor is on vacation, for example.

Removing Local Administrative Access

The majority of Exchange management tasks are accomplished via Remote PowerShell. As a result, the only time an Exchange administrator needs local administrative access to the Exchange server is to perform system updates (installing driver updates, installing Exchange cumulative updates, operating system updates, etc.). Therefore, local administrative access can be restricted and granted only when needed, for a specific period of time, after which access is revoked.

Without local administrative access, administrators will be unable to obtain certain information about the Exchange server health that they may need to appropriately manage the environment. Therefore, administrators should be granted access to the following:

  • File shares (secured via read-only access to the appropriate security groups) can be created to enable access to Exchange diagnostic logs (which, by default, are located at %ProgramFiles%\Microsoft\Exchange Server\V15\Logging and %SystemDrive%\inetpub\logs).
  • Read-only access to the event logs can be granted by adding the appropriate security groups to the member server’s local Event Log Readers security group.
  • Access to performance counters can be granted by adding the appropriate security groups to the member server’s Performance Monitor Users security group.
  • To allow administrators to schedule performance counter logging, enable and collect tracing, the appropriate security groups can be added to the member server’s Performance Log Users security group.
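The group memberships above can be granted from an elevated prompt; for example (CONTOSO\Exchange-Operators is a hypothetical security group):

```powershell
# Grant read-only diagnostic access without local administrative rights
net localgroup "Event Log Readers" CONTOSO\Exchange-Operators /add
net localgroup "Performance Monitor Users" CONTOSO\Exchange-Operators /add
net localgroup "Performance Log Users" CONTOSO\Exchange-Operators /add
```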

In addition, local administrative access on the administrator’s workstations should also be removed. This ensures that administrators can only run approved applications and cannot bypass any security mechanisms you may put into place to audit and monitor their actions. For more information on running Windows in a least privileged manner, see Applying the Principle of Least Privilege to User Accounts on Windows.

By removing local administrative access, Laws #2 and #3 are mitigated.

AppLocker

AppLocker provides organizations with a strong defense against malicious software and unapproved applications affecting your server environment. AppLocker can be used to limit the software that Exchange operators can use, so that only an approved list of applications (e.g., Exchange Management Tools, approved PowerShell scripts, etc.) will run on a server where AppLocker policy enforcement is in effect. For more information, see the TechNet article on AppLocker.

AppLocker allows you to mitigate Law #1.

Removing Access to Destructive Cmdlets

Both Exchange 2010 and Exchange 2013 include Role Based Access Control (RBAC), a mechanism by which administrators are authorized to perform certain administrative tasks. RBAC provides the capability for very granular control of managing the messaging environment – for example, restricting access to a particular cmdlet or to a particular property of a cmdlet.

For more information on RBAC, see the Role Based Access Control documentation on TechNet.

While Administrator Audit Logging will record the actions taken by administrators, it does not prevent the administrators from performing destructive actions if they are authorized to do so. Using RBAC, organizations can reduce the attack surface area by removing access to destructive cmdlets. A destructive cmdlet is any cmdlet that can access, modify, or delete messaging data.

The table below identifies cmdlets and parameters that may be considered destructive (e.g., Remove-Mailbox). Each cmdlet should be evaluated to determine how it is used in your environment and whether it should be removed from day-to-day usage.

Cmdlet | Parameters | Roles
Add-ResubmitRequest |  | Databases
Disable-Mailbox |  | Mail Recipients
Move-ActiveMailboxDatabase | MountDialOverride | Databases
Mount-Database | Force | Databases
Remove-AcceptedDomain |  | Remote and Accepted Domains
Remove-ActiveSyncDeviceAccessRule |  | Organization Client Access
Remove-ActiveSyncDeviceClass |  | Organization Client Access
Remove-ActiveSyncMailboxPolicy |  | Recipient Policies
Remove-ActiveSyncVirtualDirectory |  | Exchange Virtual Directories
Remove-AddressBookPolicy |  | Address Lists
Remove-AddressList |  | Address Lists
Remove-ADPermission |  | Active Directory Permissions
Remove-AuthRedirect |  | Organization Client Access, Organization Configuration
Remove-AuthServer |  | Organization Client Access
Remove-AutodiscoverVirtualDirectory |  | Exchange Virtual Directories
Remove-AvailabilityAddressSpace |  | Federated Sharing, Mail Tips, Message Tracking, Organization Configuration
Remove-ClassificationRuleCollection |  | Data Loss Prevention
Remove-ClientAccessArray |  | Organization Client Access
Remove-ClientAccessRule |  | Organization Client Access
Remove-CompliancePolicySyncNotification |  | Data Loss Prevention
Remove-ContentFilterPhrase |  | Exchange Servers, Transport Hygiene
Remove-DatabaseAvailabilityGroup |  | Database Availability Groups
Remove-DatabaseAvailabilityGroupConfiguration |  | Database Availability Groups
Remove-DatabaseAvailabilityGroupNetwork |  | Database Availability Groups
Remove-DatabaseAvailabilityGroupServer |  | Database Availability Groups, Exchange Servers
Remove-DataClassification |  | Data Loss Prevention
Remove-DeliveryAgentConnector |  | Exchange Connectors
Remove-DistributionGroup |  | Distribution Groups, Security Group Creation and Membership, ExchangeCrossServiceIntegration
Remove-DlpPolicy |  | Data Loss Prevention
Remove-DlpPolicyTemplate |  | Data Loss Prevention
Remove-DynamicDistributionGroup |  | Distribution Groups
Remove-EcpVirtualDirectory |  | Exchange Virtual Directories
Remove-EdgeSubscription |  | Edge Subscriptions
Remove-EmailAddressPolicy |  | E-Mail Address Policies
Remove-ExchangeCertificate |  | Exchange Server Certificates
Remove-FederatedDomain |  | Federated Sharing
Remove-FederationTrust |  | Federated Sharing
Remove-ForeignConnector |  | Federated Sharing
Remove-GlobalAddressList |  | Address Lists
Remove-GlobalMonitoringOverride |  | Organization Configuration, View-Only Configuration
Remove-GroupMailbox |  | ExchangeCrossServiceIntegration
Remove-HybridConfiguration |  | Database Copies, Federated Sharing, Mail Recipients
Remove-IntraOrganizationConnector |  | Federated Sharing
Remove-JournalRule |  | Journaling
Remove-Mailbox |  | Mail Recipient Creation
Remove-MailboxDatabase |  | Databases
Remove-MailboxDatabaseCopy |  | Database Copies
Remove-MailContact |  | Mail Recipient Creation, ExchangeCrossServiceIntegration
Remove-MailUser |  | Mail Recipient Creation, ExchangeCrossServiceIntegration
Remove-MalwareFilterPolicy |  | Transport Hygiene
Remove-MalwareFilterRule |  | Transport Hygiene
Remove-ManagedContentSettings |  | Retention Management
Remove-ManagedFolder |  | Retention Management
Remove-ManagedFolderMailboxPolicy |  | Retention Management
Remove-ManagementRole |  | Role Management, UnScoped Role Management
Remove-ManagementRoleAssignment |  | Role Management, UnScoped Role Management
Remove-ManagementRoleEntry |  | Role Management, UnScoped Role Management
Remove-ManagementScope |  | Role Management
Remove-MapiVirtualDirectory |  | Exchange Virtual Directories
Remove-Message |  | Transport Queues
Remove-MessageClassification |  | Transport Rules
Remove-MigrationEndpoint |  | Migration
Remove-MigrationUser |  | Migration
Remove-MobileDeviceMailboxPolicy |  | Recipient Policies
Remove-OabVirtualDirectory |  | Exchange Virtual Directories
Remove-OfflineAddressBook |  | Address Lists
Remove-OrganizationRelationship |  | Federated Sharing
Remove-OutlookProtectionRule |  | Information Rights Management
Remove-OutlookProvider |  | Organization Client Access
Remove-OwaMailboxPolicy |  | Recipient Policies, Mail Recipients
Remove-OwaVirtualDirectory |  | Exchange Virtual Directories
Remove-PolicyTipConfig |  | Data Loss Prevention
Remove-PowerShellVirtualDirectory |  | Exchange Virtual Directories
Remove-PublicFolder |  | Public Folders
Remove-PublicFolderDatabase |  | Databases
Remove-ReceiveConnector |  | Receive Connectors
Remove-RemoteDomain |  | Remote and Accepted Domains
Remove-RemoteMailbox |  | Mail Recipient Creation
Remove-ResourcePolicy |  | WorkloadManagement
Remove-ResubmitRequest |  | Databases
Remove-RetentionPolicy |  | Retention Management
Remove-RetentionPolicyTag |  | Retention Management
Remove-RoleAssignmentPolicy |  | Role Management
Remove-RoleGroup |  | Role Management
Remove-RoleGroupMember |  | Role Management
Remove-RoutingGroupConnector |  | Exchange Connectors
Remove-RpcClientAccess |  | Organization Client Access
Remove-SendConnector |  | Send Connectors
Remove-ServerMonitoringOverride |  | Exchange Servers, View-Only Configuration
Remove-SettingOverride |  | Organization Configuration
Remove-SharingPolicy |  | Federated Sharing
Remove-SiteMailboxProvisioningPolicy |  | Team Mailboxes
Remove-StoreMailbox |  | Databases
Remove-ThrottlingPolicy |  | Recipient Policies, WorkloadManagement
Remove-TransportRule |  | Transport Rules, Data Loss Prevention
Remove-UMAutoAttendant |  | Unified Messaging
Remove-UMCallAnsweringRule |  | UM Mailboxes
Remove-UMDialPlan |  | Unified Messaging
Remove-UMHuntGroup |  | Unified Messaging
Remove-UMIPGateway |  | Unified Messaging
Remove-UMMailboxPolicy |  | Database Copies, Unified Messaging
Remove-WebServicesVirtualDirectory |  | Exchange Virtual Directories
Remove-WorkloadManagementPolicy |  | WorkloadManagement
Remove-WorkloadPolicy |  | WorkloadManagement
Remove-X400AuthoritativeDomain |  | Remote and Accepted Domains
Set-Mailbox | Database, ArchiveDatabase | Disaster Recovery
Set-MailboxDatabaseCopy | ReplayLagTime | Database Copies
Set-TransportConfig |  | Journaling, Organization Transport Settings
Search-Mailbox | DeleteContent | Mailbox Search, Mailbox Import Export
Set-ResubmitRequest |  | Databases
Update-HybridConfiguration |  | Database Copies, Federated Sharing, Mail Recipients
Update-MailboxDatabaseCopy |  | Database Copies, Databases

Note: The above table does not include the cmdlets identified in the section "Removing Access to Data Access Cmdlets" below, but those cmdlets should also be considered, as they provide access to messaging data.

For example, if I wanted to ensure that my administrators cannot remove database copies and manipulate the lagged database copy’s configuration, I could remove those cmdlets by using RBAC in the following manner:

  1. Create two new role groups: Protected Organization Management and Protected Server Management.

    $ORoleGroup = Get-RoleGroup "Organization Management"

    New-RoleGroup "Protected Organization Management" -Roles $ORoleGroup.Roles

    $SRoleGroup = Get-RoleGroup "Server Management"

    New-RoleGroup "Protected Server Management" -Roles $SRoleGroup.Roles

  2. Remove the Database Copies role from the protected role groups.

    Get-ManagementRoleAssignment -RoleAssignee "Protected Organization Management" -Role "Database Copies" -Delegating $false | Remove-ManagementRoleAssignment
     
    Get-ManagementRoleAssignment -RoleAssignee "Protected Server Management" -Role "Database Copies" -Delegating $false | Remove-ManagementRoleAssignment

  3. Create a Protected Database Copies role.

    New-ManagementRole -Parent "Database Copies" -Name "Protected Database Copies"

  4. Remove the Remove-MailboxDatabaseCopy role entry from the Protected Database Copies role.

    Remove-ManagementRoleEntry "Protected Database Copies\Remove-MailboxDatabaseCopy"

    Additionally, you can also remove access to the ReplayLagTime parameter from the Protected Database Copies role, thereby ensuring administrators cannot disable the lagged database copy.

    Set-ManagementRoleEntry "Protected Database Copies\Set-MailboxDatabaseCopy" -Parameters ReplayLagTime -RemoveParameter

  5. Add the Protected Database Copies role to the protected role groups.

    New-ManagementRoleAssignment -SecurityGroup "Protected Organization Management" -Role "Protected Database Copies"

    New-ManagementRoleAssignment -SecurityGroup "Protected Server Management" -Role "Protected Database Copies"

  6. Add users to the protected role groups.

    $OrgAdmins = Get-RoleGroupMember "Organization Management"

    $SrvAdmins = Get-RoleGroupMember "Server Management"

    $OrgAdmins | ForEach {Add-RoleGroupMember "Protected Organization Management" -Member $_.Name}

    $SrvAdmins | ForEach {Add-RoleGroupMember "Protected Server Management" -Member $_.Name}

  7. Remove the desired users from the Organization Management and Server Management role groups.

You can repeat the appropriate steps for each destructive cmdlet that needs to be restricted. Use the following cmdlet to determine which roles contain the desired cmdlets:

Get-ManagementRoleEntry "*\<cmdlet>" | fl name,role

Lagged Database Copies

A lagged database copy is a rolling point-in-time (up to 14 days) copy of the database. Lagged database copies were first introduced in Exchange 2007 and have evolved considerably since then. Lagged database copies are part of The Preferred Architecture. While the primary reason for deploying a lagged database copy is protection against rare, catastrophic logical corruption events, lagged database copies can be used to recover mailboxes and/or items that have been deleted by a rogue administrator. By implementing the access control mechanisms previously discussed, the lagged database copy is protected from destruction by a rogue administrator.
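As an illustration, a lagged copy is simply a database copy configured with a replay lag of up to 14 days (the database\server identity below is hypothetical):

```powershell
# Configure a 7-day replay lag on an existing database copy
Set-MailboxDatabaseCopy -Identity "DB01\MBX-B" -ReplayLagTime 7.00:00:00
```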

Mitigating Data Access

There are several mechanisms you can implement to mitigate unwarranted data access within your messaging environment. These mechanisms include auditing, using BitLocker to encrypt the disk drives, and removing access to certain cmdlets that enable administrators to gain access to user data.

Auditing

There are several different forms of auditing that can be implemented in the messaging environment. Within Exchange, you can enable Administrator Audit Logging. Administrator Audit Logging captures all operations that are performed within the Exchange Management Shell (EMS) or Exchange Admin Center (EAC).

By default, all cmdlets except Get-* and Search-* cmdlets are logged in the audit logs. You can change which cmdlets are logged via Set-AdminAuditLogConfig; for example, since Search-Mailbox includes the ability to delete content, you will want to add that cmdlet to the list. Audit logs are stored in an arbitration mailbox. Reports can be accessed via the Search-AdminAuditLog and New-AdminAuditLogSearch cmdlets, or via the EAC.
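For example, recent use of destructive cmdlets can be reviewed with Search-AdminAuditLog (a sketch; tailor the cmdlet list to the ones you restrict):

```powershell
# Review who ran destructive cmdlets in the past week
Search-AdminAuditLog -Cmdlets Remove-Mailbox, Search-Mailbox `
    -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date)
```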

In addition to auditing the actions taken by Exchange administrators managing the service, you will also want to audit server operation events (e.g., logon events). For more information, see the Windows Server Security Auditing Overview article and the Audit Policy Recommendations article for securing Active Directory.

BitLocker

Another way to reduce the attack surface area is to use BitLocker to ensure that operators with physical access to the servers cannot remove disk drives and access the data contained on them.

As mentioned previously, separation of roles is imperative; otherwise, Law #7 cannot be mitigated. Because BitLocker recovery information is stored in Active Directory (specifically, the msFVE-RecoveryInformation attribute), access to this data must not be delegated to Exchange administrators.

Removing Access to Data Access Cmdlets

Using RBAC, organizations can reduce the attack surface area by removing access to cmdlets that allow access to mailbox data. The table below identifies cmdlets that grant access to data or enable administrators to hide their tracks. Each cmdlet should be evaluated to determine how it is used in your environment and whether it should be removed from day-to-day usage.

Cmdlet | Roles
Add-ADPermission | Active Directory Permissions
Add-MailboxFolderPermission | Mail Recipients
Add-MailboxPermission | Mail Recipients
Add-PublicFolderClientPermission | Public Folders
Export-Mailbox | Public Folders
Export-Message | Transport Queues
New-MailboxExportRequest | Mailbox Search, Mailbox Import Export
New-MailboxSearch | Mailbox Search, Legal Hold
New-MoveRequest | Move Mailboxes
New-InboxRule | Mail Recipients
Search-Mailbox | Mailbox Search, Mailbox Import Export
Set-AdminAuditLogConfig | Audit Logs
Set-InboxRule | Mail Recipients
Set-JournalRule | Journaling
Set-MailboxExportRequest | Mailbox Search, Mailbox Import Export
Set-MailboxFolderPermission | Mail Recipient Creation
Set-MailboxSearch | 
New-TransportRule | Transport Rules, Data Loss Prevention
New-JournalRule | Journaling
New-MailboxRestoreRequest | Mailbox Search, Legal Hold

To remove access to the above cmdlets, follow steps 1-7 from the “Removing Access to Destructive Cmdlets” section.

Requesting Elevation

Over time, administrators will require access to restricted cmdlets to perform a necessary operation (e.g., disabling mailboxes, removing unnecessary transport rules, etc.). There are many ways you could go about this. For example, you could simply build RBAC roles for each individual cmdlet (and/or property) that you restrict; when elevation is required, you add the administrator to the appropriate role group, granting them access, and then remove the administrator when the task is complete. Or you could develop a process and workflow that includes the following:

  1. A means by which an administrator may submit a request for elevated access. The request needs to include specifics like the cmdlets being requested, the targeted servers/users/etc., and the length of time for which elevated access is required.
  2. The request can either be implicitly approved based on the requested action, or it can require human approval.
  3. All actions must be logged once elevated access has been granted.
  4. Elevated access must be removed once the time period has expired (either based on the request or based on a default time period for access elevations) or the task has been completed.
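A minimal sketch of the simpler approach, using temporary role group membership (the role group and user names are hypothetical; "Organization Management" here stands in for whichever unrestricted group holds the restricted cmdlets):

```powershell
# Grant time-boxed elevation, then revoke it when the approved task is done
Add-RoleGroupMember "Organization Management" -Member "jsmith"

# ...administrator performs the approved task; actions are audit-logged...

Remove-RoleGroupMember "Organization Management" -Member "jsmith" -Confirm:$false
```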

How is this addressed in Office 365?

Exchange Online implements a variety of mechanisms to prevent rogue administrators from accessing or destroying data.

Perry Clarke and Vivek Sharma recently discussed Lockbox, a mechanism we use to enforce no standing access, in the article From Inside the Cloud: Who has access to your data within Office 365?

In particular, Exchange Online personnel do not have persistent access to the service or servers; instead, administrators (who have undergone background checks) have to request specific access (to servers, cmdlets, etc.) and can only perform the requested tasks during a specified timeframe. Additionally, for requests for elevation to a role with access to customer data, approval must be performed by another human being and the ability to approve this type of request is restricted to a smaller set of more senior personnel.

We also use other mechanisms to protect the service and customer data. For example, BitLocker is used to encrypt all disks that contain customer data. When the disks reach end-of-life, they are shredded, ensuring the data cannot be accessed.

Conclusion

While this article covers the basics, there are many other mechanisms you can implement to reduce the attack surface of your messaging environment. For more information about security mechanisms protecting Exchange Online that can be used in on-premises deployments, see the MEC session Exchange Online service security investments: you CAN and SHOULD do this at home.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

Be aware: October 26 2014 Russian time zone changes and Exchange


We wanted to give you a heads up that, depending on the version of Exchange you are running, there might be some impact to either the names of time zones that are changing on October 26, or the way that meetings are displayed in the affected time zones. Customers using our newer versions of Exchange, 2010 and 2013, can expect meetings to appear on calendars correctly (provided the underlying operating systems have been updated). Customers who are running Exchange 2007 might see meetings displayed at the wrong times.

We are committed to correcting these inconsistencies in our November release wave.

Please see KB article 3004235 for more information.

Nino Bilic

Come get your Calculator Updates!


Today, we released updated versions of both the Exchange 2010 Server Role Requirements Calculator and the Exchange 2013 Server Role Requirements Calculator.

The Exchange 2010 version is an incremental update and only includes minor bug fixes. You can view what changes have been made, or download the update directly.

The Exchange 2013 version, on the other hand, includes new functionality in addition to bug fixes: the ability to define how many AutoReseed volumes you would like in your design, and mailbox space modeling. You can view what changes have been made, or download the update directly.

Mailbox space modeling provides a visual graph that indicates the expected amount of time it will take to consume the send/receive prohibit quota assuming the message profile remains constant.  As you can see from the example below, if I start with a 2GB mailbox with a 200 message profile and allocate a 10GB quota (and assuming no deletes), I expect to consume that quota in roughly 22 months.  Hopefully, this feature will allow you to plan out storage allocation more appropriately moving forward.
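As a back-of-the-envelope sketch of what the space modeling computes (the calculator's real math also accounts for deleted item retention, database overhead, and other factors this ignores; the 75 KB average message size is an assumption):

```python
def months_to_quota(start_gb, quota_gb, msgs_per_day, avg_msg_kb=75, days_per_month=30.4):
    """Estimate months until a mailbox reaches its send/receive prohibit
    quota, assuming a constant message profile and no deletions."""
    growth_gb_per_month = msgs_per_day * avg_msg_kb * days_per_month / (1024 * 1024)
    return (quota_gb - start_gb) / growth_gb_per_month

# 2 GB mailbox, 10 GB quota, 200 messages/day profile
print(round(months_to_quota(2, 10, 200), 1))  # roughly a year and a half with these assumptions
```

The gap between this rough figure and the ~22 months in the example above comes down to the assumed message size and the retention/overhead factors the calculator models.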

[Image: mailbox space modeling graph]

As always, we welcome your feedback.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

November Exchange Releases delayed until December


We know that many of you are anxiously awaiting the release of our quarterly Exchange updates planned for November. Earlier today the Exchange Team decided to hold the release of these packages until December. We made this decision to provide more time to resolve a late-breaking issue in the Installer package used with Exchange Server 2013. We have discovered that in some instances, OWA files will be corrupted by the installation of a Security Update. The issue is resolved by executing an MSI repair operation before a Security Update is installed. We do not believe this is acceptable behavior, and it is unfortunately something that customers might only discover after they install a Security Update.

As of this blog announcement, we believe the installer defect is limited to Exchange Server 2013. However, we are also evaluating previous versions of Exchange Server and are delaying the planned 2007 and 2010 releases as well to complete that investigation.

The Exchange team remains committed to ensuring that our customers have the best possible experience and because of that we have opted to delay the November releases to address this issue.

Exchange Team

Exchange releases: December 2014


Editor's Note: Updates added below for important information related to Exchange Server 2010 SP3 Update Rollup 8.

The Exchange team is announcing today a number of releases. Today’s releases include updates for Exchange Server 2013, 2010, and 2007. The following packages are now available on the Microsoft Download Center.

These releases represent the latest set of fixes available for each of their respective products. The releases include fixes for customer reported issues and minor feature improvements. The cumulative updates and rollup updates for each product version contain important updates for recently introduced Russian time zones, as well as fixes for the security issues identified in MS14-075. Also available for release today are MS14-075 Security Updates for Exchange Server 2013 Service Pack 1 and Exchange Server 2013 Cumulative Update 6.

Exchange Server 2013 Cumulative Update 7 includes updates which make migrating to Exchange Server 2013 easier. These include:

  • Support for Public Folder Hierarchies in Exchange Server 2013 which contain 250,000 public folders
  • Improved support for OAB distribution in large Exchange Server 2013 environments

Customers with Public Folders deployed in an environment where multiple Exchange versions co-exist will want to read Brian Day’s post for additional information.

Cumulative Update 7 includes minor improvements in the area of backup. We encourage all customers who back up their Exchange databases to upgrade to Cumulative Update 7 as soon as possible and complete a full backup once the upgrade has been completed. These improvements remove potential challenges when restoring a previously backed up database.

For the latest information and product announcements about Exchange 2013, please read What's New in Exchange 2013, Release Notes and Exchange 2013 documentation on TechNet.

Cumulative Update 7 includes Exchange-related updates to Active Directory schema and configuration. For information on extending schema and configuring Active Directory, please review Prepare Active Directory and Domains in Exchange 2013 documentation.

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current Cumulative Update release.

Update 12/10/2014:

An issue has been identified in Exchange Server 2010 SP3 Update Rollup 8. The update has been recalled and is no longer available on the Download Center pending a new RU8 release. Customers should not proceed with deployments of this update until the new RU8 version is made available. Customers who have already started deployment of RU8 should roll back this update.

The issue impacts the ability of Outlook to connect to Exchange, so we are recalling RU8 to resolve this problem. We will deliver a revised RU8 package as soon as the issue can be isolated, corrected, and validated. We will publish further updates to this blog post regarding RU8.

This issue only impacts the Exchange Server 2010 SP3 RU8 update; the other updates remain valid, and customers can continue with deployment of those packages.

The Exchange Team


Concerning Trends Discovered During Several Critical Escalations


Over the last several months, I have been involved in several critical customer escalations (what we refer to as critsits) for Exchange 2010 and Exchange 2013. As a result of my involvement, I have noticed several common themes and trends. The intent of this blog post is to describe some of these common issues and problems, and hopefully this post will lead you to come to the same conclusion that I have – that many of these issues could have been avoided by taking sensible, proactive steps.

Software Patching

By far, the most common issue was that almost every customer was running out-of-date software. This included OS patches, Exchange patches, Outlook client patches, drivers, and firmware. One might think that being out-of-date is not such a bad thing, but in almost every case, the customer was experiencing known issues that were resolved in current releases. Maintaining currency also ensures an environment is protected from known security defects. In addition, as the software version ages, it eventually goes out of support (e.g., Exchange Server 2010 Service Pack 2).

Software patching is not simply an issue for Microsoft software. You must also ensure that all inter-dependent solutions (e.g., Blackberry Enterprise Server, backup software, etc.) are kept up-to-date for a specific release as this ensures optimal reliability and compatibility.

Microsoft recommends adopting a software update strategy that ensures all software follows N to N-1 policy, where N is a service pack, update rollup, cumulative update, maintenance release, or whatever terminology is used by the software vendor. We strongly recommend that our customers also adopt a similar strategy with respect to hardware firmware and drivers ensuring that network cards, BIOS, and storage controllers/interfaces are kept up to date.
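As a trivial illustration, an inventory check for the N to N-1 window might look like this (server names and build labels are examples only):

```python
def out_of_policy(servers, allowed=("CU7", "CU6")):
    """Return servers whose installed build falls outside the N / N-1 window.

    `servers` maps server name -> installed update level; `allowed` holds
    the current (N) and prior (N-1) releases for the product in question."""
    return sorted(name for name, build in servers.items() if build not in allowed)

fleet = {"EXCH01": "CU7", "EXCH02": "CU5", "EXCH03": "CU6"}
print(out_of_policy(fleet))  # ['EXCH02']
```

The same check generalizes to firmware and driver levels once you inventory them alongside software builds.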

Customers must also follow the software vendor’s Software Lifecycle and appropriately plan on upgrading to a supported version in the event that support for a specific version is about to expire or is already out of support.

For Exchange 2010, this means having all servers deployed with Service Pack 3 and either Rollup 7 or Rollup 8 (at the time of this writing). For Exchange 2013, this means having all servers deployed with Cumulative Update 6 or Cumulative Update 7 (at the time of this writing).

For environments that have a hybrid configuration with Office 365, the servers participating in the hybrid configuration must be running the latest version (e.g., Exchange 2010 SP3 RU8 or Exchange 2013 CU7) or the prior version (e.g., Exchange 2010 SP3 RU7 or Exchange 2013 CU6) in order to maintain and ensure compatibility with Office 365. There are some required dependencies for hybrid deployments, so it’s even more critical you keep your software up to date if you choose to go hybrid.

Change Control

Change control is a critical process that is used to ensure an environment remains healthy. Change control enables you to build a process by which you can identify, approve, and reject proposed changes. It also provides a means by which you can develop a historical accounting of changes that occur. Often times I find that customers only leverage a change control process for “big ticket” items, and forego the change control process for what are deemed as “simple changes.”

In addition to building a change control process, it is also critical to ensure that all proposed changes are vetted in a lab environment that closely mirrors production and includes any 3rd-party applications you have integrated (the number of times I have seen Exchange get updated only to hear that an integrated app has failed is non-zero, to use a developer’s phrase).

While lab environments provide a great means to validate the functionality of a proposed change, they often do not provide a view on the scalability impact of a change. One way to address this is to leverage a “slice in production” where a change is deployed to a subset of the user population. This subset of the user population can be isolated using a variety of means, depending on the technology (e.g., dedicated forests, dedicated hardware, etc.). Within Office 365, we use slices in production in a variety of ways; for example, we leverage them to test (or what we call dogfood) new functionality prior to customer release, and we use them as a First Release mechanism so that customers can experience new functionality prior to worldwide deployment.

If you can’t build a scale impact lab, you should at a minimum build an environment that includes all of the component pieces you have in place, and make sure you keep it updated so you can validate changes within your core usage scenarios.

The other common theme I saw is bundling multiple changes together in a single change control request. While bundling multiple changes together may seem innocuous, when you are troubleshooting an issue, the last thing you want to do is make multiple changes. First, if the issue gets resolved, you do not know which particular change resolved the issue. Second, it is entirely possible the changes may exacerbate the current issue.

Complexity

Failure happens. There is no technology that can change that fact. Disks, servers, racks, network appliances, cables, power substations, pumps, generators, operating systems, applications, drivers, and other services – there is simply no part of an IT service that is not subject to failure.

This is why we use built-in redundancy to mitigate failures. Where one entity is likely to fail, two or more entities are used. This pattern can be observed in Web server arrays, disk arrays, front-end and back-end pools, and the like. But redundancy can be prohibitively expensive (as a simple multiplication of cost). For example, the cost and complexity of the SAN-based storage system that was at the heart of Exchange until the 2007 release, drove the Exchange Team to evolve Exchange to integrate key elements of storage directly into its architecture. Every SAN system and every disk will ultimately fail, and implementing a highly-redundant system using SAN technology is cost-prohibitive, so Exchange evolved from requiring expensive, scaled-up, high-performance storage systems, to being optimized for commodity scaled-out servers with commodity low-performance SAS/SATA drives in a JBOD configuration with commodity disk controllers. This architecture enables Exchange to be resilient to any storage failure.

By building a replication architecture into Exchange and optimizing Exchange for commodity hardware, failure modes are predictable from a hardware perspective, and redundancy can be removed from other hardware layers as well. Redundant NICs, redundant power supplies, etc., can also be removed from the server hardware. Whether it is a disk, a controller, or a motherboard that fails, the end result is the same: another database copy is activated on another server.

The more complex the hardware or software architecture, the more unpredictable failure events can be. Managing failure at scale requires making recovery predictable, which drives the necessity for predictable failure modes. Examples of complex redundancy are active/passive network appliance pairs, aggregation points on a network with complex routing configurations, network teaming, RAID, multiple fiber pathways, and so forth.

Removing complex redundancy seems counter-intuitive – how can removing hardware redundancy increase availability? Moving away from complex redundancy models to a software-based redundancy model creates a predictable failure mode.

Several of my critsit escalations involved customers with complex architectures where components within the architecture were part of the systemic issue trying to be resolved:

  1. Load balancers were not configured to use round robin or least connection management for Exchange 2013. Customers that did implement least connection management did not have the “slow start” feature enabled. Slow start ensures that when a server is returned to a load-balanced pool, it is not immediately flooded with connections. Instead, the connections are slowly ramped up on that server. If your load balancer does not provide a slow start function for least connection management, we strongly recommend using round robin connection management.
  2. Hypervisor hosts were not configured in accordance with vendor recommendations for large socket/pCPU machines.
  3. Firewalls were placed between Exchange servers, Active Directory servers, or Lync servers. As discussed in Exchange, Firewalls, and Support…Oh, my!, Microsoft does not support configurations in which Exchange servers have network port restrictions that interfere with communicating with other Exchange servers, Active Directory servers, or Lync servers.
  4. The correct file-based anti-virus exclusions were not in place.
  5. Deploying asymmetric designs in a “failover datacenter.” In all instances, there were fewer servers in the failover datacenter than the primary datacenter. The logic used in designing these architectures was that the failover datacenter would only be used during maintenance activities or during catastrophic events. The fundamental flaw in this logic is that it assumes there will not be 100% user activity. As a result, users are affected by higher response latencies, slower mail delivery, and other performance issues when the failover datacenter is activated.
  6. SSL offloading (another supported, but rarely recommended scenario) was not configured per our guidance.
  7. Storage area networks were not designed to deliver the capacity and IO requirements necessary to support the messaging environment. We have seen customers invest in tiered storage to help Exchange and other applications; however, due to the way the Extensible Storage Engine and the Managed Store work and the random nature of the requests being made, tiered storage is not beneficial for Exchange. The IO is simply not available when needed.
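The “slow start” behavior mentioned in item 1 can be illustrated with a simple least-connection selector that weights a newly returned server down until it has ramped up. This is an illustrative model only, not any load balancer vendor’s actual implementation:

```python
import time

class SlowStartPool:
    """Least-connection selection with a ramp-up window for rejoining servers."""

    def __init__(self, servers, ramp_seconds=60):
        self.conns = {s: 0 for s in servers}      # active connection counts
        self.joined = {s: None for s in servers}  # None = fully ramped
        self.ramp = ramp_seconds

    def rejoin(self, server, now=None):
        self.joined[server] = time.time() if now is None else now
        self.conns[server] = 0

    def _capacity(self, server, now):
        if self.joined[server] is None:
            return 1.0
        return min((now - self.joined[server]) / self.ramp, 1.0)

    def pick(self, now=None):
        now = time.time() if now is None else now

        def effective_load(s):
            cap = self._capacity(s, now)
            # A ramping server looks busier than it is, so least-connection
            # selection does not flood it the moment it rejoins the pool.
            return (self.conns[s] + 1) / cap if cap > 0 else float("inf")

        server = min(self.conns, key=effective_load)
        self.conns[server] += 1
        return server

pool = SlowStartPool(["A", "B"], ramp_seconds=60)
pool.conns["A"] = 10         # A is carrying existing load
pool.rejoin("B", now=100.0)  # B just returned to the pool
print(pool.pick(now=101.0))  # A: B is still ramping and stays shielded
print(pool.pick(now=200.0))  # B: fully ramped, takes the next connection
```

Without the ramp, plain least-connection selection would send every new connection to B the instant it rejoined, which is exactly the flood the feature prevents.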

How can the complexity be reduced? For Exchange, we use predictable recovery models (for example, activation of a database copy). Our Preferred Architecture is designed to reduce complexity and deliver a symmetrical design that ensures that the user experience is maintained when failures occur.

Ignoring Recommendations

Another concerning trend I witnessed is that customers repeatedly ignored recommendations from their product vendors. There are many reasons I’ve heard to explain away why a vendor’s advice about configuring or managing their own product was ignored, but it’s rare to see a case where a customer honestly knows more about how a vendor’s product works than does the vendor. If the vendor tells you to configure X or update to version Y, chances are they are telling you for a reason, and you would be wise to follow that advice and not ignore it.

Microsoft’s recommendations are grounded upon data: the data we collect during a support call, the data we collect during a Risk Assessment, and the data we get from you. All of this data is analyzed before recommendations are made. And because we have a lot of customers, the collective learnings we get from you play a big part.

Deployment Practices

When deploying a new version of software, whether it's Exchange or another product, it's important to follow an appropriate deployment plan. Customers that skip this planning take on the unnecessary risk of running into unexpected issues during the deployment.

Proper planning of an Exchange deployment is imperative. At a minimum, any deployment plan you use should include the following steps:

  1. Identify the business and technical requirements that need to be solved.
  2. Identify your peak usage time(s) and collect IO and message profile data during those times.
  3. Design a solution based on the requirements and data collected.
  4. Use the Exchange Server Role Requirements Calculator to model the design based on the collected data and any extrapolations required for your design.
  5. Procure the necessary hardware based on the calculator output and your design choices, leveraging the advice of your hardware vendor.
  6. Configure the hardware according to your design.
  7. Before going into production, validate the storage system with Jetstress (following the recommendations in the Jetstress Field Guide) to verify that your storage configuration can meet the requirements defined in the calculator.
  8. Once the hardware has been validated, deploy a pilot that mirrors your expected production load.
  9. Collect performance data and analyze it, verifying that it matches your theoretical projections. If the pilot requires additional hardware to meet the demands of the user base, optimize the design accordingly.
  10. Deploy the optimized design and start onboarding the remainder of your users.
  11. Continue collecting and analyzing data, and adjust if changes occur.

The last step is important. Far too often, I see customers implement an architecture and then question why the system is overloaded. The landscape is constantly evolving. Years ago, bring your own device (BYOD) was not an option in many customer environments, whereas, now it is becoming the norm. As a result, your messaging environment is constantly changing – users are adapting to the larger mailbox quotas, the proliferation of devices, the capabilities within the devices, etc. These changes affect your design and can consume more resources. In order to account for this, you must baseline, monitor, and evaluate how the system is performing and make changes, if necessary.

Historical Data

To run a successful service at any scale, you must be able to monitor the solution, not only to identify issues as they occur in real time, but also to proactively predict and trend how the user base or user activity is growing. Performance, event log, and protocol logging data serves two valuable functions:

  1. It allows you to trend and determine how your users’ message profile evolves over time.
  2. When an issue occurs, it allows you to go back in time and see whether there were indicators that were missed.

The data collected can also be used to build intelligent reports that expose the overall health of the environment. These reports can then be shared at monthly service reviews that outline the health and metrics, actions taken within the last month, plans for the next month, issues occurring within the environment and steps being taken to resolve the issues.

If you do not have a monitoring solution capable of collecting and storing historical data, you can still collect the data you need.

  • Exchange 2013 captures performance data automatically and stores it in the Microsoft\Exchange Server\V15\Logging\Diagnostics\DailyPerformanceLogs folder. If you are not running Exchange 2013, you can use Experfwiz to capture the data.
  • Event logs capture all relevant events that Exchange writes natively. Unfortunately, I often see customers configure Event logs to flush after a short period of time (one day). Event logs should collect and retain information for one week at a minimum.
  • Exchange automatically writes a ton of useful information into protocol logs that can tell you how your users and their devices behave. Log Parser Studio 2.2 provides a means to interact with this data easily.
  • Message tracking data is stored on Hub Transport servers and/or Mailbox servers and provides a wealth of information on the message flow in an environment.
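If you don't have Log Parser Studio handy, even a short script can pull a daily trend out of CSV-style logs. This sketch assumes the first non-comment line is a header row containing an ISO-style timestamp column; real Exchange logs list their columns in a "#Fields:" comment line, so verify the header layout of your log type before relying on it:

```python
import csv
from collections import Counter

def daily_counts(log_path, timestamp_field="date-time"):
    """Count entries per day in a CSV-style Exchange log.

    Skips '#'-prefixed comment lines and assumes the remaining first line
    is a header row containing `timestamp_field` (an assumption to check
    against your actual log files)."""
    counts = Counter()
    with open(log_path, newline="") as f:
        rows = (line for line in f if not line.startswith("#"))
        for entry in csv.DictReader(rows):
            counts[entry[timestamp_field][:10]] += 1  # YYYY-MM-DD prefix
    return dict(sorted(counts.items()))
```

Run daily and stored over time, even a per-day entry count gives you the baseline needed to spot a sudden change in client behavior.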

Summary

As I said at the beginning of this article, many of these customer issues could have been avoided by taking sensible, proactive steps. I hope this article inspires you to investigate how many of these might affect your environments, and more importantly, to take steps to resolve them, before you are my next critsit escalation.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

Exchange 2013 and Exchange 2010 Coexistence with Kerberos Authentication


In April 2011, I documented our recommendation around utilizing Kerberos authentication for MAPI clients to address scalability limits with NTLM authentication. The solution leverages deploying an Alternate Service Account (ASA) credential so that domain-joined and domain-connected Outlook clients, as well as other MAPI clients, can leverage Kerberos authentication.

Recently, we published guidance on how to enable Kerberos Authentication for Exchange 2013 MAPI clients. While this guidance explains the necessary steps to deploy the ASA credential to Exchange 2013, it does not describe the steps you must take to coexist with an Exchange 2010 environment. This post walks through the steps required to deploy Kerberos authentication for Exchange 2013 while coexisting with Exchange 2010.

As with all configuration changes, we recommend you thoroughly test this in a lab environment that mirrors your production environment.

Step 1 – Deploy Outlook Updates

In order to ensure an Exchange 2013 mailbox utilizing Kerberos authentication can connect via the Outlook client to legacy Public Folders and shared mailboxes hosted on Exchange 2010, the Outlook client must be running the following minimum versions:

Until you install these Outlook updates, you must not attempt to enable Kerberos authentication within your messaging environment when coexisting with Exchange 2013 and Exchange 2010, otherwise your users will see continuous authentication dialog prompts.

Step 2 – Configure Legacy Public Folder Access

As discussed in On-Premises Legacy Public Folder Coexistence for Exchange 2013 CU7 and Beyond, in order for Exchange 2013 mailboxes to access legacy Public Folders correctly, we need to ensure that the legacy Public Folders are discoverable via AutoDiscover.  This enables the Outlook client to perform a second AutoDiscover request using the legacy Public Folder discovery mailbox’s SMTP address, allowing Outlook to use the correct connection endpoint and authentication settings.

For more information on how to configure legacy Public Folder Access, see the Configure legacy public folders where user mailboxes are on Exchange 2013 servers TechNet article.

Step 3 – Create a New Alternate Service Account Credential

The RollAlternateServiceAccountPassword.ps1 script cannot deserialize objects and pass them between servers that are running different versions. This means the script cannot be used to copy the credentials from an Exchange 2010 server or push the credentials to an Exchange 2010 server. As a result, Exchange 2013 and Exchange 2010 cannot share the same Alternate Service Account (ASA) credential.

The Exchange 2013 ASA has the same requirements that were established with Exchange 2010. Specifically, all computers within the Client Access server array must share the same service account. In addition, any Client Access servers that participate in an unbound namespace or may be activated as part of a datacenter switchover must also share the same service account. In general, it’s sufficient to have a single account per forest, but knowing that 2010 and 2013 can’t share the same ASA, this should lead you to conclude you need one per version, per forest.

You can create a computer account or a user account for the alternate service account. Because a computer account doesn’t allow interactive logon, it may have simpler security policies than a user account and is therefore the preferred solution for the ASA credential. If you create a computer account, the password doesn't actually expire, but we still recommend that you update the password periodically. The local group policy can specify a maximum account age for computer accounts and in some customer environments scripts are utilized on a scheduled basis to periodically delete computer accounts that don’t meet current policies. To ensure that your computer accounts aren't deleted for not meeting local policy, update the password for computer accounts periodically. Your local security policy will determine when the password must be changed.

Step 4 – Remove HTTP Service Principal Names from Exchange 2010 ASA

At this point you will be required to schedule an outage for your user population. If you do not, internal users may experience authentication dialogs while attempting to connect to HTTP resources (e.g., Autodiscover, OAB downloads) within Outlook.

If you followed our guidance for Exchange 2010, you have at least the following Service Principal Name (SPN) records associated with the Exchange 2010 ASA:

  • http/mail.corp.contoso.com
  • http/autod.corp.contoso.com
  • exchangeMDB/outlook.corp.contoso.com
  • exchangeRFR/outlook.corp.contoso.com
  • exchangeAB/outlook.corp.contoso.com

The Exchange 2010 ASA will continue to retain the exchangeMDB, ExchangeRFR, and ExchangeAB SPN records, but will lose the HTTP records as they will move to the Exchange 2013 ASA.

Use the following steps to remove the HTTP SPNs:

  1. Obtain the list of HTTP SPNs currently assigned to the Exchange 2010 ASA:

    setspn -L <domain\E2010ASA$>

  2. For each HTTP record that needs to be removed, execute the following:

    setspn -D http/<record> <domain\E2010ASA$>

Step 5 – Deploy ASA to Exchange 2013 Client Access Servers

To enable deployment of the ASA credential, the RollAlternateServiceAccountPassword.ps1 script has been updated to support Exchange 2013. You need to run the version of the script that ships with CU7 or later; it is located in the Scripts directory.

For more information on how to use the script, please see the section “Configure and then verify configuration of the ASA credential on each Client Access server” in the article Configuring Kerberos authentication for load-balanced Client Access servers.

Step 6 – Assign the Service Principal Names to the Exchange 2013 ASA

Now that the ASA has been deployed to the Exchange 2013 Client Access servers, you can assign the SPNs using the following command:

setspn -S http/<record> <domain\E2013ASA$>

Step 7 – Enable Kerberos Authentication for Outlook clients

By default, Kerberos authentication is not enabled for internal clients in Exchange 2013.

To enable Kerberos authentication for Outlook Anywhere clients, run the following command against each Exchange 2013 Client Access server:

Get-OutlookAnywhere -server <server> | Set-OutlookAnywhere -InternalClientAuthenticationMethod Negotiate

To enable Kerberos authentication for MAPI over HTTP clients, run the following against each Exchange 2013 Client Access server:

Get-MapiVirtualDirectory -Server <server> | Set-MapiVirtualDirectory -IISAuthenticationMethods Ntlm, Negotiate

Once you confirm the changes have replicated across Active Directory and verified Outlook clients have connected using Kerberos authentication (which you can determine via the HTTPProxy logs on the server and klist on the client), the scheduled outage is effectively over.

Summary

Exchange 2010 and Exchange 2013 coexistence requires each version to have a unique ASA credential in order to support Kerberos authentication with MAPI clients.  In addition, Outlook client updates are required to support all coexistence scenarios.

For more information, including information on how to plan what SPNs you should deploy with your ASA credential, see Configuring Kerberos Authentication for Load-Balanced Client Access Servers.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

Updates

  • 3/11/15: Added section on configuring legacy Public Folder access.

Automated Hybrid Troubleshooting Experience


One of the more popular deployment options for Office 365, and more specifically Exchange Online, is the Exchange Hybrid Configuration. This deployment option gives customers a way to move some of their mailboxes to Exchange Online, while keeping some on-premises. Our goal with an Exchange Hybrid configuration is to make the two physically separate environments operate as if they were one. Features such as Mail Flow, Free/Busy, MailTips, and compliance features like eDiscovery searches work seamlessly even for mailboxes in different environments.

While all of this automation is great, there are some tradeoffs. When something goes wrong with the Hybrid Configuration Wizard (HCW), what do you do? We’ve heard complaints like, “This thing is a black box!” and “Why do the errors have to be so vague?” While we are proud of the work we’ve done with the HCW, we agree that troubleshooting some of the problems can be difficult.

Announcing a New Automated Troubleshooting Experience

The first version of the new automated troubleshooting experience is quite simple. You run a troubleshooter (http://aka.ms/HCWCheck) from the same server on which the HCW failed (Internet Explorer only at this time). The troubleshooter collects the HCW logs and parses them for you.

If you are experiencing a known issue, a message tells you what went wrong and provides a link to an article that contains the solution.

Added Benefit

The troubleshooter has predetermined sets of patterns it looks for in the HCW logs to determine the conclusion. Whether it finds a conclusion or not, the HCW logs are uploaded to our datacenter and made available to a support professional in the event that you open a case. This can dramatically reduce the amount of time it takes for your issue to get resolved.
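Conceptually, this kind of known-issue matching is just pattern scanning over the HCW log text. The sketch below shows the idea only; the patterns and KB numbers are made up, and the real troubleshooter's signatures are maintained by Microsoft support:

```python
import re

# Illustrative pattern -> article mapping (hypothetical patterns and KB IDs).
KNOWN_ISSUES = [
    (re.compile(r"Subtask Configure execution failed.*Federation", re.I), "KB0000001"),
    (re.compile(r"401 Unauthorized", re.I), "KB0000002"),
]

def diagnose(hcw_log_text):
    """Return the first matching known-issue article, or None if no
    predetermined pattern is found in the log text."""
    for pattern, article in KNOWN_ISSUES:
        if pattern.search(hcw_log_text):
            return article
    return None
```

When no pattern matches, nothing is reported, which is why uploading the raw logs for a support professional remains valuable.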

Moving forward

We plan to add more complex checks and diagnostic results to this troubleshooter. We are working on a series of troubleshooters that will automate troubleshooting and data collection efforts for things like Migrations, Free/Busy, and OAuth issues.

Got feedback?

We would love to hear any feedback you have. Drop us a line at HCWCheckFeedback@microsoft.com.

Kudos goes to the following folks that made this possible:

Saidivya Velagapudi, Caius Preda, Scott Roberts, Nikhil Chinchwade, Karl Buhariwala, Timothy Heeney, Jeremy Kelly, Wei Wu, Cathy Westman, Gabriel Popescu, and numerous others

The Exchange Hybrid Team

Announcing Update Rollup 9 for Exchange Server 2010 Service Pack 3


The Exchange team is announcing today the availability of Update Rollup 9 for Exchange Server 2010 Service Pack 3. Update Rollup 9 is the latest rollup of customer fixes available for Exchange Server 2010 Service Pack 3. The release contains fixes for customer reported issues and previously released security bulletins. Update Rollup 9 is not considered a security release as it contains no new previously unreleased security bulletins. A complete list of issues resolved in Exchange Server 2010 Service Pack 3 Update Rollup 9 may be found in KB3030085. Customers running any Service Pack 3 Update Rollup for Exchange Server 2010 can move to Update Rollup 9 directly.

Update Rollup 9 is available now on the Microsoft Download Center and will be distributed via Microsoft Update in April.

We would also like to remind users of Exchange Server 2010 that the product is now officially in Extended Support. We plan to release one more scheduled Update Rollup for Exchange Server 2010, after which we will move to on-demand releases. Lifecycle policy for Exchange Server and other Microsoft products can be found on the Microsoft Support Lifecycle web site.

Note: Documentation may not be fully available at the time this post was published.

The Exchange Team

VSSTester script updated – troubleshoot Exchange 2013 and 2010 database backups


It’s been a while since we talked about VSSTester script!

Murali, who maintained the script before, asked me to take over the script maintenance and thus I’m releasing this updated version. This script update is long overdue but better late than never! As before, you can grab a copy of the script from here.

What’s changed in v1.1:

  • Exchange 2013 support
  • Execution option #3 (custom) is removed
  • Better user input handling
  • Output formatting improvements
  • Execution speed optimization (in particular, gathering system and application events)
  • Various bugfixes (e.g. forcing log file overwrites so that you can run the script multiple times on the same machine)

For the nitty-gritty details on the script, please refer to the original blog post which announced it. The operation of the script is the same and should be fairly self-explanatory.

Here’s a screenshot preview of how the script starts now:

[screenshot]

I hope this script helps you troubleshoot your database backups. Please give feedback in the comments or by emailing me directly (my email is in the script).

Best,

Matthew Huynh

A better way to collect logs from your Exchange servers


Do you dislike having to collect diagnostics logs manually from each server in order to troubleshoot an issue or establish a server performance baseline? Have you ever collected the logs, only to discover that you needed additional logs from the time the issue was happening, and that by the time you went back to collect them they had already been overwritten?

I have been working on a script that collects all the Exchange logs that you need and copies them over to a different location. The script can even zip up all the files for you, so all you have to do is run it on each server and upload the data once it is done. By default, the script only collects the App/Sys logs from the server; you specify through switches (please see the download page for a list of them all) what data you want to collect, so that only the data relevant to the issue at hand is gathered. If you don't know which logs you need, you can just use the AllPossibleLogs switch and it will collect everything you need based on the version of Exchange and the role(s) that you have on the server.

Currently the script only supports Exchange 2010 and 2013, as Exchange 2007 doesn't run much logging by default. Starting with Exchange 2010, we log more information by default to help us troubleshoot issues (and we increased that logging again in Exchange 2013). All the common logs that Exchange Support looks at in order to troubleshoot issues are collected based on the version of Exchange that you run the script on. One of the major differences in running the script between 2010 and 2013 is the way the logs are zipped up. Exchange 2010 servers don't have .NET Framework 4.5 installed, so I rely on the 7za.exe utility to have the script zip up your files (or you can specify that you do not want the files zipped at all). To take advantage of this, you just need to have the 7za.exe file in the same directory as the script and use the SevenZip switch when executing it. The location to download this utility is in the header of the script.
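Putting the pieces above together, a catch-all run on an Exchange 2010 server might look like this (a sketch using the switches described above; 7za.exe is assumed to sit in the same folder as the script):

```powershell
# Collect every relevant log for this server's Exchange version and role(s),
# and compress with 7za.exe (needed on Exchange 2010, which lacks .NET 4.5).
.\CollectLogsScript.ps1 -AllPossibleLogs -SevenZip
```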

Why was this created?

I created this script because, while troubleshooting a performance issue with a customer, we were collecting GBs of data (from all relevant servers) every day in order to get a baseline of the servers' performance and compare it to the time when we were having issues. Since the customer had to collect the data manually, sometimes all the data that I requested wasn't there or was already overwritten by the time they got to collecting the logs. With this script, they could just run it each day and it would collect all the correct information, move it over to another drive on the server, and zip it up, so we didn't have to worry about logs being overwritten. Then all the customer needed to do was upload the data to me once it was finished.

How to run the script

Download the script from here. Place it on your Exchange server and open up an EMS or PowerShell session as an Administrator. If you don’t run it as an Administrator, it will error out:

[screenshot]

Then you just need to determine the location you would like the data saved to, and what logs you would like collected.

If you don’t specify a location, the script will automatically select “C:\MS_Logs_Collection” as the location to copy the data to. There is also an automatic check that verifies that the drive has at least 15 GB of free space (if you are not zipping up the data, this requirement goes up to 25 GB), as we don’t want to fill up any drives on the server.
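The free-space guard described above can be sketched roughly as follows. This is an illustrative approximation, not the script's actual code, and $FilePath/$Zip are hypothetical stand-ins for the script's parameters:

```powershell
# Rough sketch of the free-space guard: 15 GB is required when compressing,
# 25 GB when not. $FilePath and $Zip are illustrative placeholders.
$FilePath = "C:\MS_Logs_Collection"
$Zip = $true
$requiredBytes = if ($Zip) { 15GB } else { 25GB }
$driveLetter = (Split-Path $FilePath -Qualifier).TrimEnd(':')
$free = (Get-PSDrive -Name $driveLetter).Free
if ($free -lt $requiredBytes) {
    Write-Warning "Drive $driveLetter has less than the required free space; aborting."
}
```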

The data that you want to collect really depends on the issue that you are having. If you don’t know, it is best to collect it all with the AllPossibleLogs switch. This is recommended if you ran into an issue whose root cause you would like to investigate at a later time. If you do know which logs you would like to collect, you can use the switches for each set of logs. For example, if you want to collect the IIS logs, there is a single switch called IISLogs. This switch collects the IIS and HTTPErr logs based on the role(s) that you have installed on the server. It is a common mistake to not collect both, or to let the FE and BE IIS logs get mixed together, and those mistakes can cause more delays in getting the data reviewed. Since we don’t need the whole directory for the IIS and HTTPErr logs, the script will only go back X number of days (3 is the default), and you can specify how far back to go with the DaysWorth switch. To learn more about the available switches in the script and what they do, please look over the script download page.

Here are a couple of examples of the script working and how it collects the data

.\CollectLogsScript.ps1 -FilePath C:\MyLogs -IISLogs -EWSLogs -DaysWorth 1

[screenshot]

As you can see, there is a progress bar showing where the script currently is. Keep in mind that for large amounts of data, it can take a while to complete a compression step before it moves on to the next set of logs. After we collect one type of log, we zip up that folder in order to save drive space, then remove the original folder. This is only done for the sub folders of the main root folder, which is named after your server. The root folder also gets zipped up, with the M/dd of when you ran the script appended, so you end up with just one .zip file; the original root folder is not removed from this location.

Once the run is complete you will have something like the following:

[screenshot]

Looking inside of the folder:

[screenshot]

If you are collecting additional data like experfwiz, Netmons, ExtraTrace, etc. and that data is in the same directory, you can include it into the collection as well by using the CustomData switch:

.\CollectLogsScript.ps1 -CustomDataDirectory C:\PerfCollection -CustomData

That example will collect everything in the C:\PerfCollection folder and its sub directories. You would use this if you wanted to collect an additional directory of data that Exchange doesn’t log by default, or other types of logs that this script doesn’t collect. However, I would not recommend using this for very large files like process/memory dumps, as it can take a very long time to copy, compress, and then upload larger files. Experiment with what works for you!

I also included a switch called IISLogDirectory in the script for those environments where the default IIS log location (C:\inetpub\logs\LogFiles) has been moved. All you need to pass in this parameter is the parent directory of “W3SVC1” or “W3SVC2”, and the script will automatically append the correct IIS sub directory to collect. Here is an example of how to use this:

.\CollectLogsScript.ps1 -FilePath D:\MS_Logs -IISLogs -IISLogDirectory E:\IISLogs\LogFiles

That would collect the IIS logs from the directory called E:\IISLogs\LogFiles\W3SVC1 and/or E:\IISLogs\LogFiles\W3SVC2 if you are on Exchange 2013, with both roles installed.

The script also has a switch called DatabaseFailoverIssue, which allows you to collect the correct data from a server that recently had a database failover. On an Exchange 2013 server it collects the performance data, the clustering event logs, the Managed Availability logs, and the Application and System logs. On Exchange 2010 it collects just the Application, System, and clustering logs. If you are running this switch because you recently had a failover issue, I highly recommend running the script on all servers within the DAG. A database can fail over for multiple reasons, so by collecting this information from every server we cover all possible areas of default logging, which should give you enough information to determine the cause of the issue.
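For example, after a failover event, a run on each DAG member might look like this (the file path is illustrative):

```powershell
# Gather the failover-related data described above (clustering, Managed
# Availability, App/Sys event logs, performance data) from this DAG member.
.\CollectLogsScript.ps1 -FilePath D:\MS_Logs -DatabaseFailoverIssue
```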

Conclusion

Hopefully a common way to collect data from your servers will reduce the manual work needed to gather information for troubleshooting various issues, or for periodically collecting logs to establish baseline server behavior. It should also reduce the number of times the correct information wasn't collected the first time and, by the time someone goes back to collect it again, has already been overwritten.

Keep an eye on the download location for newer versions of the script, as I will continue to fix issues that are reported to me and to improve and add features that help streamline collecting data from the servers.

David Paulson

Generating user message profiles for use with the Exchange Calculators


Greetings Exchange Community!

My name is Dan Sheehan, and I work as a Premier Field Engineer for Microsoft, specializing in Microsoft Exchange. As a long time Exchange engineer I am an avid PowerShell scripter, and as such I end up writing a lot of PowerShell scripts.

Today I present to you one of those scripts that assists Exchange administrators/service owners with generating an Exchange “user message profile”. This “user message profile” is a critical part of the information entered into the Exchange Server Role Requirements Calculator and the Exchange Client Network Bandwidth Calculator (more on those below).

The script, which is published here on the TechNet Gallery, is designed to work in environments of all sizes, and has been tested in environments with hundreds of Exchange sites. The current version works with the Management Shell of Exchange 2010 and 2013, and I am working on a version for Exchange 2007. I have a number of scripts published on the TechNet Gallery, both from before I joined Microsoft and after, and I encourage you to check them out as well as the TechNet Gallery in general.

Without any further ado, on to the script.

Background

An Exchange “user message profile” represents the amount of messages a user sends and receives in a day, and the average size of those messages. This critical information is used by the Role Requirements Calculator to determine the typical workload a group of users will place on an Exchange system, which in turn is used to properly size a new Exchange environment design. This information is also used by the Client Bandwidth Calculator to estimate potential bandwidth impact email users will have on the network, depending on their client type and versions used.

Some Exchange service owners “guesstimate” a couple of different user message profiles based on the anticipated workload, while others use data from their existing environment to try to create a messaging profile based on recent user activity. Gathering the necessary information based on recent user activity and creating a user message profile is not an easy task, and quite often service owners turn to third party tools for assistance with this process.

This PowerShell script was created to assist Exchange service owners who want to generate average user message profiles based upon their current environment, but don’t have or want to use a third party tool to gather the necessary information and generate a message profile.

There are other messaging statistics gathering scripts published on the Internet, such as this one by Mjolinor on the TechNet Gallery and this one by our own Neil Johnson (who, BTW, is responsible for the Client Bandwidth Calculator). Typically those “messagestats” scripts create a per-user report of all messaging activity, which takes a long time, includes information beyond what is required to create a user message profile, and produces output that requires further manipulation to arrive at an average user message profile. This script, on the other hand, focuses on just the messages sent and received by users, which is faster than gathering all messaging activity, and provides a user message profile per Exchange (AD) site rather than individual per-user results.

Functionality

The script uses native Exchange PowerShell cmdlets to extract the mailbox count from mailbox role servers and mailbox messaging activity from the Hub Transport role server message tracking logs for the specified date range. The information is then processed to obtain user/mailbox message profiles consisting of averages for sent messages, received messages, and message sizes.

The script requires a start and end date, and can be run multiple times to accumulate groups/blocks of days into the final output. For instance, instead of gathering 30 straight days of data from the Exchange servers, which would include weekend days that generally skew the averages downward due to reduced user load, the script can be run 4 consecutive times using the 4 groupings of weekdays within that 30-day period, which keeps the averages reflective of a typical work day. The export to a CSV file can then be performed on the 4th and final run.
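As a sketch, four weekday-only runs over a 30-day window might look like the following. The dates are illustrative, and only the final run exports the accumulated data; the intermediate runs keep their results in the script's in-memory collection (the EndDate is the day after the last day you want included):

```powershell
# Weeks 1-3: gather Monday through Friday only; results accumulate in memory.
.\Generate-MessageProfile.ps1 -ADSites * -StartDate 12/1/2014  -EndDate 12/6/2014
.\Generate-MessageProfile.ps1 -ADSites * -StartDate 12/8/2014  -EndDate 12/13/2014
.\Generate-MessageProfile.ps1 -ADSites * -StartDate 12/15/2014 -EndDate 12/20/2014
# Week 4: export the combined four-week result to CSV.
.\Generate-MessageProfile.ps1 -ADSites * -StartDate 12/22/2014 -EndDate 12/27/2014 -OutCSVFile FourWeeks.CSV
```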

The script can be run against Exchange servers in specific AD sites, collections of AD sites, or all AD sites, and the generated message profiles that are returned are organized by AD site. The ability to specify a specific collection of AD sites is important for multi-site international Exchange deployments because not every location around the world follows a Monday through Friday work week. This functionality can be combined with the script’s ability to accumulate and combine data from multiple runs into a single report, even if some sites had to be queried using different date ranges.

The script can optionally provide a “total” summary user message profile for users across all collected sites, under the site name “~All Sites”, which will show up at the top of the output. The collected data can be exported to a CSV file at the end of each script run; otherwise it is automatically stored in a PowerShell variable for further manipulation.

The script provides detailed output to the screen, including tiered progress bars indicating what site is currently being processed, what server in that site is being processed, and what specific processing activity is occurring. The script output also includes an execution time summary at the end so you can plan for future data gathering time requirements:

[screenshot]

Resultant Data

There are a number of script parameters (covered below) that can be used to exclude certain types of mailboxes and messages from the data gathering and the subsequent output of the script. For example, if you exclude all room mailboxes from the data gathering, they won’t be reflected in the script’s output. This means the use of the words “all” and “total” below refers to the messages and mailboxes the script was told to gather and process, and not necessarily all of the data available on the servers.

The data in the output is grouped into the following columns per Exchange site (as well as the optional “~All Sites” entry):

[screenshot]

  1. Site Name – This is the name of the AD site that the Exchange servers live in, as defined in AD Sites and Services.
  2. Mailboxes – This is the count of all mailboxes discovered in the site. This information is used by both Calculators.
  3. AvgTotalMsgs – This is the average combined count of sent and received messages for the mailboxes in the site. This information is used by the Role Requirements Calculator.
  4. AvgTotalKB – This is the average size in KB of all included sent and received messages in the site. This information is used by both Calculators.
  5. AvgSentMsgs – This is the average count of sent messages for the mailboxes in the site. This information is used by the Client Network Bandwidth Calculator.
  6. AvgRcvdMsgs – This is the average count of received messages for the mailboxes in the site. This information is used by the Client Network Bandwidth Calculator.
  7. AvgSentKB – This is the average size in KB of sent messages for the mailboxes in the site.
  8. AvgRcvdKB – This is the average size in KB of received messages for the mailboxes in the site.
  9. SentMsgs – This is the total count of sent messages for the mailboxes in the site.
  10. RcvdMsgs – This is the total count of received messages for the mailboxes in the site.
  11. SentKB – This is the total size in KB of all sent messages for the mailboxes in the site.
  12. RcvdKB – This is the total size in KB of all received messages for the mailboxes in the site.
  13. UTCOffset – This is the UTC time zone offset for the AD site. This information is used by the Client Network Bandwidth Calculator.
  14. TimeSpan – This represents the time difference between the clock on the local computer running the script and the clock of the remote server being processed. This is informational only.
  15. TotalDays – This represents the number of days of data collected for the site. This information is needed by the script when you are combining multiple runs into a single output.
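To make the relationship between the totals and the averages concrete, the per-mailbox, per-day figures can presumably be derived from the raw totals along these lines (a hedged sketch of the arithmetic with made-up numbers, not the script's actual code):

```powershell
# Illustrative numbers for one site: 500 mailboxes over 5 collected days.
$Mailboxes = 500; $TotalDays = 5
$SentMsgs  = 75000;   $RcvdMsgs = 175000     # total message counts
$SentKB    = 5250000; $RcvdKB   = 12250000   # total sizes in KB

# Per-mailbox, per-day message counts:
$AvgSentMsgs  = $SentMsgs / $Mailboxes / $TotalDays         # 30
$AvgRcvdMsgs  = $RcvdMsgs / $Mailboxes / $TotalDays         # 70
$AvgTotalMsgs = $AvgSentMsgs + $AvgRcvdMsgs                 # 100

# Average sizes are per message, not per mailbox:
$AvgSentKB  = $SentKB / $SentMsgs                           # 70 KB
$AvgRcvdKB  = $RcvdKB / $RcvdMsgs                           # 70 KB
$AvgTotalKB = ($SentKB + $RcvdKB) / ($SentMsgs + $RcvdMsgs) # 70 KB
```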

Parameters

The script has a number of parameters that allow administrators to control what is included in, or excluded from, the user message profile generation process. Most of the parameters are grouped into one of three “parameter sets”, with the exception of one parameter that is in two sets and a couple that are not in any set.

Parameter sets group related parameters together, so once a parameter in one set is chosen, the only other available parameters are those in the same set and those that aren’t assigned to any set. Furthermore, a required parameter is only required within its parameter set, meaning that if you are using one parameter set, the required parameters in other sets don’t apply.

If the concept of parameter sets is a little confusing and you are using Exchange 2013, then you can use the PowerShell 3 (and later) cmdlet Show-Command with the script to create a graphical representation of the parameter sets like this:

Show-Command .\Generate-MessageProfile.ps1

which will pop-up the window:

[screenshot]

The script also supports the traditional -Verbose and -Debug switches in addition to what’s listed below:

ADSites (set: Gather; optional)

When set to “*”, indicates that all AD sites with Exchange should be processed. Alternatively, explicit site names, site names with wildcards, or any combination thereof can be used to specify multiple AD sites to filter on. If no site is defined, the local AD site is used. The format for multiple sites is each site name in quotes, separated by commas with no spaces, such as: “Site1”,“Site2”,“AltSite*”, etc.

StartDate (set: Gather; required)

Specifies the day to start the message tracking log search on, which starts at 12:00AM. The format is MM/DD/YYYY.

EndDate (set: Gather; required)

Specifies the day to end the message tracking log search on, which ends at 12:00AM. This means that if you want to search Monday through Friday, you need to specify an end date of Saturday so the search stops at 12:00AM Saturday. The format is MM/DD/YYYY.

ExcludePFData (set: Gather; optional)

Tries to filter out messages sent to or from Exchange 2010 Public Folder databases.

NOTE: This parameter is not recommended because it relies on message subject line filtering, which could potentially filter out user messages. It also does not filter out all Public Folder messaging data, because some Public Folder message subject lines were not included due to the high likelihood that users would use them in their own messages.

ExcludeHealthData (set: Gather; optional)

Excludes messages sent to, and the inclusion of, “extest_” mailboxes and Exchange 2013 “HealthMailbox” mailboxes.

NOTE: Because the extest and HealthMailbox mailboxes can generate a lot of traffic, it is recommended to use this switch to get a message profile that more accurately reflects your users.

ExcludeRoomMailboxes (set: Gather; optional)

Excludes messages sent to, and the inclusion of, room mailboxes. By default, equipment and discovery mailboxes are excluded from the collection, as they negatively skew the average user message profile. Room mailboxes are included by default because they can send/receive email.

NOTE: This parameter is not recommended if you have active conference room booking in your environment, as that means you have active message traffic to and from room mailboxes.

BypassRPCCheck (set: Gather; optional)

Instructs the script to bypass testing RPC connectivity to the remote computers using Get-WMIObject. Bypassing the RPC check should not be necessary as long as the account running the script has the appropriate permissions to connect to WMI on the remote computers.

ExcludeSites (sets: Gather, Import; optional)

Specifies which sites should be excluded from data processing. This is useful when you want to use a wildcard to gather data from multiple sites but exclude specific sites that would normally be included in the wildcard collection. For data importing, this is useful when you want to exclude sites from a previous collection. The format for multiple sites is each site name in quotes, separated by commas with no spaces, such as:

"Site1","Site2", etc...

NOTE: Wildcards are not supported.

InCSVFile (set: Import; required)

Specifies the path and file name of the CSV to import previously gathered data from.

InMemory (set: Existing; required)

Instructs the script to only use existing in-memory data. This is intended only to be used with the AverageAllSites switch.

AverageAllSites (set: none; optional)

Instructs the script to create an "~All Sites" entry in the collection that represents an average message profile of all sites collected. If an "~All Sites" entry already exists, its data is overwritten with the updated data.

OutCSVFile (set: none; optional)

Specifies the path and file name of the CSV to export the gathered data to. If this parameter is omitted, the collected data is saved in the shell variable $MessageProfile.

NOTE: Do not use this parameter if you are collecting multiple weeks of data individually (such as collections of work weeks to avoid weekends) until the last week, so that only the complete data set is exported to CSV.

NOTE: This list of parameters will be updated on the TechNet Gallery posting as the script is updated.

Examples

The following are just some examples of the script being used:

1. Process Exchange servers in all sites starting on Monday 12/1/2014 through the end of Friday 12/5/2014. Export the data, excluding the message data for Exchange 2013 HealthMailboxes and any extest_ mailboxes, to the AllSites.CSV file.

Generate-MessageProfile.ps1 -ADSites * -StartDate 12/1/2014 -EndDate 12/6/2014 -OutCSVFile AllSites.CSV -ExcludeHealthData

2. Process Exchange servers in AD sites whose name starts with "East", starting on Monday 12/1/2014 through the end of Monday 12/1/2014. Output the additional Verbose and Debug information to the screen while the script is running. The collected data is made available in the $MessageProfile variable after the script completes.

Generate-MessageProfile.ps1 -ADSites East* -StartDate 12/1/2014 -EndDate 12/2/2014 -Verbose -Debug

3. Process Exchange servers in the EastDC1 AD site, and any sites that start with the name "West", starting on Monday 12/1/2014 through the end of Tuesday 12/30/2014. Export the data, which should exclude most Public Folder traffic, to the MultiSites.CSV file.

Generate-MessageProfile.ps1 -ADSites "EastDC1","West*" -StartDate 12/1/2014 -EndDate 12/31/2014 -OutCSVFile MultiSites.CSV -ExcludePFData

4. Import the data from the PreviousCollection CSV file in the current working directory into the in-memory data collection $MessageProfile for future use.

Generate-MessageProfile.ps1 -InCSVFile .\PreviousCollection.CSV

5. Process the previously collected data stored in the $MessageProfile variable, and add an average of all the sites to the data collection under the site name “~All Sites”.

Generate-MessageProfile.ps1 -InMemory -AverageAllSites

FAQ

1. Is the output generated by this script an accurate representation of my users’ messaging profile, which I can use in other tools such as the Role Requirements Calculator?

  • This script generates a point-in-time reflection of your users’ messaging activity. The data is only as good as the date range(s) you selected, the data you opted to include or exclude, and the information stored on the accessible servers. For example, if you ran this script during a date range that included a holiday when a lot of users took vacation, the information will reflect a lower average message profile than a more “normal” work period would.
  • Taking into consideration that this script will only reflect the messaging activity of your users during your selected date range, you should use the output as a guideline for formulating the message profile to represent your users in other tools.

2. Should I inflate/enhance the message profile produced by this script to give myself some “elbow room” in my Exchange system design?

  • If you are designing an email system that needs to last for multiple years, it’s probably a good idea to increase the numbers slightly to account for future growth of your system and the likelihood that your users will increase their message profiles over time. How much you inflate the information is up to you.

3. The messaging profile for my users seems lower than I expected. What are some factors that could contribute to this, and how can I increase the values generated by the script?

  • Review the date range(s) you chose when running the script to see if they were periods when user activity was expected to be low.
  • If your date range(s) include weekends/non-work days, re-run the script excluding those days. This may require multiple cumulative runs if you want to include multiple work weeks in the average.
  • If you have a lot of resource rooms that are rarely used but you did not exclude them, then try re-running the script with the ExcludeRoomMailboxes parameter to see if the averages increase. Conversely if you used some of the script’s parameters to exclude data, re-running the query without the exclusions may increase the average as well. You will need to test various parameter combinations in your environment until you are happy with the results.
  • If you recently decommissioned any Hub Transport role servers in a site, then the message tracking logs stored on those servers, which provide the user activity details, were removed as well. Therefore it is highly recommended that this script only be run against sites that have not had any Hub Transport role servers decommissioned during the specified time ranges. The script even has a built-in warning when it detects that a Hub Transport role server was added to a site during the specified date range, to remind you that if another Hub Transport role server was also recently removed from that site, the user message profile could be negatively affected.

4. Why don’t I see any per-user information? Why is this site based?

  • This script was designed to maximize speed by gathering messaging profile information on a per-site basis to facilitate the use of both the Role Requirements and Client Network Bandwidth Calculators. The Client Bandwidth Calculator wants the message profile information on a per-site basis, and the per-site basis works for the Requirements Calculator as well. Reporting on per-user information is being considered for a future version of this script.
  • Per-user information is not needed for either Calculator. Separate user profiles can optionally be put into each Calculator using the same message profile but reflecting other differences, such as larger mailboxes or expected IOPS increases (for example, when a group of users also uses mobile devices).
  • If you require per-user reporting, please use one of the scripts I referenced in the Background section.

5. Why did I get an alert that one or more sites were skipped or excluded?

  • A site will be skipped if there were connectivity issues to any server in the site. Since a message profile for a site must contain data from all of the servers, missing data from even one server could result in incomplete information. Therefore the script skips the site if it encounters connectivity issues with even one server, rather than reporting only partial data.
  • A site will be excluded if there are no mailboxes or messaging activity found in it. Passive Exchange DR sites with no active mailbox databases are an example of a site that will be safely excluded. Even though there may be active Hub Transport servers in those sites, their message tracking data is not needed as they will hand messages off to Hub Transport role servers in the site(s) with the target mailboxes. The logs from those final Hub Transport role servers will in turn be used for the message profile generation.
  • If any sites were skipped for data collection issues, they will be recorded in a $SkippedSites variable which will be available after the script finishes. This allows you to re-run the script and specify the $SkippedSites as the value for the ADSites parameter, which causes the script to focus gathering data only from those skipped sites. This is helpful in cases where server connectivity issues were due to temporary WAN connectivity issues, and another run of the script will process those skipped sites successfully.
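
As an illustration, a second pass over only the skipped sites might look like the following. This is a sketch: the script filename and exact parameter spelling here are placeholders based on the parameters described above.

```powershell
# First pass: collect from all sites; unreachable sites are recorded in $SkippedSites.
.\Generate-MessageProfile.ps1 -StartDate "4/6/2015" -EndDate "4/13/2015"

# Second pass (for example, after a temporary WAN issue is resolved):
# target only the sites that were skipped the first time.
.\Generate-MessageProfile.ps1 -StartDate "4/6/2015" -EndDate "4/13/2015" -ADSites $SkippedSites
```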

6. Why can’t I specify the hours of a day I want to be searched in addition to the days?

  • The script is designed to work with whole/entire days, not fractions of a day, to create the averages. Specifying a time of day would result in a fraction of a day, which is not supported in creating a “per day” user message profile average.

7. Why does the EndDate need to be the day following the day I want to stop reporting on?

  • When only a date is used for a “DateTime” variable, PowerShell assigns the time for that day as 12:00AM. For the StartDate, that time is exactly what needs to be used as that represents the entire day starting 12:00AM. However for the EndDate this causes the data collection to stop at 12:00AM on the specified day, therefore the EndDate needs to be the day following the last day you want included in the output.
  • The script has logic built in to ensure that the Start date does not occur in the future, that the End date does not occur before the Start date, that the Start date is at least one day prior to the current date, and that the End date is no later than the current date.
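
You can see PowerShell’s midnight behavior for yourself in a plain PowerShell session (no Exchange cmdlets required):

```powershell
# A date-only string casts to 12:00:00 AM on that day.
$start = [DateTime]"4/6/2015"    # collection begins 4/6 at 12:00 AM
$end   = [DateTime]"4/13/2015"   # collection stops 4/13 at 12:00 AM, i.e. the end of 4/12

$start.TimeOfDay                 # 00:00:00
($end - $start).Days             # 7 whole days included in the averages
```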

8. Why would I want to store data in a CSV file and then later import it with the script?

  • Sometimes some sites just can’t be reached over the WAN. This allows the data collection to be performed locally on a server in the remote site, and then the data transferred back to the main site via a CSV file, where it can be imported into the main data collection.
  • This functionality also allows you to take data collections from different points in time, such as over the course of several weeks or months, and import it into a single longer term user message profile generation.
  • This functionality also allows you to take the data in-memory and remove sites from the collection by exporting it to a CSV, and then re-importing the data to a new collection and using the ExcludeSites parameter to block the import of the unwanted sites.
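
Putting those pieces together, a round trip might look like the sketch below. The OutToCSV and ExcludeSites parameters are the ones described in this post; the script filename, the CSV file name, and the import parameter name are placeholders.

```powershell
# On a server in the hard-to-reach site: collect locally and write the data to a CSV.
.\Generate-MessageProfile.ps1 -StartDate "4/6/2015" -EndDate "4/13/2015" -OutToCSV

# Back in the main site: import the transferred CSV into the in-memory collection,
# excluding any sites you do not want merged in.
.\Generate-MessageProfile.ps1 -InCSVFile ".\RemoteSites.csv" -ExcludeSites "DR-Site"
```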

9. What is the purpose of the InMemory parameter?

  • The only reason to use this switch is if you already have your data loaded into memory, either through one or more gathering or importing processes, and want to use the AverageAllSites parameter to provide a single global user message profile under the site name of “~All Sites”. Essentially this parameter allows you to bypass gathering or importing data and just use what is already “in memory”.

10. Why do I get an error about “inconsistent number of days” when I try to use the AverageAllSites?

  • The process that generates a single global user message profile requires that the value for TotalDays be the same for all collected sites. Otherwise the aggregated data would be represented incorrectly because the TotalDays value is used to calculate the “per day” average. You need to review your site data, most likely by exporting it to a CSV file and reviewing it manually, to determine which sites have different TotalDays recorded and deal with them accordingly.

11. Why is the information saved to the $MessageProfile variable if I don’t use the -OutToCSV parameter? Also, how do I “wipe” the collected data from memory so I can start over?

  • Storing the data inside a PowerShell variable is necessary if you want to run the script multiple times to accumulate data, because the script uses this variable to store the cumulative data in between runs.
  • This also allows you to take the in-memory $MessageProfile variable data and pass it to other PowerShell scripts or commands that you wish.
  • You have the option of using the command “$MessageProfile | Export-CSV ….” to create your own CSV if you decide to later store the collected data in a CSV file.
  • To clear the $MessageProfile data from memory use the following command:

$MessageProfile = $Null

12. Why does the output of the script include a value called “TimeSpan” and also the time zone of the remote site?

  • The time span represents the delta in hours, positive or negative, between the server running the script and the remote server it is connecting to. By default when the Get-MessageTrackingLog cmdlet is executed against a remote server, the DateTime values used for the start and end dates passed to it are always from the perspective of the server running the cmdlet. This means that if the computer running the cmdlet is 5 hours behind the remote server, then the dates (which include a time of day) passed to that remote server by the cmdlet would actually be 5 hours behind your intended date.
  • The script uses this time span to properly offset the DateTime values as they are passed to the Get-MessageTrackingLog cmdlet, so they are always processed by the remote server with the original intended dates (and the 12:00AM time of day). Following the example above, the script will add 5 hours to the date when the cmdlet is run against the remote server. Since this value is crucial to accurate script execution, it is recorded in the output for tracking purposes.
  • The Client Network Bandwidth Calculator wants to know the time zone of the user message profile being specified. To facilitate use of this calculator, the site’s time zone information is recorded in the output of the script.
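
The offset logic can be sketched in plain PowerShell. This is only an approximation of what the script does internally, and the remote time zone ID below is a hypothetical example:

```powershell
$intended = [DateTime]"4/6/2015"   # 12:00 AM as it should be interpreted at the remote site

$localZone  = [TimeZoneInfo]::Local
$remoteZone = [TimeZoneInfo]::FindSystemTimeZoneById("Eastern Standard Time")  # example only

# Positive when the remote site is ahead of the local server.
$timeSpan = $remoteZone.GetUtcOffset($intended) - $localZone.GetUtcOffset($intended)

# Shift the date so the remote server processes the originally intended 12:00 AM boundary.
$adjustedStart = $intended + $timeSpan
```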

13. Why did you build in an ExcludePFData parameter switch if it doesn’t exclude all Public Folder traffic?

  • Initial testing of the script showed that dedicated Public Folder servers reflected a large amount of Public Folder replication based Hub Transport messaging activity.
  • Because the most accurate depiction of the user messaging profile was desired, a switch was added to filter out some Public Folder replication data. Since the only way to consistently identify the Public Folder traffic was by matching keywords in the message subject line, a filter was created that strips out messages whose subjects contain Public Folder replication phrases unlikely to be used by actual users, limiting the chance of accidentally stripping real user messages.
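
Conceptually, the filtering resembles the sketch below. The subject phrases shown are hypothetical; the actual phrase list lives inside the script.

```powershell
# Drop tracking log entries whose subjects look like Public Folder replication
# traffic (phrases here are examples, not the script's real list).
$filtered = $trackingLogs | Where-Object {
    $_.MessageSubject -notlike "*Folder Content Backfill*" -and
    $_.MessageSubject -notlike "*Hierarchy Sync*"
}
```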

14. I see Equipment and Discovery mailboxes are excluded, why aren’t Arbitration Mailboxes excluded?

  • Equipment and Discovery mailboxes do not send and receive email through the Hub Transport service, so including them would only serve to negatively impact the user message profile.
  • Arbitration mailboxes on the other hand are normally limited in number and therefore including them in the mailbox count is not expected to dramatically impact the message profile in a negative way. At the same time messages can be sent to and received from Arbitration mailboxes, depending on the organization’s use of features like moderated Distribution Groups, so including them could positively impact the message profile.

Conclusion

So there you have it, a PowerShell script to assist you with generating an average user message profile for your environment, with a number of options for you to tailor it to your preferences. I hope you find it useful with the two calculators, but also any future troubleshooting efforts of your existing environment.

When I finish the Exchange 2007 version, I will attach it to the TechNet Gallery posting, so if you are looking for that version please check back periodically. Likewise, as I make enhancements or other changes to the script, I will update the TechNet Gallery posting.

Lastly I am always open to suggestions and ideas, so please feel free to leave a comment here or reach out to me directly.

Thanks and happy PowerShelling!

Dan Sheehan
Senior Premier Field Engineer


New Support Policy for Repaired Exchange Databases


The database repair process is often used as a last ditch effort to recover an Exchange database when no other means of recovery is available. The process should only be followed at the advice of Microsoft Support and after determining that all other recovery options have been exhausted. For many years in many versions of Exchange, the repair process has largely been the same. However, that process is changing, based on information Microsoft has gathered from an extensive analysis of support cases.

In short, Microsoft is changing the support policy for databases that have had a repair operation performed on them. Originally a database was supported if the repair was performed using ESEUTIL and ISINTEG/repair cmdlets. Under the new support policy, any database where the repair count is greater than 0 will need to be evacuated – all mailboxes on such a database will need to be moved to a new database.

Existing Repair Process

The process consists of three steps:

  1. Repair the database at the page level
  2. Defragmentation of the database to restructure and recreate the database
  3. Repair of the logical structures within the database

Step 1 of the repair process is accomplished by using ESEUTIL /p. This is typically performed when there is page level corruption in the database - for example, a -1018 JET error, or when a database is left in dirty shutdown state as the result of not having the necessary log files to bring the database to a clean shutdown state. After executing ESEUTIL /p you are prompted to confirm that data loss may result. Selecting OK is required to continue.


[PS] C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group>eseutil /p '.\Mailbox Database.edb'
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.03
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating REPAIR mode...
Database: .\Mailbox Database.edb
Temp. Database: TEMPREPAIR4520.EDB
Checking database integrity.
The database is not up-to-date. This operation may find that this database is corrupt because data from the log files has yet to be placed in the database. To ensure the database is up-to-date please use the 'Recovery' operation.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
.
Rebuilding MSysObjectsShadow from MSysObjects.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Checking the database.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Scanning the database.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Repairing damaged tables.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Repair completed. Database corruption has been repaired!
Note:
It is recommended that you immediately perform a full backup of this database. If you restore a backup made before the repair, the database will be rolled back to the state it was in at the time of that backup.
Operation completed successfully with 595 (JET_wrnDatabaseRepaired, Database corruption has been repaired) after 30.187 seconds.

At this point, the database should be in a clean shutdown state and the repair process may proceed. This can be verified with ESEUTIL /mh.

[PS] C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group>eseutil /mh '.\Mailbox Database.edb'
State: Clean Shutdown

Step 2 is to defragment the database using ESEUTIL /d. Defragmentation requires significant free space on the volume that will host the temporary database (typically 110% of the size of the database must be available as free disk space).

[PS] C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group>eseutil /d '.\Mailbox Database.edb'
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.03
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating DEFRAGMENTATION mode...
Database: .\Mailbox Database.edb
Defragmentation Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Moving 'TEMPDFRG3620.EDB' to '.\Mailbox Database.edb'... DONE!
Note:
It is recommended that you immediately perform a full backup of this database. If you restore a backup made before the defragmentation, the database will be rolled back to the state it was in at the time of that backup.
Operation completed successfully in 7.547 seconds.

Step 3 is the logical repair of the objects within the database. The method used to accomplish this varies by Exchange version.

In Exchange 2007, ISINTEG is used to perform the logical repair, as illustrated in the following example:

C:\>isinteg -s wingtip-e2k7 -fix -test alltests -verbose -l c:\isinteg.log
Databases for server wingtip-e2k7:
Only databases marked as Offline can be checked
Index Status Database-Name
Storage Group Name: First Storage Group
1 Offline Mailbox Database
Enter a number to select a database or press Return to exit.
1
You have selected First Storage Group / Mailbox Database.
Continue?(Y/N)y
Test Categorization Tables result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Restriction Tables result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Search Folder Links result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s);time: 0h:0m:0s
Test Global result: 0 error(s); 0 warning(s); 0 fix(es); 1 row(s); time: 0h:0m:0s
Test Delivered To result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Repl Schedule result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time:0h:0m:0s
Test Timed Events result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test reference table construction result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Folder result: 0 error(s); 0 warning(s); 0 fix(es); 4996 row(s); time: 0h:0m:2s
Test Deleted Messages result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Message result: 0 error(s); 0 warning(s); 0 fix(es); 1789 row(s); time: 0h:0m:0s
Test Attachment result: 0 error(s); 0 warning(s); 0 fix(es); 406 row(s); time: 0h:0m:0s
Test Mailbox result: 0 error(s); 0 warning(s); 0 fix(es); 249 row(s); time: 0h:0m:0s
Test Sites result: 0 error(s); 0 warning(s); 0 fix(es); 996 row(s); time: 0h:0m:0s
Test Categories result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Per-User Read result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time:0h:0m:0s
Test special folders result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Message Tombstone result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Folder Tombstone result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Now in test 20(reference count verification) of total 20 tests; 100% complete.
Typically when ISINTEG completes, it advises reviewing the isinteg.log file. At the end of the file is a summary section listing the number of errors encountered. If the number of errors is greater than zero, you need to re-run the command. Repairs should be repeated until the error count reaches 0, or until the same number of errors is reported by two consecutive executions.
. . . . . SUMMARY . . . . .
Total number of tests : 20
Total number of warnings : 0
Total number of errors : 0
Total number of fixes : 0
Total time : 0h:0m:3s

In Exchange 2010 and later, ISINTEG was deprecated and certain functions were replaced by the New-MailboxRepairRequest and New-PublicFolderDatabaseRepairRequest cmdlets, both of which allow for repair operations to occur while the database is online.

Exchange 2010:

[PS] C:\Windows\system32>New-MailboxRepairRequest -Mailbox user252 -CorruptionType SearchFolder,FolderView,AggregateCounts,ProvisionedFolder,MessagePtagCN,MessageID
RequestID Mailbox ArchiveMailbox Database Server
--------- ------- -------------- -------- ------
7f499ce3-e Wingtip False Mailbox. WINGTIP-E2K10.Wingti...

Exchange 2013:

[PS] C:\>New-MailboxRepairRequest -Mailbox User532 -CorruptionType SearchFolder,FolderView,AggregateCounts,
ProvisionedFolder,ReplState,MessagePTAGCn,MessageID,RuleMessageClass,RestrictionFolder,FolderACL,
UniqueMidIndex,CorruptJunkRule,MissingSpecialFolders,DropAllLazyIndexes,ImapID,ScheduledCheck,Extension1,
Extension2,Extension3,Extension4,Extension5
Identity Task Detect Only Job State Progress
-------- ---- ----------- --------- --------
a44acf2b {Sea False Queued 0

Upon completion of these repair operations, the database could typically be mounted and normal user operations resumed.

Support Change for Repaired Databases

Over the course of the last two years, we have reviewed Watson dumps for Information Store crashes that have been automatically uploaded by customers’ servers. The crashes were caused by inexplicable, seemingly impossible store-level corruption. The types of store-level corruption varied, and they came from many different databases, servers, Exchange versions, and customers. In almost all of these cases one significant fact was noted: the repair count recorded on the database was > 0.

When ESEUTIL /p is executed and a repair to the database is necessary, the repair count is incremented and the repair time is recorded in the header of the database. The repair information stored in the database header is retained after an offline defragmentation. Repair information in the header may be viewed with ESEUTIL /mh.
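
If you only want the repair-related fields rather than the full header dump, one quick (unofficial) way is to filter the ESEUTIL /mh output from PowerShell; the database path here is the same example used below:

```powershell
# Surface just the repair counters from the header dump (database must be dismounted).
eseutil /mh '.\Mailbox Database.edb' | Select-String "Repair Count", "Repair Date"
```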

[PS] C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group>eseutil /mh '.\Mailbox Database.edb'
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.03
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating FILE DUMP mode...
Database: .\Mailbox Database.edb
File Type: Database
Format ulMagic: 0x89abcdef
Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,12
Engine ulVersion: 0x620,12
Created ulVersion: 0x620,12
DB Signature: Create time:04/05/2015 08:39:24 Rand:2178804664 Computer:
cbDbPage: 8192
dbtime: 1059112 (0x102928)
State: Clean Shutdown
Log Required: 0-0 (0x0-0x0)
Log Committed: 0-0 (0x0-0x0)
Streaming File: No
Shadowed: Yes
Last Objid: 4020
Scrub Dbtime: 0 (0x0)
Scrub Date: 00/00/1900 00:00:00
Repair Count: 2
Repair Date: 04/05/2015 08:39:24
Old Repair Count: 0

Last Consistent: (0x0,0,0) 04/05/2015 08:39:25
Last Attach: (0x0,0,0) 04/05/2015 08:39:24
Last Detach: (0x0,0,0) 04/05/2015 08:39:25
Dbid: 1
Log Signature: Create time:00/00/1900 00:00:00 Rand:0 Computer:
OS Version: (6.1.7601 SP 1 NLS 60101.60101)
Previous Full Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Incremental Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Copy Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Differential Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Current Full Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Current Shadow copy backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
cpgUpgrade55Format: 0
cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0
ECC Fix Success Count: none
Old ECC Fix Success Count: none
ECC Fix Error Count: none
Old ECC Fix Error Count: none
Bad Checksum Error Count: none
Old bad Checksum Error Count: none
Operation completed successfully in 0.78 seconds.

Because uncorrectable corruption can linger in a repaired database and cause store crashes and server instability, we have changed our support policy to require an evacuation of any Exchange database that persistently has a repair count or old repair count equal to or greater than 1. Moving mailboxes (and public folders) to new databases ensures that the underlying database structure is good and free from any corruption that might not be corrected by the database repair process, and it helps prevent store crashes and server instability.
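
In practice, evacuating a repaired database comes down to moving everything to a freshly created database. A minimal sketch follows; the database names are placeholders, and any arbitration or archive mailboxes homed on the database need to be moved as well:

```powershell
# Move every regular mailbox off the repaired database to a new, clean database.
Get-Mailbox -Database "RepairedDB" -ResultSize Unlimited |
    New-MoveRequest -TargetDatabase "CleanDB"

# Watch the moves until they all report Completed.
Get-MoveRequest -TargetDatabase "CleanDB" | Get-MoveRequestStatistics
```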

Tim McMichael

Parsing the Admin Audit Logs with PowerShell


One of the nice features introduced in Exchange 2010 was Admin Audit Logging. Concerned administrators everywhere rejoiced! This meant that a record of Exchange PowerShell activity, organization wide, was now saved and searchable.

Administrators could query the Admin Audit Log, using the Search-AdminAuditLog cmdlet, and reveal any cmdlets invoked, the date and time they were executed, and the identity of the person who issued the commands. However, the results of the search were a bit cryptic and didn’t allow for easy bulk manipulation like parsing, reporting, or archiving.

The main complaint I heard from customers went something like this: “It’s great that I can see what Cmdlets are run, and what switches were used… but I can’t see the values of those switches!” Well, as it turns out, that data has actually been there the whole time; it’s just been stored in a non-obvious manner.

Consider a scenario where you’ve been informed that many, or all, of the mail users in your organization are reporting the wrong phone number listed in the Global Address List. It seems everyone has the same phone number now, let’s say 867-5309.


Because your organization uses Office 365 Directory Synchronization (DirSync), you know the change had to occur within your on-premises organization and was then subsequently synchronized to Office 365. The Search-AdminAuditLog Cmdlet must, therefore, be run on-premises.

It’s important to remember this concept. If you were investigating a Send Connector configuration change for your Office 365 – Exchange Online tenant, a search would need to be performed against your tenant instead. But let’s get back to our Jenny Phone number issue.

You know that the change was made on the 6th so you restrict the search to that date.

Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM"


Reviewing the output, you find that Tommy executed the Set-User cmdlet, but there is no indication of what parameter(s) or values were used. What exactly did Tommy run? Where are the details!?

Then, you spot a clue. The ‘CmdletParameters’ and ‘ModifiedProperties’ are enclosed with braces { }. Braces are normally indicative of a hash table. You know a hash table is simply a collection of name-value pairs. You wonder if you’re only seeing the keys or a truncated view in this output. Could more details remain hidden?

Digging a bit deeper, you decide to store the search results to an array, named $AuditLog, which will allow for easier parsing.

$AuditLog = Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM"


Next, you isolate a single entry in the array. This is done by calling the array variable and adding [0] to it. This returns only the first entry in the array.

$AuditLog[0]


To determine the object type of the ‘CmdletParameters’ property, you use the GetType() method, and sure enough, it’s an array list.

$AuditLog[0].CmdletParameters.GetType()


Finally, you return the CmdletParameters array list to reveal all the details needed to conclude your investigation.

$AuditLog[0].CmdletParameters


Considering there are hundreds or thousands of entries in the audit log, how would you generate a full list of all the objects Tommy has changed? Or better yet, report all objects that he changed where ONLY the ‘Phone’ attribute was modified?
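
You could sketch such a report manually by filtering the saved results. The property names below follow the Search-AdminAuditLog output shown earlier, and “tommy” stands in for the actual caller alias:

```powershell
# Entries where Tommy ran a cmdlet and 'Phone' was the only modified property.
$AuditLog |
    Where-Object { $_.Caller -like "*tommy*" } |
    Where-Object { $_.ModifiedProperties.Count -eq 1 -and
                   $_.ModifiedProperties[0].Name -eq "Phone" } |
    Select-Object RunDate, ObjectModified, CmdletName
```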

Fortunately, you don’t have to expend too much time on this. My colleague, Matthew Byrd recognized this exact problem and he wrote a PowerShell Script that does all the aforementioned steps for you and then some!

The script can be downloaded from TechNet Gallery and you’ll find it’s well documented and commented throughout. The script includes help (get-help .\Get-SimpleAuditLogReport.ps1) and can be used within Exchange 2010, Exchange 2013 and Office 365 - Exchange Online environments. That said, I’m not going to dissect the script. Instead, I will demonstrate how to use it.

The script simply manipulates or formats the results of the Search-AdminAuditLog query into a much cleaner and detailed output. You form your Search-AdminAuditLog query, then pipe it through the Get-SimpleAuditlogReport script for formatting and parsing.

Here are some usage examples:

This first example will output the results to the PowerShell Screen.

$Search = Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM"
$Search | C:\Scripts\Get-SimpleAuditLogReport.ps1 -agree


You can see that the Get-SimpleAuditLogReport.ps1 script has taken the results stored in the $Search variable and attempted to rebuild the original command that was run. It isn’t perfect, but the goal of the script is to give you a command that you could copy and paste into an Exchange Shell window, and it should run.

Should you expect a lot of data to be returned or wish to save the results for later use, this example will save the results to a CSV file.

Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM" | C:\Scripts\Get-SimpleAuditlogReport.ps1 -agree | Export-CSV -Path C:\temp\auditlog.csv


This example uses one of my favorite output objects, Out-GridView, to display the results. This is a nice hybrid CSV/PowerShell output option. The results shown in the Out-GridView window are sortable and filterable, and you can select, then copy/paste, the filtered results into a CSV file. Meanwhile the raw, unfiltered results are saved to a CSV file for later use or archival.

Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM" | C:\Scripts\Get-SimpleAuditlogReport.ps1 -agree | Out-GridView -PassThru | Export-Csv -Path c:\temp\auditlog.csv


Here I restrict the results to only the commands Tommy ran, and remove anything he ran against the Discovery mailbox, since it is a system mailbox.


Out-GridView has no built-in export or save feature, so to save your filtered results you copy/paste them into a CSV file: click on an entry, then press Ctrl+A / Ctrl+C to select all and copy the results to your clipboard. Finally, paste into Excel and you’re done.


There you have it. Admin Audit Log Mastery – CHECK! Thanks to Matthew Byrd’s wonderful script you can get the most out of your audit logs. Check it out over at TechNet.

Brandon Everhardt

Released: June 2015 Exchange Cumulative Update and Update Rollups


The Exchange team is announcing today the availability of our latest quarterly updates for Exchange Server 2013 as well as updates for Exchange Server 2010 Service Pack 3 and Exchange Server 2007 Service Pack 3.

Cumulative Update 9 for Exchange Server 2013 and UM Language Packs are now available on the Microsoft Download Center. Cumulative Update 9 contains the latest set of fixes and builds upon Exchange Server 2013 Cumulative Update 8. The release includes fixes for customer reported issues, minor product enhancements and previously released security bulletins. A complete list of customer reported issues resolved can be found in Knowledge Base Article KB3049849. Customers running any previous release of Exchange Server 2013 can move directly to Cumulative Update 9 today. Customers deploying Exchange Server 2013 for the first time may skip previous releases and start their deployment with Cumulative Update 9 directly.

For the latest information and product announcements please read What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Cumulative Update 9 may include Exchange related updates to the Active Directory schema and Exchange configuration when compared with the version of Exchange 2013 you have currently deployed. Microsoft recommends all customers test the deployment of a cumulative update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.
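
For reference, the check and (if needed) the adjustment look like the following; KB981474 covers cases where the setting is enforced by Group Policy:

```powershell
# Verify the effective execution policy at each scope on the server being upgraded.
Get-ExecutionPolicy -List

# If the effective policy is not Unrestricted, set it for the machine.
Set-ExecutionPolicy Unrestricted -Scope LocalMachine
```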

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., CU9) or the prior (e.g., CU8) Cumulative Update release.

Also being released today are Exchange Server 2010 Service Pack 3 Update Rollup 10 (KB3049853) and Exchange Server 2007 Service Pack 3 Update Rollup 17 (KB3056710). These releases provide minor improvements and fixes for customer reported issues. Update Rollup 10 is the last scheduled release for Exchange Server 2010. Both Exchange Server 2010 and Exchange Server 2007 are in extended support and will receive security and time zone fixes on-demand on a go-forward basis.

Note: If you do not see the new cmdlet parameters available in Exchange 2013 CU9, please run Setup /PrepareAD manually.

The Exchange Team

Ask The Perf Guy: What’s The Story With Hyperthreading and Virtualization?


There’s been a fair amount of confusion amongst customers and partners lately about the right way to think about hyperthreading when virtualizing Exchange. Hopefully I can clear up that confusion very quickly.

We’ve had relatively strong guidance in recent versions of Exchange that hyperthreading should be disabled. This guidance is specific to physical server deployments, not virtualized deployments. The reasoning for strongly recommending that hyperthreading be disabled on physical deployments can really be summarized in 2 different points:

  • The increase in logical processor count at the OS level due to enabling hyperthreading results in increased memory consumption (due to various algorithms that allocate memory heaps based on core count), and in some cases also results in increased CPU consumption or other scalability issues due to high thread counts and lock contention.
  • The increased CPU throughput associated with hyperthreading is non-deterministic and difficult to measure, leading to capacity planning challenges.

The first point is really the largest concern, and in a virtual deployment, it is a non-issue with regard to the configuration of hyperthreading. The guest VMs do not see the logical processors presented to the host, so they see no difference in processor count when hyperthreading is turned on or off. Where this concern can become an issue for guest VMs is in the number of virtual CPUs presented to the VM. Don’t allocate more virtual CPUs to your Exchange server VMs than are necessary based on sizing calculations. If you allocate extra virtual CPUs, you can run into the same class of issues associated with hyperthreading on physical deployments.

In summary:

  • If you have a physical deployment, turn off hyperthreading.
  • If you have a virtual deployment, you can enable hyperthreading (best to follow the recommendation of your hypervisor vendor), and:
    • Don’t allocate extra virtual CPUs to Exchange server guest VMs.
    • Don’t use the extra logical CPUs exposed to the host for sizing/capacity calculations (see the hyperthreading guidance at http://aka.ms/e2013sizing for further details on this).

Jeff Mealiffe
Principal PM Manager
Office 365 Customer Experience

Released: September 2015 Quarterly Exchange Updates


The Exchange team is announcing today the availability of our latest quarterly update for Exchange Server 2013 and Exchange Server 2010 Service Pack 3 Update Rollup 11.

Cumulative Update 10 for Exchange Server 2013 and UM Language Packs are now available on the Microsoft Download Center. Cumulative Update 10 contains the latest set of fixes and builds upon Exchange Server 2013 Cumulative Update 9. The release includes fixes for customer reported issues, minor product enhancements and previously released security bulletins, including MS15-103. A complete list of customer reported issues resolved can be found in Knowledge Base Article KB3078678. Customers running any previous release of Exchange Server 2013 can move directly to Cumulative Update 10. Customers deploying Exchange Server 2013 for the first time may skip previous releases and start their deployment with Cumulative Update 10 directly.

Cumulative Update 10 does not include updates to Active Directory Schema, but does include additional RBAC definitions requiring PrepareAD to be executed prior to upgrading any servers to CU10. PrepareAD will run automatically during the first server upgrade if Setup detects this is required and the logged on user has sufficient permission.
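If you prefer to run PrepareAD explicitly rather than letting the first server upgrade trigger it, the invocation below is a sketch of the typical command, run from the folder containing the extracted Cumulative Update files (the account used must hold the required Active Directory permissions, such as Enterprise Admins):

```powershell
# Run from the extracted CU10 media folder in an elevated prompt.
# The license-acceptance switch is mandatory for unattended Setup modes.
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms
```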

Microsoft recommends all customers test the deployment of a cumulative update in their lab environment to determine the proper installation process for their production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.
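Checking and, if necessary, adjusting the policy might look like the following (assumes an elevated PowerShell session on the server being upgraded; KB981474 covers additional scenarios, such as policies enforced via Group Policy):

```powershell
# Show the effective script execution policy on this machine.
Get-ExecutionPolicy

# If the result is not "Unrestricted", set it machine-wide before running Setup.
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope LocalMachine
```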

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., CU10) or the prior (e.g., CU9) Cumulative Update release.

Also being released today is Exchange Server 2010 Service Pack 3 Update Rollup 11 (KB3078674). This release provides an important fix for an Information Store crash that can occur when customers are upgrading their Lync server infrastructure to Skype for Business. Exchange Server 2010 is in extended support and will receive security and time zone fixes on demand on a go-forward basis.

The updates released today are important prerequisites for customers with existing Exchange deployments who will deploy Exchange Server 2016. Cumulative Update 10 is the minimum version of Exchange Server 2013 which will coexist with Exchange Server 2016. Exchange Server 2010 Service Pack 3 Update Rollup 11 is the minimum version of Exchange Server 2010 which will be supported in a coexistence deployment with Exchange Server 2016. No earlier versions of Exchange will be supported coexisting with Exchange Server 2016. Exchange Server 2016 is anticipated to be released later this year. Customers who plan to deploy this new release of Exchange into their environment should be aware of these important prerequisites.

For the latest information and product announcements please read What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Note: Documentation may not be fully available at the time this post was published.

The Exchange Team
