
Exchange Design Concepts and Best Practices

Spark the future. May 4-8, 2015, Chicago, IL

Microsoft Ignite 2015. © 2015 Microsoft Corporation. All rights reserved. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION. 5/8/2015 11:13 AM

Exchange Design Concepts and Best Practices
Boris Lokhvitsky, MCM | Exchange
Principal Consultant / Delivery Architect, Microsoft Consulting Services
BRK3131

Agenda
Architecture Concepts and Design Principles
- Exchange evolution: from a hardware- to a software-powered solution
- Exchange design principles
- Availability and reliability: the role of critical dependencies and redundant components
- Architecture models: shared infrastructure vs. building blocks
- Modern servers and datacenters: scale like the cloud
- Design options: supported vs. recommended vs. structured
- People and process, not just technology
How Architecture Drives Design Decisions
- Consolidated vs. distributed design
- Service site model: bound vs. unbound
- All about DAGs: sizing, IP-less DAG, database copy layout, and dedicated replication network
- Virtualization and role consolidation
- Storage challenges: SAN/DAS, RAID/JBOD, thin provisioning, native vs. low-level data replication
- Server and storage platform, disk and database layout
- Backups and lagged copies
- Archiving: retention vs. compliance
- Client breakdown and penalty factors


Architecture Concepts and Design Principles


Hardware- vs. Software-Powered Solution
- Exchange 2003: shared infrastructure with redundant hardware components
- Exchange 2010/2013/2016: commodity building blocks with software-controlled redundancy
- New architecture and design principles

I/O Meter: how much disk performance does an Exchange database need?


Exchange Design Principles
In the modern Exchange world, software, not hardware, powers and controls the solution.
Availability
- Reduce complexity and simplify the solution
- Decrease the number of system dependencies to improve availability and lower risk
- Use native capabilities where possible; they make the design simpler
- Deploy redundant solution components to increase availability and protect the solution
- Avoid failure domains: do not group redundant solution components into blocks that a single failure could impact
Functionality / Productivity
- Enable and enhance the user experience
- Provide the functionality and access that end users require or expect
- Provide large, low-cost mailboxes
- Use Exchange as a single data repository
- Increase value with Lync and SharePoint integration
- Build a bridge to the cloud: ensure feature-rich cloud integration and coexistence
Operations: optimize people and process, not just technology
- Decrease the complexity of team collaboration by using solution/workload-focused teams
- Simplify and optimize administration, monitoring, and troubleshooting
Cost: reduce or minimize the total cost of ownership (TCO) of the solution
- Use commodity hardware and leverage native product capabilities
- Implement a storage solution that minimizes cost, complexity, and administrative overhead


Availability and Reliability
Failures *do* happen!
Critical system dependencies decrease availability:
- Deploy multi-role servers
- Avoid intermediate and extra components (e.g. SAN, network teaming, archiving servers)
- A simpler design is always better: KISS
Redundant components increase availability:
- Multiple database copies
- Multiple balanced servers
Failure domains that combine redundant components decrease availability:
- Examples: SAN, blade chassis, virtualization hosts
Software, not hardware, drives the solution:
- Exchange-powered replication and Managed Availability
- Redundant transport and Safety Net
- Load balancing and proxying to the destination
Availability principles: "DAG: beyond the A"
http://blogs.technet.com/b/exchange/archive/2011/09/16/dag-beyond-the-a.aspx
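The availability argument above can be put in numbers. A minimal sketch, using an assumed 99% per-component availability (the figure is illustrative, not from the slide): serial critical dependencies multiply their availabilities down, while redundant copies compound toward 100%.

```python
# Illustrative availability arithmetic. The 99% figure is an assumed
# per-component availability, not a number from the slide.

def serial(availabilities):
    """A chain of critical dependencies: every component must be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(availability, copies):
    """Redundant copies of one component: at least one must be up."""
    return 1.0 - (1.0 - availability) ** copies

# Three 99%-available dependencies in series fall to ~97%...
print(serial([0.99, 0.99, 0.99]))
# ...while four redundant 99%-available database copies reach "eight nines".
print(parallel(0.99, 4))
```

This is why the slide treats every added SAN, chassis, or hypervisor as an availability cost, and every added database copy as an availability gain.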


Shared Infrastructure and Failure Domains
- Classical shared-infrastructure design introduces numerous critical dependency components
- Relying on hardware requires expensive redundant components
- Failure domains reduce availability and introduce significant extra complexity


Building Block Architecture
- Inexpensive commodity servers and storage
- Scale the solution out, not up; more servers mean better availability
- Nearline SAS storage: provide large mailboxes on large, low-cost drives
- Exchange I/O has been reduced by 93% since Exchange 2003
- An Exchange 2013 database needs ~10 IOPS; a single Nearline SAS disk provides ~60 IOPS; a single 2.5" 15K rpm SAS disk provides ~230 IOPS
- Redundancy and availability are provided and controlled by Exchange, not by the infrastructure
- 3+ redundant database copies eliminate the need for RAID and backups
- Redundant servers eliminate the need for redundant server components (e.g. NIC teaming or MPIO)
- The DAG is the ultimate building block that lets you scale the solution
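The I/O figures quoted above are easy to sanity-check; a small sketch using the slide's own numbers:

```python
# Sanity check of the I/O figures quoted on the slide.
DB_IOPS = 10          # typical Exchange 2013 database requirement
NL_SAS_IOPS = 60      # single 7.2K rpm Nearline SAS disk
SAS_15K_IOPS = 230    # single 2.5" 15K rpm SAS disk

# A single cheap NL-SAS spindle can absorb the I/O of several databases,
# so capacity (large mailboxes), not performance, drives the disk choice.
print(NL_SAS_IOPS // DB_IOPS)   # databases per NL-SAS disk, I/O-wise
print(SAS_15K_IOPS // DB_IOPS)  # databases per 15K SAS disk, I/O-wise
```

Even the slowest commodity spindle has I/O headroom for the four-copies-per-disk layout described later in the deck, which is the whole point of the Nearline SAS recommendation.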


Modern Server: Commodity Hardware
- Google, Microsoft, Amazon, and Yahoo! have used commodity hardware for more than 10 years
- Not only for messaging but for other technologies as well (it actually started with search)
- An inexpensive commodity server with storage is the building block
- Easily replaceable, highly scalable, extremely cost-efficient
- Software, not hardware, is the brain of the solution

Photo Credit: Stephen Shankland/CNET


People and Process (not just Technology)
- Decrease the complexity of team collaboration
- Simplify administration, monitoring, and troubleshooting
Solution-focused teams:
- A traditional application team owns only a small piece of the solution/workflow
- Multiple teams must be engaged to implement the design or troubleshoot an issue
- Team organization based on the solution, not on a specific infrastructure or technology, simplifies administration and troubleshooting and reduces operational costs
Own your solution! Or, if you can't do it right, reduce your OpEx by moving to O365 ;)


Exchange Product Line Architecture

Exchange PLA: a special, tightly scoped reference architecture offering from Microsoft Consulting Services
- Based on deployment best practices and collective customer experience
- Structured design based on detailed rule sets to avoid common mistakes and misconceptions
Based on cornerstone design principles:
- 4 database copies across 2 sites
- Unbound service site model (single namespace)
- Witness in a 3rd site
- Multi-role servers
- DAS storage with NL-SAS or SATA JBOD configuration
- L7 load balancing (no session affinity)
- Large, low-cost mailboxes (25/50 GB standard mailbox size)
- Access enabled for all internal/external clients
- System Center for monitoring
- Exchange Online Protection for messaging hygiene
Design maturity scale: Supported -> Recommended -> Structured -> Standardized ("You should be here!")


Exchange On-Premises Custom Design

Exchange On-Premises PLA Design

Exchange Online: Public Cloud (Office 365)

Exchange On-Premises recommended best practices

How Architecture Drives Design Decisions


Consolidated vs. Distributed Design
Consolidated design is usually the preferred option:
- Maximum reduction of server footprint, deployment costs, and administrative overhead
- Simplifies the use of a single unified namespace
- Still needs 2-3 datacenters for a proper site resilience implementation
Distributed design places servers closer to end users:
- Optimizes client-to-server traffic (except perhaps in DR scenarios)
- Might be driven by regulatory compliance requirements (keep data on home soil)
- Presents challenges for site resilience, DR design, firewalls, and administration
The choice is between client-server and server-server traffic.


Bound vs. Unbound Service Site Model
Bound service model:
- Binds user mailboxes to a preferred datacenter
- Uses site-specific namespaces
- Uses an active/passive DAG design
- Building block: two active/passive DAGs
- PLA recommended for Exchange 2010
Unbound service model (recommended):
- Users don't have a preferred datacenter
- Allows use of a single unified namespace
- Uses an active/active DAG design
- Building block: one active/active DAG
- PLA recommended for Exchange 2013


DAG Sizing: Small, Medium, Large?
DAG size matters because the DAG is the building block.
General guidance is to prefer a larger DAG:
- Larger DAGs provide better availability and load balancing
- If 1 server with X active mailboxes fails in an N-node DAG, the active mailbox count on each surviving server increases only by X/(N-1)
- A proper symmetric database copy layout is important to achieve good mailbox load balancing
Larger DAGs, however, have disadvantages too:
- Large DAGs are more vulnerable to network issues, as the number of network connections in an N-server DAG is N*(N-1)/2 (a 16-node DAG needs 240 P2P connections!)
- Intermittent network issues can leave databases unbalanced and DAG nodes evicted; see this article for details: http://aka.ms/partitioned-cluster-networks
- More impact on cluster writes and an increased failure zone
Scalability planning for growth also affects the decision:
- Adding just a few servers to an existing DAG is hard, as it requires database copy layout changes


Database Copy Layout Principles
Goal: provide a symmetric database copy layout to ensure even load distribution.
http://blogs.technet.com/b/exchange/archive/2010/09/10/3410995.aspx
(Figure: copy layout tables showing active database redistribution after Server3 and Server6 failures.)
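A minimal sketch of one way to generate an evenly distributed layout: a simple rotation, not the exact algorithm from the linked post. The 6-server, 18-database, 4-copy shape mirrors the example tables in the slide graphics.

```python
from collections import Counter

def layout(n_servers, n_databases, n_copies):
    """Place copy k of database d on server (d + k) % n_servers (simple rotation)."""
    return {d: [(d + k) % n_servers for k in range(n_copies)]
            for d in range(n_databases)}

plan = layout(n_servers=6, n_databases=18, n_copies=4)

# With counts that divide evenly, every server carries the same number of
# active databases (copy 0) and the same total number of copies.
actives = Counter(copies[0] for copies in plan.values())
totals = Counter(server for copies in plan.values() for server in copies)
print(dict(actives))  # 3 active databases on each of the 6 servers
print(dict(totals))   # 12 copies in total on each server
```

A plain rotation like this balances the steady state; real layouts (see the linked post) also vary the offsets so a server failure redistributes activations across all survivors rather than onto a single neighbor.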


Homeless DAG
New capability in Exchange 2013: a DAG without a Cluster Administrative Access Point (a.k.a. IP-less DAG)
http://blogs.technet.com/b/scottschnoll/archive/2014/02/25/database-availability-groups-and-windows-server-2012-r2.aspx
http://blogs.technet.com/b/timmcmic/archive/2015/04/29/my-exchange-2013-dag-has-gone-commando.aspx
Recommended and preferred model; the default in Exchange 2016.
Advantages: reduced complexity
- No need to deal with the cluster network object (CNO) computer account and permissions
- No need to reserve and manage DAG IP addresses
- A single cluster resource is left (the File Share Witness)
Disadvantages:
- Cannot use Failover Cluster Manager; must use PowerShell cluster cmdlets
- Might present issues for some 3rd-party applications that use the cluster name (e.g. backups); move away from those
Useful PowerShell cmdlets:
Get-Cluster -Name DAG01 | select *
Get-ClusterNode -Cluster DAG01 [-Name SVR01] | select *
Get-ClusterNetwork -Cluster DAG01 [-Name DAGNetwork01] | select *
Get-ClusterQuorum -Cluster DAG01 | fl
Get-ClusterGroup -Cluster DAG01
Move-ClusterGroup -Cluster DAG01 -Name "Cluster Group" -Node SVR01
Get-ClusterLog -Cluster DAG01


Network: HA/SR and the Replication Network
- High availability (HA) is redundancy of solution components within a datacenter
- Site resilience (SR) is redundancy across datacenters, providing a DR solution
- Both HA and SR are based on native Exchange data replication
- Each database exists in multiple copies, one of them active
- Data is shipped to passive copies via transaction log replication over the network
- It is possible to use a dedicated, isolated network for Exchange data replication
Network requirements for replication:
- Each active-to-passive database replication stream generates X bandwidth
- The more database copies, the more bandwidth is required
- Exchange natively encrypts and compresses replication traffic
Pros and cons of a dedicated replication network => not recommended:
- A replication network can help isolate client traffic from replication traffic
- A replication network must be truly isolated along the entire data transfer path: separate NICs that share the network path after the first switch are meaningless
- A replication network requires configuring static routes and eliminating cross-talk; this adds complexity and increases the risk of human error
- If server NICs are 10 Gbps capable, it's easier to have a single network for everything
- No need for network teaming: think of a NIC as JBOD
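The bandwidth point above can be sketched as a back-of-envelope model. The log generation rate and compression ratio below are hypothetical inputs for illustration, not Exchange-published figures; the shape of the formula (one full log stream per passive copy) is the slide's point.

```python
# Back-of-envelope replication bandwidth model (hypothetical inputs):
# every passive copy receives the database's full transaction-log stream.

def replication_mbps(log_mb_per_s, passive_copies, compression_ratio=0.7):
    """Aggregate log-shipping bandwidth in Mbps; Exchange can compress the stream."""
    return log_mb_per_s * 8 * passive_copies * compression_ratio

# Example: 5 MB/s of log generation, 3 passive copies, ~30% compression savings.
print(replication_mbps(5, 3))                         # compressed
print(replication_mbps(5, 3, compression_ratio=1.0))  # uncompressed
```

Bandwidth scales linearly with copy count, which is why adding database copies is primarily a network sizing exercise, not a disk one.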


Virtualization vs. Role Consolidation
- Virtualization introduces an additional critical solution component with associated performance and maintenance overhead
- It reduces availability and introduces extra complexity
- It could make sense for small deployments to help consolidate workloads, but this introduces shared infrastructure
- Consolidated roles have been the guidance since Exchange 2010, and there is only a single role in Exchange 2016!
- Deploying multiple Exchange servers on the same host creates a failure domain
- Hypervisor-powered high availability is not needed with a proper Exchange DAG design
- No real benefit from virtualization, as Exchange provides equivalent benefits natively at the application level


Storage Design Options / Challenges
Many designs are supported; there are three storage design dimensions.
SAN vs. DAS:
- SAN is NOT faster than DAS
- Reduce complexity
- No need for expensive, redundant, high-performing intermediate SAN components
- The SAN concept follows the shared-infrastructure model, not the building block
RAID vs. JBOD (RBOD):
- No need for disk redundancy: data redundancy has moved to the application level
- Think of Exchange 2013 servers as software RAID
- RAID is supported but doubles the disk count (assuming RAID-10) and the cost
- Enable controller caching: 75/25 write/read
FC vs. SAS vs. SATA:
- Large disks are needed to provide large mailboxes
- Exchange 2013 IOPS requirements are down ~93% from Exchange 2003!
- A typical Exchange 2013 database requires ~10 IOPS
- A 7200 rpm LFF (3.5") SATA/NL-SAS disk provides ~60 IOPS
- A 15K rpm SFF (2.5") SAS/FC disk provides ~230 IOPS
- No need for fast but small and expensive high-performing disks
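The RAID-vs-JBOD disk count claim works out as follows. The 9-disks-per-copy figure is borrowed from the typical disk layout slide later in the deck; the doubling factor for RAID-10 is the slide's claim.

```python
# The RAID vs. JBOD cost point: RAID-10 mirrors every spindle, doubling
# the disk count for each database copy.

def disks_needed(db_copies, disks_per_copy, raid10=False):
    factor = 2 if raid10 else 1   # RAID-10 needs a mirror for every data disk
    return db_copies * disks_per_copy * factor

print(disks_needed(4, 9))               # JBOD: 36 data disks across the DAG
print(disks_needed(4, 9, raid10=True))  # RAID-10: 72 data disks across the DAG
```

With four Exchange-managed copies already providing redundancy, the RAID mirror buys nothing except a doubled disk bill.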


RAID vs. JBOD with Native Replication
- Conceptually similar: the goal of replication is to introduce a redundant copy of the data
- Software-powered, not hardware-powered: application-aware replication
- Enables each server and its associated storage as an independent, isolated building block
- Exchange 2013 can reseed automatically using a hot spare (no manual action besides replacing the failed disk!)
- Finally, the cost factor: RAID 1/0 requires 2x the storage (and you still want 4 database copies for Exchange availability)!


Thick Risks of Thin Provisioning
- Exchange mailboxes will grow, but they don't consume that much on day 1
- The desire not to pay for full storage capacity upfront is understandable
- However, the inability to provision more storage and extend capacity quickly when needed is a big risk
- Successful thin provisioning requires significant operational maturity and process excellence, rarely seen in the wild
- Microsoft guidance and best practice is to use thick provisioning with low-cost storage
- Incremental provisioning can be considered a reasonable compromise

(Figure: mailbox storage growth under thick, thin, and incremental provisioning.)


Native vs. Low-Level Data Replication
- Exchange continuous replication is native transactional replication (based on transaction data shipping)
- The database itself is not replicated (transaction logs are played back into the target database copy)
- Each transaction is checked for consistency and integrity before replay (hence physical corruption cannot propagate)
- Page patching is automatically activated for corrupted pages
- The replication data stream can be natively encrypted and compressed (both settings are configurable; the default is cross-site only)
- In case of data loss, Exchange automatically reseeds or resynchronizes the database (depending on the type of loss)
- If a hot spare disk is configured, Exchange automatically uses it for reseeding (as a RAID rebuild does)


Typical Exchange Disk Layout
- Two mirrored (RAID1) disks for the system partition (OS; Exchange binaries; transport queues and logs)
- One hot spare disk
- Nine or more RBOD disks (single-disk RAID-0) for Exchange databases with collocated transaction logs
- Four database copies collocated per disk, not to exceed 2 TB database size
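The layout above is self-consistent with the 8 TB drives mentioned on the next slide; a quick check (the drive size is an assumption carried over from that slide):

```python
# Quick consistency check: with 8 TB data drives, four collocated database
# copies per disk land exactly on the slide's 2 TB database size cap.
DATA_DISKS = 9        # single-disk RAID-0 (RBOD) database disks
DBS_PER_DISK = 4      # database copies collocated per disk
DISK_TB = 8           # assumed drive size (from the following slide)

MAX_DB_TB = DISK_TB / DBS_PER_DISK
print(MAX_DB_TB)                    # 2.0 TB per database, the slide's cap
print(DATA_DISKS * DBS_PER_DISK)    # 36 database copies per server
```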


Need More? Layout for More Storage
- Some servers can house more than 12 LFF disks (up to 16 with a rear bay)
- DAS enclosures are already available that provide 720 TB of capacity in a single 4U unit (90 x 8 TB drives)!
- Scalability limit: still 100 database copies per server
- This means no more than 25 drives @ 4 databases/disk, or 50 drives @ 2 databases/disk
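The arithmetic behind the drive limits:

```python
# The drive-count limits follow directly from the 100-copies-per-server cap.
MAX_COPIES_PER_SERVER = 100

for dbs_per_disk in (4, 2):
    print(dbs_per_disk, "databases/disk ->",
          MAX_COPIES_PER_SERVER // dbs_per_disk, "drives max")
```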


Backup and Recovery
Challenge 1: backup or an extra copy?
- Exchange Native Data Protection: 3+ highly available database copies
- A lagged copy protects against unlikely scenarios (logical corruption, admin error)
- Replay Lag Manager with automatic copy conversion
- What will you do with your tapes?

Challenge 2: how do I recover data without a backup?
- Self-service recovery to restore items from Recoverable Items \ Deletions
- Single Item Recovery to protect items in Recoverable Items and restore them administratively via mailbox search
- Large mailboxes can accommodate a large dumpster: no need for an archive mailbox or 3rd-party archiving


Archiving: Retention vs. Compliance
Where is my archive?
- 10+ years ago, archiving enabled offloading Exchange data to lower-cost storage
- With large mailboxes on commodity storage, that no longer makes sense
- A single data repository is better than multiple solutions and lots of PST files
What do you need archiving for?
- There's still the in-place archive (a.k.a. online archive), which is just a second mailbox available in online mode
- Only needed for client performance reasons or when you cannot extend mailbox capacity
- Outlook 2013+ has the magic slider = no impact from a large OST file
- 1K / 20K / 100K / 1M items in critical-path folders (the client has an impact too); are you stubbing?
Retention:
- Retention helps users delete their data when it is no longer needed
- Retention tags and retention policies
Compliance:
- Compliance prevents users from deleting sensitive data that might be requested by legal
- Litigation Hold and In-Place Hold protect and preserve data at rest (in the mailbox)
- Data Loss Prevention (DLP) protects data in transit
- The Unified Compliance Center combines Exchange/Lync and SharePoint for eDiscovery


Client Breakdown
- In today's world, users access Exchange mailboxes from many clients and devices
- Cumulative client concurrency can be over 100%
- Penalty factors are measured in units of load created by a single Outlook client
- Some clients generate more server load than a baseline Outlook client
- The penalty factor should be calculated as a weighted average across all client types
Caveats of this model:
- Individual penalty factors are provided for illustration only and should be adjusted based on internal test results, client configurations, vendor guidance, and other relevant factors
- The penalty factor for BES 5 is based on the performance benchmarking guide published by Blackberry at http://aka.ms/bes5performance
- The penalty factor for Good is based on data published on the Good Portal at http://aka.ms/goodperformance
- Continue to monitor system utilization closely and adjust the sizing model as necessary as you scale out
Sample client breakdown calculation (for illustration only)
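The weighted-average model can be sketched with the sample client mix from the deck's illustration table. Note that the normalization used here (total client load scaled by concurrency, per mailbox) is an assumed reading of the model; it reproduces the deck's 1.22 / 1.26 aggregate figures.

```python
# Sample client mix from the deck's illustration table:
# (client, count, IO penalty, CPU penalty).
clients = [
    ("Outlook 2007",        0,    1,    1),
    ("Outlook 2010",        5000, 1,    1),
    ("Outlook 2013",        5000, 1,    1),
    ("OWA",                 1000, 1,    1),
    ("EWS",                 100,  1,    1),
    ("ActiveSync Device 1", 500,  2,    2),
    ("ActiveSync Device 2", 1000, 1.5,  1.5),
    ("ActiveSync Device 3", 500,  1,    1),
    ("Blackberry 5.x",      1000, 1.0,  1.5),
    ("Blackberry 10",       1000, 1,    1),
    ("GoodLink",            100,  2.16, 2.38),
]
MAILBOXES = 10000
CONCURRENCY = 0.75

# Aggregate penalty: total client load, scaled by concurrency, per mailbox.
# (This normalization is an assumption that reproduces the deck's figures.)
io  = sum(n * p for _, n, p, _ in clients) * CONCURRENCY / MAILBOXES
cpu = sum(n * p for _, n, _, p in clients) * CONCURRENCY / MAILBOXES
print(round(io, 2), round(cpu, 2))   # → 1.22 1.26
```

Note the client count (15200) exceeds the mailbox count, which is exactly the "cumulative concurrency over 100%" point from the slide.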


Exchange 2013 PLA Conceptual Design

- Four or more physical servers (collocated roles) in each DAG, split symmetrically between two datacenter sites
- Four database copies (one lagged, with Replay Lag Manager) for HA and SR, on DAS storage with JBOD; minimized failure domains
- Unbound service site model with a single unified, load-balanced namespace and the Witness in a 3rd datacenter


Main Takeaways
- Keep your design simple!
- Follow building block architecture principles
- Ensure sufficient availability
- Do your Exchange design right, or go to Office 365!


Sessions to Attend
BRK3197 - Exchange Server Preferred Architecture
BRK3129 - Deploying Exchange Server 2016
BRK3178 - Exchange on IaaS: Concerns, Tradeoffs, and Best Practices
BRK3173 - Experts Unplugged: Exchange Server Deployment and Architecture
BRK2189 - Desktop Outlook: Evolved and Redefined
BRK3102 - Experts Unplugged: Exchange Server High Availability and Site Resilience
BRK3125 - High Availability and Site Resilience: Learning from the Cloud and Field
BRK3147 - Meeting Complex Security Requirements for Publishing Exchange
BRK3160 - Mail Flow and Transport Deep Dive
BRK3163 - Making Managed Availability Easier to Monitor and Troubleshoot
BRK3180 - Tools and Techniques for Exchange Performance Troubleshooting
BRK3186 - Behind the Curtain: Running Exchange Online
BRK3206 - Exchange Storage for Insiders: It's ESE
BRK4105 - Under the Hood with DAGs
BRK4115 - Advanced Exchange Hybrid Topologies

Pre-Release Programs: Be First in Line!
Exchange & SharePoint On-Premises Programs
Customers get:
- Early access to new features
- The opportunity to shape features
- A close relationship with the product teams
- The opportunity to provide feedback
- Technical conference calls with members of the product teams
- The opportunity to review and comment on documentation
Get selected for a program:
- Sign up at Ignite at the Preview Program desk, OR
- Fill out a nomination: http://aka.ms/joinoffice
Questions:
- Visit the Preview Program desk in the Expo Hall
- Contact us at: [email protected]


Evaluation
Please complete the session evaluation!
Contact information:
E-mail: [email protected]
Profile: https://www.linkedin.com/in/borisl
Social: https://www.facebook.com/lokhvitsky

Visit MyIgnite at http://myignite.microsoft.com or download and use the Ignite Mobile App with the QR code above.
Please evaluate this session. Your feedback is important to us!


Thank You!

© 2015 Microsoft Corporation. All rights reserved.


(Appendix: residue of the deck's slide graphics. The recoverable content is summarized below.)

Figures included:
- Evolution: RAID-10 SAN nodes over FC with blade chassis and virtualization hosts as failure domains (BAD) vs. 4-node DAG building blocks on NL-SAS JBOD DAS with Exchange replication (GOOD)
- Team dependencies and solution complexity: many infrastructure-area teams (storage, network, load balancing, directory, security, etc.) around one application team is BAD; own the solution/technology, not an infrastructure area
- Consolidated deployment (server-server traffic over various protocols in the datacenter; client-server traffic over HTTPS, firewall OK) vs. distributed deployment (server-server traffic across the WAN, no firewalls)
- Bound model: two active/passive DAGs with site-specific namespaces (mail1.company.com, mail2.company.com) vs. unbound model: one active/active DAG with a single unified namespace (mail.company.com)
- Database copy layout tables: 18 databases across 6 servers and 3 disks, showing active database counts per server before and after Server3 and Server6 failures
- Virtualization: one multi-role VM per host adds no benefit over a physical consolidated server; multiple Exchange VMs on one host create a failure domain!
- Storage design dimensions (DAS vs. SAN, RAID vs. JBOD, SATA/NL-SAS vs. FC/SSD/SAS): the Exchange sweet spot is DAS + JBOD + NL-SAS; 2 servers / 2 copies is BAD, 3 servers / 3 copies is BETTER, 4 servers / 4 copies is BEST
- Thick vs. thin vs. incremental provisioning: projected used, unused, and underprovisioned mailbox storage, 2015-2019
- Exchange replication (transaction logs only; application-aware resiliency: no corruption propagation, page patching, auto failover, auto reseed, auto hot-spare use) vs. low-level replication (all data changed on disk; corruption can propagate; consistency check via eseutil /k takes a long time)
- Disk layouts on HP DL380p G9 / Dell R730xd / Cisco C240 M4 or similar (12+ 3.5" drives): RAID1 system disk pair, hot spare, 4 TB or 8 TB database disks with four collocated copies each; optional HP D3600 / Dell MD1400 enclosures; 3-node DAG + backup vs. 4-node DAG

Sample client breakdown table (for the Client Breakdown slide, illustration only):

Client type          | # of clients | IO penalty | CPU penalty
Outlook 2007         | 0            | 1          | 1
Outlook 2010         | 5000         | 1          | 1
Outlook 2013         | 5000         | 1          | 1
OWA                  | 1000         | 1          | 1
EWS                  | 100          | 1          | 1
ActiveSync Device 1  | 500          | 2          | 2
ActiveSync Device 2  | 1000         | 1.5        | 1.5
ActiveSync Device 3  | 500          | 1          | 1
Blackberry 5.x       | 1000         | 1.0        | 1.5
Blackberry 10        | 1000         | 1          | 1
GoodLink             | 100          | 2.16       | 2.38

Total mailboxes: 10000; concurrency: 75%; aggregate IO penalty: 1.22; aggregate CPU penalty: 1.26