
IDC Solution V100R001C31

White Paper

Issue 01

Date 2013-07-30

HUAWEI TECHNOLOGIES CO., LTD.


Copyright © Huawei Technologies Co., Ltd. 2014. All rights reserved.

No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions

HUAWEI and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.

All other trademarks and trade names mentioned in this document are the property of their respective holders.

Notice

The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied.

The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.

Address: Huawei Industrial Base

Bantian, Longgang

Shenzhen 518129

People's Republic of China

Website: http://enterprise.huawei.com


About This Document

Purpose

This document describes the services, architecture, and reference models of the Internet Data Center (IDC) solution.

Intended Audience

This document is intended for:
- Technical support engineers
- System engineers

Symbol Conventions

The symbols that may be found in this document are defined as follows.

Symbol    Description
DANGER    Indicates a hazard with a high level or medium level of risk which, if not avoided, could result in death or serious injury.
WARNING   Indicates a hazard with a low level of risk which, if not avoided, could result in minor or moderate injury.
CAUTION   Indicates a potentially hazardous situation that, if not avoided, could result in equipment damage, data loss, performance deterioration, or unanticipated results.
TIP       Indicates a tip that may help you solve a problem or save time.
NOTE      Provides additional information to emphasize or supplement important points of the main text.


Change History

Changes between document issues are cumulative. The latest document issue contains all the changes made in earlier issues.

Issue 01 (2013-07-30)

This issue is used for first office application (FOA).


Contents

About This Document
1 IDC Trend and Introduction
2 Customer Benefits
3 IDC Services
  3.1 Cloud Host Service
    3.1.1 Overview
    3.1.2 Highlights
    3.1.3 Logical Architecture
    3.1.4 Features
    3.1.5 Operation Model
  3.2 VPC Service
    3.2.1 Introduction
    3.2.2 Highlights
    3.2.3 Logical Architecture
    3.2.4 Technical Features
    3.2.5 Operation Model
  3.3 Massive Object Storage
    3.3.1 Overview
    3.3.2 Highlights
    3.3.3 Operation Model
  3.4 Cloud Backup
    3.4.1 Introduction
    3.4.2 Highlights
    3.4.3 Service Architecture
    3.4.4 Operation Model
4 IDC Architecture
  4.1 Computing and Storage Cloud-based Architecture
    4.1.1 Cloud OS
    4.1.2 Server + SAN Storage (Traditional Architecture)
    4.1.3 Server + FusionStorage Distributed Storage (Innovative Architecture)
    4.1.4 Comparison Between SAN Storage and FusionStorage
  4.2 Network
    4.2.1 Overview
    4.2.2 IDC Network Architecture
    4.2.3 Area-Based Architecture Design
    4.2.4 Data Center Interconnection Architecture
    4.2.5 Features
  4.3 Security
    4.3.1 Overview
    4.3.2 IDC Security Architecture
    4.3.3 IDC Security Layer Design
    4.3.4 IDC Security Features
  4.4 Management
    4.4.1 IDC Management Overview
    4.4.2 IDC Management Architecture
    4.4.3 Customer Benefits
  4.5 DR
    4.5.1 Overview
    4.5.2 Active/Active VIS DR Solution
    4.5.3 Array Replication-based DR Solution
    4.5.4 Comparisons Between DR Solutions
  4.6 Backup
    4.6.1 Overview
    4.6.2 VM-level Backup Solution
    4.6.3 CommVault Cloud Backup Solution
    4.6.4 Comparison Between Backup Solutions
5 IDC Reference Models
  5.1 Reference Model for a FusionStorage Small-scale Scenario
    5.1.1 Networking
    5.1.2 Device Configuration
  5.2 Reference Model for a FusionStorage Large-scale Scenario
    5.2.1 Networking
    5.2.2 Device Configuration
  5.3 Reference Model for the Rack Server + SAN Scenario
    5.3.1 Networking
    5.3.2 Device Configuration
  5.4 Reference Model for the Blade Server + SAN Scenario
    5.4.1 Networking
    5.4.2 Device Configuration
A Acronyms and Abbreviations


1 IDC Trend and Introduction

The IDC is a high-performance local area network (LAN) platform deployed in secure, reliable, and professional equipment rooms. The platform has high-speed Internet access bandwidth, implements professional operation and maintenance management, and provides comprehensive, operable applications for enterprises and individuals over the Internet. IDC carriers provide Internet-based IT platform services and various value-added services through their own or leased data centers for users such as enterprises, individuals, Internet service providers (ISPs), and Internet content providers (ICPs). Data centers built by enterprises for their own use are not IDCs.

IDC carriers fall into the following types:
- Traditional telecom carriers: have backbone networks, Internet bandwidth, and professional data equipment rooms.
- IDC carriers that own equipment rooms: generally traditional ISPs and ICPs who are familiar with applications.
- IDC carriers that lease equipment rooms: lease equipment rooms and bandwidth to build operable data centers.

Traditional IDC services include basic services (such as equipment room leasing, rack leasing, hosting, and host leasing), resource leasing (such as public IP address leasing, bandwidth leasing, and data storage services), and value-added services (such as data backup, bandwidth management, traffic analysis, load balancing, and security services). Traditional IDC markets, especially in Europe and America, are approaching saturation. Large-scale telecom carriers continuously build data centers worldwide to strengthen their global service capabilities and balance data center deployment. Most newly built data centers are cloud computing data centers that provide Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

With the development of e-commerce, e-Government, the Internet of Things (IoT), and the mobile Internet, the Internet industry offers rich applications, the number of users and the access volume of services keep increasing, and big data processing in the background grows rapidly. Big data differs from traditional data: big data analysis requires high performance and real-time processing and places different requirements on software and hardware architectures than traditional data analysis. Users have high requirements for service access speed, real-time access performance, and value-added services, which bring both opportunities and challenges for IDCs. Traditional IDCs face challenges in big data processing performance and new value-added services, including network automation, rapid application deployment, elastic resource utilization, and unified resource management.

Cloud computing technologies are changing the mode of the entire IT industry, improving the traditional IDC architecture and providing opportunities for the comprehensive transformation from the communications industry to the ICT industry. Cloud data centers use virtualization technologies and high-performance infrastructure and implement unified resource management and operation to provide on-demand services, which have become a mainstream market trend.


2 Customer Benefits

The IDC is an Internet-based network. It provides operation and maintenance services for devices that collect, store, process, and transfer data, and provides services for external networks. Traditional IDC services include basic services (such as equipment room leasing, rack leasing, hosting, and host leasing), resource leasing (such as public IP address leasing, bandwidth leasing, and data storage services), and value-added services (such as data backup, bandwidth management, traffic analysis, load balancing, and security services). The IDC brings the following values:

- Brings more profits for carriers: IDC vendors provide IDC hardware, software, services, and operation and maintenance capabilities for carriers. Carriers then use this hardware, software, and these services to provide related services for end users, such as enterprises and the public, to gain profits. With the development of IT technologies, especially cloud technologies, cloud-related services are becoming richer, and IDCs are becoming energy-saving, environmentally friendly, and easy to manage and maintain.

- Facilitates IT transformation: As the IDC develops, enterprises begin to assess architectures (including high-performance computing, in-memory computing technologies, and large-scale concurrent processing) and information management policies (especially information management of analysis domains) to meet the requirements of service departments. They realize that the traditional IDC faces challenges: insufficient capital and planning capabilities, high maintenance costs, difficult service expansion, and low efficiency. Therefore, the IT services brought by the IDC are the trend of IT-based enterprise development.

- Allows the public to use and share massive data: Nowadays, the public has high requirements for Internet information, such as microblogs, WeChat, mobile office, telephone reading, electronic documents, online music, online TV, and online video. How to provide these services while ensuring data security is important. The IDC can meet the public's requirements for massive information: it not only provides rich services and massive data, but also ensures data privacy, integrity, and reliability.


3 IDC Services

Traditional IDC carriers primarily provide resource leasing services, such as equipment room leasing, rack leasing, hosting, host leasing, public IP address leasing, and bandwidth leasing. These services are increasingly homogeneous, and traditional IDC carriers face great challenges in high electric power consumption, low resource utilization, and complex application deployment.

Cloud computing technologies are used to improve the resource utilization of IDCs, facilitate on-demand and rapid application deployment, and implement unified O&M and operation management. The new services brought by cloud IDCs generate new revenue streams for IDC carriers and improve their profitability.

3.1 Cloud Host Service

3.1.1 Overview

Cloud host services, brought by the IDC public cloud, offer flexible, on-demand VMs based on FusionSphere, the Huawei virtual computing environment. Based on the service deployment, users can lease VMs with different configurations and related value-added service features, and run standard or customized image files on the VMs.

Users have full control of leased cloud host resources, including usage, monitoring, and maintenance of basic cloud hosts and value-added features.

User data can be stored on cloud host instances. However, if a cloud host instance is damaged, the data on it is lost. Users therefore generally store data on Elastic Block Storage (EBS), which is independent of cloud host instances and provides reliable and elastic block storage services for cloud hosts. In addition, users can purchase BaaS for cloud host instances or EBS volumes to implement snapshot backup for key service data volumes or cloud host instances.

3.1.2 Highlights

- Elastic and on-demand resource allocation: Users can apply for resources as required, saving expenses.
- Flexible, rapid, and automated service deployment: Multiple cloud host specifications for the CPU, memory, storage, and operating system (OS) are supported. Service provisioning and deployment are rapid.
- Full control: Users have full control of cloud hosts, for example, logging in to VMs using virtual network computing (VNC) and having administrator rights.
- Real-time performance monitoring: Users can monitor performance, such as the CPU, memory, and disk utilization, disk input/output (I/O), and network I/O, in real time to learn about the service running status.
- Data security: VM-level backup is implemented based on the disk snapshot function of cloud hosts.
- Security: Network access among cloud hosts can be controlled by virtual firewalls (VFWs) and security group configuration. VPCs can be used to isolate networks. The IP Security (IPsec) virtual private network (VPN) technology is used to connect cloud data centers to enterprise IT centers.
- Various value-added network services: Value-added network services, such as elastic IP addresses and the elastic load balancer (ELB), are provided for users and can be configured through a web browser.

3.1.3 Logical Architecture

Mainstream application layers include the web server, application server, and database (DB) server. Cloud host services include VM instance installation as well as value-added services and features.

Cloud hosts not only provide VMs for users, but also provide a VM-centric comprehensive solution, including VM instances and value-added services and features, for user IT application deployment based on the cloud IDC architecture. The cloud host features are described as follows:

- VM instances supporting various specifications: After IDC infrastructures are virtualized, a virtualization resource pool that can be managed in a unified manner is constructed. IDC carriers can implement unified O&M and operation management for the resource pool (physical and virtual resources) by using FusionManager. Users can flexibly apply for virtual computing and storage resources based on their own service characteristics to create VM instances with different specifications.
- Cloud host usage: Users have full control and usage rights of cloud hosts, such as starting, stopping, and logging out of cloud hosts, modifying cloud host specifications, logging in to VMs using VNC for VM O&M, and monitoring cloud host performance on a specific page.
- Elastic IP address: Public IP addresses can be flexibly bound to, unbound from, or reclaimed from cloud hosts on the user portal. FusionManager automatically provisions the Network Address Translation (NAT) configuration to firewalls, so carriers' administrators do not need to configure NAT on firewalls manually.
- EBS volume: Users can create EBS volumes and attach them to VM instances to obtain reliable and elastic block storage services. Even if a VM instance is damaged, user data is not lost.
- Cloud host snapshot backup: Snapshot backup can be performed for VM instances or volumes of the leased cloud hosts. Users can configure and query backup policies and restoration tasks.

3.1.4 Features

Cloud Host Instance Supporting Various Specifications

Cloud OS software, consisting of virtualization platforms and cloud service platforms, is deployed in IDCs. This software virtualizes IDC hardware resources (computing, storage, and network resources) by using virtual computing, storage, and network technologies, and centrally manages virtual, service, and user resources.

- Computing virtualization: Computing virtualization enables physical server resources to be virtualized into virtual CPUs (vCPUs), memory, disks, and I/O resources. A server can be divided into a few or many separate VMs, while VM isolation and high security are ensured.
- Network virtualization: Network virtualization virtualizes the physical network resources of a server into multiple logical resources. For example, a NIC of the server can be virtualized into several or even hundreds of isolated virtual NICs. FusionCompute supports intelligent NICs to implement multi-queue, virtual switching, high QoS, and uplink aggregation functions. This improves the I/O performance of virtual NICs.
- Storage virtualization: Storage virtualization converts storage devices into logical data storage. A VM is stored as a set of files in a directory on the data storage. Data storage is a logical container that is similar to a file system. It hides the characteristics of each storage device and forms a unified model to provide disks for VMs. The storage virtualization technology helps the system manage virtual infrastructure storage resources with high resource utilization and flexibility. Data storage supports the virtual image management system and network file system.

Users can select VMs with different specifications based on IT resource consumption. Logical resources of VM instances include vCPUs, memory, system disks, data disks, and NICs. Table 3-1 lists the common instances.

Table 3-1 VM specifications

Type              vCPU  Memory (GB)  System Disk (GB)  Data Disk (GB)  Network

Standard host: Suitable for most applications.
Low-end host      1     2            60                130             100 Mbit/s
Mid-range host    2     4            60                260             100 Mbit/s
High-end host     4     8            60                520             100 Mbit/s
Super large host  8     16           60                1040            1 Gbit/s

Host with large memory capacity: Suitable for high-throughput applications such as database and cache applications.
Mid-range host    8     24           60                1040            1 Gbit/s
High-end host     16    48           60                2080            1 Gbit/s

Host with high computing performance: Suitable for computing-intensive applications.
High-end host     8     4            60                520             1 Gbit/s
Super large host  16    8            60                1040            1 Gbit/s
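As a rough illustration of how the catalog in Table 3-1 might be modeled when automating flavor selection, the following Python sketch encodes the table as a data structure and picks the smallest flavor that satisfies a workload's needs. The class and function names are hypothetical and are not part of the IDC solution's APIs.

```python
# Minimal sketch of Table 3-1 as a flavor catalog; names are illustrative only.
from dataclasses import dataclass

@dataclass
class CloudHostFlavor:
    family: str        # "standard", "large_memory", or "high_compute"
    name: str          # e.g. "Mid-range host"
    vcpus: int
    memory_gb: int
    system_disk_gb: int
    data_disk_gb: int
    network: str       # "100 Mbit/s" or "1 Gbit/s"

CATALOG = [
    CloudHostFlavor("standard", "Low-end host", 1, 2, 60, 130, "100 Mbit/s"),
    CloudHostFlavor("standard", "Mid-range host", 2, 4, 60, 260, "100 Mbit/s"),
    CloudHostFlavor("standard", "High-end host", 4, 8, 60, 520, "100 Mbit/s"),
    CloudHostFlavor("standard", "Super large host", 8, 16, 60, 1040, "1 Gbit/s"),
    CloudHostFlavor("large_memory", "Mid-range host", 8, 24, 60, 1040, "1 Gbit/s"),
    CloudHostFlavor("large_memory", "High-end host", 16, 48, 60, 2080, "1 Gbit/s"),
    CloudHostFlavor("high_compute", "High-end host", 8, 4, 60, 520, "1 Gbit/s"),
    CloudHostFlavor("high_compute", "Super large host", 16, 8, 60, 1040, "1 Gbit/s"),
]

def smallest_flavor(family: str, min_vcpus: int, min_memory_gb: int) -> CloudHostFlavor:
    """Return the smallest flavor in a family that satisfies the requested resources."""
    candidates = [f for f in CATALOG
                  if f.family == family and f.vcpus >= min_vcpus and f.memory_gb >= min_memory_gb]
    if not candidates:
        raise ValueError("no flavor satisfies the request")
    return min(candidates, key=lambda f: (f.vcpus, f.memory_gb))

if __name__ == "__main__":
    # A database workload that needs at least 8 vCPUs and 20 GB of memory.
    print(smallest_flavor("large_memory", 8, 20))
```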

Cloud Host Usage

Users have full control and usage rights of leased cloud hosts. The basic usage rights include:
- Logging in to VMs using VNC
- Starting, stopping, and logging out of cloud hosts
- Modifying cloud host specifications
- Monitoring cloud hosts

O&M administrators log in to VMs using VNC to maintain and operate cloud hosts, for example, to resolve problems during remote server login, set private IP addresses, and modify network configurations. IDC end users also require cloud host maintenance. In this case, the VNC service needs to perform NAT conversion across the public network.

Users are allowed to maintain and operate cloud hosts, such as starting, stopping, and logging out of cloud hosts, on the user portal.

Users can apply to modify cloud host specifications online or offline, including vCPUs, memory, storage capacity, and the number of NICs, based on the service running status.

Cloud hosts do not support the cloud monitoring function by default. The cloud monitoring function becomes available only after users subscribe to the cloud monitoring service. When cloud monitoring is enabled for a cloud host on the self-service portal, the operation management system, that is, the Cloud Service Brokerage (CSB), sends monitoring messages to the resource pool system (FusionManager). The resource pool system starts to obtain monitoring data in real time from the underlying virtualization platforms or network devices. The operation management system then regularly queries real-time monitoring data about specific resources from the resource pool system, and the monitoring data is displayed on the user portal according to customers' habits. When the performance report of a resource needs to be queried, the operation management system sends a report query to the operation and maintenance management (OMM) system. The OMM system queries performance report data for the specified time periods according to the resource ID and returns the data to the operation management system, which displays it on the user portal according to customers' habits. Important cloud host indicators include the CPU, memory, and disk utilization, disk I/O, and network traffic.
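The monitoring flow above can be summarized with a small Python sketch: monitoring stays off until the user subscribes, the resource pool system (standing in for FusionManager) collects metrics from the underlying platforms, and the operation management system (standing in for the CSB) queries and displays them. All class names, fields, and values here are illustrative assumptions, not the real CSB or FusionManager interfaces.

```python
# Illustrative sketch of the subscribe -> collect -> query -> display monitoring flow.
import random
import time

class ResourcePoolSystem:
    """Stands in for FusionManager: collects metrics from virtualization platforms."""
    def __init__(self):
        self.monitored = set()
        self.metrics = {}

    def start_monitoring(self, host_id: str):
        self.monitored.add(host_id)

    def poll_platforms(self):
        # In reality the data would come from hypervisors or network devices.
        for host_id in self.monitored:
            self.metrics[host_id] = {
                "cpu_util": random.uniform(0, 100),
                "mem_util": random.uniform(0, 100),
                "disk_io_kbps": random.uniform(0, 5000),
                "net_io_kbps": random.uniform(0, 10000),
                "timestamp": time.time(),
            }

    def query(self, host_id: str) -> dict:
        return self.metrics.get(host_id, {})

class OperationManagementSystem:
    """Stands in for the CSB: forwards the enable request and shows data on the portal."""
    def __init__(self, pool: ResourcePoolSystem):
        self.pool = pool

    def enable_cloud_monitoring(self, host_id: str):
        self.pool.start_monitoring(host_id)      # monitoring is off until subscribed

    def show_on_portal(self, host_id: str):
        data = self.pool.query(host_id)
        print(f"host {host_id}: " + ", ".join(f"{k}={v:.1f}" for k, v in data.items()))

if __name__ == "__main__":
    pool = ResourcePoolSystem()
    csb = OperationManagementSystem(pool)
    csb.enable_cloud_monitoring("vm-001")
    pool.poll_platforms()
    csb.show_on_portal("vm-001")
```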

Elastic IP Address

An elastic IP address is a public IP address that is bound to the first NIC of a VM. It allows users to access the VM over the public network by using a fixed address. An elastic IP address can be bound to only one VM at a time, and the elastic IP address bound to a VM is not released or changed when the VM is stopped or migrated.


Elastic IP addresses can be configured on the user portal. When a public IP address is bound to a cloud host, FusionManager automatically configures the NAT mapping between the public IP address and the private IP address of the cloud host on firewalls. When the public IP address is unbound from the cloud host, FusionManager automatically deletes the NAT mapping, and the public IP address is released for use by other cloud hosts.

Figure 3-1 Elastic IP address
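A minimal sketch of this bind/unbind behavior, assuming a simple in-memory NAT table in place of the real FusionManager-to-firewall provisioning, is shown below; the names and addresses are examples only.

```python
# Binding an elastic IP creates a NAT mapping; unbinding deletes it and frees the address.
class ElasticIpPool:
    def __init__(self, public_ips):
        self.free_ips = set(public_ips)
        self.nat_table = {}          # public IP -> private IP of the cloud host

    def bind(self, public_ip: str, private_ip: str):
        if public_ip not in self.free_ips:
            raise ValueError("elastic IP is already bound or unknown")
        self.free_ips.remove(public_ip)
        self.nat_table[public_ip] = private_ip   # NAT mapping pushed to the firewall

    def unbind(self, public_ip: str):
        self.nat_table.pop(public_ip, None)      # NAT mapping removed from the firewall
        self.free_ips.add(public_ip)             # address released for other cloud hosts

if __name__ == "__main__":
    pool = ElasticIpPool(["203.0.113.10", "203.0.113.11"])
    pool.bind("203.0.113.10", "192.168.1.5")     # an elastic IP maps to exactly one VM
    print(pool.nat_table)
    pool.unbind("203.0.113.10")
    print(pool.free_ips)
```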

EBS Volume

The Huawei Elastic Block Storage (EBS) provides block storage for cloud host instances. EBS volumes are independent of the storage devices of cloud host instances, and user data is stored on EBS volumes, so the data is not affected if a cloud host instance is damaged. EBS volumes are suitable for databases, file systems, or raw data devices. The EBS volume capacity ranges from 10 GB to 1 TB. EBS volumes can be attached to different cloud host instances and function as local disks. One cloud host instance can be attached with multiple EBS volumes, but one EBS volume can be attached to only one cloud host instance at a time.

Figure 3-2 Logical architecture of the EBS
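The attachment rules above (10 GB to 1 TB per volume, several volumes per host, one host per volume at a time) can be captured in a short Python sketch; the class is a hypothetical illustration, not the EBS API.

```python
# A toy model of EBS volume sizing and attachment constraints.
class EbsVolume:
    MIN_GB, MAX_GB = 10, 1024

    def __init__(self, volume_id: str, size_gb: int):
        if not (self.MIN_GB <= size_gb <= self.MAX_GB):
            raise ValueError("EBS volume capacity must be between 10 GB and 1 TB")
        self.volume_id = volume_id
        self.size_gb = size_gb
        self.attached_to = None        # at most one cloud host instance

    def attach(self, host_id: str):
        if self.attached_to is not None:
            raise RuntimeError(f"{self.volume_id} is already attached to {self.attached_to}")
        self.attached_to = host_id

    def detach(self):
        self.attached_to = None

if __name__ == "__main__":
    vol_a, vol_b = EbsVolume("vol-a", 100), EbsVolume("vol-b", 500)
    vol_a.attach("vm-001")
    vol_b.attach("vm-001")             # one host can use multiple volumes
    vol_a.detach()
    vol_a.attach("vm-002")             # a detached volume can move to another host
```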


Cloud Host Snapshot Backup

The VM snapshot backup service is available for users who lease cloud hosts from carriers. This service enables users to back up and restore leased VMs by using snapshots, ensuring data security. Users can configure backup policies and restore VMs as required, and can enable the VM-level backup service in one-click mode without building additional backup systems. Figure 3-3 shows the logical architecture of VM-level snapshot backup.

Figure 3-3 Logical architecture of VM-level backup

(The figure depicts end users, the VMs of enterprises A and B with local snapshots and snapshot backups, the backup storage resource, and the OMM system, with separate control data traffic and backup data traffic flows.)

The logical architecture consists of the OMM system, FusionCompute, and the HyperDP backup module. The OMM system provides a service management portal and an operation service portal. HyperDP cooperates with FusionCompute to back up VMs or specified VM volumes based on specified policies. When VM data is lost or VMs are faulty, data can be restored from the backups. Backup data is stored on the virtual disks attached to the HyperDP VM or on peripheral storage devices of the NFS/CIFS shared file system.

The VM-level backup service enables users to configure the full backup policy, incremental backup policy, backup cycle, backup period, and backup data expiration policy. Different types of VMs can be configured with different backup policies.
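As an illustration of how such per-VM-type policies could be expressed, the sketch below defines a hypothetical backup policy structure with a weekly full backup, incremental backups, a backup window, and a retention period. The field names and values are assumptions for the example only.

```python
# A toy backup policy model: full/incremental cycles, a backup window, and expiration.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class BackupPolicy:
    full_backup_weekday: int        # 0 = Monday; weekly full backup
    incremental_interval_days: int  # incremental backups between full backups
    backup_window: str              # e.g. "02:00-04:00"
    retention_days: int             # expired backups are deleted

    def is_backup_day(self, day: date) -> bool:
        return day.weekday() == self.full_backup_weekday or \
               (day.toordinal() % self.incremental_interval_days == 0)

    def expired(self, backup_day: date, today: date) -> bool:
        return today - backup_day > timedelta(days=self.retention_days)

# Different VM types can carry different policies.
policies = {
    "database_vm": BackupPolicy(full_backup_weekday=6, incremental_interval_days=1,
                                backup_window="02:00-04:00", retention_days=30),
    "web_vm": BackupPolicy(full_backup_weekday=6, incremental_interval_days=3,
                           backup_window="03:00-05:00", retention_days=14),
}

if __name__ == "__main__":
    today = date(2013, 7, 30)
    for vm_type, policy in policies.items():
        print(vm_type, "backs up today:", policy.is_backup_day(today))
```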

Carriers' administrators can manage the VM snapshot backup service (including user management, service management, and charging and measuring management) over the service management portal. End users can subscribe to, unsubscribe from, and renew the VM snapshot backup service, configure backup policies, back up and restore data, and query resource usage over the operation service portal.

The HyperDP backup module implements the data backup and restoration functions. This module automatically backs up data based on policies, stores and manages end users' backup data, records detailed backup and restoration operations, and provides detailed resource usage and measuring information. Backup storage devices provide physical storage space for backup data.

3.1.5 Operation Model

The cloud host service is charged at a flat monthly or yearly rate based on VM specifications. The following cloud host features are charged:

- Cloud host: Cloud hosts are charged at a flat monthly or yearly rate or by usage volume. The charging period is an hour. This is a postpaid mode.
- Cloud monitoring: Cloud monitoring services are charged at a flat monthly or yearly rate.
- Elastic IP address: Elastic IP addresses are charged at a flat monthly or yearly rate based on the quantity and usage time of elastic IP addresses.
- ELB: ELBs are charged at a flat monthly or yearly rate.
- VFW: VFWs are charged at a flat monthly or yearly rate based on VFW specifications.
- EBS volume: EBS volumes are charged at a flat monthly or yearly rate based on the EBS volume capacity.
- VM snapshot backup: The cloud host VM snapshot backup service is charged based on the backup object size (GB) multiplied by hours. This is a postpaid mode.
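The two charging modes listed above can be illustrated with a short Python sketch; the prices are invented for the example and do not reflect any real tariff.

```python
# Flat-rate charging for most features, usage-based (postpaid) charging for cloud hosts
# by the hour and for snapshot backup by GB multiplied by hours.
def flat_rate_charge(monthly_price: float, months: int) -> float:
    """Flat monthly (or yearly, with months=12) rate, e.g. for ELB, VFW, or EBS."""
    return monthly_price * months

def cloud_host_usage_charge(hourly_price: float, hours_used: int) -> float:
    """Postpaid cloud host charging with an hourly charging period."""
    return hourly_price * hours_used

def snapshot_backup_charge(price_per_gb_hour: float, backup_size_gb: float, hours: int) -> float:
    """Postpaid snapshot backup charging: backup object size (GB) multiplied by hours."""
    return price_per_gb_hour * backup_size_gb * hours

if __name__ == "__main__":
    print(flat_rate_charge(20.0, 12))                    # one year of an EBS volume
    print(cloud_host_usage_charge(0.10, 30 * 24))        # a cloud host used for a month
    print(snapshot_backup_charge(0.0005, 200, 30 * 24))  # 200 GB kept for a month
```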

3.2 VPC Service

3.2.1 Introduction

The VPC service creates a private network in the public cloud. The private network can communicate with user networks. Enterprises can apply for VPCs on the public cloud platform, use independent IP addresses and address spaces in their VPCs, and are isolated from other networks outside their VPCs.

Enterprises can use VPN gateways to connect VPCs to their existing networks, so the virtual networks function as sub-networks of the existing enterprise networks. VPCs support web, database, email, OA, CRM, and HR applications, and enterprise data can be backed up to cloud hosts in VPCs.

VPNs are provided by carriers or created by customers themselves. Therefore, complex host configuration is not required.


Figure 3-4 Logical VPC architecture

(The figure depicts enterprises A and B, as well as enterprise A's employees on business trips, connecting through VPN gateways to the core switch of the cloud data center.)

For enterprises, the private cloud uses virtualization technology to isolate network and computing resources, because cloud resources can be accessed only through VPNs. The virtualization layer on the cloud also uses isolation technology to ensure that users can use VMs exclusively. The private cloud simplifies deployment: a user can specify an internal IP address for a VM and configure the subnet and routing table, seamlessly connecting the VM to the enterprise intranet. The VM functions as if it were a local physical server. It provides powerful computing performance, uses encrypted link channels for communication, and is protected by firewall rules. In addition, the VM provides value-added services, such as load balancing and multi-firewall security groups. Therefore, more optional services and deployment modes are available for enterprise IT systems.

3.2.2 Highlights

Elastic Resource Expansion

Cloud computing has the capability to dynamically adjust resources. Each enterprise can dynamically apply for VPC resources over the self-service portal. Each VPC is established on physical resources, but from the users' perspective they can obtain virtually unlimited resources from the cloud. Elastic resource expansion is suitable for rapidly growing enterprises and supports smooth expansion from a small-scale database to a large-scale database.

Flexible Resource Management

Users can centrally manage the VMs added to VPCs over the self-service portal. Users have permissions to monitor their own resources and query the resource running status and active services. Users can also customize resource adjustment methods and group resources. Resource online and exit operations are performed on the unified VPC management portal, which is more convenient than the decentralized management mode of the traditional data center.


Secure and Reliable Network

Enterprises' internal IT systems run in VPCs, and sensitive user data is transmitted over the network. Therefore, secure and reliable networks are required to ensure data security. VPCs use IPsec VPNs to access enterprises. Data link reliability is ensured and data transmission is encrypted, which prevents user data from being intercepted.

3.2.3 Logical Architecture

Figure 3-5 shows the logical architecture of the VPC service.

Figure 3-5 Logical architecture of the VPC service

(The figure shows enterprises A and B, each with an enterprise IT administrator, enterprise users, and enterprise IT resources, connecting over the carrier's bearer network to VM service areas built on virtualization platforms and IP SAN storage in the IDC; the IDC administrator manages the environment through the IDC operation and O&M server group using SSH, RDP, and VNC, and the legend distinguishes public network, platform management/service configuration, and physical connections.)

The carrier's IDC provides a large resource pool, which includes computing resources, storage resources, virtual networks, VFWs, and ELBs. Users obtain VPC resources from the resource pool without concerning themselves with where the resources are located or how they are provided. Only the unified OMM interface is visible to users.

Carriers' administrators use a unified management platform to manage the physical and virtual resources of the entire data center. They approve users' applications for VPC resources and handle routine operation and O&M affairs. Users subscribe to and change services over the self-service portal. Users can apply for services; after carriers' administrators approve an application, the services are automatically provisioned, and resources are automatically created and allocated.

By using VPN tunnels, users connect to the VPCs that they have applied for from the IDC, and the VPCs become part of the enterprise IT resources. User data transmission is encrypted at the enterprise and IDC edges and is transparent to public networks. VPC IP addresses are assigned by users and can be planned based on actual situations.

In the IDC, network isolation covers everything from access devices to edge egress devices. Therefore, different tenants can use the same VPC IP address segment, and users do not need to worry about IP address conflicts within a carrier's data center.

3.2.4 Technical Features

VPC Service Usage

Users can apply for, renew, unsubscribe from, and configure services over the OMM system user portal. Operation administrators can manage VPC specifications over the management portal.

VPC specification management: Operation administrators log in to the CSB management platform to manage VPC specifications. When VPC specifications are created, operation administrators must set the maximum number of networks and public network IP addresses, the network bandwidth, and the priority. Networks include the direct network, routing network, and intranet.

Service application: Users can customize VPC names and template types to apply for services. After the application has been approved by carriers' administrators, resources are automatically provisioned.

Service renewal: When services expire, they are frozen. In this case, users can renew the services.

Service unsubscription: If a user no longer wants to use a service, the user can unsubscribe from the service to release the resources.

Users configure IPsec VPN tunnels on VPC VFWs and enterprise edge network devices. All enterprise data is transmitted through the IPsec tunnels. After VPN tunnels have been enabled, enterprises can plan or customize IP address segments for VPCs based on their intranets. The gateways of the IP address segments are configured on the VFWs.
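A sketch of the tenant-supplied VPC configuration described above, with customer-planned subnets whose gateways sit on the VFW and an IPsec tunnel between the VPC VFW and the enterprise edge device, is given below. All field names and addresses are illustrative assumptions.

```python
# A toy VPC configuration: tenant-chosen subnets, VFW-hosted gateways, one IPsec tunnel.
from dataclasses import dataclass, field
from ipaddress import ip_network

@dataclass
class IpsecTunnel:
    vfw_public_ip: str          # tunnel endpoint on the VPC VFW
    enterprise_gateway_ip: str  # tunnel endpoint on the enterprise edge device
    pre_shared_key: str

@dataclass
class VpcConfig:
    name: str
    subnets: dict = field(default_factory=dict)   # CIDR -> gateway IP configured on the VFW
    tunnel: IpsecTunnel = None

    def add_subnet(self, cidr: str):
        net = ip_network(cidr)
        gateway = str(list(net.hosts())[0])        # first usable address as the VFW gateway
        self.subnets[cidr] = gateway

if __name__ == "__main__":
    vpc = VpcConfig(name="enterprise-a-vpc")
    vpc.add_subnet("10.10.1.0/24")                 # tenants may reuse overlapping ranges,
    vpc.add_subnet("10.10.2.0/24")                 # since isolation spans access to egress
    vpc.tunnel = IpsecTunnel("198.51.100.20", "192.0.2.1", "example-psk")
    print(vpc.subnets)
```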

ELB

The ELB service allows an ELB to be virtualized into multiple LBs, which provide load balancing for each tenant separately. Tenants can customize load balancing policies, algorithms, listeners, uploaded certificates, and weights. The ELB, a value-added feature for multi-cloud-host solutions, must be used along with cloud hosts. The ELB can automatically distribute the service traffic of applications to multiple cloud hosts, check the health status of the cloud hosts, and automatically isolate abnormal cloud hosts. This ensures good performance, capacity expansion, and stability of a server. In addition, the ELB enhances the threat prevention capability of the server pool and ensures secure isolation of applications and servers.


Figure 3-6 Logical architecture of the ELB
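The traffic distribution and health-check behavior described above can be illustrated with a simplified Python sketch that uses weighted round robin and removes unhealthy hosts from rotation; it is not the ELB's actual algorithm or interface.

```python
# A toy load balancer: weighted round robin plus health checks that isolate bad hosts.
import itertools

class LoadBalancer:
    def __init__(self, backends):
        # backends: {host_ip: weight}
        self.backends = dict(backends)
        self.healthy = set(self.backends)
        self._cycle = None

    def health_check(self, probe):
        """probe(host_ip) -> bool; unhealthy hosts are removed from rotation."""
        self.healthy = {h for h in self.backends if probe(h)}
        self._cycle = None                      # rebuild the rotation after a check

    def next_backend(self):
        if not self.healthy:
            raise RuntimeError("no healthy cloud hosts available")
        if self._cycle is None:
            # Simple weighted round robin: repeat each healthy host by its weight.
            pool = [h for h in self.healthy for _ in range(self.backends[h])]
            self._cycle = itertools.cycle(pool)
        return next(self._cycle)

if __name__ == "__main__":
    lb = LoadBalancer({"10.0.0.11": 2, "10.0.0.12": 1, "10.0.0.13": 1})
    lb.health_check(lambda host: host != "10.0.0.13")   # pretend one host is down
    print([lb.next_backend() for _ in range(6)])
```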

The key components in the logical architecture include the OMM system and F5 LBs or software load balancers (SLBs).

The OMM system is the core software of the public cloud data center. It is responsible for data center operation, maintenance, and monitoring as well as resource pool management and scheduling.

The unified portal module is used by users and carriers' administrators to access data center services.

The operation management module is used to apply for, unsubscribe from, provision, cancel, and delete services. By using this module, individual and enterprise users can apply for, unsubscribe from, and configure services, and carriers' administrators can customize service packages and provision services.

The resource pool management module connects to the underlying hardware devices or software, converts between user requests and underlying functions, and supports correct configuration on the related devices or software.

F5 LBs support the load balancing function, providing load balancing for each user separately. Requests sent by each user can be configured independently, and the configurations do not conflict because load balancing is isolated.

The resource pool management module connects to the F5 LBs. The operation management module sends user requests to the resource pool management module, which converts the requests into functional instructions and sends them to the hardware devices or software for configuration. After the configuration is implemented, the hardware devices or software return a value to the resource pool management module, which in turn returns a value to the operation management module, indicating that the service has been configured.

F5 LBs provide command-line interfaces (CLIs) and application programming interfaces (APIs). The resource pool management module can invoke these interfaces to provision LB configuration to the F5 LBs based on users' instructions.


VFW

A physical firewall is virtualized into multiple VFWs, and each tenant can apply for one or multiple VFWs. The functions of a VFW are the same as those of the physical firewall. Tenants can customize the server gateway IP address, access control list (ACL), application specific packet filter (ASPF), and the names and number of security domains to meet different security requirements.

The VFW service is a value-added security service provided by carriers. The VFW offers configurable and customizable network firewall functions, which cover most of the functions of common firewalls. The VFW also helps create a reliable security protection layer. This layer protects users' cloud resources from external attacks and supports customized policies that allow authorized users to access cloud resources. Users can apply for VFWs by themselves, and a VFW can be deployed and enabled in several minutes.

Figure 3-7 Logical architecture of the VFW

The key components in the logical architecture include the OMM system, the firewall, and the virtualization platform.

The OMM system is the core software of the public cloud data center. It is responsible for data center operation, maintenance, and monitoring as well as resource pool management and scheduling.

The unified portal module is used by users and carriers' administrators to access data center services.

The operation management module is used to apply for, unsubscribe from, provision, cancel, and delete services. By using this module, individual and enterprise users can apply for, unsubscribe from, and configure services, and carriers' administrators can customize service packages and provision services.

The resource pool management module connects to the underlying hardware devices or software, converts between user requests and underlying functions, and supports correct configuration on the related devices or software.


The Huawei firewall supports the VFW function, providing a VFW for each user. VFWs can be configured independently, and VFW configurations do not conflict.

The resource pool management module connects to the Huawei firewall. The operation management module sends user requests to the resource pool management module, which converts the requests into functional instructions and sends them to the Huawei firewall for configuration. After the configuration is implemented on the Huawei firewall, the firewall returns a value to the resource pool management module, which in turn returns a value to the operation management module, indicating that the service has been configured.

The Huawei firewall provides a CLI. The resource pool management module can invoke the CLI to provision VFW configuration based on users' instructions.
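The provisioning flow described above, in which the operation management module hands a request to the resource pool management module, which translates it into device instructions and reports a return value, can be sketched as follows. The command strings and class names are hypothetical and do not correspond to real firewall or F5 syntax.

```python
# A toy request -> instruction -> device -> return-value provisioning chain.
class Device:
    """Stands in for a Huawei firewall or an F5 LB reachable over CLI/API."""
    def execute(self, command: str) -> int:
        print(f"[device] executing: {command}")
        return 0                                   # 0 = configuration applied

class ResourcePoolManager:
    def __init__(self, device: Device):
        self.device = device

    def handle_request(self, request: dict) -> int:
        # Convert an abstract user request into concrete device instructions.
        if request["type"] == "create_vfw":
            commands = [f"create vfw {request['tenant']}",
                        f"set gateway {request['gateway_ip']}"]
        else:
            raise ValueError("unsupported request type")
        return max(self.device.execute(c) for c in commands)

class OperationManager:
    def __init__(self, pool_manager: ResourcePoolManager):
        self.pool_manager = pool_manager

    def provision(self, request: dict) -> str:
        code = self.pool_manager.handle_request(request)
        return "service configured" if code == 0 else "provisioning failed"

if __name__ == "__main__":
    omm = OperationManager(ResourcePoolManager(Device()))
    print(omm.provision({"type": "create_vfw", "tenant": "enterprise-a",
                         "gateway_ip": "10.10.1.1"}))
```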

Security Group

The security group feature provides network isolation capabilities in the VPC.

You can add IP addresses in the VPC to a security group and set access rules for different security groups. These IP addresses do not need to be consecutive; that is, the IP addresses in a security group can belong to different segments.

Inter-security-group access rules can be configured as follows:
- Allow or forbid requests from another security group to access the security group through specified protocols and ports.
- Allow or forbid requests from a specified IP address segment to access the security group through specified protocols and ports.
- IP addresses in the same security group are not restricted from accessing each other.

In this way, refined control can be implemented over internal network security behavior in the VPC, as shown in Figure 3-8.

Figure 3-8 Network security control
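A simplified Python model of these semantics, in which rules reference either another security group or an IP segment and members of the same group can always reach each other, is shown below; it is an illustration only, not the product's rule engine.

```python
# A toy security group: allow/forbid rules by source group or IP segment, protocol, and port.
from ipaddress import ip_address, ip_network

class SecurityGroup:
    def __init__(self, name, members):
        self.name = name
        self.members = set(members)          # member IPs need not be consecutive
        self.rules = []                      # (action, source, protocol, port)

    def add_rule(self, action, source, protocol, port):
        """source is another SecurityGroup or a CIDR string such as '203.0.113.0/24'."""
        self.rules.append((action, source, protocol, port))

    def allows(self, src_ip, protocol, port):
        if src_ip in self.members:           # same group: unrestricted access
            return True
        for action, source, proto, p in self.rules:
            if proto != protocol or p != port:
                continue
            if isinstance(source, SecurityGroup):
                if src_ip in source.members:
                    return action == "allow"
            elif ip_address(src_ip) in ip_network(source):
                return action == "allow"
        return False                         # no matching rule: deny by default

if __name__ == "__main__":
    web = SecurityGroup("web", {"10.10.1.10", "10.10.2.10"})
    db = SecurityGroup("db", {"10.10.3.10"})
    db.add_rule("allow", web, "tcp", 3306)               # web tier may reach the database
    db.add_rule("forbid", "0.0.0.0/0", "tcp", 3306)      # everyone else is refused
    print(db.allows("10.10.1.10", "tcp", 3306))          # True
    print(db.allows("203.0.113.5", "tcp", 3306))         # False
```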


3.2.5 Operation Model

Enterprise users subscribe to VPCs over the self-service portal. Network resources are allocated to enterprise users after carriers' administrators approve the subscription. Enterprise users can add subscribed cloud hosts to VPCs; cloud hosts are paid per use. IPsec VPN tunnels are used to connect cloud hosts in VPCs to internal enterprise IT systems. Enterprises only have permission to use physical resources but do not own them. When enterprises unsubscribe from services, the resources are reclaimed for other VPCs.

The VPC service is free; only the VFW, ELB, and security group features are charged.

3.3 Massive Object Storage

3.3.1 Overview

Designed based on Huawei cloud storage technologies, carrier service characteristics, and Huawei's rich experience, the massive object storage solution constructs a systematic, operable, manageable, maintainable, compatible, and reliable service system.

By using the massive object storage service, carriers can offer storage resource pool products with massive capacities to end users. The operation management platform (CSB) provides a tenant GUI portal for creating and deleting buckets, managing UDSKEYs, and uploading and downloading objects.
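As a rough illustration of the tenant-facing operations listed above, the sketch below models buckets, access keys, and object upload and download in memory; the real service is used through the CSB portal or its interfaces, which are not reproduced here.

```python
# A toy in-memory model of tenant object storage operations.
import secrets

class ObjectStorageTenant:
    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id
        self.access_keys = set()
        self.buckets = {}                    # bucket name -> {object key: bytes}

    def create_access_key(self) -> str:
        key = secrets.token_hex(16)          # stands in for a UDSKEY
        self.access_keys.add(key)
        return key

    def create_bucket(self, name: str):
        self.buckets.setdefault(name, {})

    def delete_bucket(self, name: str):
        if self.buckets.get(name):
            raise RuntimeError("bucket is not empty")
        self.buckets.pop(name, None)

    def put_object(self, bucket: str, key: str, data: bytes):
        self.buckets[bucket][key] = data

    def get_object(self, bucket: str, key: str) -> bytes:
        return self.buckets[bucket][key]

if __name__ == "__main__":
    tenant = ObjectStorageTenant("enterprise-a")
    tenant.create_access_key()
    tenant.create_bucket("backups")
    tenant.put_object("backups", "2013-07-30/db.dump", b"...")
    print(tenant.get_object("backups", "2013-07-30/db.dump"))
```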

3.3.2 Highlights

Massive Cloud Storage

The Huawei-developed massive cloud storage service integrates a universal hardware platform, advanced storage software, an intelligent management system, and flexible service software. The massive cloud storage service meets the storage requirements of massive and ever-increasing data and provides operable cloud storage solutions, satisfying the requirements of telecom carriers, government agencies, universities, large enterprises, industrial customers, VPBs, and data center value-added services. It also supports TB-level, PB-level, and EB-level capacity expansion. Users can expand storage capacity on demand, helping reduce costs.

Powerful Storage Management

The object storage service provides the following storage management functions:
- Enables administrators to manage enterprise and individual tenants, and enables enterprise tenants to manage sub-tenants.
- Manages storage capacities allocated to tenants and sub-tenants, tenant information, and permissions to access paths.
- Meets the SLA and provides independent physical or logical storage capacities for leasing.
- Stores data of different types, including images, videos, audio, and files.


Transmission and Storage Security

Data security of the object storage service is ensured from the following perspectives:

Transmission security

Data transmission is encrypted and carried over HTTPS, which ensures data transmission security.

Storage security

User data is segmented, and the data segments are stored on different storage nodes and hard disks by using a private storage protocol. As a result, attackers cannot obtain user data by attacking only one storage node or hard disk, and a faulty storage node or hard disk can be discarded without data clearing. Key user authentication information, such as passwords, is encrypted before being stored in the Cookie. This prevents unauthorized users from obtaining user authentication information by analyzing Cookie content.

Access security

The object storage mechanism is used. Data is stored as objects in specified containers.

Containers are logically isolated, and data cannot be accessed without authorization codes and signature keys, which ensures data access security.

A hierarchical group management mechanism is used to manage users. Operation permissions are granted to service users and system administrators to control operations.

Data persistency

The object storage service resource pool ensures service reliability with 99.999% availability and 99.9999% persistency. Redundancy is used to protect the stored data: each data object has multiple duplicates, which are allocated to different physical nodes. If one node is faulty, data can be rapidly restored from the other nodes.
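For intuition only, the following back-of-envelope sketch illustrates how keeping multiple copies on independent nodes raises availability. It assumes independent node failures and an illustrative per-node availability of 99%; it is not how the figures quoted above were derived.

# For intuition only: a back-of-envelope estimate of how multiple copies on
# independent nodes improve data availability. The per-node availability is
# an illustrative assumption, not a Huawei figure.
def data_availability(node_availability: float, copies: int) -> float:
    # Data is unavailable only if every node holding a copy is down at once.
    p_node_down = 1.0 - node_availability
    return 1.0 - p_node_down ** copies

if __name__ == "__main__":
    for copies in (1, 2, 3):
        a = data_availability(0.99, copies)   # assume each node is 99% available
        print(f"{copies} copy/copies -> {a:.6f}")
    # Under these simplifying assumptions, 3 copies reach 0.999999 (six nines).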

Unified Management

The object storage service provides the unified service management, storage service, resource

management, and system OMM.

Service management

The OMM system manages tenants in a unified manner. Tenants can access the OMM

system through browsers to use powerful storage management functions provided by the object storage service.

Resource management

The object storage service manages storage space integration and allocation.

System OMM

The object storage service supports OMM for performance, alarms, and events of the storage system.

Scalability

The object storage service scalability is described as follows:

Storage capability expansion

Storage nodes and hard disks can be added to the background storage system without

interrupting services. Therefore, the TB-level storage capacity can be expanded to the PB level or EB level.

Performance expansion


The management node cluster can be expanded by adding management nodes. This

enhances concurrent access capabilities of the cloud storage service and accelerates data processing speed. Therefore, system performance is improved.

Fault Data Self-Recovery

The storage resource system that implements the object storage service adopts a powerful data

protection mechanism. Data is segmented and distributed on multiple storage nodes in

different cabinets. The multi-duplicate technology automatically stores multiple copies of the data, and the copies of each data segment are stored on different storage nodes.

If a fault occurs and causes data inconsistency, the storage resource system uses its internal

self-check mechanism to check copies of data segments on different storage nodes and

discover the data fault. After discovering the fault, the storage resource system enables the data recovery function to recover data in the background. Data is separately stored on multiple

storage nodes so that data recovery is performed on each storage node. That is, a small part of

data is recovered on each storage node. This enables the concurrent data recovery on all

storage nodes, avoiding performance bottlenecks caused by recovery of a large amount of

data on one node. Therefore, the impact on upper-layer services is minimized. Figure 3-9

shows the fault data self-recovery steps.

Figure 3-9 Fault data self-recovery

(Figure labels: data is stored as segments on storage nodes; the UDS creates data copies; hardware becomes faulty; the UDS checks the copies of the data segments; data is recovered concurrently on the storage nodes.)

Figure 3-10 shows the fault data self-recovery process.

Figure 3-10 Fault data self-recovery process
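The sketch below is a simplified illustration (not UDS code) of the parallelism described above: when a node fails, each surviving node rebuilds only the small share of segments for which it holds a copy, so recovery proceeds concurrently. The placement map and node names are assumptions.

# Simplified illustration of concurrent self-recovery; not UDS code.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Assumed placement map: segment -> list of nodes holding its copies.
placement = {
    "seg-1": ["node-A", "node-B"],
    "seg-2": ["node-A", "node-C"],
    "seg-3": ["node-B", "node-C"],
    "seg-4": ["node-A", "node-D"],
}

def plan_recovery(failed_node):
    """Group the failed node's segments by the surviving node that rebuilds them."""
    work = defaultdict(list)
    for segment, nodes in placement.items():
        if failed_node in nodes:
            survivors = [n for n in nodes if n != failed_node]
            work[survivors[0]].append(segment)   # each survivor repairs its own share
    return work

def rebuild(node, segments):
    # Placeholder for copying the segments to a healthy location.
    return f"{node} rebuilt {segments}"

if __name__ == "__main__":
    work = plan_recovery("node-A")
    with ThreadPoolExecutor() as pool:          # all nodes rebuild concurrently
        for result in pool.map(rebuild, work.keys(), work.values()):
            print(result)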


3.3.3 Operation Model

Users can apply for the service online and use it immediately after being authorized.

The GUI portal allows users to create and delete buckets, manage UDSKEYs, and upload and

download objects.
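Although this version exposes these operations through the GUI portal, the same bucket and object operations can be pictured programmatically. The sketch below assumes, for illustration only, that the UDS offers an S3-compatible REST interface at a hypothetical endpoint and that a tenant's UDSKEY maps to an access key and secret key; the endpoint URL and credential names are examples, not values defined in this document.

# Illustrative only: assumes an S3-compatible interface on the UDS and a
# hypothetical endpoint; the endpoint URL and credential names are examples.
import boto3

uds = boto3.client(
    "s3",
    endpoint_url="https://uds.example-carrier.com",  # hypothetical endpoint
    aws_access_key_id="UDSKEY_ACCESS_ID",            # tenant's assumed UDSKEY pair
    aws_secret_access_key="UDSKEY_SECRET",
)

uds.create_bucket(Bucket="tenant-media")                              # create a bucket
uds.put_object(Bucket="tenant-media", Key="videos/demo.mp4",
               Body=b"example video bytes")                           # upload an object
obj = uds.get_object(Bucket="tenant-media", Key="videos/demo.mp4")    # download it
data = obj["Body"].read()
uds.delete_object(Bucket="tenant-media", Key="videos/demo.mp4")
uds.delete_bucket(Bucket="tenant-media")                              # delete the bucket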

3.4 Cloud Backup

3.4.1 Introduction

The cloud backup service provided by the IDC public cloud protects user data. Carriers establish unified data protection platforms to implement the Backup as a Service (BaaS) cloud backup service. This allows end users to use the enterprise-level backup function as a service over the Internet.

After enabling the cloud backup service, end users can back up and restore data of files,

mainstream databases, and mainstream applications on OSs including Windows, SUSE Linux,

and Unix. Users can set backup policies and perform backup and restoration operations flexibly.

3.4.2 Highlights

Enterprise-level backup

The enterprise-level backup function allows backup of massive numbers of files on OSs including Windows, SUSE Linux, and Unix. This function enables users to back up and restore data in databases such as Oracle, DB2, MS SQL, and MySQL, and data of applications such as SharePoint, Exchange, SAP, and Lotus Notes.

High security

Access streams and data streams of the backup service are transferred over HTTPS and encrypted using the Advanced Encryption Standard (AES) 256 algorithm, ensuring data security during transfer.

The backup system isolates tenants from one another. Each user logs in to the backup system with a personal username and password and can query and access only the user's own resources and files.

Backup servers and media servers can be accessed only through the web console and the Proxy server for obtaining configuration and service data of the backup service. This prevents the backup system from being exposed to threats from public networks.
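As an illustration of the AES-256 protection described above (and not of Simpana's actual implementation), the following sketch encrypts a backup block with AES-256-GCM using the widely available cryptography package; the key handling shown is a simplified assumption.

# Illustration only: AES-256 encryption of a backup block before transfer.
# Not the backup software's implementation; key handling is simplified.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)

backup_block = b"example backup data"
nonce = os.urandom(12)                      # unique nonce per block
ciphertext = aesgcm.encrypt(nonce, backup_block, None)

# The receiving side (holding the same key) can restore the block.
restored = aesgcm.decrypt(nonce, ciphertext, None)
assert restored == backup_block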


3.4.3 Service Architecture

The cloud backup service of the current version supports only offline operation.

The backup system is the core of the cloud backup service implementation. The backup

system comprises backup management software, backup servers, and backup storage devices.

In this version, the Simpana backup software developed by Commvault is used. The Simpana

software adopts the following three-layer architecture:

Management server: CommServe

The CommServe:

− Manages data backup, archive, and restoration.

− Maintains CommCell configuration and storage data and controls licenses for tasks,

policies, user security, and modules. The CommServe controls only licenses and does not store backup data.

− Stores metabase catalogs.

The metabase stores metadata that describes backup data features and locations.

− Provides the centralized event controller that records all event logs and delivers notifications about key events.

Media agent: MediaAgent

The MediaAgent transmits data between clients and storage media. Each MediaAgent

can communicate with one or multiple local or remote storage devices including storage

media.

The MediaAgent is independent of storage media and is compatible with storage media of different types. Load balancing policies can be configured on the backup system to prevent any MediaAgent from being overloaded.

Web consoles and Proxy servers are deployed in the demilitarized zone (DMZ) in the data

center.

Web console server

The Web console provides a GUI for end users to configure backup policies, and view, back up, and restore data.

Proxy server

The Proxy server delivers backup data streams from external networks to the MediaAgent in the intra-network backup system.

Intelligent data agent: iDataAgent

The iDataAgent is a software module used to back up and restore data and supports

mainstream file systems and applications. A host on which the iDataAgent is installed is

called a client. Multiple iDataAgents can be installed on a client. The iDataAgent module

has multiple types including online backup modules, such as file systems, Oracle

databases, and SQL databases, and other function modules such as ProxyHost. If data of

different types exists on the client, data of each type requires an iDataAgent function module.

3.4.4 Operation Model

Currently, only enterprise customers can apply for the service, and applications are handled offline.


4 IDC Architecture

The IDC architecture provides comprehensive systems, including computing, network,

storage, security, cloud OS software, and management subsystems. Various IDC services are

provided based on this architecture.

4.1 Computing and Storage Cloud-based Architecture

The computing and storage cloud-based architecture includes two parts: the cloud OS, and the computing and storage devices. The computing and storage devices can adopt either the traditional server + SAN storage architecture or the innovative server + FusionStorage architecture.

4.1.1 Cloud OS

Positioning

FusionCompute, Huawei's cloud OS software, consists of the virtualization infrastructure platform and the cloud infrastructure service platform. It is responsible for hardware resource virtualization and for centralized management of virtual resources, service resources, and user resources. FusionCompute uses computing, storage, and network virtualization technologies to virtualize computing, storage, and network resources, and performs centralized management and scheduling of virtual resources over a unified interface. This helps reduce operating expense (OPEX), ensure system security and reliability, and construct a secure and energy-efficient cloud data center.


Figure 4-1 Solution components

(The figure shows the solution components: the cloud infrastructure service platform with image management, DR and backup, multi-tenancy, elastic IP addresses, service resource scheduling, and a virtual resource interface; the virtualization infrastructure platform with virtual resource scheduling, availability, security, scalability, and computing, storage, and network virtualization; the infrastructure layer with servers, storage devices, network devices, and security devices; and the cloud equipment room covering cabinets, power supply, cabling, monitoring, cooling, fire extinguishing, decoration, and lightning protection and grounding.)

Technical Features

Unified virtualization platform

FusionCompute uses virtualization management software to divide computing resources into multiple resources that can be used by high-performance, operable, and manageable VMs. FusionCompute provides the following features:

− Allocates VM resources on demand.

− Supports multiple OSs.

− Isolates VMs to ensure QoS.

Support for various hardware platforms

FusionCompute can be deployed on various types of x86 servers and is compatible with

various types of storage devices.

Large cluster

A single logical computing cluster supports a maximum of 3000 VMs.

− When the VIMS system is used, a single shared domain supports a maximum of 32 physical servers.


− When the FusionStorage system is used, a single shared domain supports a maximum

of 128 physical servers.

Automatic scheduling

FusionCompute allows users to define SLA policies, fault reporting criteria, and fault rectification policies.

− FusionCompute implements centralized IT resource scheduling, heat management, and power consumption management, reducing maintenance costs.

− FusionCompute automatically detects the load of servers or services and intelligently

schedules resources to achieve load balancing across servers and service systems, ensuring good user experience and optimal system response.

Cross-domain resource management

FusionCompute supports cross-domain resource management and implements centralized management of all resources.

− Distributed storage and cluster O&M: FusionCompute uses distributed storage and

cluster O&M technologies to consolidate devices on the physical layer and network

layer, provide upper-layer services with uniformly-managed storage, computing, and

network capabilities, and provide users with a network-based management system to implement unified scheduling of all resources.

− Rights- and domain-based management: FusionCompute provides comprehensive

rights- and domain-based management functions by area, role, or rights, so that users at different areas can be authorized to manage local resources.

− Cross-domain scheduling: VMs can be scheduled across domains over the layer-3 network by using elastic IP addresses.

Comprehensive OMM

FusionCompute provides various types of O&M tools to control and manage services, improving operation and maintenance efficiency. FusionCompute supports the following:

− Black box for rapid fault location: enables users to rapidly locate faults by obtaining

exception logs and program stacks. This reduces fault locating time.

− Automatic health status inspection: helps detect faults in a timely manner and present pre-warning reports. This facilitates VM operation and management.

− Web interface: FusionCompute provides a web interface, through which users can

monitor and manage all hardware resources, virtual resources, and service

provisioning.

Cloud security

FusionCompute adopts various security measures and policies and complies with local

information security laws and regulations to provide end-to-end protection for user access, management and maintenance, data, network, and virtualization.

Logical Architecture

FusionCompute consists of the virtualization infrastructure platform and cloud infrastructure

service platform. Figure 4-2 shows the logical architecture of FusionCompute.


Figure 4-2 Logical architecture of FusionCompute

(The figure shows the cloud infrastructure service platform layered on top of the virtualization infrastructure platform.)

Table 4-1 lists logical modules of FusionCompute.

Table 4-1 Logical modules of FusionCompute

Image storage (IMGS)
Stores image files.

Image management (IMGM)
Provides image data management, including registering, querying, modifying, and deleting images.

Computing node agent (CNA)
The CNA provides the following functions:
− Implements the virtual computing function.
− Manages the VMs running on the CNA.
− Manages the computing, storage, and network resources of the CNA.

VRM
The VRM provides the following functions:
− Manages block storage resources in clusters.
− Assigns private IP addresses to VMs using Dynamic Host Configuration Protocol (DHCP).
− Manages nodes in the computing cluster and maps physical computing resources to virtual computing resources.
− Manages network resources, such as IP addresses, virtual local area networks (VLANs), security groups, and DHCP servers, in the cluster and assigns private IP addresses to non-VPC VMs.
− Manages the lifecycle of VMs in the cluster and distributes and migrates VMs across CNAs.
− Dynamically adjusts resources in the cluster.
− Implements centralized management of virtual resources and user data and provides elastic computing, storage, and IP address services.
− Allows O&M personnel to remotely access FusionCompute from the web user interface (WebUI) to perform resource monitoring and management and view resource statistics reports.

Virtualization infrastructure platform

The virtualization infrastructure platform virtualizes physical resources, such as

computing, storage, and network resources, into virtual resources that can be centrally

managed, flexibly scheduled, and dynamically allocated. It is the key platform used for building cloud data centers.

Cloud infrastructure service platform

The cloud infrastructure service platform encapsulates and manages virtual resources

provided by the virtualization infrastructure platform, helping carriers and enterprises to

build their data center O&M capabilities. The management function includes resource

management, image management, charging management, scheduling management, and user management.

4.1.2 Server + SAN Storage (Traditional Architecture)

Servers and SAN storage devices used as the infrastructure can be selected according to project requirements.

Server

Servers are classified into tower, rack, and blade servers by form factor. Rack and blade servers are installed in standard 19-inch cabinets.

Various rack servers are available for selection. In terms of computing capability, 1-socket to 8-socket rack servers are available.

The blade server adopts a new server structure: the peripheral network, management, power supply (PSU), and heat dissipation components are integrated into a unified chassis, implementing integrated deployment of multiple servers. Compared with rack servers, blade servers improve computing density, reduce server deployment time and space, and simplify cabling. However, blade servers offer less hard disk and NIC expandability than rack servers because multiple servers are deployed in an integrated manner.

Common server models include the E6000 blade server and the RH2288 V2 rack server. Table 4-2 and Table 4-3 describe the specifications of these two types of servers.

Table 4-2 E6000 blade server

E6000H chassis
− Deployment mode: 8 U blade server
− Blade slots: 10
− I/O modules: Six GE, 10GE, or 8 Gbit/s FC I/O modules
− Backplane: High-speed passive backplanes support connections of the following modules: server blades, Ethernet or FC switch modules, management modules, power modules, and fan modules
− System management: Provides powerful management capabilities and supports FE, serial, and USB management ports. The chassis management module (MM) and the baseboard management controller (BMC) of a server blade are connected over IPMB. Supports two chassis management modules (or one standard chassis management module supporting redundancy configuration) and remote KVM.
− Power modules: A maximum of six 1600 W AC/DC or 1300 W DC/DC hot-swappable power modules can be configured. N+N or N+1 redundancy, load balancing, and fault switchover are supported. Note: N+N redundancy is not supported when the BH640 V2 blade server is in full configuration.
− Fan modules: Nine hot-swappable fan modules can be configured, with N+1 redundancy supported.

BH620 V2 server blade
− Processor: Supports a maximum of two Intel® Xeon E5-2400 series processors.
− Memory: Supports a maximum of 12 DIMM slots.
− Local storage: Supports a maximum of four 2.5-inch SAS/SATA/SSD hot-swappable hard disks.
− PCIe slots: Supports two PCIe x8 Mezz interfaces.

BH621 V2 server blade
− Processor: Supports a maximum of two Intel® Xeon E5-2400 series processors.
− Memory: Supports a maximum of 12 DIMM slots.
− Local storage: Supports a maximum of two 2.5-inch SAS/SATA/SSD hot-swappable hard disks.
− PCIe slots: Supports two PCIe x8 Mezz interfaces and one standard full-height half-length PCIe slot.

BH622 V2 server blade
− Processor: Supports a maximum of two Intel® Xeon E5-2600 series processors.
− Memory: Supports a maximum of 24 DIMM slots.
− Local storage: Supports a maximum of two 2.5-inch SAS/SATA/SSD hot-swappable hard disks.
− PCIe slots: Supports two PCIe x8 Mezz interfaces.

BH640 V2 server blade
− Processor: Supports a maximum of four Intel® Xeon E5-4600 series processors.
− Memory: Supports a maximum of 24 DIMM slots.
− Local storage: Supports a maximum of two 2.5-inch SAS/SATA/SSD hot-swappable hard disks.
− PCIe slots: Supports two PCIe x8 Mezz interfaces.

Table 4-3 RH2288 V2 technical specifications

Deployment mode: 2U rack server
Processor: Supports a maximum of two Intel® Xeon E5-2600 series processors
Memory: Supports a maximum of 24 DIMM slots
Local storage: Supports three types of hard disk configurations:
− Eight 2.5-inch SAS/SATA/SSD hard disks
− Twelve 3.5-inch SAS/SATA hard disks and two 2.5-inch SAS/SATA hard disks
− Twenty-four 2.5-inch SAS/SATA/SSD hard disks and two 2.5-inch SAS/SATA hard disks
Mixed configuration of SAS/SATA/SSD hard disks is supported.
Network ports: Four integrated GE 1000BASE-T ports
PCIe slots: A single processor (a riser card is required) supports the following:
− One PCIe 3.0 x16 card of full height and full length
− One PCIe 3.0 x8 card of full height and three-fourths length
− One non-standard PCIe slot for installing a RAID controller card
Dual processors (a riser card is required) support the following:
− One PCIe 3.0 x16 card of full height and full length
− One PCIe 3.0 x8 card of full height and three-fourths length
− Three half-height PCIe 3.0 x8 cards
− One non-standard PCIe slot for installing a RAID controller card
Fans: Four hot-swappable counter-rotating fans, allowing single-fan failures
Power modules: Two redundant and hot-swappable 460 W/750 W/800 W AC PSUs and 800 W -48 V DC PSUs

Storage

You can select FC SAN or IP SAN based on project requirements.

In consideration of costs and applications, the IP SAN is recommended for IDC services. The

reasons are as follows:

The IP SAN functions based on TCP/IP. IT personnel are familiar with TCP/IP, which

helps reduce maintenance costs.

The IP SAN adopts Ethernet switches, which are cost-effective. If the FC SAN is used, HBA cards must be deployed on servers and FC storage switches are also required, which increases device and license costs.

The FC SAN transmission distance does not exceed 50 km, which makes it unsuitable for integrating remote storage.

In consideration of performance and reliability, the FC SAN is recommended for key services.

The reasons are as follows:

The FC SAN transfers data over a closed network with flow control, whereas the IP SAN transfers data based on the CSMA/CD mechanism. Therefore, the FC SAN provides higher transfer efficiency than the IP SAN.

The FC SAN uses the efficient Fibre Channel protocol, and most FC SAN functions are implemented in hardware. For example, a server uses a dedicated HBA whose ASIC chip processes data. Therefore, the FC SAN offers higher performance than the IP SAN.

Common storage arrays include mid-range storage such as the OceanStor S5500T, S5600T, S5800T, and S6800T, as well as high-end storage such as the OceanStor 18500 and 18800. Table 4-4 and Table 4-5 describe the technical parameters of the S5500T and 18500.

Table 4-4 S5500T

Cache capacity: 8 GB/16 GB/32 GB
Number of controllers: 2
Type of front channel ports: 8 Gbit/s FC, 1 Gbit/s iSCSI, 10 Gbit/s iSCSI (TOE), and 10 Gbit/s FCoE
Type of rear channel ports: 6 Gbit/s SAS 2.0 wide port
Number of onboard I/O ports: 8 x 8 Gbit/s front FC ports and 4 x SAS 2.0 rear SAS wide ports
Maximum number of I/O modules: 2
Maximum number of hard disk slots: 528
Disk type: SAS, NL SAS, SATA, and SSD
RAID support: 0, 1, 3, 5, 6, 10, 50
Supported number of snapshots: 1024
Supported number of LUNs: 4096
TurboModule: Supported
Other software: HyperImage, HyperCopy, HyperClone, HyperMirror, HyperThin, UltraPath, DiskGuard, and SmartCache

Table 4-5 OceanStor 18500

Maximum cache capacity: 768 GB
Maximum number of controllers: 4
Maximum number of front host ports: 96
Maximum number of hard disk slots: 1584
Disk type: 2.5-inch SAS, 2.5-inch SSD, 3.5-inch SAS, 3.5-inch NL SAS, and 3.5-inch SSD
RAID support: 5, 6, 10
Maximum number of hosts: 65536
Supported number of LUNs: 65536
Other software: HyperSnap, HyperClone, HyperCopy, HyperReplication, SmartThin, SmartTier, SmartQoS, and SmartVirtualization

4.1.3 Server + FusionStorage Distributed Storage (Innovative Architecture)

With the development of services, the integrated SAN storage array faces a series of problems and challenges:

Performance and data take longer to restore because of RAID mechanisms. According to the RAID implementation principle, within a RAID group, performance increases as the number of disks increases. However, as the number of disks grows, the probability of multi-disk failures in the same RAID group rises and reliability decreases. Therefore, you are advised to deploy a maximum of 15 disks per RAID group.

With centralized access and management, the storage controllers (heads) can become a bottleneck. Mid-range storage has only two controllers, controller A and controller B. When there are many disks, the storage controllers usually become the performance bottleneck and restrict the scalability of SAN storage.

To address the foregoing challenge, Huawei FusionStorage pools the local hard disks of the

x86 servers to provide block storage.

FusionStorage has the following characteristics:

Cutting-edge distributed architecture with deep convergence of computing and storage

− FusionStorage uses a distributed architecture, including a distributed management cluster, a distributed hash-based routing algorithm, distributed stateless engines, and distributed intelligent caching. This structure ensures that no single point of failure (SPOF) occurs in the system.


Figure 4-3 FusionStorage distributed architecture

− FusionStorage is deployed on the servers that the local hard disks are attached to. The

local disks form a virtual resource pool to function as an external storage device, which integrates computing and storage devices.

High reliability

− To ensure data reliability, FusionStorage uses the multiple data copy mechanism

rather than the RAID technology. Before storing the data, FusionStorage fragments

the data and stores the data fragments on the cluster nodes based on specified rules.

Figure 4-4 shows the multiple data copy mechanism that FusionStorage uses. For

partition P1 on disk 1 of server 1, its data copy is P1' on disk 2 of server 2. P1 and P1' are two data copies of the same data block.

− In a public cloud scenario, when three data copies are stored, the data availability reaches 99.9999% on FusionStorage.

Figure 4-4 Multiple data copy mechanism

− FusionStorage uses the tight consistency and replication technology to ensure data

consistency between data copies. If the data written into the system is divided into

multiple portions, the system also creates multiple copies for each data portion and

stores them in different disks. FusionStorage also ensures consistency between the copies of each data portion.
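As a minimal illustration of the strict-consistency idea (not FusionStorage's actual replication protocol), the sketch below accepts a write only after every replica acknowledges it, so all copies of a data portion remain identical.

# Minimal illustration of strongly consistent replication; not FusionStorage's
# actual protocol. A write succeeds only after every replica acknowledges it.
class ReplicaNode:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, key, value):
        self.store[key] = value
        return True  # acknowledgment

def replicated_write(replicas, key, value):
    acks = [node.write(key, value) for node in replicas]
    if not all(acks):
        raise IOError("write not acknowledged by all copies")
    return True

nodes = [ReplicaNode("disk-1@server-1"), ReplicaNode("disk-2@server-2")]
replicated_write(nodes, "block-42", b"payload")
assert all(n.store["block-42"] == b"payload" for n in nodes)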


− When FusionStorage fails to read data, it determines the failure cause. If the failure results from a faulty disk sector, FusionStorage reads the required data from other copies and restores the data in the original copy. This ensures that the total number of data copies does not decrease.

High performance

− FusionStorage uses distributed stateless engines. Engines are deployed on each server and no central engine is deployed. The engines consume few CPU resources on each server and deliver higher IOPS.

− FusionStorage uses some memory of each server for read cache and non-volatile dual

in-line memory module (NVDIMM) for write cache. Caches are evenly distributed to

all nodes. The total cache size on all servers is greatly larger than that provided by

external storage devices. Even when using the large-capacity and low-cost SATA

disks, FusionStorage can still provide one to three times higher input/output performance and larger effective capacity.

− The DHT of FusionStorage ensures that the input/output operations performed by

upper-layer applications are evenly distributed on the hard disks of various servers and the load is globally balanced as follows:

Table 4-6 DHT routing algorithm that FusionStorage supports
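The exact routing algorithm is internal to FusionStorage. Purely to illustrate the load-balancing idea described above, the following sketch hashes block identifiers onto a fixed set of partitions and maps each partition to a primary node plus a replica on a different node; the partition count and node names are assumptions.

# Generic DHT-style placement, for illustration only; not FusionStorage's
# internal algorithm. The partition count and node names are assumptions.
import hashlib

NODES = ["server-1", "server-2", "server-3", "server-4"]
PARTITIONS = 3600  # assumed partition count

def partition_of(block_id: str) -> int:
    digest = hashlib.sha1(block_id.encode()).hexdigest()
    return int(digest, 16) % PARTITIONS

def placement(block_id: str, copies: int = 2):
    """Map a block to `copies` distinct nodes via its partition."""
    p = partition_of(block_id)
    primary = p % len(NODES)
    return [NODES[(primary + i) % len(NODES)] for i in range(copies)]

if __name__ == "__main__":
    counts = {n: 0 for n in NODES}
    for i in range(100000):
        counts[placement(f"volume-7/block-{i}")[0]] += 1
    print(counts)  # block ownership is spread roughly evenly across the nodes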

− FusionStorage creates snapshots based on the distributed hash table (DHT) technology. Creating snapshots does not have any adverse impact on the volumes. For example, 24 GB of memory is required to construct the indexes for a 2 TB disk. A hash query determines whether any snapshot has been created for a disk and, if so, where the snapshot is stored. Therefore, the DHT technology provides high query efficiency.


Table 4-7 High-performance snapshots provided by FusionStorage

Rapid parallel data reconstruction upon failure

FusionStorage provides the powerful data protection mechanism. When storing data,

FusionStorage fragments the data and stores the fragments on multiple storage nodes

that may be installed in different cabinets. In addition, FusionStorage automatically

duplicates the data and the data fragments into multiple copies and also stores the

data and fragment copies on different storage nodes. When data becomes inconsistent

due to hardware failures, FusionStorage uses the internal check mechanism to

compare data fragment copies on different nodes and automatically detect the data

failure. After detecting a data error, FusionStorage uses a data restoration mechanism

to automatically restore data. The restoration mechanism allows FusionStorage to

simultaneously restore a minimal amount of data on different nodes because the data

copies are stored on different storage nodes. This mechanism prevents performance

deterioration caused by restoration of a large amount of data on a single node, and

therefore minimizes adverse impact on upper-layer services. Table 4-8 shows the automatic data reconstruction process upon failure on FusionStorage.

Table 4-8 Data reconstruction process on FusionStorage

FusionStorage supports parallel and rapid fault troubleshooting and data reconstruction as follows:


Data is fragmented in the resource pool. If a disk is faulty, these data fragments can

be automatically reconstructed by simultaneously restoring data copies in the resource pool efficiently.

Data can be distributed to different servers or different cabinets so that data can be obtained even when a server is faulty.

Load is automatically shared between existing nodes and new nodes. You do not need

to adjust application configuration to obtain larger capacity and higher performance.

Easy Expansion and Ultra-Large Capacity

− The DHT routing algorithm ensures rapid load balancing after capacity expansion and avoids migration of a large amount of data.

− When computing nodes are added, the storage capacity is added. After capacity expansion, computing and storage are still integrated.

− The bandwidth and cache are evenly distributed on each node and do not decrease with capacity expansion.

4.1.4 Comparison Between SAN Storage and FusionStorage

Table 4-9 Comparison between SAN storage and FusionStorage

Specifications
Maximum number of servers in a single resource pool:
− SAN storage: 32 (VIMS enabled)
− FusionStorage: 128

Performance
IOPS of the same disk (8 KB I/O, read/write ratio 3:7, 100% random):
− SAN storage: 14463.7 and 10701.8
− FusionStorage: 8000 and 7364

Compatibility
Server:
− SAN storage: decoupled interaction, without special limitations
− FusionStorage: supports only the RH2288H V2
Virtualization:
− SAN storage: compatible when the virtualization platform supports standard SAN storage, including FusionSphere, Xen, VMware, and Hyper-V
− FusionStorage: supports only FusionSphere

Scalability
Capacity expansion:
− SAN storage: a set of expansion disk racks and disks can be added without special limitations; there is no limitation on adding multiple sets of SAN storage devices
− FusionStorage: multiple FusionStorage resource pools are not supported; when a single FusionStorage resource pool is expanded, the added SCNA nodes must be the same as the existing SCNA nodes in number and specifications

Storage migration
Storage migration:
− SAN storage: supports storage live migration and cross-cluster migration
− FusionStorage: does not support migration between FusionStorage pools or between FusionStorage and other SAN storage

Disaster recovery
Disaster recovery solution:
− SAN storage: supports two disaster recovery solutions, active-active storage and array replication
− FusionStorage: no disaster recovery solution

4.2 Network

4.2.1 Overview

The data center network architecture, which carries all application data, is the core of the IT

architecture. A proper network design is important, which must support high network

performance, flexibility, and scalability. IDC network design should also consider rapid

deployment and flexibility to support new services. A flexible network architecture allows

new applications to be deployed in a short time.

The IDC network design adopts a time-tested layered mode which is a basis for data center

design. The layered mode is to ensure network scalability, performance, flexibility, elasticity,

and maintainability. In addition, the network is divided into different functional areas and

three logical planes: management, storage, and service planes. The Huawei IDC network has

the following characteristics:

Tenant isolation

Resources of different enterprise tenants are isolated in the data center. Tenants' network,

computing, and storage resources are isolated in an end-to-end (E2E) manner. Cloud data

center resources of different tenants are isolated by using technologies such as the VFW,

virtual routing and forwarding (VRF), and VLAN. Tenants can set network routes and

security policies, bind elastic IP addresses, and configure server gateways. IP addresses

of different tenants can overlap. Computing resources of each tenant exclusively occupy

the CPU and memory to achieve logical isolation between tenants. Computing resources

of different tenants do not affect each other. Storage resources are mapped to VMs of

each tenant by using logically isolated data volumes. Storage resources of the same


tenant can access each other, while storage resources of different tenants cannot access

each other.

Large layer 2 network

The data center supports large layer 2 networking. The Transparent Interconnection of

Lots of Links (TRILL) protocol is used to construct a cost-effective, large-scale, and

unblocked large layer 2 network. The TRILL protocol supports high-efficiency

forwarding, fast convergence, easy deployment, and multi-tenancy technology and

effectively avoids loops. The TRILL-based network architecture can meet requirements of data center services in the cloud computing era.

Layer 2 interconnections among multiple data centers

A layer 2 network across multiple data centers is constructed at the external access layer

of different data center networks to meet requirements of scenarios such as server

clusters and VM dynamic migration. The virtual private LAN segment (VPLS)

technology allows different data centers to be in the same layer 2 network. Services can

be normally provisioned without modifying network properties when VMs are migrated

among data centers.

Network convergence

Data center network construction focuses on simple storage network deployment. Data

center networks tend to converge, and the fibre channel over Ethernet (FCoE) technology

makes network convergence possible. The FCoE technology is used to converge the

storage network and Ethernet.

Network virtualization

The data center infrastructure provides VFW, ELB, and switch virtualization

capabilities, which provide each tenant with firewall, load balancing, and switching

functions. Resources are isolated, and tenants can customize policies. The VFW provides

isolated security services for each tenant, and each tenant has a VFW. Each VFW

consists of private interfaces, security areas, security domains, ACL, and NAT address

pools and can provide private security services such as address binding, blacklists, NAT,

packet filtering, statistics, attack defense, and ASPF. An LB is virtualized into multiple

ELBs to support multi-tenancy. Tenants can customize the load balancing policy,

algorithm, weight, and associated servers. Tenants can manage and maintain ELBs by

themselves and view the running status of ELBs and resource utilization. Load balancing

resources of different tenants are isolated from each other. The switch virtualization

capability is provided by the cluster switch system (CSS). In the CSS, multiple switches

supporting clusters are connected to function as a switch. The CSS supports the following features:

− Simplified network topology

− Improved network performance

− Simple O&M and reduced operating expense (OPEX), because the CSS is managed as a single switch.

− Single point of failure (SPOF) prevention

When a device in the CSS fails, other devices in the CSS can take over the control and forwarding services of the faulty device, preventing the SPOF.

− Loop-free networks

The CSS supports multi-chassis link aggregation which helps avoid loops. Complex

loop avoidance protocols such as Multiple Spanning Tree Protocol (MSTP) do not

need to be configured.

− High link bandwidth utilization

The bandwidth utilization of multi-chassis links is up to 100%.


4.2.2 IDC Network Architecture

Figure 4-5 IDC network architecture

(The figure shows: an external area with DDoS traffic cleaning connecting to the Internet and the MPLS VPN; a core layer built as a CSS; network service areas housing FW, LB, VPN, and IDS/IPS devices; service areas for operation management, maintenance management, the DMZ, and VES/CA system management, each using iStack access switches and the cloud OS; and a storage network with server + SAN storage cabinets (RH2288 servers, N8500 backup) and server + DSware cabinets. The legend covers core switches, aggregation switches, access switches, routers, LBs, FW/IPS, servers, storage devices, VMs, FC switches, the UVP virtualization system, VPN devices, IDS/IPS, DDoS traffic cleaning, GE links, 10GE links, and stacking cables.)

The IDC adopts the flattened two-layer architecture design. The internal switching

architecture is simple and clear. The server + SAN storage architecture consists of the core

layer and access layer. In the server + FusionStorage architecture, however, servers provide

10GE ports to connect to the core layer switches.

The IDC is divided into five areas based on the network logical functions: external area, core

area, network service area, access area, and storage area.

The external area connects to the Internet and VPN network.


The core area is the switching core of the data center and consists of high-performance

switches.

The network service area provides value-added services such as traffic statistics, the

intrusion detection system (IDS)/intrusion prevention system (IPS), firewalls, load balancing, and VPN.

The access area provides access for data center server nodes.

The storage area provides SAN access.

At the core layer, subrack switches are used to form a cluster, and multiple switches are

logically virtualized into a switch to implement device redundancy. Only one management IP

address is required to manage the devices.

The CSS + iStack + Eth-Trunk mode is used to construct a reliable loop-free layer 2 network.

At the core layer, the VRF technology is used to logically isolate network areas and service

areas at layer 3.

With the virtualization function, VFWs, ELBs, and virtual switches are provided to meet user

requirements on virtualization and isolation.

At the access layer, VLAN and 802.1Q technologies are used to implement layer 2 isolation.

Service areas have three security levels: high, medium, and low. The operation management

area and maintenance management area have a high security level, and the DMZ has a

medium security level.

Other service areas are assigned with security levels as required.

4.2.3 Area-Based Architecture Design

External Area

The external area connects the data center network to the Internet and dedicated networks.

Figure 4-6 shows the external area networking.

Figure 4-6 External area networking

(The figure shows the external area connecting to the Internet and the MPLS network through routers, with a DDoS traffic cleaning device attached; optional components are marked in the figure.)

The external area connects to the Multiprotocol Label Switching (MPLS) bearer network,

Internet, and data communication network (DCN) through routers. The external area


communicates with extranets by using the Border Gateway Protocol (BGP) or static routes

(static routes are used for enterprise access). Active and standby links are configured to

improve reliability. The external area connects to extranets through dual links, and each link

connects to a carrier's network to ensure reliability of external connections.

The anti-distributed denial of service (DDoS) device is used to prevent DDoS attacks. The anti-DDoS device connects to routers in bypass mode. Routers are configured with port

mirroring to send traffic to the anti-DDoS device for analysis. Dynamic routes are configured

between routers and the anti-DDoS device to dynamically divert attack traffic. The anti-DDoS

device injects the normal traffic after cleaning to routers through policy-based routes, and

routers divert the normal traffic after cleaning to the data center through policy-based

routes, which ensure traffic security for the data center.

Firewalls in the external area are optional components. The firewalls are serially connected

and work in routing mode. The firewall provides four domains: untrusted, trusted, DMZ, and

management domains. Inter-domain policies are used to ensure access security. Open Shortest

Path First (OSPF) is configured between the firewall and the egress router to dynamically

learn routing entries advertised by the intranet.

Core Area

The core area is the switching core of the data center. Based on project requirements, two

high-performance switches are configured to ensure reliability and redundancy.

At the core layer, subrack switches are used to form a cluster, and multiple switches are

logically virtualized into a switch to implement device redundancy and facilitate

management. The core switch uses the VRF technology to logically isolate service networks.

The OSPF protocol is used between the core area and the external area, and between the

core area and the aggregation switch to advertise routes, ensuring that correct routes

between the core area and the intranet/extranet are quickly learned. This adapts to frequent service changes and dynamically updates routes in real time.

Downlink ports physically connect to access switches through aggregated 10GE fiber

links. A maximum of eight 10GE links can be bound. In this manner, high bandwidth is

provided. The maximum single-area uplink bandwidth reaches 80 Gbit/s. A trunk link is

configured to transport service VLAN communication. A loop-free high-efficiency layer 2 network is constructed, and different users and different services are logically isolated.

Network Service Area

Devices in the network service area connect to the core switch in bypass mode and provide

value-added network services and value-added network security services.

Figure 4-7 Network service area

(The figure shows two network service areas attached in bypass mode to the core layer CSS.)


The network service area houses the NetStream traffic analysis and statistics device, IDS/IPS, firewalls, LBs, and VPN devices.

Firewalls are used to isolate security domains and control access between security

domains. Firewalls support the VFW function to meet high security requirements of tenants.

LBs implement load balancing for users' applications. An LB can be virtualized into

ELBs. ELBs are logically isolated. Each tenant can use at least one ELB. Tenants' data

configurations are independent of each other.

VPN devices provide remote encrypted access for users to manage and maintain their

own data center resources.

Access Area

The access area houses network elements (NEs) that access the data center, such as servers.

There are two scenarios:

Server + SAN Storage

Rack servers connect to access switches, and blade servers access the data center through the

switching backplane in the chassis.

Two access switches are stacked and virtualized into one switch. Layer 2 isolation is

performed on service planes by VLANs. Different downlink ports are configured to

allow different VLANs to pass through. Uplink ports are configured in trunk mode to

allow all VLANs to pass through. Multiple physical ports are aggregated and connect upstream to the core switch.

When rack servers or blade servers are used in the service area, TOR switches or

switching backplanes in blade chassis are used respectively to provide the switching

capability. Each plane on the server adopts dual NICs, and each NIC corresponds to one access switch, which improves the reliability of servers and networks. Two switches for

one server are in a pair and stacked. The switches connect upstream to the aggregation

switch through bound 10GE uplinks.

No gateway is configured for access switches. Each pair of access switches supports

4094 VLANs and communicates with the aggregation switch at layer 2. In this manner, a loop-free high-efficiency layer 2 network is constructed.

Server + FusionStorage

RH2288H rack servers are used. A server NIC egress provides two 10GE ports. In

small-capacity scenarios, servers directly connect to core switches without independent

access switches; in large-capacity scenarios, CE6850 switches are configured as access switches.

Each RH2288H server uses two 10GE load sharing uplink ports. The management,

service, and storage planes are separated by VLANs and support QoS reservation.

Storage Area

The storage area provides SAN access in the server + SAN storage scenario.


Figure 4-8 Storage area

(The figure shows the storage aggregation network connecting to IP SAN and FC SAN devices.)

The storage network is an independent Fibre Channel (FC) or IP network, which

provides storage access. The storage area supports IP SAN devices, FC SAN devices,

and virtual intelligent storage (VIS) devices. The IP SAN device connects upstream to

the storage access IP switches through eight GE ports, and the FC SAN device connects

upstream to the storage access FC switches through four FC ports. Each set of links has

multiple hot backup links. The storage area adopts multipath design, providing high capacity and storage network reliability.

Storage aggregation switches are classified into IP switches and FC switches. The

aggregation switches connect upstream to the storage plane of each server and downstream to storage access switches.

Storage access switches are classified into IP switches and FC switches. IP switches

connect downstream to the IP SAN, and FC switches connect downstream to the FC SAN.

4.2.4 Data Center Interconnection Architecture

When IDC carriers provide public cloud services to Internet users, data centers that are

located in different places need to interconnect with each other. Besides, disaster recovery

(DR) for the IDC also involves data center interconnection. Figure 4-9 shows the data center

interconnection architecture.


Figure 4-9 Data center interconnection architecture

(The figure shows DC 1 and DC 2 connected through layer 3 interconnection and layer 2 service interconnection, with their IP SAN/FC SAN storage linked over bare fiber interconnection.)

Networks between data centers can be bare fiber networks, dedicated networks, leased

MPLS or IP networks, metropolitan bearer networks, Internet, or self-built wide area

networks (WANs).

Different interconnection technologies are used for different networks. For example, for

self-built WANs and bare fiber networks that support high scalability and flexibility,

a wide range of interconnection technologies can be selected. For leased MPLS or IP

networks, you are advised to lease VPN services (VPLS, MPLS L3VPN, and GRE VPN)

from carriers together with the MPLS or IP networks.

Multiple interconnection networks can be used together. For example, data centers in the

same city interconnect with each other through bare fiber networks or self-built WANs,

data centers in different cities interconnect with each other through leased MPLS or IP

networks, and branch offices interconnect with each other through the Internet.

Table 4-10 lists suggestions about data center interconnections.

Table 4-10 Data center interconnection suggestions

Intra-city fiber interconnection
− Service deployment suggestion: Services are deployed in active/active mode.
− L2/L3 network solution: L3 VPN; pure Ethernet; VPLS/virtual leased line (VLL)
− Feature/advantage: Low network delay, sufficient bandwidth, high reliability, and flexible deployment
− Recommended product: OSN6800/S9700

Intra-city private line interconnection
− Service deployment suggestion: Some services are deployed in active/active mode.
− L2/L3 network solution: L3 VPN; VPLS/VLL
− Feature/advantage: Flexible bandwidth, and high reliability and security
− Recommended product: NE40E/S9700

Remote private line interconnection
− Service deployment suggestion: Some services are deployed in active/active mode.
− L2/L3 network solution: L3 VPN; VPLS/VLL
− Feature/advantage: Flexible bandwidth, and high reliability and security
− Recommended product: NE40E/S9700

Remote MPLS network interconnection
− Service deployment suggestion: Services are deployed in active/standby mode.
− L2/L3 network solution: L3 VPN; VPLS/VLL
− Feature/advantage: Leased networks are cost-effective and provide good isolation.
− Recommended product: NE40E/S9700

Remote IP network interconnection
− Service deployment suggestion: Services are deployed in active/standby mode.
− L2/L3 network solution: L3 VPN over GRE; VPLS over GRE
− Feature/advantage: Leased networks are cost-effective and facilitate deployment and maintenance.
− Recommended product: NE40E

Shared equipment room in the same city
− Service deployment suggestion: Services are locally backed up.
− L2/L3 network solution: MPLS L3 VPN
− Feature/advantage: Self-built WANs feature flexibility and good isolation.
− Recommended product: NE40E

Remote shared DR center
− Service deployment suggestion: The DR center provides remote DR for the production center.
− L2/L3 network solution: MPLS L3 VPN
− Feature/advantage: Leased networks are cost-effective.
− Recommended product: NE40E

4.2.5 Features

Tenant Isolation

The data center supports multi-tenancy management. IT administrators allocate dedicated infrastructure resources to each tenant. Deploying public infrastructure shared by multiple tenants in the data center improves resource utilization. Resources of different tenants must be isolated in the data center to ensure E2E path isolation and meet tenants' security requirements.

To support multiple tenants and provide the same dedicated infrastructure to each tenant, the data center reference architecture adopts path isolation technology to logically divide the infrastructure into several infrastructures shared by multiple virtual networks (tenants). Tenant isolation relies on device virtualization and is implemented across layers.

Layer 3 isolation

At the core layer or aggregation layer, the VRF technology is used to isolate tenants at Layer 3. Each tenant is assigned an independent routing and forwarding table. Traffic exchange between tenants (from server to server) is not allowed unless it is explicitly required and configured. IP addresses of different tenants can overlap. (A minimal sketch of this per-tenant isolation follows this list.)

Layer 2 isolation

VLAN IDs and 802.1Q tags are used to implement Layer 2 isolation. Tenants can forward data only within their own Layer 2 networks.

Network service isolation


Service modules of physical devices or network devices are virtualized to provide VFW and ELB functions to each tenant. VFWs or ELBs do not affect each other. Tenants can customize policies, and different tenants can set the same policies.
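To make the cross-layer path isolation concrete, the following minimal Python sketch (an illustration only, not Huawei product code) models per-tenant VRF route tables and VLAN membership; the tenant names, VLAN IDs, and prefixes are hypothetical.

```python
from ipaddress import ip_address, ip_network

class TenantContext:
    """Per-tenant isolation context: a VRF (routing table) plus a set of VLANs."""
    def __init__(self, name, vlans):
        self.name = name
        self.vlans = set(vlans)      # Layer 2 isolation: 802.1Q VLAN IDs owned by the tenant
        self.vrf = {}                # Layer 3 isolation: independent routing and forwarding table

    def add_route(self, prefix, next_hop):
        self.vrf[ip_network(prefix)] = next_hop

    def lookup(self, dst):
        """Longest-prefix match inside this tenant's VRF only."""
        dst = ip_address(dst)
        matches = [n for n in self.vrf if dst in n]
        return self.vrf[max(matches, key=lambda n: n.prefixlen)] if matches else None

# Overlapping IP addresses are allowed because each tenant has its own VRF.
tenant_a = TenantContext("tenant-A", vlans=[101, 102])
tenant_b = TenantContext("tenant-B", vlans=[201])
tenant_a.add_route("10.0.0.0/24", "vfw-a")
tenant_b.add_route("10.0.0.0/24", "vfw-b")   # same prefix, different tenant, different next hop

print(tenant_a.lookup("10.0.0.5"))   # vfw-a
print(tenant_b.lookup("10.0.0.5"))   # vfw-b
print(105 in tenant_a.vlans)         # False: frames on foreign VLANs are not forwarded
```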

Large Layer 2 Network

In the cloud computing era, the TRILL protocol is used to construct a large Layer 2 network to efficiently use data center resources.

The TRILL protocol applies Layer 3 link-state routing technology to Layer 2 networks. It uses the Intermediate System to Intermediate System (IS-IS) protocol as the control protocol to meet large Layer 2 networking requirements and provide solutions for data center services.

The TRILL protocol has the following advantages:

High-efficiency forwarding

Each TRILL-enabled device on the network takes itself as the source node and calculates the shortest path to other nodes by using the shortest path algorithm. If multiple equal-cost links exist, load balancing is implemented when unicast routing entries are generated. Compared with traditional Layer 2 networks, the TRILL protocol improves data forwarding efficiency for data centers and the throughput of data center networks. (See the shortest-path sketch after this list.)

Loop avoidance

The TRILL protocol automatically constructs a shared multicast distribution tree (Share-MDT). The Share-MDT connects all nodes on the network to carry Layer 2 unknown-unicast, broadcast, and multicast packets, so no loop is formed. When the network topology changes, data packets received on incorrect ports are discarded by using the reverse path forwarding (RPF) technology to avoid loops. In addition, the TRILL packet header carries the Hop field, which helps minimize the impact of temporary loops and avoid loop storms.

Fast convergence

The TRILL protocol generates forwarding entries by using the IS-IS protocol, and the Hop field in the TRILL packet header tolerates temporary loops, so fast convergence can be implemented when a node or link fails.

Easy deployment

TRILL configuration is simple. Many parameters, such as the nickname and system ID, can be generated automatically. In addition, TRILL networks are Layer 2 networks, which support plug-and-play (PnP) and ease-of-use features.
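As an illustration of the shortest-path calculation and equal-cost load balancing described above (not TRILL's actual IS-IS implementation), the following Python sketch runs Dijkstra over a hypothetical RBridge topology and keeps every equal-cost next hop.

```python
import heapq
from collections import defaultdict

def spf_with_ecmp(graph, source):
    """Dijkstra from `source`; returns {node: (cost, set_of_equal_cost_next_hops)}."""
    dist = {source: 0}
    next_hops = defaultdict(set)          # equal-cost next hops toward each destination
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue
        for neighbor, weight in graph[node]:
            new_cost = cost + weight
            # The first hop is the neighbor itself when leaving the source;
            # otherwise it is inherited from the node we came through.
            hops = {neighbor} if node == source else next_hops[node]
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                next_hops[neighbor] = set(hops)
                heapq.heappush(heap, (new_cost, neighbor))
            elif new_cost == dist[neighbor]:
                next_hops[neighbor] |= hops   # equal-cost path: add next hops for load balancing
    return {n: (dist[n], next_hops[n]) for n in dist if n != source}

# Hypothetical topology: access switch A reaches core C via two aggregation switches.
topology = {
    "A":   [("AG1", 1), ("AG2", 1)],
    "AG1": [("A", 1), ("C", 1)],
    "AG2": [("A", 1), ("C", 1)],
    "C":   [("AG1", 1), ("AG2", 1)],
}
print(spf_with_ecmp(topology, "A")["C"])   # (2, {'AG1', 'AG2'}): traffic balanced over both links
```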

The TRILL data center network adopts the typical Layer 2 networking mode, and the TRILL protocol is enabled on all access, aggregation, and core switches on the network. Figure 4-10 shows the TRILL data center network.


Figure 4-10 TRILL data center network (core layer, aggregation layer, and access layer)

Access switches can be top-of-rack (TOR) or end-of-row (EOR) switches. Aggregation switches and core switches are generally EOR switches. To ensure that data center services are deployed normally, the TRILL protocol is required to implement high-efficiency data forwarding between servers and between servers and Internet users.

Network Convergence

FCoE is a network convergence technology. It provides an I/O consolidation solution built around the FC storage protocol, enabling LAN and SAN networks to share network resources. The FCoE technology is not intended to substitute for the traditional FC technology. Instead, it extends FC over the Ethernet transport layer: FC frames are encapsulated into Ethernet frames so that SAN data can be transmitted over the Ethernet.
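The following Python sketch illustrates only the encapsulation idea (a simplified frame, not a standards-complete FCoE implementation); the MAC addresses and payload are hypothetical, and 0x8906 is the EtherType registered for FCoE.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE traffic

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an (already built) FC frame in a simplified Ethernet frame.

    Real FCoE also carries a version/SOF/EOF header and padding per FC-BB-5;
    this sketch only shows the Ethernet-level nesting.
    """
    ethernet_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return ethernet_header + fc_frame

# Hypothetical CNA and FCoE-switch MAC addresses and a dummy FC frame payload.
cna_mac = bytes.fromhex("02aabbccdd01")
switch_mac = bytes.fromhex("02aabbccdd02")
fc_frame = b"\x22" + b"\x00" * 35          # placeholder for an FC frame

frame = encapsulate_fc_frame(switch_mac, cna_mac, fc_frame)
print(len(frame), frame[:14].hex())        # 14-byte Ethernet header followed by the FC payload
```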

The FCoE network has the following advantages:

Reduced total cost of ownership (TCO)

The LAN and SAN networks share network resources by using the FCoE technology. Distributed resources are consolidated and used efficiently. Investment in SAN infrastructure is reduced, the network is simplified, and network management and maintenance costs decrease. Converged Network Adapters (CNAs) are installed on servers to reduce the power and cooling costs of data centers.

Strong investment protection

The FCoE network can seamlessly communicate with the existing Ethernet and FC infrastructure, which protects the existing FC SAN investment (such as investment in FC devices, tools, and management facilities).

Enhanced service flexibility

The FCoE technology allows all servers to share storage resources, which meets VM migration requirements and improves system flexibility and availability.


Deploying FCoE at the access layer achieves server I/O consolidation and reduces the number of cables connecting servers to the network. CNAs on servers support FCoE and connect to access switches. The access switches then connect to the LAN and SAN through Ethernet links, as shown in Figure 4-11.

Figure 4-11 LAN and SAN networking

Servers connect to access switches through FCoE NICs, and all access switches support FCoE. The access switches then connect to the SAN over the Ethernet, and data is transmitted between the access switches and the SAN by using the FCoE protocol. FC switches or FCoE switches can be deployed on the SAN and connect downstream to storage devices, reducing network cabling.

4.3 Security

4.3.1 Overview

As IT develops, technologies such as Web 2.0, service-oriented architecture (SOA), and cloud computing are emerging, and mobile devices, remote access devices, browsers, application plug-ins, intelligent terminals, and cloud hosts are proliferating; as a result, information security faces new challenges. Attacks from the intranet and extranet and system vulnerabilities are major threats to information security. Based on IDC service characteristics, we must consider whether security technologies and products meet IDC security requirements, and how to use various software and hardware security products to construct an IDC security solution that meets the security requirements of IDC services.

The IDC, functioning as an information hub, contains servers, storage devices, network devices, and applications. With the emergence of cloud computing, new elements such as virtualization are added to the IDC. Therefore, the IDC security solution must be designed with all IDC elements in mind; using traditional security technologies alone cannot ensure IDC security.

The Huawei IDC security architecture is designed based on industry best practices and Huawei's expertise and experience. It is designed to meet service security, reliability, and data integrity requirements. The Huawei IDC security architecture supports the following features:

Reliability

Security devices and key components in security systems adopt a high-reliability design: dual-node hot backup is implemented to meet the long-term running requirements of data center services.

Modularization

The Huawei IDC security architecture is designed based on eight modules: physical security, network security, host security, application security, virtualization security, user security, security management, and security services. A security architecture can be quickly composed based on customer requirements to provide a customized security system.

E2E security

The Huawei IDC security solution provides E2E protection throughout user access, use, and exit. Technologies such as two-factor authentication, rights control for privileged users, VPN, application protection, and event auditing are used to control user access to IT resources, ensure data communication security and secure application access, and audit operations.

Low coupling

The Huawei IDC security architecture must provide protection at multiple layers, such as the data layer and application layer, and therefore involves various security technologies, products, and management policies. The architecture features low coupling: the security technologies are not tightly associated with one another, security products from different vendors can be used and are not limited to specific models, and security management policy formulation does not depend on specific security products.

Scalability

The Huawei IDC security architecture is a guiding framework. Users can implement security construction based on this framework and their security requirements, which protects investment while meeting security requirements.

Support for e-Government level 3 protection

The Huawei IDC security architecture is designed from the aspects of physical security, network security, host security, application security, data security, user management, and security management to meet the level 3 security protection requirements of e-Government. It is one of the best guiding frameworks for constructing e-Government data centers. Virtualization security is also ensured by the Huawei IDC security architecture based on cloud computing characteristics.

4.3.2 IDC Security Architecture

Following the principle of layered, in-depth defense, the Huawei IDC security architecture is divided into the physical device security, network security, host security, application security, virtualization security, data security, user management, and security management layers. The IDC security architecture meets different security requirements. Figure 4-12 shows the IDC security architecture.


Figure 4-12 Security architecture (layers and example components: physical security; network security — firewall/UTM, traffic cleaning, VPN access, gatekeeper, intrusion detection/prevention, security domain division, network antivirus, network auditing; host security — server antivirus, data volume encryption, host system hardening, OS security hardening; application security — email security, web page anti-tamper, web application firewall; virtualization security — cloud management application hardening, hypervisor hardening, VM protection against malware, VM template security hardening, VM isolation; data security — data encryption and key management, document permission management, data deletion; user management — identification and access management, access management and auditing for privileged users, dual-factor authentication; security management — security policy management, security information and event management, vulnerability management; security services — security integration implementation, security assessment, and security optimization services)

Figure 4-13 shows the Huawei IDC security integration design.

Figure 4-13 IDC security integration design (security components — firewalls, Anti-DDoS, IPS/IDS, web application firewalls, VPN gateways, antivirus gateways, UMA, gatekeepers, load balancers, UVP virtualization hardening, virus agents, database encryption and auditing, network auditing, web page anti-tamper, document security management, dual-factor authentication, host core security hardening, vulnerability scanning, iSOC, and CDP/HDP backup — deployed across the Internet application (DMZ), government extranet, e-Government dedicated network, application, network service, backup system, desktop cloud, and running management areas of the IDC)

Different security components are deployed in different network areas based on their functions to ensure security. Government hosting has the highest security requirements, as shown in Figure 4-13. The security components are deployed as follows:

High-end firewalls and traffic cleaning devices are deployed on the network border.


Components such as the intrusion detection system (IDS)/intrusion prevention system (IPS), Unified Maintenance Audit (UMA), VPN device, low-end firewall, and antivirus gateway are deployed in the network service area in bypass mode.

Components such as the Information Security Operations Center (iSOC), vulnerability scanning tool, web page anti-tamper system, dual-factor authentication system, antivirus server, host core security hardening system server, and document security management system are deployed in the running management area.

Antivirus software and host core security hardening system software are deployed on service hosts.

Auditing systems are deployed on the aggregation switches of DB servers in bypass mode.

Web application firewalls are deployed in the Internet application area.

Service backup systems are deployed in the backup system area.

Security gatekeepers are deployed between the core switches of the government extranet and the e-Government dedicated network.

4.3.3 IDC Security Layer Design

Physical Security

Physical security is ensured by deploying an access control system, a video surveillance system, and an environment surveillance system. The access control system allows only authorized personnel to enter the IDC. The video surveillance system and environment surveillance system facilitate subsequent auditing. In addition, the following aspects must be considered to further ensure physical security:

Anti-theft and anti-sabotage

Lightning protection

Fire prevention

Waterproofing and moisture proofing

Electrostatic discharge protection

Humidity control

Power supply protection

Electromagnetic shielding

Network Security

The firewall, IPS, SSL VPN, anti-DDoS, and data ferry technologies are used to protect systems and communication data. These technologies prevent data from being damaged, changed, or disclosed accidentally or intentionally. With these technologies, systems are reliable, secure, and able to run continuously without service interruption.

Host Security

Client antivirus protection, HIPS protection, HIDS protection, and VM security hardening are used to protect VMs from intranet and extranet viruses, hackers, and security vulnerabilities. This ensures VM security and enables user services to run stably over the long term.


Application Security

Mail security protection and web application security protection are used to protect key applications in the IDC, such as emails, web applications, and portals, and to prevent data from being damaged, changed, or disclosed accidentally or intentionally.

Virtualization Security

The virtualization layer and cloud management application layer are hardened, and VMs are isolated to protect the virtualization environment from viruses and hackers. Even if one VM is attacked, other VMs are not affected.

Data Security

A residual data destruction mechanism ensures the security of data stored on the cloud computing platform. User data stored on physical storage devices is deleted so that it is not disclosed when the storage devices are leased to other users. This helps dispel users' concerns about the data security of cloud services.

High-security encryption algorithms are used to encrypt data, ensuring data integrity and data security.

Distributed file systems are used in the storage system. Data is segmented and distributed across different cloud hard disks on storage nodes, so it cannot be restored from any single hard disk. When a hard disk is faulty, it can be discarded without a separate data-erasure step, because no complete data can be recovered from a single disk.
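As a simple illustration of why a single disk reveals no usable data (a sketch only; real distributed file systems add replication, erasure coding, and metadata services), the following Python snippet stripes a byte stream across several hypothetical disks:

```python
def stripe(data: bytes, disk_count: int, chunk_size: int = 4):
    """Split data into chunks and distribute them round-robin across disks."""
    disks = [[] for _ in range(disk_count)]
    for index in range(0, len(data), chunk_size):
        disks[(index // chunk_size) % disk_count].append(data[index:index + chunk_size])
    return disks

disks = stripe(b"confidential tenant record 0042", disk_count=3)
for i, chunks in enumerate(disks):
    # Each disk holds only scattered fragments of the original record.
    print(f"disk {i}: {b''.join(chunks)!r}")
```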

User Management

A unified O&M access control solution is provided to control and manage the accounts, authentication, authorization, and auditing of IT resources (such as the OSs of core services and network devices). The unified security control and management center collects operation logs to perform event correlation analysis and discover security risks in a timely manner. The logs and the analysis results provide evidence for information security events.

Security Management

Security management involves the technologies, methods, and products that support security policies and security management regulations. A unified security control and management center collects IDC security logs to perform correlation analysis and discover security risks in a timely manner. These security logs include firewall logs, intrusion detection logs, intrusion prevention logs, traffic cleaning logs, VPN access logs, junk mail gateway logs, hypervisor logs, and auditing logs.

To protect services without interrupting them or affecting efficiency, obey the following rules when configuring security policies for IDC network devices, security devices, and security software:

Minimum authorization rule

Service relativity rule

Policy maximization rule

Ensure that security policies do not conflict with one another.


Security Service

Security services include not only integration services, such as end-to-end IDC device, software installation, and software configuration services, but also professional security assessment and optimization services. These services help users understand their existing and potential security threats and take measures in a timely manner.

4.3.4 IDC Security Features

VFW

A firewall is logically divided into multiple VFWs to provide independent security assurance for enterprises and maximize the resource utilization of physical firewalls.

Figure 4-14 VFW (one physical firewall virtualized into separate VFWs for Enterprise A and Enterprise B)

Each VFW is a combination of a VPN instance, a security instance, and a configuration instance. It can provide private routing services, security services, and configuration management services for users.

VPN instance

The VPN instance provides isolated VPN routes that correspond to the VFW for VFW users. These VPN routes support routing for packets received by the VFW.

Security instance

The security instance provides isolated security services that correspond to the VFW for VFW users. A security instance consists of private interfaces, security zones, security domains, ACLs, and NAT address pools and can provide private security services such as blacklists, packet filtering, ASPF, and NAT.

Configuration instance

The configuration instance provides isolated configuration management services that correspond to the VFW for VFW users. Configuration instances allow VFW users to log in to their own VFWs and manage and maintain the preceding private VPN routes and security instances.
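The three-instance composition can be pictured with the following minimal Python sketch (illustrative only; the class and field names are hypothetical and do not reflect the firewall's internal implementation):

```python
from dataclasses import dataclass, field

@dataclass
class VpnInstance:                      # isolated routing: prefix -> next hop
    routes: dict = field(default_factory=dict)

@dataclass
class SecurityInstance:                 # isolated security services
    acl_rules: list = field(default_factory=list)      # e.g. ("deny", "10.1.0.0/16")
    nat_pool: list = field(default_factory=list)
    blacklist: set = field(default_factory=set)

@dataclass
class ConfigInstance:                   # isolated management: who may configure this VFW
    admins: set = field(default_factory=set)

@dataclass
class VFW:
    name: str
    vpn: VpnInstance = field(default_factory=VpnInstance)
    security: SecurityInstance = field(default_factory=SecurityInstance)
    config: ConfigInstance = field(default_factory=ConfigInstance)

# Two tenants on one physical firewall; identical policies are allowed because each
# VFW keeps its own instances.
vfw_a = VFW("enterprise-A")
vfw_b = VFW("enterprise-B")
vfw_a.security.acl_rules.append(("deny", "0.0.0.0/0"))
vfw_b.security.acl_rules.append(("deny", "0.0.0.0/0"))
print(vfw_a.security.acl_rules is vfw_b.security.acl_rules)   # False: per-tenant isolation
```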

VM Isolation

VM isolation refers to the resource isolation of different VMs on the same physical machine. It is a basic feature of virtualization and covers the isolation of CPUs, memory, internal networks, and disk I/O.


Account Management, Authentication and Authorization

The Huawei Operation and Maintenance Management (OMM) system supports account validity period management. A super administrator named admin is provided by default. Users can log in to the system as user admin, create other accounts, and assign rights to those accounts.

The OMM system supports role management and role-based authorization. It supports three types of roles: super administrator, O&M administrator, and guest. Different roles are assigned different rights.
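A minimal role-based authorization sketch in Python follows (illustrative only; the role names come from the text above, while the specific permission strings are hypothetical):

```python
# Role-to-permission mapping; only the three role names are taken from the OMM description.
ROLE_PERMISSIONS = {
    "super_administrator": {"create_account", "assign_rights", "view_logs", "change_config"},
    "om_administrator":    {"view_logs", "change_config"},
    "guest":               {"view_logs"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the given role is granted the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("super_administrator", "create_account"))  # True
print(is_allowed("guest", "change_config"))                  # False
```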

Tailoring and Hardening of the Cloud Platform OS

The Huawei cloud platform OS is tailored, hardened, and securely configured.

OS tailoring

This solution simplifies the cloud platform OS according to the rule of installing systems with minimum configurations. Only required components are installed, so the amount of OS software is substantially reduced. This lowers the risk of the system being attacked.

Security configuration

This solution applies security settings to the OSs on nodes by referring to the Center for Internet Security (CIS) Linux benchmark. For example, insecure services are disabled, and account and password complexity policies and permissions for files and directories are correctly configured.

Security patch management

Huawei implements a strict process for managing security patches and regularly releases tested OS patch packages on the Huawei support website. O&M personnel regularly download and install the OS patches.

Protection Against Malicious VMs

Protection against address spoofing

The vSwitch (bridge) of the hypervisor binds the IP address of each VM to the MAC address of that VM, so each VM can send packets only from its own addresses. This prevents VM IP address spoofing and Address Resolution Protocol (ARP) spoofing. (A minimal sketch of this binding check follows the next item.)

Protection against malicious sniffing

The vSwitch performs switching rather than hub-style sharing. Packets of different VMs are forwarded only to their specified virtual ports, so a VM cannot receive packets destined for other VMs, even on the same physical host. This prevents malicious sniffing.
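The following Python sketch illustrates the address-binding check described above (a simplified model of the idea, not the hypervisor's actual vSwitch code; the VM names and addresses are hypothetical):

```python
# Per-VM binding table maintained by the (modelled) vSwitch: VM -> (MAC, IP).
BINDINGS = {
    "vm-01": ("02:00:00:00:00:01", "192.168.10.11"),
    "vm-02": ("02:00:00:00:00:02", "192.168.10.12"),
}

def accept_packet(vm: str, src_mac: str, src_ip: str) -> bool:
    """Drop any packet whose source MAC/IP does not match the VM's binding."""
    bound = BINDINGS.get(vm)
    return bound is not None and bound == (src_mac, src_ip)

print(accept_packet("vm-01", "02:00:00:00:00:01", "192.168.10.11"))  # True: legitimate traffic
print(accept_packet("vm-01", "02:00:00:00:00:01", "192.168.10.12"))  # False: spoofed IP dropped
```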

4.4 Management

4.4.1 IDC Management Overview

With the development of cloud computing and virtualization technologies, data center management is shifting from IT infrastructure management toward self-service management, IT service automation, management of horizontally scaled server, storage, and network architectures, and unified management.

To implement data center management transformation, the following must be performed:


Reduce physical server costs and operating costs.

Shorten service rollout periods.

Integrate physical and virtual resources from different vendors and implement unified resource management.

Enhance device utilization.

Recover key applications from faults and implement DR.

Simplify O&M and reduce management costs.

Reduce power consumption.

The next-generation data center adopts virtualization technology, manages services centrally, and supports dynamic resource allocation. It must provide high-performance, reliable, adaptable, and secure resources to support the migration of enterprise applications from existing enterprise systems to the cloud. In addition, automatic infrastructure management is required so that the infrastructure responds rapidly to business changes and IT costs are reduced.

To meet the management requirements of the next-generation data center, the Huawei OMM system data center management solution supports:

End-to-end self-service management

Unified operation and maintenance management

Automatic process orchestration, resource allocation, and application deployment

Unified management of multiple data centers

Heterogeneous management of multiple virtualization platforms

Integration service for various hardware and software

Maintenance management for servers, network devices, storage devices, databases, and middleware

Standard ITIL processes

3D visualized equipment room management and power consumption management


4.4.2 IDC Management Architecture

Overall Architecture

Figure 4-15 ManageOne architecture (global management layer: cloud service operation — service catalog, service definition, order management, user management, and measurement; service maintenance — service request, event, change, configuration, release, and capacity management; maintenance center — CMDB, fault management, unified monitoring, performance management, resource topology, resource scheduling, and template management; user portal, management portal, and APIs. Local management layer in each of data centers 1 to n: ITOM — fault management, equipment management, and performance monitoring; automatic control/deployment, resource management, resource scheduling, resource monitoring, HA management, and template/image management over computing, storage, and network virtualization running on computing servers, storage devices, network devices, facilities, and the IP network)

Service O&M Module

The service O&M module provides service assurance and management functions for the cloud platform, ensuring resource availability and cloud service quality. It includes the cloud service O&M management process and the CMDB.

The service maintenance module is an IT service management product based on the ITIL V3 concept. Functions in this solution, such as service request, event management, change management, release management, configuration management, and the CMDB, can be implemented by the service maintenance module.

Relationships between the service O&M module and other modules are as follows:

The service O&M module receives service requests from the O&M center, including resource deployment requests, resource maintenance requests, and resource change requests, and processes them.

The O&M center converges alarm and performance information from each data center, centrally monitors the devices in each data center, and processes the faults reported by each data center. Alarms in the O&M center are converted into events in the service O&M module based on specified rules, and the events are then processed according to the ITIL event management process.

Maintenance Center Module

Module Overview

The maintenance center module is oriented to data center maintenance scenarios. It connects to different management systems and supports cross-region management and multiple-data-center scenarios.


The maintenance center module is the unified maintenance management module at the maintenance layer. It operates and maintains data center services based on scenarios, provides visualized status, risk, and efficiency analysis capabilities, and proactively analyzes problems. The module provides two kinds of functions: comprehensive analysis and manual operations. Comprehensive analysis includes alarm and performance analysis. Manual operations include routine maintenance, manual entry of traditional data center asset data, and traditional resource (equipment room space and cabinet) provisioning.

The maintenance center is also the unified maintenance management portal of the solution. As the work platform of O&M personnel, it not only provides links to its own functions on its home page but also provides a unified portal and redirection links for accessing other components.

Architecture

Figure 4-16 shows the architecture of the maintenance center module.

Figure 4-16 Maintenance center module architecture (presentation layer — unified portal, NBI management, user database and personalized settings; intelligence layer — search and analysis, data snapshot and entry; basic layer — comprehensive monitoring (status and topology), comprehensive alarms (events and alarms), comprehensive performance (performance and reports), and CMDB; platform functions — internal service bus and unified object database (MODB); adaptation layer — OC adaptation container connecting to interconnected systems such as network monitoring, storage and server monitoring, middleware and database monitoring, virtual resource pool management, ITSM, AD, and notification servers; resource, alarm, and performance data is exchanged with the Operation Center and used by maintenance personnel)

The maintenance center module adopts a flexible development platform and SOA. Its components are loosely coupled and cooperate by exchanging messages, acting as service providers or consumers. The interfaces of each component comply with standards and allow new functions to be provided flexibly. The maintenance center module provides the following features:

Service bus: used for communication between internal modules. With the bus, modules can be deployed on different computing nodes in distributed mode, which meets the requirements of large systems. When two components run on the same node and in the same process, a more efficient internal communication mode is used automatically.

Management object database (MODB): provides a unified object database at the platform layer and ensures consistency between internal models and coordination among modules.


Plug-in mechanism: By using the adaptation container and loading customized adaptation packages, the maintenance center module can connect to new systems without affecting running functions. Adaptation packages are provided as plug-in packages. Plug-ins are parsed and executed in the container, keeping the internal logic programs stable.
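A minimal Python sketch of such a plug-in/adaptation registry is shown below (illustrative only; the registry, adapter names, and data formats are hypothetical and far simpler than the real adaptation container):

```python
from typing import Callable, Dict

# Registry of adaptation plug-ins: system name -> function that normalizes its alarms.
ADAPTERS: Dict[str, Callable[[dict], dict]] = {}

def register_adapter(system: str):
    """Decorator that registers an adaptation plug-in without touching core logic."""
    def wrapper(func: Callable[[dict], dict]):
        ADAPTERS[system] = func
        return func
    return wrapper

@register_adapter("network-monitoring")
def adapt_network_alarm(raw: dict) -> dict:
    return {"source": "network", "severity": raw["level"], "text": raw["msg"]}

@register_adapter("storage-monitoring")
def adapt_storage_alarm(raw: dict) -> dict:
    return {"source": "storage", "severity": raw["sev"], "text": raw["description"]}

def collect(system: str, raw_alarm: dict) -> dict:
    """Core logic stays unchanged; a new system only needs a newly registered adapter."""
    return ADAPTERS[system](raw_alarm)

print(collect("network-monitoring", {"level": "major", "msg": "link down"}))
```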

Functions

At the adaptation layer, the maintenance center module provides adaptation capabilities to connect to peripheral systems. After an adaptation package (containing scripts, or a JAR package for some systems) is added, the maintenance center module can connect to new management systems. The module can connect to the following systems:

Network monitoring system

Storage and server monitoring system

Middleware and database monitoring system

Virtual resource pool management system

ITSM system

Existing LDAP system

Notification server (such as a mailbox or SMSGW).

By default, the maintenance center module can connect to common systems such as HUAWEI eSight, HUAWEI FusionManager, CA Spectrum and Performance Center, and other third-party management software.

The basic layer provides the following services and functions:

Centralized comprehensive monitoring services, which allow users to view resource status and topologies

Comprehensive alarm services, which collect alarm information from other systems and support alarm queries in multiple dimensions

Comprehensive performance services, which allow users to view device performance

Report management: various O&M reports are provided, including predefined unified O&M reports and reports for each layer of the IT system (server, virtualization, network, and database reports). In addition, report generation tasks can be scheduled flexibly.

The intelligence layer provides analysis functions. Users can search for information in databases and system logs, and associated information is displayed per entry.

The presentation layer provides the following functions:

Provides a unified portal from which users can go to other pages or other systems. The portal saves users' personalized settings and records their usage habits.

Provides northbound interfaces (NBIs) for adapting to various upper-layer management systems. Information about resources, alarms, and performance is open to upper-layer systems, and users can query analysis results through the API.

Resource Pool Management Module

Module Overview


The resource pool management module is dedicated to the local data center and can access hardware devices and virtualization platforms from various vendors. It dispatches resources for the global resource management subsystem based on service level and resource load by performing computing, storage, and network automation. It also provides local template and image management, monitors resources, and manages measurement.

Architecture

Figure 4-17 shows the architecture of the resource pool management module:

Figure 4-17 Architecture of the resource pool management module (access layer — portal and open API, connected to the global management layer; management layer — resource management, image management, document management, and measurement management; automation layer — orchestrator plus computing, storage, and network automation driving VMs, physical machines, block storage, firewalls, and other devices)

The resource pool management module is based on the J2EE framework and adopts the classic three-layer B/S architecture. For deployment, it supports active/standby or cluster deployment. The following functions are supported:

Manages multi-vendor hardware devices

Differences between hardware devices from various manufacturers are masked to achieve unified hardware management. A unified physical resource management system is established to effectively manage the physical resources of cloud IDCs.

Manages virtualization platforms from various manufacturers

Virtualization platforms provided by various manufacturers, such as VMware and HUAWEI FusionSphere, are managed in a unified manner.

Rapid service customization

Graphical service process design is provided to make it easy for customers to customize and expand service processes.

Functions


The resource pool management module has three layers: the access layer, management layer, and automation layer. The functions of each layer are as follows:

Access layer: includes the Portal and Open API modules.

− Portal: Enables unified management of the computing, storage, and network resources of the local data center as well as VM images, templates, and software packages, and provides dashboards that display the data center management indicators customers care about.

− Open API: Provides RESTful and SOAP APIs and an SDK.

Management layer: includes resource management, image management, and measurement management.

− Resource management: Plans the computing, storage, and network resource pools based on network location, service level, and function, allocates resources according to the requirements of the global management layer, manages capacity, and monitors the performance of the allocated resources.

− Image management: Manages VM images and software package images. Images can be imported to or exported from the local resource management subsystem and referenced by the global resource management subsystem.

− Measurement management: Records resource application, change, return, and power-on/off events in measurement files and reports the events periodically to the global resource management subsystem.

Automation layer: includes the Orchestrator and computing/storage/network automation.

− Orchestrator: Resolves resource applications and configurations into automatic allocation commands for the computing/storage/network automation, and provides graphical service process design so that customers can easily customize and expand service processes. (A minimal sketch follows this list.)

− Computing/storage/network automation: Manages physical machines, network devices, storage devices, and the virtual environment, automatically detects and configures devices over device management ports, and implements automatic virtual resource application and configuration by invoking the northbound interfaces of the virtual environment. The automation module is based on a plug-in mechanism, which facilitates the integration of servers, switches, storage devices, and virtual environments.

4.4.3 Customer Benefits

The HUAWEI ManageOne solution is dedicated to the service operation and management of the distributed cloud data center and has the following features:

Supports unified management of multiple data centers.

Enables customer self-service application for resources and policies and supports various O&M modes.

The HUAWEI ManageOne solution benefits customers in the following ways:

High resource utilization and low cost

− The server utilization rate is raised from below 10% to 50%.

− Resources can be shared across DCs and scheduled by policy.

− DCs and devices can be reused.

Agile services

− The service rollout time is shortened from 30 days to 30 minutes.

High-reliability computing environment


− High reliability of VMs is supported.

− Data and network security is ensured.

Innovative service mode

− Unified infrastructure and self-service resource management is available.

− Resource usage metering and review are supported.

− Idle data centers can be rented.

4.5 DR

The following DR solutions are based on storage arrays and are applicable only to the server + SAN storage architecture.

4.5.1 Overview

DR is the capability to recover from a disaster. Two or more systems with the same functions are deployed at different locations. They monitor each other's health status and support function switchover. If one system becomes unavailable due to an unexpected event, such as a fire, flood, earthquake, or deliberate human damage, the whole application system fails over to the DR system at a different location, ensuring continuous system running.

The DR system must provide comprehensive data protection and DR functions. When the production center fails to run properly, these functions ensure data integrity and service continuity and allow the DR center to replace the abnormal production center in a timely manner to recover services and minimize losses.

Evaluation Indexes for DR Systems

The amount of data loss and the system recovery period are used as indexes for evaluating a DR system in the industry. The recovery point objective (RPO) and recovery time objective (RTO) are the two industry-accepted standards.

RPO: indicates the point in time to which the system and data must be recovered after a disaster occurs. The RPO reflects the maximum amount of data loss the system can tolerate: the smaller the tolerable data loss, the smaller the RPO must be.

RTO: indicates the maximum duration allowed for recovering from an information system failure or service function failure caused by a disaster. The RTO reflects the maximum service interruption the system can tolerate: the more urgent the system services, the smaller the RTO must be.

The RPO relates to data loss, whereas the RTO relates to service loss. The RTO and RPO can be determined only after risk analysis and service impact analysis are conducted based on service requirements.
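As a rough illustration of how these two indexes can be estimated for a given design (simplified assumptions: the worst-case RPO of periodic asynchronous replication is one replication interval, and the RTO is the sum of detection, switchover, and application restart times), consider the following Python snippet with hypothetical numbers:

```python
def worst_case_rpo(replication_interval_min: float) -> float:
    """Periodic asynchronous replication: at most one interval of data can be lost."""
    return replication_interval_min

def estimated_rto(detect_min: float, switchover_min: float, app_restart_min: float) -> float:
    """Time from the disaster to service recovery at the DR site."""
    return detect_min + switchover_min + app_restart_min

print(f"RPO ~ {worst_case_rpo(15):.0f} min")        # replicate every 15 minutes
print(f"RTO ~ {estimated_rto(5, 10, 20):.0f} min")  # 5 + 10 + 20 = 35 minutes
```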

The international SHARE 78 standard classifies DR systems into seven grades, as shown in Figure 4-18.



Figure 4-18 DR system grades (cost increases from time-based backup through an available backup center to a remote DR center; Tier 1: PTAM; Tier 2: PTAM + hot standby site; Tier 3: electronic vaulting; Tier 4: batch/online database image or log transmission; Tier 5: software-level, two-site, two-phase commit (integrity of exchange); Tier 6: near-zero or zero data loss and remote data mirroring; Tier 7: near-zero or zero data loss, remote data mirroring, and automatic service switchover)

The Disaster Recovery Specifications for Information Systems (GB/T 20988-2007) classify DR into six levels, as shown in Figure 4-19.

Figure 4-19 DR levels (Disaster Recovery Specifications for Information Systems, GB/T 20988-2007; the levels range from media-level DR through data-level DR to application-level DR)

| DR Capability Level | Description | RTO | RPO |
| --- | --- | --- | --- |
| Level 1 | Basic support | More than 2 days | 1 to 7 days |
| Level 2 | Standby premises support | More than 24 hours | 1 to 7 days |
| Level 3 | Electronic transmission and some device support | More than 12 hours | Several hours to 1 day |
| Level 4 | Electronic transmission and full device support | Several hours to 2 days | Several hours to 1 day |
| Level 5 | Real-time data transmission and full device support | Several minutes to 2 days | 0 to 30 minutes |
| Level 6 | Zero data loss and remote cluster support | Several minutes | 0 |

The requisites shown in the figure range from weekly full backups, media management regulations (access, verification, and dump), and complete testing and drills of DR plans at the lower levels, through scheduled offsite transfer of data, communication lines, and network devices, management regulations and full-time personnel for standby premises, agreements on emergency device and network supply, and several electronic data transmissions per day, up to fully configured and ready data, communication lines, and network devices with 24/7 operation and O&M support, real-time remote data replication with automatic or centralized network switchover, and, at the highest level, real-time remote backup with zero data loss, seamless application switchovers, and remote clustered systems with real-time monitoring and automatic switchover.

A higher DR grade requires more complicated technology and more investment and has a greater impact on service systems; it is also more difficult to scale the solution and manage the DR function. Therefore, the DR solution of a data center is made by taking several factors into consideration, including service requirements, DR investment, implementation difficulty, and management complexity.

DR Technologies

DR technologies include the database-based, host-based, virtual gateway-based, and storage-based DR technologies.

Database-based DR technology

The database-based DR technology sends database logs to a different location and synchronizes the data to the remote host to guarantee consistency between the local and remote databases. This scheme has the lowest error rate because the database software itself keeps the data consistent, and it has the fastest switchover. However, it is supported only by a few database vendors with cutting-edge technologies, for example, Oracle Data Guard and DB2 HADR.

Host-based DR technology

Volume replication software or third-party data replication software installed on servers captures high-layer write I/O, and remote servers are connected over Transmission Control Protocol (TCP) or IP networks to remotely replicate data. With the host-based DR technology, the high-layer I/O is captured by OS-level processes, which consumes host CPU and memory resources. Mainstream software supports local caching to reduce this consumption. The main OS-level DR technology providers and products are Veritas's VRR, FalconStor's CDP DR set, and Vision's Double-take HA.

Virtual gateway-based DR technology

A storage gateway is added to the SAN between the front-end application servers and the back-end storage systems. The storage gateway is different from common network gateways. Take the Huawei VIS gateway as an example: it connects to servers at the front end and storage devices at the back end, controls all I/Os in the storage network, and supports local storage system applications as well as remote data replication.

The virtual gateway-based DR technology features powerful functions. Data replication is performed by the storage gateways, so the impact on host performance is minimized. In addition, virtual storage gateways help consolidate heterogeneous front-end servers and back-end storage devices of different brands.

Storage-based DR technology

The storage-based DR technology is implemented based on storage systems (such as disk arrays and NAS). Value-added functions built into the storage systems replicate data to the remote end in synchronous or asynchronous mode over an IP, dense wavelength division multiplexing (DWDM), or FC network. Devices of mainstream storage vendors support the storage-based DR technology.

Compared with the host-based DR technology, the storage-based DR technology separates data replication from host applications, which minimizes the impact on host running. In addition, data is replicated by mirroring, and the high-speed cache is used to accelerate I/O, so the time-point difference between data at the two ends is small. Moreover, storage systems support fault tolerance, which ensures running performance and data reliability.

Huawei IDC DR Solutions

Based on the different application scenarios and service requirements of the Huawei cloud platform, the Huawei IDC solution provides the cloud platform active/active VIS DR solution and the cloud platform array replication-based DR solution to support data center-level DR.


Table 4-11 Cloud platform DR solutions

| Solution Type | Solution | Function | Cost |
| --- | --- | --- | --- |
| Active/Active DR | Cloud platform active/active VIS DR solution | Both the production and DR sites can provide external services. VMs can be migrated from one site to the other. The RPO is near zero, and the RTO is minute-level. This solution is applicable only to the server + SAN storage architecture. | High |
| Active/Standby DR | Array replication-based DR solution | A graphical DR management UI is available. The RPO is minute-level, and the RTO is less than 4 hours. This solution is applicable only to the server + SAN storage architecture. | Medium |

4.5.2 Active/Active VIS DR Solution

Solution Architecture

In the active/active VIS DR solution, cloud platforms and storage arrays are deployed at both the local and remote sites. VIS nodes from the same VIS cluster and hosts from the same cluster are deployed at the local and remote sites in active/active mode. Using the mirroring technology of the VIS system, these VIS nodes provide storage access services for both sites simultaneously, and storage access services can be switched over seamlessly. Automatic DR switchover is achieved by using the VM HA function.


Figure 4-20 Active/active VIS DR solution architecture

Active/active VIS DR is based on the cross-array mirroring function of the VIS. When new data is written from a host to the VIS6600T, the VIS6600T writes the data to the production volume and the mirror volume simultaneously. Only after the data has been written to both volumes successfully does the VIS6600T return a success signal to the host. In mirroring mode, the data in the mirror volumes must be the same as that in the production volumes. Mirroring can be deployed locally and remotely. Compared with other backup technologies, mirroring provides better data protection.
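The write path described above can be summarized with the following Python sketch (a conceptual model only, not VIS6600T code; the volume objects and their write method are hypothetical):

```python
class Volume:
    """Hypothetical volume that stores blocks by logical block address (LBA)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba: int, data: bytes) -> bool:
        self.blocks[lba] = data
        return True                      # a real array would report success or failure

def mirrored_write(production: Volume, mirror: Volume, lba: int, data: bytes) -> bool:
    """Synchronous mirroring: acknowledge the host only after BOTH copies are written."""
    ok_production = production.write(lba, data)
    ok_mirror = mirror.write(lba, data)
    return ok_production and ok_mirror   # success signal returned to the host

prod = Volume("production-site-volume")
mirr = Volume("dr-site-mirror-volume")
print(mirrored_write(prod, mirr, lba=42, data=b"order-record"))   # True
print(prod.blocks[42] == mirr.blocks[42])                          # True: copies identical
```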

The VIS6600Ts at the two sites use the active/active cluster technology. The nodes generally work at the same time and concurrently process service requests from hosts, backing each other up. When one or more nodes are faulty, the other nodes automatically and rapidly take over the services of the faulty nodes, ensuring service continuity. The VIS cluster requires large Layer 2 networking between the VIS devices at the two sites.

In virtualization environments, virtualized management nodes (such as FusionManager, UHM, and VRM) are deployed in active/standby mode across the two sites, which may be far apart, to continuously support VM functions such as HA and live migration. Heartbeat communication between the active and standby nodes is required; therefore, large Layer 2 networking and an active/standby gateway redundancy mechanism are required. An aggregation switch or core switch can be used as the gateway.

The VIS6600T cluster can balance services among multiple nodes for processing. The service

sharing mode is called load balancing. Load balancing fully utilizes resources and improves

system work efficiency and performance. Users can maximize benefits from the cluster

system investment. The division of the VIS cluster is determined by a LUN. A VIS quorum

disk is used to determine which VIS uses the LUN service.


To prevent performance from being affected by the high latency of remote storage mirroring, the distance between the production site and the DR site must be less than 100 km, and the network round-trip latency between the sites must be 5 ms or less. Therefore, the storage planes of the two sites must be interconnected over optical fibers to ensure low latency and meet the actual bandwidth requirements.

Solution Selection

Restrictions and limitations:

− This solution applies only to virtualized storage with big LUNs; it does not apply to virtualized storage with small LUNs.

− This solution depends on the mirroring and cluster capabilities of the VIS6600T with big LUNs. The distance between the production site and the DR site must be less than 100 km.

− For performance and compatibility reasons, this solution uses only the FC SAN, not the IP SAN. In addition, hosts connect to VIS devices through FC connections rather than iSCSI connections.

− This solution can be used only between two data centers, not among multiple data centers.

Network bandwidth between the production site and the DR site:

Bandwidth configuration suggestions are provided for the large layer 2 networking, the FC storage links, and the IP network connection between the two sites and the quorum disk.

− Large layer 2 networking: The two sites need large layer 2 networking to support virtualization management and VIS cluster management. The large layer 2 network transmits virtualization management data, live migration data, and VIS cluster management data.

Estimated bandwidth required by virtualization management data: 100 Mbit/s (peak value)

Estimated bandwidth required by VIS cluster management data: 100 Mbit/s (peak value)

Bandwidth required by live migration of a VM: Bw = [(D + Mem x MU/Tn) x 8 x C%]/RBw

Table 4-12 Factors that affect VM live migration bandwidth

Bw: the network bandwidth (Mbit/s)
Tn: the total migration time (s); the maximum VM migration time is 20 minutes
Mem: the VM memory size (MB)
MU: the actual memory usage; the value is generally 70%
RBw: the actual bandwidth usage; the value is generally 0.8
D: the memory change rate; generally 2.5 MB/s per 1 GB of memory
C%: the memory compression rate; the current value is 100% (no compression)


For example, assume that a VM with 2 GB of memory must complete live migration in about 30 seconds. The required bandwidth is [(2.5 x 2 + 2000 x 0.7/30) x 8 x 100%]/0.8 = 516 Mbit/s.

You can use the preceding formula to calculate the bandwidth required by multiple VMs based on their specifications. Live migration is rarely performed. (A Python sketch of this calculation follows this list.)

− FC storage link

The FC switches at the two sites must be interconnected.

FC storage bandwidth: Bw = (PIO x VM) x 8/RBw

Table 4-13 Factors that affect FC storage bandwidth

Bw: the network bandwidth (Mbit/s)
PIO: the peak I/O data volume of a VM (MB/s)
VM: the number of VMs that need DR
RBw: the actual bandwidth usage; the value is generally 0.8

For example, assume that the storage bandwidth of each VM is 10 Mbit/s and that 30 VMs need DR. The total bandwidth of the FC link is then 300 Mbit/s (30 x 10 Mbit/s). (The sketch after this list also covers this formula.)

− IP network connection between the two sites and the quorum disk

You are advised to provide 1 Mbit/s of bandwidth for this IP network connection.
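The following Python sketch is added for convenience; it reproduces the two bandwidth calculations above, and the function names and default values are illustrative assumptions rather than part of the solution:

def vm_migration_bandwidth_mbps(mem_gb, tn_s, mu=0.7, d_per_gb=2.5, c=1.0, rbw=0.8):
    """Bandwidth (Mbit/s) needed to live migrate one VM within tn_s seconds."""
    d = d_per_gb * mem_gb                   # total memory change rate, MB/s
    mem_mb = mem_gb * 1000                  # VM memory size, MB
    mb_per_s = (d + mem_mb * mu / tn_s) * c
    return mb_per_s * 8 / rbw               # MB/s -> Mbit/s, divided by the link usage ratio

def fc_storage_bandwidth_mbps(peak_io_mb_per_s, vm_count, rbw=0.8):
    """FC link bandwidth (Mbit/s) for DR; peak_io_mb_per_s is the per-VM peak I/O in MB/s."""
    return peak_io_mb_per_s * vm_count * 8 / rbw

# Live migration example from the text: a 2 GB VM migrated in about 30 seconds.
print(vm_migration_bandwidth_mbps(mem_gb=2, tn_s=30))        # ~516.7 Mbit/s
# The FC example in the text sums per-VM rates that are already given in Mbit/s:
print(30 * 10)                                               # 300 Mbit/s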

4.5.3 Array Replication-based DR Solution

Solution Architecture

The array replication-based DR solution implements remote data replication between the production center and the DR center by using the remote replication function of the storage system. In this solution, the UltraVR DR management software synchronizes cloud platform management data between the production center and the DR center and provides the DR management function. When a disaster occurs in the production center, the DR center uses the UltraVR to restore the Huawei cloud platform. This ensures the continuity of the cloud platform services of the production center.


Figure 4-21 Array replication-based DR solution

In this solution, one cloud platform is deployed in the production center to provide a service system platform for users, and another cloud platform is deployed in the DR center to provide cloud DR protection. When a disaster occurs in the production center, the cloud platform and related service systems of the production center are switched to the cloud platform of the DR center, ensuring service availability.

Cloud platform management data is synchronized from the production center to the DR center by using the UltraVR. The UltraVR communicates with the VRMs and the local IP SAN, and forwards messages between the production center and the DR center through message interfaces.

The UltraVR communicates with the VRM through Representational State Transfer (REST) to obtain related information (such as VM information) and perform operations (such as starting and stopping VMs through the VRM); an illustrative sketch follows these items.

The UltraVR communicates with arrays through SMI-S or TLV to perform array-related operations, for example, issuing synchronization commands and running switchover commands on arrays.

The production center communicates with the DR center by using the UltraVR, for example, for DR configuration synchronization and association between the production center and the DR center.
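For illustration only, the following Python sketch shows the general shape of such a REST interaction. The base URL, endpoint paths, token handling, and function names are hypothetical placeholders and do not describe the actual UltraVR or VRM interfaces:

# Hypothetical illustration of a REST query to a VRM-like management node.
import requests

VRM_BASE = "https://vrm.example.com:8443"          # placeholder management address
session = requests.Session()
session.headers.update({"X-Auth-Token": "<token obtained at login>"})

def list_vms():
    """Query VM information over REST (hypothetical endpoint path)."""
    resp = session.get(f"{VRM_BASE}/service/vms", timeout=30)
    resp.raise_for_status()
    return resp.json()

def stop_vm(vm_uri):
    """Ask the management node to stop a VM (hypothetical endpoint path)."""
    resp = session.post(f"{VRM_BASE}{vm_uri}/action/stop", timeout=30)
    resp.raise_for_status()
    return resp.json()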

All VMs running on the production center cloud platform are stored in the production storage resource pool. Remote VM DR is implemented by using the remote replication technologies of Huawei OceanStor T series storage. LUNs can be individually designated as requiring DR or not requiring DR.

Remote replication is LUN-level data mirroring. The active LUN provides data access, and the standby LUN copies data from the active LUN. When the active LUN is faulty, the standby LUN can take over its services; after the active LUN recovers, services are switched back from the standby LUN to the active LUN. This ensures user application continuity and data availability.

When the remote replication relationship is established between the active LUN in the production center and the standby LUN in the DR center, an initial synchronization starts, in which all data on the active LUN is copied to the standby LUN. Synchronous or asynchronous replication is then performed, depending on the configuration. Synchronous replication requires a short distance between the two sites; therefore, this solution adopts asynchronous replication.


Asynchronous replication: Snapshots are created for the active and standby LUNs at the specified interval (1 to 1440 minutes, that is, up to one day). The active LUN snapshot ensures data consistency, and the standby LUN snapshot ensures the availability of the standby LUN data when exceptions occur during synchronization.
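As a rough planning aid only (an assumption-based estimate, not a formula from this document), the worst-case RPO of asynchronous replication can be approximated as the replication interval plus the time needed to transfer one interval's worth of changed data at the effective link rate, as in the following Python sketch:

def worst_case_rpo_minutes(interval_min, changed_mb_per_min, link_mbps, efficiency=0.7):
    """Rough worst-case RPO for asynchronous replication (planning estimate only).

    interval_min       -- replication/snapshot interval, 1 to 1440 minutes
    changed_mb_per_min -- average changed data per minute (MB)
    link_mbps          -- replication link bandwidth (Mbit/s)
    efficiency         -- usable fraction after the 20%-30% transfer loss
    """
    usable_mb_per_min = link_mbps / 10 * efficiency * 60   # 10 bits per byte rule of thumb
    transfer_min = interval_min * changed_mb_per_min / usable_mb_per_min
    return interval_min + transfer_min

# Example (illustrative): a 15-minute cycle, 300 MB of changes per minute, 100 Mbit/s link.
print(round(worst_case_rpo_minutes(15, 300, 100), 1))       # about 25.7 minutes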

Solution Selection

Restrictions and limitations:

− This solution supports only the Huawei cloud platform; it does not support heterogeneous platforms such as VMware and XenServer.

− This solution supports only virtualized storage with big LUNs; it does not support virtualized storage with small LUNs. (A single Huawei Symantec IP SAN supports only 256 groups of synchronous mirroring and 128 groups of asynchronous mirroring, which does not meet the LUN requirements.)

− This solution does not support synchronous replication.

− This solution depends on the remote replication function of the Huawei IP SAN. Because the Huawei IP SAN cannot flush VM I/O cache data to disks during remote replication, it cannot be guaranteed that all VMs that need DR can be started.

− VM configuration data is synchronized by the UltraVR, while VM disk data is replicated by the IP SAN in asynchronous mode, so the synchronization cycles of the configuration data and the disk data differ. During a DR switchover, VM configuration data is updated while VM disk data is rolled back to the previous snapshot, which can leave the two inconsistent.

Network bandwidth between the production site and the DR site

The network bandwidth for replication does not compromise host performance, but it affects the RPO of the DR system. If the bandwidth allocated to the replication links is too low and the service system in the production center generates a large amount of data, data on the production disk arrays cannot be replicated to the DR center in time; if a disaster then occurs in the production center, data may be lost. Conversely, sufficient bandwidth ensures that data in the production center is replicated to the DR center in time, which reduces the RPO.

You can calculate the required bandwidth as follows. Suppose that in a replication DR system:

− The peak data write bandwidth of the hosts is BS.
− The average data write bandwidth of the hosts is BA.
− The network bandwidth between the DR center and the production center is BI.
− The required RPO is T0.
− The network bandwidth is smaller than the disk array read/write bandwidth and the storage network bandwidth.

During a period T of peak writing, the data that accumulates in the production center is D = (BS – BI) x T. To meet the RPO requirement, D must be smaller than BI x T0. Therefore, (BS – BI) x T < BI x T0, that is, BI > (BS x T)/(T + T0).

Recommended: BA < BI < BS

The data transfer amount is expressed in B/s, while bandwidth is expressed in bit/s. As a rule of thumb that allows for protocol overhead, 10 bits are counted as 1 byte; for example, 2000 Mbit/s corresponds to 200 MB/s (2000/10).

If the IP replication bandwidth is R Mbit/s, it corresponds to (R/10) MB/s. Because 20% to 30% of the bandwidth is lost during data transfer, the maximum effective data transfer rate is (R/10) MB/s x 70%, that is, (0.7 x R/10) MB/s.

NOTE


Calculate required network bandwidth based on site requirements.
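Applying the inequality and conversion rules above, the following Python sketch estimates the minimum replication link bandwidth for a given RPO; the function names and the figures in the example are illustrative assumptions:

def min_replication_bandwidth_mbps(bs_mbps, peak_s, rpo_s):
    """Minimum link bandwidth BI (Mbit/s), derived from (BS - BI) x T < BI x T0,
    that is, BI > (BS x T)/(T + T0)."""
    return bs_mbps * peak_s / (peak_s + rpo_s)

def effective_transfer_mb_per_s(link_mbps, efficiency=0.7):
    """Usable data rate (MB/s) of an IP replication link, counting 10 bits per
    byte and keeping 70% after the 20%-30% transfer loss."""
    return link_mbps / 10 * efficiency

# Example: hosts write at a peak of 800 Mbit/s for 10 minutes; the required RPO is 30 minutes.
bi = min_replication_bandwidth_mbps(bs_mbps=800, peak_s=600, rpo_s=1800)
print(round(bi))                               # 200 Mbit/s
print(effective_transfer_mb_per_s(200))        # 14.0 MB/s of usable transfer rate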

4.5.4 Comparisons Between DR Solutions

Table 4-14 Comparisons between DR solutions

Solution: Cloud platform active/active VIS DR solution
Function:
− Both the production and DR sites can provide external services.
− VMs can be migrated from one site to the other site.
− The RPO is near zero, and the RTO is minute-level.
Application Scenario:
− The two sites provide services simultaneously.
− VMs can be migrated across sites.
− The RPO is second-level, and the RTO is minute-level.

Solution: Array replication-based DR solution
Function:
− Provides a graphical DR management page, which simplifies management and facilitates operations.
− The synchronization cycle of site data can be set flexibly based on site requirements and the site network status. The cycle affects the RPO.
− The RPO is minute-level, and the RTO is less than 4 hours.
Application Scenario:
− Active/standby DR is required for the Huawei cloud platform.
− The two sites do not provide services simultaneously. The RPO is less than 30 minutes, and the RTO is hour-level.

4.6 Backup

4.6.1 Overview

Data backup is used to protect data. Based on certain backup policies, application data is duplicated and stored on preset storage media so that it can be recovered when an online system fault occurs.

Backup System Components

Backup software
Excellent backup software incorporates acceleration, automation, DR, and consistency policies into one control platform to simplify processing, improve operation speed, reduce storage costs, and simplify management. In addition, it ensures backup system security, validity, stability, and scalability.

Backup network
Backup networks can be the SAN, LAN, MAN, WAN, or a hybrid of SAN plus LAN/MAN/WAN. Backup networks provide data transfer channels and play a crucial part in data backup implementation.

Backup storage media
Backup storage media carry the backup data and are classified into two types: physical and virtual. Storage media must be reliable; otherwise, data on the storage media will be lost.


Backup management
An excellent backup system must be equipped with qualified software and hardware. In addition, backup policies and management plans must provide high availability. Backup policies for a complex system must be configured based on application and service types, and qualified backup policies take various factors into consideration.

Backup Modes

Full backup
A full backup backs up all data or applications at a certain point in time. For example, a tape can be used to back up the entire system, including the operating system and all data in the system.

Incremental backup
An incremental backup backs up only the files that have changed since the last full or incremental backup.

Differential backup
A differential backup backs up all files that have changed since the last full backup (based on the archive bit). To restore, only the last full backup and the most recent differential backup are needed. (A sketch contrasting incremental and differential selection follows this list.)

Synthetic backup
A synthetic backup is created from a recent full backup and the subsequent incremental backups. During synthetic backup, the original data is not accessed, which minimizes the impact on the CPUs of application servers. In addition, the new synthetic backup can be used to protect data, set up new sites, and test systems.

Backup System Networking

Common backup system networking has two structures: LAN-based and LAN-free.

In the LAN-based structure, backup data is transmitted over the Ethernet. One or more servers are configured as backup servers to manage the policies and media read/write operations of the backup system, and a tape library is connected to the backup server. The LAN-based structure helps reduce investment and supports tape library sharing and centralized backup management. However, it puts high pressure on network transmission.

The LAN-free backup system is built on the SAN. It adopts an architecture in which tape libraries and disk arrays are independent Fibre Channel nodes. When multiple hosts share a tape library, data streams are transmitted directly from the disk arrays to the tape library without traversing the LAN, so no LAN bandwidth is occupied. The LAN-free structure supports centralized backup management, fast backup, and tape library sharing, and it puts little pressure on network transmission. However, it requires high investment.

Rules for Designing a Backup System

Data backup is used to ensure data restoration, relieving the worries of system operators and users. The data backup solution varies with the application scenario. Generally, a complete backup system must meet the following requirements:

Stability
The main function of a backup system is to provide a data protection method. For this reason, the stability and reliability of the backup system are the most important considerations.


The backup system must be compatible with the OS and must be able to quickly and effectively restore data when a disaster occurs.

Comprehensiveness
A complex computer network environment may contain various operating platforms (such as UNIX, NetWare, Windows NT, and VMS) and various application systems, such as Enterprise Resource Planning (ERP), databases, and clusterware. Backup software that supports a wide range of OSs, databases, and typical applications is recommended.

Automation
Many systems restrict the backup start time and backup window because of the nature of their workloads. Data backup is typically performed at midnight, when the service system carries the least load; however, such midnight operation increases the burden on system administrators. Therefore, the backup solution must provide automatic backup functions and automatically manage backup media and equipment. During the automatic backup process, the backup system must also produce logs and automatically generate alarms when exceptions occur.

High performance
An increasing amount of data is generated and updated frequently to meet growing service demand. If data backup cannot be completed during off-hours, backup performed during working hours affects system performance. This dilemma requires taking the backup speed into consideration when planning backups, so that data backup can be accomplished within the specified time window.

Effective maintenance for the service system
Data backup may affect service system performance. It is important to use effective technical measures to prevent the server system, database system, and network system from being affected by data backup. In addition, data restored by the backup system must be valid and complete.

Simple operation
Data backup is used in different areas, and the operators who perform it have different skill levels. An intuitive and simple GUI is required to shorten operators' learning curve, reduce their workload, and facilitate backup configuration.

Real time
Some key tasks require 24-hour uninterrupted operation, and some of their files may still be open during data backup. In this case, measures such as real-time file size checking and event tracing are taken to ensure that all system files are correctly backed up.

Huawei IDC Backup Solutions

Huawei provides the backup solutions described in Table 4-15 to meet the different requirements of application scenarios, application types, IT infrastructures, and investment budgets.

Table 4-15 Huawei IDC backup solutions

Solution Type: VM backup
Solution: VM-level backup
Description: This solution is based on a VM snapshot backup system developed by Huawei on the virtualization platform. It can back up VM data and does not require any agent or backup client on the VMs.

Solution Type: File-level cloud backup
Solution: CommVault cloud backup
Description: In this solution, the CommVault backup software is used to construct a centralized backup system. Files, mainstream databases, and applications on mainstream OS platforms can be backed up. The backup function is provided as a service to end users over the Internet.

Solution Type: Traditional file backup
Solution: NAS file-level backup
Description: This solution provides backup resources through shared directories.

Solution Type: Integration backup
Solution: HDP backup
Description: The HDP3500E, a Huawei-developed integrated software and hardware backup appliance, integrates mainstream NBU backup software. The HDP3500E provides powerful and reliable backup functions and features flexible networking and easy configuration.

Solution Type: Traditional backup
Solution: NBU backup
Description: In this solution, the NBU backup software is used to construct a centralized backup system, which can connect to physical tape libraries, NAS storage devices, and virtual tape libraries to provide large-capacity backup storage media.

4.6.2 VM-level Backup Solution

Solution Architecture

A local backup system is set up to create snapshots for VMs by using the snapshot function of the cloud platform and to back up those VM snapshots. The backup process does not affect VM running. When VM data is lost or mistakenly deleted, the local backup system can be used to restore the VMs.

Figure 4-22 shows the VM-level backup architecture.


Figure 4-22 VM-level backup architecture
(The figure shows FM, VRM, and CNA nodes; the HyperDP Dispatcher and HyperDP Processor components; the main storage; and the backup storage (local directory/NAS), together with the backup control flows and backup data flows.)

This solution uses the Huawei-developed HyperDP system. The HyperDP backup system consists of the HyperDP backup server and the backup storage media.

The HyperDP backup server comprises the Dispatcher and Processor components. It is recommended that the HyperDP backup server be deployed on VMs.

Dispatcher: schedules data backup and restoration tasks and manages HyperDP Processors.

Processor: manages backup and restoration tasks and backs up and restores VMs.

The HyperDP backup server role can be set to HyperDP Dispatcher or HyperDP Processor. A backup domain has only one HyperDP Dispatcher, which schedules data backup and restoration tasks and manages the HyperDP Processors. A backup system has multiple HyperDP Processors, which are responsible for VM backup and restoration.

The local directory of the backup server or NAS shared storage can be used as the backup storage media.

Local directory/NAS: The local directory refers to the storage of the backup server. When the backup server is deployed on VMs, the additional storage space allocated to those VMs can be used only by the backup server. NAS is shared storage that can be shared by multiple backup servers.

VRM: provides VM snapshot interfaces and allows queries of the physical storage information of virtual volumes.

Primary storage: creates snapshots for virtual volumes and provides differential bitmaps and backup data.

VM backup policies can be set on the HyperDP system: the time point for generating VM snapshots, the backup start time, and the backup cycle can all be configured. Periodic full backup and incremental backup are supported, and VMs can be restored to a specified time point. (An illustrative policy sketch follows.)
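The following minimal Python sketch shows one way such a policy could be represented; the field names and defaults are illustrative assumptions and do not describe the HyperDP configuration format:

from dataclasses import dataclass, field
from typing import List

@dataclass
class VMBackupPolicy:
    """Illustrative shape of a periodic VM backup policy (not a HyperDP format)."""
    vm_name: str
    snapshot_time: str = "02:00"            # daily time point for taking the VM snapshot
    full_backup_weekday: int = 6            # 0 = Monday ... 6 = Sunday: weekly full backup
    incremental_weekdays: List[int] = field(default_factory=lambda: [0, 1, 2, 3, 4, 5])
    retained_restore_points: int = 14       # how many points in time can be restored to

    def backup_type_for(self, weekday: int) -> str:
        """Return which kind of backup runs on the given weekday, if any."""
        if weekday == self.full_backup_weekday:
            return "full"
        if weekday in self.incremental_weekdays:
            return "incremental"
        return "none"

policy = VMBackupPolicy(vm_name="erp-db-01")
print(policy.backup_type_for(6), policy.backup_type_for(2))   # full incremental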

Solution Selection

Restrictions and limitations:


This solution supports VM-level backup only for cloud hosts on the Huawei cloud platform. Backup of VMs on the VMware and Citrix platforms is not supported.

This solution provides backup based on VM snapshots. The VM snapshot feature has the following limitations, which also affect VM snapshot-based backup:
− A VM supports at most 32 snapshots.
− VMs with shared volumes mounted do not support snapshots.
− Linked clone VMs do not support snapshots.
− VMs created from templates do not support snapshots.

The IP SAN with big LUNs supports only virtual storage-based VM backup and restoration; block device-based VM backup and restoration are not supported. During VM backup, the related disk data and some configuration data of the VM are backed up; data in the VM memory and on peripherals such as CD-ROMs and USB devices is not backed up.

Backup data integrity depends on the ability to create consistent snapshots. FusionCompute V100R003C00 supports creating consistent snapshots only for Windows OSs that support the Volume Shadow Copy Service (VSS). (VMware and Citrix have the same limitation.) The consistent snapshot function depends on the VSS capability of the Windows OS. It reduces the probability of a blue screen of death (BSOD) for VMs that run VSS-capable Windows OSs, but it can neither eliminate BSOD nor ensure data integrity for Linux OSs and Windows OSs that do not support VSS. If a BSOD occurs on a VM and the VM cannot be restored after a restart, you are advised to use backup data from another time point to restore the VM. (An illustrative pre-check based on these limitations follows.)
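The snapshot limitations above can be summarized as a simple pre-check; the Python sketch below uses a hypothetical VM description and is not part of the HyperDP product:

from dataclasses import dataclass

@dataclass
class VMInfo:
    """Hypothetical VM attributes relevant to snapshot-based backup."""
    snapshot_count: int
    has_shared_volume: bool
    is_linked_clone: bool
    created_from_template: bool

def can_use_snapshot_backup(vm: VMInfo) -> bool:
    """Apply the documented snapshot limitations as a simple eligibility check."""
    return (vm.snapshot_count < 32             # at most 32 snapshots per VM
            and not vm.has_shared_volume       # shared volumes block snapshots
            and not vm.is_linked_clone         # linked clone VMs block snapshots
            and not vm.created_from_template)  # template-created VMs block snapshots

print(can_use_snapshot_backup(VMInfo(3, False, False, False)))    # True
print(can_use_snapshot_backup(VMInfo(32, False, False, False)))   # False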

4.6.3 CommVault Cloud Backup Solution

Solution Architecture

The CommVault cloud backup solution uses the CommVault Simpana backup management software to build a centralized backup system locally, while end users install backup client software on their computers. Files, mainstream databases, and mainstream application systems on users' computers can be backed up and restored through the backup client.

CommVault Simpana provides a centralized backup system management platform. Simpana integrates the backup and recovery, archiving, replication, resource, and search modules, which simplifies data lifecycle management and ensures rapid and reliable data backup and restoration.

A CommVault cloud backup system consists of the backup management software, the backup server, and the backup storage device. The backup management software is the core of the entire backup system; the current version uses the CommVault Simpana backup management software.

Simpana includes the following components:

CommServe
The CommServe manages all data backup, archiving, and recovery operations. It is also responsible for maintaining the CommCell configuration and data storage information, and it manages all tasks, policies, user security, and module licensing. Only control information passes through the CommServe software module, not the data itself. The CommServe also houses the metadata catalog, a database that records the nature and location of the stored data. Its centralized event manager logs all events and provides unified notification of important events.

MediaAgent


The MediaAgent transfers data between the client computers and the storage media. Each MediaAgent communicates locally or remotely with one or more storage devices, which contain the storage media. The MediaAgent software is designed to be storage-media independent and is therefore capable of supporting a wide variety of storage models.

iDataAgent
A client system provides the resources necessary for an iDataAgent to access the data. Clients can be either independent physical machines on the network or VMs in a clustered environment. Data accessed from the client can reside on local disks or on file systems mounted over a LAN or SAN. Client systems must be accessible from the CommServe and the supporting MediaAgent at all times during backup and restoration operations. This connection is necessary to control the operations and to update the indexing and tracking data on these systems.

Figure 4-23 shows the CommVault backup solution architecture.

Figure 4-23 CommVault backup solution architecture
(The figure shows the data center intranet with the backup server, media server, and storage behind a firewall, a DMZ area, and the backup agents of two tenants, together with the backup data flow and backup control flow.)

The CommServe and MediaAgent are deployed on the data center intranet, and the Proxy Server and Web Console Server are deployed in the DMZ for access by external users. Backup data flows are transmitted to the Proxy Server first and then to the backup system on the intranet; the backup data flow is indicated by red dotted lines in the preceding figure. The Web Console Server provides a web console GUI that allows users to view, back up, and restore data; the backup control flow is indicated by green dotted lines in the preceding figure.

Tenants install backup agents locally and access the Web Console Server to perform operations such as configuring backup policies. The Simpana backup system isolates tenants: after tenants access the Simpana backup system, they can view and operate only their own resources.


User backup data flows must be transmitted to the Proxy Server first, and the Proxy Server then sends them to the MediaAgent. On the Simpana backup system, users can configure load balancing policies to prevent any single MediaAgent from being overloaded.

Solution Selection

Restrictions and limitations:

This solution does not support backup and restoration for the entire OS.

This solution does not support LAN-free and Server-free backup architectures.

4.6.4 Comparison Between Backup Solutions

Table 4-16 Comparison between backup solutions

Solution Type: VM backup
Solution: VM-level backup
Function:
− Supports system volume backup.
− Supports data backup and restoration for a single VM.
− It is not required to install backup agents on VMs.
Application Scenario:
1. Public clouds provide VM backup services.
2. The VMs running on the Huawei cloud platform are backed up centrally.

Solution Type: File-level cloud backup
Solution: CommVault cloud backup
Function:
− Supports file data backup of VMs and physical machines.
− Supports the backup of mainstream file systems, databases, and applications.
− The backup function serving as a service is provided for end users over the Internet, and end users can customize backup policies.
Application Scenario: Public clouds provide data cloud backup services.


5 IDC Reference Models

5.1 Reference Model for a FusionStorage Small-scale Scenario

5.1.1 Networking

Figure 5-1 Networking in a FusionStorage small-scale scenario
(The figure shows a resource pool of 30 RH2288H V2 servers connected over GE/10GE links to CE6850 switches and a pair of E1000E-X5 firewalls facing the Internet, with a 3328 switch for the GE O&M network and the CSB/eSight management systems. Resource pool capabilities: 1268 vCPU, 2934 GB memory, 238.32 TB storage, and four cabinets.)

The reference model for a small-scale scenario supports a maximum of 30 servers to provide 1268 vCPUs, 2934 GB of memory, and 238.32 TB of storage, and offers cloud host and VPC services.


In the small-scale scenario, the access layer and core layer are integrated to reduce the network costs of small-scale projects.

A pair of E1000E-X5 firewalls is used to provide the vFW service and other security features.

CSB and ManageOne hardware and software are configured to provide cloud host operation and resource pool maintenance capabilities.

5.1.2 Device Configuration

Table 5-1 Device configuration in a FusionStorage small-scale scenario

Computing and storage devices
− RH2288H server (MCNA), quantity 2, computing storage resource pool
  CPU: E5-2695; Memory: 128 GB; Hard disk: two 600 GB SAS disks, twelve 2 TB SATA disks, and one 800 GB SSD card; RAID card: SR120 SAS/SATA RAID card; NIC: two 10GE NICs
− RH2288H server (LCNA), quantity 3
  CPU: E5-2695; Memory: 128 GB; Hard disk: two 600 GB SAS disks, twelve 2 TB SATA disks, and one 800 GB SSD card; RAID card: SR120 SAS/SATA RAID card; NIC: two 10GE NICs
− RH2288H server (SCNA), quantity 25
  CPU: E5-2695; Memory: 128 GB; Hard disk: two 300 GB SAS disks, twelve 2 TB SATA disks, and one 800 GB SSD card; RAID card: SR120 SAS/SATA RAID card; NIC: two 10GE NICs
− RH2288 server (CSB), quantity 2, deploys the CSB database
  CPU: E5-2650; Memory: 128 GB; Hard disk: two 300 GB SAS disks; NIC: GE NIC
− IP SAN S5500T, quantity 1
  Cache: 8 GB; Hard disk: five 600 GB SAS disks
− RH2288 server, quantity 1, deploys eSight
  CPU: two E5-2640 CPUs; Memory: 32 GB; Hard disk: three 300 GB SAS disks (2.5-inch); LOM: four GE LOMs; NIC: one 4 x GE NIC; RAID card: SR320BC+BBU; PS: two 750 W power supplies

Network
− BMC management access switch (3328), quantity 4, supports server BMC and other device management access
− Management access switch (one S5328, which provides two 10GE optical modules), quantity 2, supports CSB and eSight service and storage access
− Core switch (one CE6850, which provides thirty 10GE optical modules and two 40GE optical modules), quantity 2
− Firewall (one E1000E-X5, which provides two 10GE optical modules), quantity 2

Virtualization software
− FusionSphere V100R003C10 platinum edition, quantity 60

Distributed storage software
− FusionStorage virtualization edition (per TB), including a one-year exclusive upgrade service, quantity 720

O&M system software
− CSB V100R001C20, quantity 1

O&M management software
− ManageOne OC, quantity 1
− eSight, quantity 1


5.2 Reference Model for a FusionStorage Large-scale Scenario

5.2.1 Networking

Figure 5-2 Networking in a FusionStorage large-scale scenario
(The figure shows up to 128 RH2288H V2 servers in groups of 40 connected over 10GE to stacked CE6850 access switches, which uplink over 40GE to a pair of CE12804 core switches; a pair of E8000E-X8 firewalls faces the Internet, and a 3328 switch carries the GE O&M network. Resource pool capabilities: 5568 vCPU, 12984 GB memory, 1022.32 TB storage, and 15 cabinets.)

The reference model for a large-scale scenario supports a maximum of 128 servers to provide 5568 vCPUs, 12984 GB of memory, and 1022.32 TB of storage, and offers cloud host and VPC services.

The large-scale scenario network consists of the access layer and the core layer. Each pair of stacked CE6850 access switches provides access for 48 servers and a pair of 40GE uplink ports to connect to the core switches. The core switches are a pair of CE12804s.

A pair of E8000E-X8 firewalls is used to provide the vFW service and other security features.

CSB and ManageOne hardware and software are configured to provide cloud host operation and resource pool maintenance capabilities.


5.2.2 Device Configuration

Table 5-2 Device configuration in a FusionStorage large-scale scenario

Computing and storage devices
− RH2288H server (MCNA), quantity 2, computing storage resource pool
  CPU: E5-2695; Memory: 128 GB; Hard disk: two 600 GB SAS disks, twelve 2 TB SATA disks, and one 800 GB SSD card; RAID card: SR120 SAS/SATA RAID card; NIC: two 10GE NICs
− RH2288H server (LCNA), quantity 3
  CPU: E5-2695; Memory: 128 GB; Hard disk: two 600 GB SAS disks, twelve 2 TB SATA disks, and one 800 GB SSD card; RAID card: SR120 SAS/SATA RAID card; NIC: two 10GE NICs
− RH2288H server (SCNA), quantity 123
  CPU: E5-2695; Memory: 128 GB; Hard disk: two 300 GB SAS disks, twelve 2 TB SATA disks, and one 800 GB SSD card; RAID card: SR120 SAS/SATA RAID card; NIC: two 10GE NICs
− RH2288 server (CSB), quantity 2, deploys the CSB database
  CPU: E5-2650; Memory: 128 GB; Hard disk: two 300 GB SAS disks; NIC: GE NIC
− IP SAN S5500T, quantity 1
  Cache: 8 GB; Hard disk: five 600 GB SAS disks
− RH2288 server, quantity 1, deploys eSight
  CPU: two E5-2640 CPUs; Memory: 32 GB; Hard disk: three 300 GB SAS disks (2.5-inch); LOM: four GE LOMs; NIC: one 4 x GE NIC; RAID card: SR320BC+BBU; PS: two 750 W power supplies

Network
− Management access switch (3328), quantity 4, supports server BMC and other device management access
− Access switch (one CE6850, which provides forty 10GE optical modules and one 40GE optical module), quantity 2
− Core switch (one CE12804, which provides one 24-port 40GE optical interface board, four 40GE optical modules, and one 48-port GE interface board), quantity 2
− Firewall (one E8000E-X8, which provides two LPUF40 interface boards, four 10GE interface subcards, and one 80 G X8&X16 firewall service board), quantity 2

Virtualization software
− FusionSphere V100R003C10 platinum edition, quantity 256

Distributed storage software
− FusionStorage virtualization edition (per TB), including a one-year exclusive upgrade service, quantity 3072

O&M system software
− CSB V100R001C20, quantity 1

O&M management software
− ManageOne OC, quantity 1
− eSight, quantity 1

5.3 Reference Model for the Rack Server + SAN Scenario

5.3.1 Networking

Figure 5-3 Networking in the rack server + SAN scenario
(The figure shows 32 RH2288H V2 servers with separate GE service, management, and storage planes connected through stacked CE5850 access switches to a pair of CE6850 core switches, S5500T storage, and a pair of firewalls facing the Internet, plus the O&M network. Resource pool capabilities: 366 vCPU, 1107 GB memory, 89.4 TB storage.)

The reference model for the RH2288 + SAN scenario supports a maximum of 32 servers to provide 366 vCPUs, 1107 GB of memory, and 89.4 TB of storage, and offers cloud host and VPC services.


The RH2288 + SAN scenario network consists of the access layer and the core layer. The access layer supports the access of the management plane, service plane, and storage plane, and these three planes are isolated from one another. The access layer uses pairs of CE5850 switches, each providing 48 GE access ports and two 10GE uplink ports to the core layer. The core layer uses a pair of CE6850 switches, each providing forty-eight 10GE ports to support uplink access and SAN access.

A pair of E1000E-X5 firewalls is used to provide the vFW service and other security features.

CSB and ManageOne hardware and software are configured to provide cloud host operation and resource pool maintenance capabilities.

5.3.2 Device Configuration

Table 5-3 Device configuration in the rack server + SAN scenario

Computing resource
− RH2288H server, quantity 32, computing resource pool
  CPU: E5-2695; Memory: 128 GB; Hard disk: two 600 GB SAS disks; NIC: three dual-port GE NICs
− RH2288 server (CSB), quantity 2, deploys the CSB database
  CPU: E5-2650; Memory: 128 GB; Hard disk: two 300 GB SAS disks; NIC: dual-port GE NIC
− RH2288 server, quantity 1, deploys eSight
  CPU: two E5-2640 CPUs; Memory: 32 GB; Hard disk: three 300 GB SAS disks (2.5-inch); LOM: four GE LOMs; NIC: one 4 x GE NIC; RAID card: SR320BC+BBU; PS: two 750 W power supplies

Storage resource
− OceanStor S5500T, quantity 2, provides storage resources
  Cache: 16 GB; Enclosures: one controller enclosure and three disk enclosures; Hard disk: seventy-two 600 GB SAS disks
− OceanStor S5500T, quantity 6
  Cache: 16 GB; Enclosures: one controller enclosure and two disk enclosures; Hard disk: seventy-two 600 GB SAS disks

Network
− Server access switch (one CE5850, which provides forty-eight GE ports and two 10GE optical ports with optical modules), quantity 8, accesses server services, management, and storage
− Core switch (one CE6850, which provides twenty-four 10GE optical modules), quantity 2, converges services, management, and storage
− Firewall (E1000E-X5), quantity 2

Virtualization software
− FusionSphere V100R003C10 platinum edition, quantity 64

O&M system software
− CSB V100R001C20, quantity 1

O&M management software
− ManageOne OC, quantity 1
− eSight, quantity 1


5.4 Reference Model for the Blade Server + SAN Scenario

5.4.1 Networking

Figure 5-4 Networking in the blade server + SAN scenario
(The figure shows two E9000 enclosures with 32 CH121 blades connected over 10GE to a pair of stacked CE6850 switches, S5500T storage, and a pair of firewalls facing the Internet, plus the GE management and O&M networks. Resource pool capabilities: 484 vCPU, 1180 GB memory, 117 TB storage.)

The reference model for the E9000 + SAN scenario supports a maximum of 32 servers to provide 484 vCPUs, 1180 GB of memory, and 117 TB of storage, and offers cloud host and VPC services.

In the E9000 + SAN scenario, the access layer and core layer are integrated. The switching modules of the server enclosures provide access for the server blades, and the core layer uses a pair of CE6850 switches to support uplink access and SAN access.

A pair of E1000E-X5 firewalls is used to provide the vFW service and other security features.

CSB and ManageOne hardware and software are configured to provide cloud host operation and resource pool maintenance capabilities.

5.4.2 Device Configuration


A Acronyms and Abbreviations

ACL access control list

ASPF application specific packet filter

CIFS common Internet file system

CLI command-line interface

CSB Cloud Service Brokerage

CSS cluster switch system

DHCP Dynamic Host Configuration Protocol

DMZ demilitarized zone

EBS Elastic Block Storage

EIP elastic IP

ELB Elastic Load Balance

ICP Internet content provider

IDC Internet Data Center

ISM Integrated Storage Management

ISP Internet service provider

ISV independent software vendor

ITIL Information Technology Infrastructure Library

MPP Massively Parallel Processing

NAT Network Address Translation

NFS Network File System

RAID redundant array of independent disks

SAN storage area network

SAS serial attached SCSI

SATA Serial ATA


SLA service level agreement

SSD solid-state drive

UDS Universal Distributed Storage

VFW virtual firewall

VLAN virtual local area network

VM virtual machine

VNC Virtual Network Computing

VPC virtual private cloud

VPN virtual private network

VRF virtual routing and forwarding