VxRack™ System 1000 with Neutrino
Version 1.1

Administrator Guide
302-003-040
01
Copyright © 2016 EMC Corporation. All rights reserved. Published in the USA.
Published August 2016
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).
EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.EMC.com
2 VxRack System 1000 with Neutrino 1.1 Administrator Guide
CONTENTS

Figures ... 7
Tables ... 9

Chapter 1  VxRack System with Neutrino overview ... 11
  About this guide ... 12
  VxRack Neutrino introduction ... 12
  VxRack Neutrino end users ... 13
  Conceptual architecture ... 14
  Hardware ... 15
    Brick ... 15
    Node ... 17
    Rack ... 20
  Software ... 27
    Base software ... 27
    Platform Service ... 28
    Cloud Compute Service ... 29
    EMC Native Hybrid Cloud ... 32
    EMC Elastic Cloud Storage object storage ... 33

Chapter 2  Account overview ... 35
  Account (domain) ... 36
    Account tenancy ... 38
  Default account ... 39
  Account creation ... 40
    Initial project of an account ... 41

Chapter 3  Roles overview ... 43
  VxRack Neutrino account-scoped roles ... 44
    Administrator roles ... 44
    Monitor roles ... 46
    Mapping VxRack Neutrino roles to OpenStack Keystone roles ... 46
  OpenStack project-scoped roles ... 48
  Tasks performed by role ... 49

Chapter 4  Get started with VxRack Neutrino ... 55
  Account setup administrative workflow ... 56
  Log in to VxRack Neutrino ... 57
  Add nodes to the Cloud Compute Service ... 58
  VxRack Neutrino Dashboard UI ... 59
    Health ... 59
    Node allocation ... 61
    Cloud Compute Service ... 61
    Storage ... 62
    System ... 63
    Alerts ... 63
Accounts 65
Create an account......................................................................................... 66Add users and groups to an account..............................................................66
Add local users to an account.......................................................... 66Add local groups to an account........................................................ 69Add external users/groups from an LDAP/AD server to an account... 70
Edit an account............................................................................................. 71Delete an account......................................................................................... 71
Users/Groups 73
Add a user.....................................................................................................74Edit a user.....................................................................................................74Disable a user............................................................................................... 74Delete a user.................................................................................................75Add a group.................................................................................................. 75Edit a group...................................................................................................75Delete a group...............................................................................................76
Projects 77
Create project................................................................................................78Add a user to a project.................................................................................. 78Add a group to a project................................................................................ 79Remove a user from a project........................................................................ 79Remove a group from a project...................................................................... 80Edit a project.................................................................................................80View access roles in a project........................................................................80Disable a project........................................................................................... 81Delete project................................................................................................81
Roles 83
Edit a user's account-level role or project inherited role.................................84Edit a user's project-level role....................................................................... 84Edit a group's account-level role or project inherited role...............................85Edit a group's project-level role..................................................................... 85
Cloud Compute Service 87
Cloud compute introduction..........................................................................88Overview tab................................................................................................. 88
Instances......................................................................................... 88vCPUs.............................................................................................. 90Memory............................................................................................91Nova used storage........................................................................... 91OpenStack Mitaka components....................................................... 91
Volumes tab..................................................................................................91Configuration tab.......................................................................................... 92
Aodh................................................................................................92Ceilometer....................................................................................... 93Cinder.............................................................................................. 94
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
CONTENTS
4 VxRack System 1000 with Neutrino 1.1 Administrator Guide
Glance............................................................................................. 95Heat.................................................................................................96Horizon............................................................................................ 96Memcached..................................................................................... 97Neutron............................................................................................97Neutron network.............................................................................. 98Nova compute..................................................................................99Nova................................................................................................ 99RabbitMQ.......................................................................................101ScaleIO.......................................................................................... 102
OpenStack Dashboard tab.......................................................................... 104Manage tab.................................................................................................104
Nodes 105
Node information........................................................................................ 106Nodes view.................................................................................... 106Node details view...........................................................................110
Node actions...............................................................................................113Add a node to the Cloud Compute Service......................................113Remove a node from the Cloud Compute Service............................114Suspend a node.............................................................................115Resume a node.............................................................................. 116Reboot a node................................................................................116Shut down a node.......................................................................... 117Power on a node............................................................................ 118Power off a node............................................................................ 118Reset a node.................................................................................. 119Transfer a platform node................................................................ 120Delete a node.................................................................................121
Networks 123
Networks.....................................................................................................124
Storage 125
Introduction to VxRack Neutrino storage......................................................126Disk architecture......................................................................................... 127
Performance node..........................................................................127Capacity node................................................................................ 128
ScaleIO components................................................................................... 128Protection domains and storage pools........................................................ 130
Storage pool node limits................................................................ 131Volumes......................................................................................................133
Ephemeral and persistent volumes................................................ 133How volumes are created in OpenStack......................................... 134How volumes are provisioned from storage pools...........................135
ScaleIO limitations......................................................................................136Volume size is in 8 GB increments..................................................136Used storage capacity in VxRack Neutrino is double the size of avolume created in OpenStack.........................................................136
Storage UI page...........................................................................................137Overview tab..................................................................................138Frontend tab.................................................................................. 140Backend tab...................................................................................140
Chapter 10
Chapter 11
Chapter 12
CONTENTS
VxRack System 1000 with Neutrino 1.1 Administrator Guide 5
Components 143
Introduction to Platform and Cloud Compute Service components...............144Platform Service components......................................................................145Cloud Compute Service components........................................................... 147
Reports 149
Overview..................................................................................................... 150Reports summary........................................................................................ 150Health......................................................................................................... 151Performance ...............................................................................................151Alerts.......................................................................................................... 152
Modify alert definitions.................................................................. 152Settable alerts............................................................................... 156Actions for managing alerts............................................................159Send alerts through email.............................................................. 159
Capacity......................................................................................................160Simulate adding instances.............................................................161
Chargeback.................................................................................................161Edit chargeback service level settings............................................ 162Export and import chargeback rule definitions............................... 163
Inventory.....................................................................................................163License....................................................................................................... 164Logs............................................................................................................ 164Playbook logs..............................................................................................165Customize columns in report tables............................................................ 165Toggle the legend in report tables............................................................... 166Set a reporting period..................................................................................166Export a report............................................................................................ 168
License, ESRS, upgrade, and security settings 171
License....................................................................................................... 172Obtain license................................................................................172Upload license............................................................................... 172View license...................................................................................172
EMC Secure Remote Support....................................................................... 173Register with ESRS......................................................................... 173Edit ESRS settings.......................................................................... 173Disable call home.......................................................................... 174Deregister from ESRS..................................................................... 174
Upgrade VxRack Neutrino software..............................................................175Set EMC Online Support credentials...............................................175Upload upgrade package and perform upgrade..............................176What to expect after a Base OS patch upgrade............................... 176What to expect after a Services patch upgrade............................... 176
Security certificates.....................................................................................177Generate security certificates......................................................... 177Upload a custom NGINX security certificate.................................... 177
Chapter 13
Chapter 14
Chapter 15
CONTENTS
6 VxRack System 1000 with Neutrino 1.1 Administrator Guide
FIGURES

1   VxRack Neutrino UI manages and supports the OpenStack environment ... 14
2   VxRack Neutrino conceptual architecture ... 15
3   Front and back views of performance brick ... 16
4   Front and back views of capacity brick ... 17
5   Minimum rack configurations ... 21
6   Maximum single rack configurations ... 23
7   Maximum rack configuration with performance bricks ... 24
8   Maximum rack configuration with capacity bricks ... 25
9   Four-rack configuration example with mixed performance and capacity bricks ... 26
10  How cloud compute node capacity is consumed in OpenStack ... 31
11  VxRack Neutrino Cloud Compute Service and Native Hybrid Cloud integration ... 32
12  VxRack Neutrino account maps to an OpenStack Keystone domain ... 37
13  Adding LDAP users to an account ... 38
14  Enterprise Cloud Administrator managing accounts ... 39
15  Service provider Cloud Administrator managing accounts ... 39
16  Local users in the Default account ... 39
17  VxRack Neutrino account structure ... 40
18  Automatic creation of the initial project and network at account creation ... 41
19  Mapping of VxRack Neutrino administrator roles to OpenStack Keystone roles ... 47
20  Mapping of VxRack Neutrino monitor roles to OpenStack Keystone roles ... 47
21  VxRack Neutrino administrator workflow ... 57
22  OpenStack consumption of VxRack Neutrino ScaleIO block storage ... 127
23  ScaleIO storage devices on a performance cloud compute node ... 128
24  ScaleIO storage devices on a capacity cloud compute node ... 128
25  VxRack Neutrino ScaleIO components on nodes ... 130
26  Performance and capacity protection domains and storage pools ... 131
27  Performance storage pool node limits ... 132
28  Capacity storage pool node limits ... 132
29  Even distribution of volume data chunks across devices in the performance storage pool ... 136
30  Raw storage capacity in the Storage page ... 137
31  Front-end and back-end block storage ... 138
32  Component map for Platform and Cloud Compute nodes ... 145
TABLES

1   Performance brick (p series) specifications ... 16
2   Capacity brick/node (i series) specifications ... 17
3   Performance node cloud compute and memory specifications ... 17
4   Performance node useable storage ... 18
5   Capacity node useable storage ... 19
6   Minimum rack specifications ... 21
7   Maximum single rack specifications ... 23
8   Maximum rack specifications ... 26
9   VxRack Neutrino base OS software ... 27
10  VxRack Neutrino provisioning of OpenStack vCPUs ... 29
11  VxRack Neutrino account-scoped roles ... 44
12  OpenStack project roles ... 48
13  Tasks performed by VxRack Neutrino roles and by project administrator and member roles ... 49
14  Dashboard health icons ... 59
15  Alert icon severity description ... 63
16  Examples of possible local users with their account and project roles ... 68
17  LDAP/AD settings ... 70
18  Default OpenStack flavors ... 89
19  Volume parameters ... 91
20  Default Aodh configuration properties ... 93
21  Default Ceilometer configuration properties ... 93
22  Default Cinder configuration properties ... 94
23  Default Glance configuration properties ... 95
24  Default Heat configuration properties ... 96
25  Default Horizon configuration properties ... 96
26  Default memcached configuration properties ... 97
27  Default Neutron configuration properties ... 97
28  Default Neutron network configuration properties ... 98
29  Default Nova Compute configuration properties ... 99
30  Default Nova configuration properties ... 99
31  Default RabbitMQ configuration properties ... 101
32  Default ScaleIO configuration properties ... 102
33  Infrastructure icons ... 106
34  Node health icons ... 106
35  Node status states ... 107
36  Service icons ... 109
37  Disk and storage device information displayed on the Disks tab ... 111
38  Node actions in the Nodes page ... 113
39  Summary of volume characteristics in OpenStack and VxRack Neutrino ... 134
40  Volume creation methods in OpenStack ... 134
41  How VxRack Neutrino used capacity is calculated ... 137
42  Tiles on the Overview tab of the Storage page ... 138
43  Platform Service components ... 146
44  Cloud Compute Service components ... 148
45  List of reports shown in the VxRack Neutrino UI ... 150
46  Summary of health reports ... 151
47  Summary of performance reports ... 151
48  Summary of alert reports ... 152
49  Alerts that can be set by the Cloud Administrator in the VxRack Neutrino UI Reports section ... 156
50  Administrator options for managing alerts ... 159
51  Summary of capacity reports ... 160
52  Summary of system inventory reports ... 163
53  Summary of licensing reports ... 164
54  Summary of logs reports ... 164
55  Examples of previous, last, and current time selection settings ... 168
CHAPTER 1
VxRack System with Neutrino overview
This chapter provides a product description of VxRack™ System with Neutrino and its conceptual architecture.
• About this guide ... 12
• VxRack Neutrino introduction ... 12
• VxRack Neutrino end users ... 13
• Conceptual architecture ... 14
• Hardware ... 15
• Software ... 27
About this guide

The content in this document is directed at VxRack™ System with Neutrino administrators who have access to the VxRack Neutrino management UI. Administrators include the Cloud Administrator and the Account Administrator. You can learn about these administrator roles in VxRack Neutrino account-scoped roles on page 44. Cloud and Account Administrators perform the following tasks.
• Provide and manage an open source infrastructure (compute, network, and storage resources) for users who work in the OpenStack and Pivotal Cloud Foundry environments.
• Manage users and roles in accounts and projects. You can read more about these concepts in VxRack Neutrino account-scoped roles on page 44.
This guide focuses on the tasks, workflows, and concepts related to administrator functionality in the VxRack Neutrino UI. It does not provide in-depth detail on OpenStack operations. For OpenStack operations performed in the OpenStack Dashboard UI, refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide and to http://docs.openstack.org/ for OpenStack documentation for the Mitaka release.
VxRack Neutrino introduction

VxRack Neutrino is a turnkey private cloud-in-a-box that provides a platform for cloud-native, Platform 3 applications. It delivers a web-scale, self-scaling, self-resilient application infrastructure for a secure, elastic private cloud. VxRack Neutrino enables enterprises and internet or application service providers to rapidly deploy an OpenStack environment, which allows them to simply manage cloud compute services for their users.
VxRack Neutrino is a hyper-converged infrastructure platform that uses OpenStack for cloud computing with built-in hardware for block storage. Hyper-converged infrastructure integrates logically independent, software-defined compute, storage, and networking technologies on scale-out hardware.
VxRack Neutrino works well for customers who have the following use case needs:
• Cloud-native applications. The VxRack Neutrino platform limits vendor lock-in, which enables customers to develop and deploy cloud-native applications that are elastic, resilient, and horizontally scalable.
• DevOps environment. The VxRack Neutrino platform treats infrastructure like code, which helps IT move to a DevOps model while maintaining regulatory compliance.
• Less-critical workloads. The VxRack Neutrino platform runs workloads that do not require a Platform 2 infrastructure, such as test/development environments and less-critical services like SharePoint and mobile applications.
Key benefits of VxRack Neutrino include:
l Turnkey private cloud can be efficiently managed, provisioned, and monitored via the intuitive VxRack Neutrino management UI.
l OpenStack configuration and deployment is simplified and streamlined.
l Physical infrastructure capacity can be dynamically shifted to instantly respond to changing consumer computing demands.
l Built-in chargeback reporting is provided to easily track cloud compute resource and service costs.
VxRack System with Neutrino overview
12 VxRack System 1000 with Neutrino 1.1 Administrator Guide
l Infrastructure capacity planning and forecasting is easily accomplished in the UI.
VxRack Neutrino end users
VxRack Neutrino is designed for enterprises and internet or application service providers that need to set up complex OpenStack environments quickly and to grow compute and storage capacity on the fly. Datacenter and network administrators in these organizations can use the UI to perform the following tasks in the role of a Cloud or Account Administrator.
l Set up and manage the physical infrastructure of the OpenStack private cloud
l Add, remove, and manage users who work in OpenStack, assign them to projects, and give them roles so they have the appropriate access to compute, network, and storage resources
l View reports on the physical and virtual resource use and capacity of the OpenStack private cloud
Users work in the OpenStack Dashboard UI to manage virtual resources.
Adding nodes to the Platform and Cloud Compute Services as part of the initial VxRack Neutrino installation creates a new OpenStack environment, and through the UI, administrators can view and manage the resources in the OpenStack environment. The OpenStack environment that is created is a pure OpenStack environment because the VxRack Neutrino software management layer sits below it. There are two sets of users in these dual environments: the VxRack Neutrino Cloud and Account Administrators and Monitors, who can log in to and use the VxRack Neutrino UI, and the users who work in the OpenStack Dashboard UI and cannot log in to and use the VxRack Neutrino UI.
VxRack Neutrino UI
You can access the UI if you are a:
l VxRack Neutrino Cloud Administrator
l VxRack Neutrino Account Administrator
l VxRack Neutrino Cloud Monitor
l VxRack Neutrino Account Monitor
Each type of administrator can access all or parts of the UI, depending on their administrator role. For more information on the administrator roles, refer to VxRack Neutrino account-scoped roles on page 44.
Each type of monitor can access all or parts of the Reports section of the UI, depending on their monitor role. For more information on the monitor role, refer to Monitor roles on page 46.
The VxRack Neutrino Cloud Administrator uses the UI to set up the OpenStack-based private cloud and manage its infrastructure to accommodate the computing demands of the users who work in OpenStack and Pivotal Cloud Foundry environments. The Cloud Administrator manages the compute, network, and storage infrastructure that supports the cloud, but does not actually use the cloud. The Cloud Administrator can view all parts of the UI and can perform any task in the UI.
The Cloud Administrator can give users the administrator role within the scope of an account to make them Account Administrators. The Account Administrators can then log in to the UI to manage the users and projects in their respective accounts. They can also manage the resources within projects, such as OpenStack virtual machines (instances) and volumes.
The Cloud Monitor has a read-only view of all the reports listed under the Reports section in the UI; the reports contain data on the entire VxRack Neutrino private cloud.
The Account Monitor has a read-only view of a subset of reports under the Reports section of the UI; the data in the reports is limited to the account for which the Account Monitor is authorized.
OpenStack environment
Users work in the OpenStack Dashboard UI to perform their daily computing tasks (launching instances, creating volumes, provisioning resources, and so on) within OpenStack projects. Users access and consume the compute, storage, and network resources provided by VxRack Neutrino, but they cannot log in to the VxRack Neutrino UI.
Figure 1 VxRack Neutrino UI manages and supports the OpenStack environment
(The figure shows the VxRack Neutrino Management UI, used by VxRack Neutrino administrators and monitors, alongside the OpenStack Dashboard UI, used by OpenStack users. Adding nodes to the VxRack Neutrino Cloud Compute and Platform Services creates a new OpenStack environment.)
Conceptual architecture
VxRack Neutrino is an EMC-built integrated system that consists of software services running on top of pre-configured, racked hardware components. VxRack Neutrino hardware and software are described in more detail in the following sections. The high-level architecture is depicted in the following figure.
Figure 2 VxRack Neutrino conceptual architecture
(The figure shows the software layer, consisting of the Platform Service, which deploys the OpenStack environment, provides management software including the UI, and runs on 3 nodes, and the Cloud Compute Service, which contains the OpenStack Nova compute service, provisions the Nova virtual machines (instances) in OpenStack, and runs on 3 to 177 nodes. The optional EMC Native Hybrid Cloud (Pivotal Cloud Foundry PaaS) runs on top. The hardware layer consists of a first 40U rack and a maximum of 3 expansion 40U racks.)
Hardware
VxRack Neutrino hardware consists of one to four racks in which a collection of bricks, switches, and power supplies are housed.
Brick
A brick is an EMC-supplied modular 2U chassis (occupies two shelves in the rack) that contains physical compute, storage, and network resources (NICs). There are two types of bricks in the VxRack Neutrino system: a performance brick and a capacity brick.
Performance brick
A performance brick (p series brick) comprises four blades, which are referred to as nodes. Each node is attached to four internal 2.5 inch solid state disks (SSDs). Therefore, there are 16 SSDs in a brick. As shown in the following figure, the front of the brick contains the disks, and the back of the brick contains the nodes that are connected to the disks.
Figure 3 Front and back views of performance brick
(The front of the brick is the disk view, showing the disk slots with the disks attached to Nodes 1 through 4. The back of the brick is the node view, showing Nodes 1 through 4 and power supplies 1 and 2.)
A performance brick can have either 16 400 GB SSDs or 16 800 GB SSDs. The performance brick is available in different models to optimize performance for different workloads. The following table describes the specifications for the different performance brick models.
Table 1 Performance brick (p series) specifications

Brick model          CPU frequency  CPU cores  Logical cores (hyperthreaded)  Memory    Raw storage*
p412 (400 GB SSDs)   2.4 GHz        48         96                             512 GB    5.6 TB**
p812 (800 GB SSDs)   2.4 GHz        48         96                             512 GB    12 TB***
p416 (400 GB SSDs)   2.6 GHz        64         128                            1,024 GB  5.6 TB**
p816 (800 GB SSDs)   2.6 GHz        64         128                            1,024 GB  12 TB***
p420 (400 GB SSDs)   2.6 GHz        80         160                            2,048 GB  5.6 TB**
p820 (800 GB SSDs)   2.6 GHz        80         160                            2,048 GB  12 TB***
*The raw storage numbers use the decimal system (base 10). Storage device manufacturers measure capacity using the decimal system (base 10), so 1 gigabyte (GB) is calculated as 1 billion bytes.
**The brick raw storage subtracts out the 200 GB operating system storage requirement per node. For bricks with 400 GB SSDs: 1,600 GB/node - 200 GB for operating system = 1,400 GB/node. 1,400 GB/node x 4 nodes = 5.6 TB/brick.
***The brick raw storage subtracts out the 200 GB operating system storage requirement per node. For bricks with 800 GB SSDs: 3,200 GB/node - 200 GB for operating system = 3,000 GB/node. 3,000 GB/node x 4 nodes = 12 TB/brick.
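The per-brick arithmetic in these footnotes can be sketched in a few lines of code. This is an illustrative example only; the constant and function names are invented for this sketch and are not part of the product:

```python
# Raw storage per performance brick, net of the per-node OS reservation.
# All figures are decimal (base 10), as in the table footnotes above.
OS_RESERVATION_GB = 200   # operating system storage requirement per node
NODES_PER_BRICK = 4
SSDS_PER_NODE = 4

def brick_raw_storage_tb(ssd_size_gb):
    """Raw storage per brick in decimal TB, after the OS reservation."""
    per_node_gb = ssd_size_gb * SSDS_PER_NODE - OS_RESERVATION_GB
    return per_node_gb * NODES_PER_BRICK / 1000  # GB -> TB (base 10)

print(brick_raw_storage_tb(400))  # 5.6 TB/brick with 400 GB SSDs
print(brick_raw_storage_tb(800))  # 12.0 TB/brick with 800 GB SSDs
```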
Capacity brick
A capacity brick has a single node and 24 internal disks. The internal disks in a capacity brick/node consist of one 400 GB solid state drive (SSD), one 800 GB SSD, and 22 hard disk drives (HDDs), each with 1.8 TB of storage capacity. The SSDs are used for the operating system (200 GB) and for caching (800 GB) to improve HDD performance.
As shown in the following figure, the front of the brick contains the disks, and the back of the brick is the node view. The entire 2U brick is one node.
Figure 4 Front and back views of capacity brick
(The front of the brick is the disk view: Disk 0 is the 400 GB OS SSD, Disk 1 is the 800 GB caching SSD, and Disks 2-23 are 1.8 TB HDDs. The back of the brick is the node view; the entire brick is one node.)
The capacity brick is available in different models to optimize performance for different workloads. The following table describes the specifications for the different capacity brick models.
Table 2 Capacity brick/node (i series) specifications

Brick model  CPU frequency  CPU cores  Logical cores (hyperthreaded)  Memory  Raw storage*
i1812        2.4 GHz        12         24                             128 GB  39.6 TB
i1816        2.6 GHz        16         32                             256 GB  39.6 TB
i1820        2.6 GHz        20         40                             512 GB  39.6 TB

*22 HDDs x 1.8 TB/HDD = 39.6 TB. This raw storage number uses the decimal system (base 10).
Node
The nodes within the bricks are installed with a SUSE Linux Enterprise Server 12 operating system at the EMC factory. Nodes include CPU, memory, and network interfaces. The nodes within a rack are interconnected via 10 GbE and 1 GbE leaf-spine networks. There are two types of nodes in the VxRack Neutrino system: performance nodes and capacity nodes. A performance node is connected to 4 SSDs. A capacity node is connected to 24 internal disks (2 SSDs and 22 HDDs).
Performance node
Performance nodes have the following specifications depending on the p-series brick model and the SSD drive pack size.
Table 3 Performance node cloud compute and memory specifications

Brick model (p series)  CPU frequency  CPU cores  Logical cores (hyperthreaded)  Memory
p412 (400 GB SSDs)      2.4 GHz        12         24                             128 GB
p812 (800 GB SSDs)      2.4 GHz        12         24                             128 GB
p416 (400 GB SSDs)      2.6 GHz        16         32                             256 GB
p816 (800 GB SSDs)      2.6 GHz        16         32                             256 GB
p420 (400 GB SSDs)      2.6 GHz        20         40                             512 GB
p820 (800 GB SSDs)      2.6 GHz        20         40                             512 GB
The following table presents the raw and useable storage available on a performance node. The amount of useable storage on a node, that is, the amount of storage that is available for volume allocation, is considerably less than its raw storage. This is due to two factors in the VxRack Neutrino ScaleIO block storage system: one is the spare capacity percent in the system, and the other is the storage required for data protection.
The spare capacity is the capacity that is reserved in the system to allow full recovery of data in the event a node fails. This spare capacity is automatically calculated by VxRack Neutrino and varies depending on how many nodes are running the Cloud Compute Service in the VxRack Neutrino system. Spare capacity can require between 34 and 4 percent of the raw cloud compute capacity available in the system, depending on the number of nodes in the system.
The ScaleIO block storage system also requires storage for data protection purposes; volume data is mirrored across SSD storage devices. For this reason, a volume created in OpenStack takes up twice its volume size in the VxRack Neutrino system; a 16 GB volume created in OpenStack occupies 32 GB of space in the VxRack Neutrino system (for more information, refer to Used storage capacity in VxRack Neutrino is double the size of a volume created in OpenStack on page 136).
The following table presents a range of node useable storage capacity depending on the number of nodes in the system.
Table 4 Performance node useable storage

Number of nodes    Spare capacity   Nodes with 400 GB SSDs               Nodes with 800 GB SSDs
in storage pool*   percent          Raw storage      Useable storage     Raw storage      Useable storage
                                    per node (GB)**  per node (GB)***    per node (GB)**  per node (GB)***
3                  34               1,303.9          430.3               2,794.0          922.0
4                  26               1,303.9          482.4               2,794.0          1,033.8
5                  20               1,303.9          521.5               2,794.0          1,117.6
6                  18               1,303.9          534.6               2,794.0          1,145.5
7                  16               1,303.9          547.6               2,794.0          1,173.5
8                  14               1,303.9          560.7               2,794.0          1,201.4
9                  12               1,303.9          573.7               2,794.0          1,229.4
10                 10               1,303.9          586.7               2,794.0          1,257.3
12                 10               1,303.9          586.7               2,794.0          1,257.3
24                 6                1,303.9          612.8               2,794.0          1,313.2
36                 4                1,303.9          625.9               2,794.0          1,341.1
48                 4                1,303.9          625.9               2,794.0          1,341.1

*A performance storage pool is a group of SSD devices on a set of performance nodes; there is a maximum of 48 nodes per storage pool. For more information, refer to Protection domains and storage pools on page 130.
**The raw storage numbers subtract out the 200 GB operating system storage requirement per node. For nodes with 400 GB SSDs: 1,600 GB/node - 200 GB for operating system = 1,400 GB/node (base 10) = 1,303.9 GB/node (base 2). For nodes with 800 GB SSDs: 3,200 GB/node - 200 GB for operating system = 3,000 GB/node (base 10) = 2,794.0 GB/node (base 2). See the storage capacity units section below for an explanation of base 10 versus base 2 capacity numbers.
***The useable storage per node is calculated as: raw storage/node - spare capacity percent = x. Half of x is used for ScaleIO block storage data protection, so x must be divided by 2 to get the useable storage available for volume allocation. For example, with 3 nodes in a performance storage pool: 1,303.9 GB/node - 34% spare capacity = 860.6 GB ÷ 2 = 430.3 GB useable storage/node (base 2).
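The footnote formula (spare capacity subtracted first, then the remainder halved for ScaleIO mirroring) can be sketched as follows. This is an illustrative example only; the function name is invented, and the spare-capacity percentages are the values from Table 4:

```python
def useable_storage_per_node(raw_gb, spare_percent):
    """Useable storage per node: remove the spare capacity reserved
    for node-failure recovery, then halve the remainder for ScaleIO
    data-protection mirroring."""
    after_spare_gb = raw_gb * (1 - spare_percent / 100)
    return after_spare_gb / 2

# 3-node performance pool, 400 GB SSDs (34% spare capacity):
print(round(useable_storage_per_node(1303.9, 34), 1))  # 430.3 GB/node
# 10-node performance pool, 800 GB SSDs (10% spare capacity):
print(round(useable_storage_per_node(2794.0, 10), 1))  # 1257.3 GB/node
```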
Storage capacity units
Storage device manufacturers measure capacity using the decimal system (base 10), so 1 gigabyte (GB) is calculated as 1 billion bytes. For example, a performance node with four 400 GB SSDs has 1,600 GB (or 1.6 TB) of capacity using base 10. Storage capacity in the VxRack Neutrino UI is reported using the binary system (base 2) of measurement, which calculates 1 GB as 1,073,741,824 bytes. The way decimal and binary numeral systems measure a GB is what causes a 1.6 TB node to appear as 1.5 TB in the VxRack Neutrino UI.
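The conversion between the two measurement systems can be sketched as follows (illustrative helper names, not part of the product):

```python
# Vendor capacities are decimal (1 GB = 10**9 bytes); the VxRack
# Neutrino UI reports binary units (1 GB = 2**30 bytes, 1 TB = 2**40 bytes).
def decimal_gb_to_binary_gb(gb_base10):
    return gb_base10 * 10**9 / 2**30

def decimal_gb_to_binary_tb(gb_base10):
    return gb_base10 * 10**9 / 2**40

# A 1.6 TB (1,600 GB decimal) node appears as roughly 1.5 TB in the UI:
print(round(decimal_gb_to_binary_tb(1600), 1))  # 1.5
# The 1,400 GB (decimal) raw figure from Table 4 becomes 1,303.9 GB (base 2):
print(round(decimal_gb_to_binary_gb(1400), 1))  # 1303.9
```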
Capacity node
Refer to Table 2 on page 17 for capacity node specifications.
The following table presents a range of node useable storage capacity depending on how many nodes are in the system.
Table 5 Capacity node useable storage

Number of nodes    Spare capacity  Raw storage per  Useable storage per
in storage pool*   percent         node (TB)**      node (TB)***
3                  34              36.0             11.9
4                  26              36.0             13.3
5                  20              36.0             14.4
6                  18              36.0             14.8
7                  16              36.0             15.1
8                  14              36.0             15.5
9                  12              36.0             15.9
10                 10              36.0             16.2
11                 10              36.0             16.2
12                 10              36.0             16.2

*A capacity storage pool is a group of HDD storage devices on a set of capacity nodes; there is a maximum of 12 nodes per capacity storage pool. For more information, refer to Protection domains and storage pools on page 130.
**22 HDDs x 1.8 TB/HDD = 39.6 TB (base 10) = 36.0 TB (base 2)
***The useable storage per node is calculated as: raw storage/node - spare capacity percent = x. Half of x is used for ScaleIO block storage data protection, so x must be divided by 2 to get the useable storage available for volume allocation. For example, with 3 nodes in a capacity storage pool: 36.0 TB/node - 34% spare capacity = 23.8 TB ÷ 2 = 11.9 TB useable storage/node (base 2).
Rack
A rack is deployed into a data center and is the 40-unit enclosure that houses the physical hardware. Each rack is a fully clustered system that includes nodes, switches, and storage disks. A 40-unit rack can include a combination of performance and capacity bricks. A VxRack Neutrino system can have a minimum of one rack and a maximum of four racks.
Minimum rack configurations
A VxRack Neutrino system requires adding three performance nodes to the Platform Service as part of the initial VxRack Neutrino installation. For this reason, a rack configuration must always have at least one performance brick. In addition, a minimum of three nodes (either performance or capacity nodes) must be added to the Cloud Compute Service.
The minimum rack configurations with performance and capacity nodes are shown in the following figure.
Figure 5 Minimum rack configurations
(The figure shows the minimum rack configuration using performance bricks for cloud compute, with two p-series bricks, and the minimum rack configuration using capacity bricks for cloud compute, with one p-series brick and three i-series bricks/nodes. Each rack also contains its 40 GbE, 10 GbE, and 1 GbE switches, power distribution unit, service tray, and empty expansion slots.)
The minimum entry-level rack with performance nodes includes eight nodes (two performance bricks), two 10 GbE switches for uplinks to the data center and data traffic between nodes, a power distribution unit (PDU), a service tray for a laptop, and two 1 GbE management switches. In this configuration, three nodes run the Platform Service, and the remaining five nodes are available for the Cloud Compute Service.
The minimum entry-level rack with capacity nodes includes seven nodes (one performance brick and three capacity bricks), two 10 GbE switches for uplinks to the data center, a power distribution unit (PDU), a service tray for a laptop, and two 1 GbE management switches. In this configuration, three performance nodes run the Platform Service, and the three capacity nodes are available for the Cloud Compute Service. (The one available performance node cannot be used for cloud compute because a minimum of three performance nodes is required to run the Cloud Compute Service.)
The following table describes the minimum rack specifications.
Table 6 Minimum rack specifications

Minimum rack            Number of       Number of cloud  Number of         Raw storage     Raw storage for cloud  Useable storage for cloud
configuration           platform nodes  compute nodes    disks             capacity (TB)*  compute use (TB)*      compute use (TB)**
With performance nodes  3               5                32 SSDs           21.8            13.6                   5.4
With capacity nodes     3               3                20 SSDs, 66 HDDs  116.2           108.1                  35.6

Raw storage capacity with performance nodes: 8 nodes x 3 TB/node = 24 TB (base 10) = 21.8 TB (base 2)
Raw storage for cloud compute use with performance nodes: 5 cloud compute nodes x 3 TB/node = 15 TB (base 10) = 13.6 TB (base 2)
Raw storage capacity with capacity nodes: 3 performance nodes x 3 TB/node = 9 TB (base 10); 3 capacity nodes x 39.6 TB/node = 118.8 TB (base 10); 9 TB + 118.8 TB = 127.8 TB (base 10) = 116.2 TB (base 2)
Raw storage for cloud compute use with capacity nodes: 3 capacity nodes x 39.6 TB/node = 118.8 TB (base 10) = 108.1 TB (base 2)

*TB calculated using the binary system (base 2) of measurement.
**The useable cloud compute storage is considerably less than the raw storage available for cloud compute use due to ScaleIO data protection and spare capacity requirements. Refer to Used storage capacity in VxRack Neutrino is double the size of a volume created in OpenStack on page 136 for more information.
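Table 6's performance-node row can be reproduced from the per-node figures: 3 TB raw per node with 800 GB SSDs, and the 20% spare capacity that Table 4 gives for a 5-node pool. This is an illustrative sketch only (invented function name), and intermediate values are rounded the same way the guide rounds them:

```python
def base10_tb_to_base2_tb(tb_base10):
    """Decimal TB (10**12 bytes) -> binary TB (2**40 bytes)."""
    return tb_base10 * 10**12 / 2**40

raw_rack = round(base10_tb_to_base2_tb(8 * 3), 1)    # 8 nodes x 3 TB/node
raw_cloud = round(base10_tb_to_base2_tb(5 * 3), 1)   # 5 cloud compute nodes
useable = round(raw_cloud * (1 - 0.20) / 2, 1)       # 20% spare, then mirrored
print(raw_rack, raw_cloud, useable)  # 21.8 13.6 5.4
```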
Maximum single rack configurations
In the first rack, performance bricks can be added individually up to a maximum of 9 total performance bricks. For a first rack configuration using capacity bricks, after the 3 capacity brick minimum is met, capacity bricks can be added individually up to a maximum of 8 total capacity bricks, as shown in the following figure.
Figure 6 Maximum single rack configurations
(The figure shows three racks: the maximum single rack configuration with performance bricks, containing 9 p-series bricks; the maximum single rack configuration with capacity bricks, containing 1 p-series brick and 8 i-series bricks/nodes; and a single rack configuration example with mixed performance and capacity bricks, containing 6 p-series bricks and 3 i-series bricks/nodes. Each rack also contains its switches, power distribution unit, and service tray.)
The following table describes the maximum single rack specifications.
Table 7 Maximum single rack specifications

Maximum single rack     Number of       Number of cloud  Number of          Raw storage     Raw storage capacity for  Useable storage for cloud
configuration           platform nodes  compute nodes    disks              capacity (TB)*  cloud compute use (TB)*   compute use (TB)**
With performance nodes  3               33               144 SSDs           98.2            90.0                      43.2
With capacity nodes     3               8                32 SSDs, 176 HDDs  299.0           288.1                     125.3

Raw storage capacity with performance nodes: 36 nodes x 3 TB/node = 108 TB (base 10) = 98.2 TB (base 2)
Raw storage for cloud compute use with performance nodes: 33 cloud compute nodes x 3 TB/node = 99 TB (base 10) = 90.0 TB (base 2)
Raw storage capacity with capacity nodes: 4 performance nodes x 3 TB/node = 12 TB (base 10); 8 capacity nodes x 39.6 TB/node = 316.8 TB (base 10); 12 TB + 316.8 TB = 328.8 TB (base 10) = 299.0 TB (base 2)
Raw storage for cloud compute use with capacity nodes: 8 capacity nodes x 39.6 TB/node = 316.8 TB (base 10) = 288.1 TB (base 2)

*TB calculated using the binary system (base 2) of measurement.
**The useable cloud compute storage is considerably less than the raw storage available for cloud compute use due to ScaleIO data protection and spare capacity requirements. Refer to Used storage capacity in VxRack Neutrino is double the size of a volume created in OpenStack on page 136 for more information.
Maximum multiple rack configurations
When adding expansion racks to the first rack, an aggregation block is required on the first rack. The aggregation block consists of two 40 GbE switches and two 1 GbE switches. The following figure shows the maximum 4-rack configuration using performance bricks.
Figure 7 Maximum rack configuration with performance bricks
(The figure shows the first 40U rack, which contains the aggregation block, and 3 expansion 40U racks, all filled with p-series bricks. Each rack also contains its switches, power distribution units, and service tray.)
The following figure shows the maximum 4-rack configuration using capacity bricks.
Figure 8 Maximum rack configuration with capacity bricks
(The figure shows the first 40U rack, which contains the aggregation block, 1 p-series brick, and 8 i-series bricks/nodes, and 3 expansion 40U racks, each containing 12 i-series bricks/nodes. Each rack also contains its switches, power distribution units, and service tray.)
The following figure shows an example of a 4-rack configuration with mixed performance and capacity bricks.
Figure 9 Four-rack configuration example with mixed performance and capacity bricks
(The figure shows the first 40U rack, which contains the aggregation block, and 3 expansion 40U racks; each rack holds a mix of p-series bricks and i-series bricks/nodes along with its switches, power distribution units, and service tray.)
The following table describes the specifications of maximum four-rack configurations using performance and capacity nodes.
Table 8 Maximum rack specifications

Maximum rack            Number of       Number of cloud  Number of          Raw storage     Raw storage capacity available  Useable storage for cloud
configuration           platform nodes  compute nodes    disks              capacity (TB)*  for cloud compute use (TB)*     compute use (TB)**
With performance nodes  3               177              720 SSDs           491.1           482.9                           239.0
With capacity nodes     3               44               98 SSDs, 902 HDDs  1,592.9         1,584.7                         716.7

Raw storage capacity with performance nodes: 180 nodes x 3 TB/node = 540 TB (base 10) = 491.1 TB (base 2)
Raw storage for cloud compute use with performance nodes: 177 cloud compute nodes x 3 TB/node = 531 TB (base 10) = 482.9 TB (base 2)
Raw storage capacity with capacity nodes: 3 performance nodes x 3 TB/node = 9 TB (base 10); 44 capacity nodes x 39.6 TB/node = 1,742.4 TB (base 10); 9 TB + 1,742.4 TB = 1,751.4 TB (base 10) = 1,592.9 TB (base 2)
Raw storage for cloud compute use with capacity nodes: 44 capacity nodes x 39.6 TB/node = 1,742.4 TB (base 10) = 1,584.7 TB (base 2)

*TB calculated using the binary system (base 2) of measurement.
**The useable cloud compute storage is considerably less than the raw storage available for cloud compute use due to ScaleIO data protection and spare capacity requirements. Refer to Used storage capacity in VxRack Neutrino is double the size of a volume created in OpenStack on page 136 for more information.
Software

VxRack Neutrino software comprises:
l The base software that is installed on the nodes at the EMC factory. (This software is not packaged in Docker containers, but includes the Docker open source software. Docker automates the deployment of applications inside software containers.)
l The software that is packaged in Docker containers, which includes the Platform Service and the Cloud Compute Service.
The Platform and Cloud Compute Services comprise multiple software components that are packaged in Docker containers that run on the nodes. The Docker containers (referred to as components in the VxRack Neutrino UI) are the actual processes that run on the nodes; many processes can run on a single node.
Base software
The base software contains the operating system, associated RPM (RPM Package Manager) files, Docker, and anything that is not delivered as a container (such as the VxRack Neutrino code that manages and reports on nodes and networks). The base software is referred to as the OS software in the VxRack Neutrino UI. The EMC OS 1.1.0.0 base software includes the items listed in the following table.
Table 9 VxRack Neutrino base OS software

Base OS software | Description
SUSE Linux Enterprise Server (SLES) 12 | Operating system
Linux kernel with Kernel-based Virtual Machine (KVM) | KVM provides a virtualization infrastructure for Linux that enables users to run multiple virtual machines with Linux or Windows images. In VxRack Neutrino, KVM works with QEMU (Quick Emulator) to create the QEMU-KVM hypervisor for the virtual machines running in the OpenStack environment (for more information, refer to Cloud Compute Service on page 29).
Docker server/client version: 1.8.3 | Docker is a container virtualization model that runs within the SLES 12 operating system. Docker containers wrap up a piece of software in a complete package that contains everything it needs to run: code, runtime, system tools, and system libraries. Docker containers are isolated but share the SLES 12 operating system; the containers run directly on the operating system. The Platform Service comprises a set of Docker containers, as does the Cloud Compute Service.
Platform Service
The VxRack Neutrino Cloud Administrator adds three nodes to the Platform Service during VxRack Neutrino installation. The Platform Service provides the software management platform for the OpenStack private cloud infrastructure. The Platform Service comprises the following components.
l OpenStack management components: Nova, Cinder, Neutron, Heat, Ceilometer, Glance, and Horizon
l VxRack Neutrino management components such as monitoring, reporting, and the UI
Each component is a distributed software system that provides functionality to the cloud compute infrastructure. These underlying components are deployed as collections of Docker containers onto the three Platform Service nodes. The Docker containers run directly on the SUSE Linux Enterprise Server 12 operating system that is installed on the nodes.
The following OpenStack components create a virtual computing environment.
Compute (Nova)
OpenStack Compute (Nova) is a cloud computing fabric controller. It is designed to manage and automate pools of compute resources. In the VxRack Neutrino private cloud, Compute uses the KVM hypervisor technology.
Block Storage (Cinder)
OpenStack Block Storage (Cinder) manages persistent block storage for use with OpenStack compute instances. The OpenStack block storage system manages the creation, attachment, and detachment of volumes to instances. When users create Cinder volumes to provision OpenStack instances, the volumes use the VxRack Neutrino ScaleIO block storage.
Networking (Neutron)
OpenStack Networking (Neutron) is the service that manages networks and IP addresses. OpenStack Networking gives users self-service capability, even over network configurations. Users can create their own networks, control traffic, and connect servers and devices to one or more networks.
Orchestration (Heat)
Heat is a service that orchestrates multiple composite cloud applications using templates, through a native OpenStack REST API. These templates enable you to create most OpenStack resource types and provide more advanced functionality, such as instance autoscaling and nested stacks.
Telemetry (Ceilometer)
The OpenStack Telemetry (Ceilometer) service aggregates usage and performance data across the services deployed in an OpenStack cloud. This powerful capability enables Cloud and Account Monitors to view metrics globally or by individual deployed resources.
Alarming (Aodh)
The OpenStack Alarming (Aodh) service provides alarms and notifications based on metrics. Aodh provides the ability to trigger alarms based on defined rules against sample or event data collected by Ceilometer.
Image Service (Glance)
The OpenStack Image Service (Glance) provides discovery, registration, and delivery services for disk and server images. If you provision multiple virtual machines (instances), you can use stored images as templates to get new instances up and running more quickly and consistently than if you install an operating system and individually configure additional services.
Dashboard (Horizon)
OpenStack Dashboard (Horizon) provides users with a graphical interface to access, provision, and automate cloud-based resources.
In addition to the listed OpenStack components, other components enable the VxRack Neutrino OpenStack cloud compute infrastructure. The individual components (containers) that comprise the Platform Service are described in more detail in Platform Service components on page 145.
Cloud Compute Service
After the Platform Service is installed, the Cloud Administrator adds a minimum of three nodes and a maximum of 177 nodes to the Cloud Compute Service. The cloud compute nodes run the OpenStack Nova compute service. The individual components (containers) that comprise the Cloud Compute Service are described in more detail in Cloud Compute Service components on page 147.
The cloud compute nodes that the Cloud Administrator adds to the OpenStack private cloud via the VxRack Neutrino UI provide the CPUs, memory, and storage that users need to run their applications in OpenStack. The compute and storage capacity on the cloud compute nodes is consumed by the users in OpenStack in the following way. Each cloud compute node can have 12, 16, or 20 CPU cores (calculated using: CPU sockets x cores per CPU) depending on the brick model.
Each CPU core presents 2 logical cores because of hyper-threading. This means a node with 12 CPU cores actually presents 24 logical cores to the operating system on that node (2 sockets x 6 cores x 2 logical cores (hyper-threaded) = 24 logical cores that OpenStack sees available on the node). OpenStack sees these 24 logical cores as 24 virtual CPUs (vCPUs). In OpenStack, virtual machine (VM) instances emulate a vCPU that is used by the operating system running in the VM. The comparison of CPU cores, logical cores, and vCPUs per node is shown in the following table.
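The core-to-vCPU arithmetic can be sketched as follows. The 2-socket x 6-core layout is stated above for the 12-core node; extending the same 2-socket layout to the 16- and 20-core models (2 x 8 and 2 x 10) is an assumption:

```python
def vcpus_per_node(sockets: int, cores_per_socket: int) -> int:
    """Logical cores the OS sees (two hyper-threads per physical core);
    OpenStack presents each logical core as one vCPU."""
    return sockets * cores_per_socket * 2

# p412/p812 brick node: 2 sockets x 6 cores = 12 CPU cores -> 24 vCPUs
print(vcpus_per_node(2, 6))  # 24
```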
Table 10 VxRack Neutrino provisioning of OpenStack vCPUs

Brick model | Number of CPU cores per node | Number of logical cores per node due to hyperthreading | Number of vCPUs per node (OpenStack)
p412, p812, i1812 | 12 | 24 | 24
p416, p816, i1816 | 16 | 32 | 32
p420, p820, i1820 | 20 | 40 | 40
vCPUs are used to configure VMs (instances) in OpenStack. The vCPUs can be divided into OpenStack flavors. Flavors are OpenStack virtual hardware templates that define the number of vCPUs, memory, and storage capacity. Depending on the flavor an instance is associated with, there could be 1, 2, 4, or more vCPUs per VM. VxRack Neutrino provides 28 pre-configured flavors in OpenStack (for more information on flavors, refer to Table 18 on page 89). Users pick a flavor when they launch an instance in OpenStack. For example, a user who wants to launch an instance with a medium amount of compute and storage capacity might pick the v4.xlarge flavor to configure his instance; this flavor includes 4 vCPUs, 16 GB RAM, and 80 GB of storage.
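Using the flavor definitions shown in Figure 10, a rough per-node capacity estimate can be sketched as below. This assumes a 1:1 vCPU overcommit ratio; Nova's default CPU allocation ratio is typically higher, so the scheduler may actually pack more instances per node:

```python
# vCPU counts per flavor, taken from the Figure 10 flavor list.
FLAVOR_VCPUS = {
    "v4.medium": 1,   # 4 GB RAM, 8 GB SSD
    "v4.large": 2,    # 8 GB RAM, 32 GB SSD
    "v4.xlarge": 4,   # 16 GB RAM, 80 GB SSD
    "v4.2xlarge": 8,  # 32 GB RAM, 160 GB SSD
}

def max_instances(node_vcpus: int, flavor: str) -> int:
    """Instances of one flavor that fit on a node, counting vCPUs alone."""
    return node_vcpus // FLAVOR_VCPUS[flavor]

# A 24-vCPU node (p412/p812) fits six v4.xlarge instances by vCPU count
print(max_instances(24, "v4.xlarge"))  # 6
```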
How the compute and storage capacity of a VxRack Neutrino cloud compute node is consumed by users in OpenStack is depicted in the following diagram.
Figure 10 How cloud compute node capacity is consumed in OpenStack

[Figure: a cloud compute node (performance node in a p412/p812 brick) has 12 CPU cores; its 24 logical CPUs display as 24 vCPUs in OpenStack. The Nova compute server allocates vCPUs to virtual machines (instances) of different sizes (flavors), which users provision and launch using the OpenStack Dashboard UI to develop and deploy applications. Flavors shown: v4.medium (1 vCPU, 4 GB RAM, 8 GB SSD), v4.large (2 vCPU, 8 GB RAM, 32 GB SSD), v4.xlarge (4 vCPU, 16 GB RAM, 80 GB SSD), v4.2xlarge (8 vCPU, 32 GB RAM, 160 GB SSD).]
Note
Terminology for a node is different in the VxRack Neutrino and OpenStack environments. A VxRack Neutrino cloud compute node is referred to as a Nova compute server in OpenStack (also referred to as a hypervisor or a compute host). The Nova compute servers in OpenStack use the QEMU-KVM hypervisor. QEMU uses the KVM virtualization layer and together they make a hypervisor. In OpenStack, the hypervisor type for Nova compute servers displays as QEMU.
EMC Native Hybrid Cloud
The EMC Native Hybrid Cloud, an optional Platform as a Service (PaaS) product, can run on the VxRack Neutrino Cloud Compute Service and can be used to develop, test, and deploy software applications. The Native Hybrid Cloud enables users to deploy their applications to the VxRack Neutrino OpenStack environment, as shown in the following diagram. The Native Hybrid Cloud is installed separately from VxRack Neutrino, and includes Pivotal Cloud Foundry, EMC monitoring and reporting, automatic installation enabled by Ansible, and custom integration with VxRack Neutrino and IT infrastructure. VxRack Neutrino is required to install the Native Hybrid Cloud. For more information on the Native Hybrid Cloud, refer to the EMC Native Hybrid Cloud Solutions Guide and the EMC Native Hybrid Cloud Reference Architecture Guide.
Figure 11 VxRack Neutrino Cloud Compute Service and Native Hybrid Cloud integration

[Figure: the EMC Native Hybrid Cloud, built on Pivotal Cloud Foundry, runs on the VxRack Neutrino Cloud Compute Service. Cloud Foundry users create and deploy applications in Pivotal Cloud Foundry, which pushes the apps to the VxRack Neutrino Cloud Compute Service to run on OpenStack instances; Cloud Foundry users can also test their applications in OpenStack.]
EMC Elastic Cloud Storage object storage
EMC Elastic Cloud Storage (ECS) software integrates with the VxRack Neutrino Cloud Compute Service and can be used to provide object storage for the following OpenStack resources.
l OpenStack instances running cloud-native applications. OpenStack users can upload, copy, edit, delete, and download objects to and from containers (ECS buckets) in a particular project using the OpenStack Dashboard UI. See the VxRack System with Neutrino 1.1 OpenStack Implementation Guide for the actions that can be performed by OpenStack project role.
l OpenStack Cinder volume backups. The OpenStack Cinder block storage service uses ECS as its backup volume store. ECS stores volume backups in an ECS bucket under the ECS namespace (project) that corresponds to the user's project ID.
Configuring VxRack Neutrino to use ECS as its object store requires EMC Global Services to configure VxRack Neutrino and ECS together; this can be done during the initial VxRack Neutrino installation or at some point after VxRack Neutrino is installed.
CHAPTER 2
Account overview
This chapter describes a fundamental VxRack Neutrino concept: the account.
l Account (domain)..................................................................................................36
l Default account.....................................................................................................39
l Account creation................................................................................................... 40
Account overview 35
Account (domain)
The VxRack Neutrino architecture is built around the concept of an account. An account is a logical container inside of which users, groups, and projects are created. In a multi-tenant environment, accounts are used to separate tenants. An account can own one or more OpenStack projects. An account could represent a department or business unit, like a finance department, or it could represent an entire company like Acme Corp. (For more information, see Account tenancy on page 38.) Each account in the VxRack Neutrino private cloud has a unique ID and name.
An account is layered over an OpenStack Keystone domain and maps to a single OpenStack Keystone domain, as shown in the following schematic drawing. When a VxRack Neutrino Cloud Administrator creates an account in the VxRack Neutrino UI, a corresponding domain with that account's name is automatically created in OpenStack and is visible in the OpenStack Dashboard UI. The domain is a central concept of the OpenStack Keystone Identity Service and is the high-level container that OpenStack uses to group projects containing users and groups. Both the VxRack Neutrino account and its associated OpenStack Keystone domain contain the same projects with the same sets of users and groups. Each user, group, or project is owned by exactly one account/domain.
Note
The OpenStack Keystone domain that corresponds to a VxRack Neutrino account cannot be deleted, disabled, or modified by users working in OpenStack. VxRack Neutrino accounts can only be created, modified, disabled, or deleted by VxRack Neutrino Cloud and Account Administrators using the VxRack Neutrino UI (see Accounts on page 65 for more information).
Figure 12 VxRack Neutrino account maps to an OpenStack Keystone domain

[Figure: the VxRack Neutrino Finance account and its corresponding Finance domain in the Keystone Identity Service contain the same three projects (Project 1, Project 2, and Project 3) with the same users (User1 through User10) and groups (Group A and Group B).]
A user is someone who works within the scope of an OpenStack project and who performs cloud compute tasks such as launching and provisioning instances.
A group is a collection of users.
A project is an organizational unit in the OpenStack private cloud to which you can assign users and groups. It is a logical resource container in which users and groups get roles, and it is the base unit of ownership in OpenStack. An OpenStack user or group must be associated to a project to perform their computing tasks. All resources in OpenStack are owned by a specific project.
The VxRack Neutrino Platform Service account component (see Platform Service components on page 145) authenticates users in an account and determines their associated roles. It is based on the OpenStack Keystone Identity Service, which is the backing identity and access control system for VxRack Neutrino. The VxRack Neutrino account component acts like a supervisor that sets up permissions/roles in a domain so that the OpenStack Keystone Identity Service can apply its normal operations on it. Actual user/group/role assignment/domain/project operations are carried out internally via the Keystone v3.6 API.
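As an illustration of the Keystone v3 operations mentioned above, the documented API request bodies for creating a domain (backing an account) and a project under it look roughly like the following; the exact payloads VxRack Neutrino sends internally are an assumption:

```python
# Sketch of Keystone v3 request bodies; these follow the documented
# Identity API, but the payloads VxRack Neutrino actually sends internally
# are an assumption.
def domain_create_body(name: str) -> dict:
    """Body for POST /v3/domains (one domain per VxRack Neutrino account)."""
    return {"domain": {"name": name, "enabled": True}}

def project_create_body(name: str, domain_id: str) -> dict:
    """Body for POST /v3/projects, scoped to the account's domain."""
    return {"project": {"name": name, "domain_id": domain_id, "enabled": True}}

print(domain_create_body("Finance"))
```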
The Cloud Administrator can add users and groups to an account locally or through a Lightweight Directory Access Protocol (LDAP)/Active Directory (AD) external identity provider. LDAP and AD support is the same as provided by OpenStack. An account can have LDAP/AD users or local users; an account cannot have a mixture of LDAP/AD and local users. After the Cloud Administrator decides which type of users will be added to an account, the decision cannot be changed.
In the following schematic drawing, the Finance account contains users and groups that were added through an LDAP/AD server. After users/groups are added to the account, they can be associated to one or several projects.
Figure 13 Adding LDAP users to an account

[Figure: an LDAP/AD server supplies users (User1 through User9) and groups (Group A and Group B) to the VxRack Neutrino Finance account and its corresponding Keystone Identity Service Finance domain, where they are associated with the Audit, Revenue Forecast, and Acquisitions projects.]
Account tenancy
An account is used to partition users. A multi-tenant environment in VxRack Neutrino is set up so that each tenant has its own account. An account provides tenancy and user management.
The VxRack Neutrino Cloud Administrator (and only the Cloud Administrator) creates accounts in the VxRack Neutrino UI. The Cloud Administrator can then assign the Account Administrator role to a specific person who manages the projects and users within that account.
The Cloud Administrator can set up accounts in either an enterprise scenario or in an internet or application service provider scenario.
Enterprise scenario
In the enterprise scenario, the Cloud Administrator is essentially the root user for an enterprise and provides a shared pool of cloud compute, storage, and network resources to the organizations in the enterprise. The enterprise organizations can be thought of as tenants and are represented by accounts in VxRack Neutrino. Each organization's users use OpenStack Nova instances to run their applications in their enterprise's private cloud. They have access to their own secure private cloud in a web-based infrastructure via the OpenStack Dashboard UI. Users can create, deploy, and terminate instances as needed.
The enterprise Cloud Administrator ensures that the users in each organization (account) have sufficient compute and storage capacity on which to run their application workloads in OpenStack. The Cloud Administrator monitors the consumption of the VxRack Neutrino compute and storage resources and can add capacity by adding nodes to the system.
The following schematic drawing shows an example of how an enterprise's finance, sales, and development organizations could be divided into accounts.
Figure 14 Enterprise Cloud Administrator managing accounts

[Figure: the Acme Corp. Cloud Administrator manages three Acme Corp. accounts: a Finance Account (Audit, Revenue Forecast, and Acquisitions projects), a Sales Account (Q4 Goals project), and a Dev Account (Test and QA projects).]
Service provider scenario
In the service provider scenario, the Cloud Administrator has root access to a collection of separate enterprises. In this case, each enterprise is a tenant that is represented by a VxRack Neutrino account. The service provider Cloud Administrator provides cloud compute, storage, and network resources to the different enterprises depending on their subscription to resources. The service provider Cloud Administrator can dynamically reallocate virtual computing resources per enterprise customer demand.
The following schematic drawing shows an example of how a service provider's enterprise customers (Acme Corp and CompanyB) could be divided into accounts.
Figure 15 Service provider Cloud Administrator managing accounts

[Figure: the CompanyA Service Provider Cloud Administrator manages two accounts: an Acme Corp Account and a CompanyB Account, each containing its own projects (IT, ProductA, Tracking, TestLab, Bonus, ProductB, Staffing, Webhelp, Brochure, and EngSupport).]
Default account
The VxRack Neutrino Default account is automatically created at installation. The Default account represents the entire VxRack Neutrino private cloud platform; it is the system account. It includes the admin project and the service project, which are automatically populated with a set of local users, as shown in the following figure.
Figure 16 Local users in the Default account

[Figure: the Default account, accessed by the Cloud Administrator, contains an admin project with the local users admin, csa, and cpsa, and a service project with the local users aodh, ceilometer, cinder, csa, ec2api, glance, heat, neutron, and nova.]
The admin project in the Default account contains three local system users:
l admin (Cloud Administrator). When you log in to the VxRack Neutrino UI for the first time, you log in to the Default account (domain) as the admin user, which makes you the Cloud Administrator. The Cloud Administrator is like a super-user who can access and configure all the resources in the VxRack Neutrino private cloud infrastructure.
l csa and cpsa. The csa and cpsa local users are internal service users that VxRack Neutrino sets up as part of its internal configuration for various housekeeping operations.
The service project in the Default account contains eight internal local service users.
l OpenStack service users - There are seven service users that are associated with the OpenStack Aodh, Ceilometer, Cinder, Glance, Heat, Neutron, and Nova services. These service users act on behalf of the OpenStack services and are configured so that the OpenStack Keystone Identity Service can work with those OpenStack services and serve as the authentication/identity authority for them.
l csa and ec2api - The csa and ec2api service users are internal service users that VxRack Neutrino sets up as part of its internal configuration for various housekeeping operations.
The Cloud Administrator can create a project in the Default account and add users to the project. However, it is recommended you keep the Default account as is and create a new account in which to add users. When you create an account (other than the Default account), a project is automatically created as well as a Neutron network for that project (see Account creation on page 40 for more information).
The following figure shows the overall account structure.

Figure 17 VxRack Neutrino account structure

[Figure: the Default account contains the admin project (local system users) and the service project (local internal service users), accessed by the Cloud Administrator; Account A contains Project 1 and Project 2, and Account B contains Project 3 and Project 4, each project with its own users and groups.]
Account creation
After the Cloud Administrator adds at least three nodes to the Cloud Compute Service, the Cloud Administrator must set up an account in addition to the Default system account that installs with VxRack Neutrino. When the Cloud Administrator creates an account, VxRack Neutrino automatically creates the following three items:
l An initial project within the account/domain. The project has a default name of project_<accountname>.
l A network within the initial project. The network is created using the OpenStack Neutron service and is named default_network.
l Two rules that are added in the initial project's default security group to allow both ICMP and TCP SSH traffic. Normally, a project's default security group allows traffic only to and from instances in the same default security group.
Additional projects that are created within an account/domain do not get the default_network or updated security group. For additional projects you must create a private network and you must manually add rules to the project's default security group to allow SSH and ICMP traffic, as explained in Create project on page 78.
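For reference, the two automatic security group rules correspond to Neutron v2.0 security-group-rule request bodies roughly like these. This is a sketch against the documented Neutron API; the exact requests VxRack Neutrino issues are an assumption, and the security group ID is hypothetical:

```python
# Sketch of Neutron v2.0 request bodies (POST /v2.0/security-group-rules)
# matching the ICMP and TCP/SSH rules described above; the exact requests
# VxRack Neutrino issues internally are an assumption.
def icmp_rule(security_group_id: str) -> dict:
    """Ingress rule permitting all ICMP traffic."""
    return {"security_group_rule": {
        "security_group_id": security_group_id,
        "direction": "ingress",
        "ethertype": "IPv4",
        "protocol": "icmp",
    }}

def ssh_rule(security_group_id: str) -> dict:
    """Ingress rule permitting TCP port 22 (SSH) traffic."""
    return {"security_group_rule": {
        "security_group_id": security_group_id,
        "direction": "ingress",
        "ethertype": "IPv4",
        "protocol": "tcp",
        "port_range_min": 22,
        "port_range_max": 22,
    }}
```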
The following figure depicts the automatic creation of initial projects and their associated default_network at account creation.
Figure 18 Automatic creation of the initial project and network at account creation

[Figure: when the Finance, Sales, and Dev accounts are created, the initial projects project_Finance, project_Sales, and project_Dev are automatically created, each with its own default_network created in OpenStack Neutron; additional projects created later in an account do not get a network automatically.]
See Create an account on page 66 for information on how to create an account in the VxRack Neutrino UI.
Initial project of an account
The initial project is simply the first project that is automatically created by VxRack Neutrino when a Cloud Administrator creates a new account.
The initial project is always named project_<accountname> in the VxRack Neutrino account/domain. This initial project name can be changed at any time. The initial project has no special authorization privileges; it is just a normal project like any other project in an account.
VxRack Neutrino creates the initial project so that when the Cloud Administrator starts to add users and roles to the account, there is a project to associate the user with. For example, when the Cloud Administrator adds a user Fred to a new account named Finance and gives him an admin role there, the Cloud Administrator can associate him with the existing initial project in that account. When a user is created and associated with a project, that project becomes the user's primary project. In our example, Fred has the project named project_Finance as his primary project. If he logs in to the OpenStack Dashboard UI, he will log in to project_Finance; this is his home project, and from there he can navigate to other projects he has access to.
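The naming convention above can be expressed as a one-line helper (illustrative only; VxRack Neutrino performs this internally):

```python
def initial_project_name(account_name: str) -> str:
    """Default name VxRack Neutrino gives an account's initial project."""
    return f"project_{account_name}"

print(initial_project_name("Finance"))  # project_Finance
```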
Primary project
The primary project is the project that a user is associated to when the user is created in the VxRack Neutrino UI. The OpenStack Dashboard UI picks that project as the scope when the user logs in to the OpenStack Dashboard UI. When an account is first created and the Cloud Administrator is adding users to the account, if the Cloud Administrator has not created any additional projects, the Cloud Administrator will associate users to the initial project; therefore, the initial project is the primary project for those users. The Cloud Administrator can also create additional projects in the account and associate a new user to one of those projects, making that other project the user's primary project.
CHAPTER 3
Roles overview
This chapter describes VxRack Neutrino roles and OpenStack project roles.
l VxRack Neutrino account-scoped roles..................................................................44
l OpenStack project-scoped roles............................................................................48
l Tasks performed by role ....................................................................................... 49
Roles overview 43
VxRack Neutrino account-scoped roles
The following considerations determine a user's scoped role in VxRack Neutrino:
l The actual role
l The level at which the role is scoped. In the Default account, roles are scoped at the cloud level. In any other account, roles are scoped at the account or project level.
The following list provides the four possible account-level scoped roles that exist in VxRack Neutrino:
l Cloud Administrator
l Account Administrator
l Cloud Monitor
l Account Monitor
Only these four account-level scoped roles can access the VxRack Neutrino UI. The following table shows the relationships between role, account level, and VxRack Neutrino scoped roles.
Table 11 VxRack Neutrino account-scoped roles
Role | Account level | VxRack Neutrino scoped role
admin | Default account | Cloud Administrator
admin | Any account (other than Default account) | Account Administrator
monitor | Default account | Cloud Monitor
monitor | Any account (other than Default account) | Account Monitor
Account-scoped roles can access particular accounts with either an admin or a monitor role. Each user has a view of the VxRack Neutrino UI based on that scoped role.
For example, a user with the admin role in the Default account logs into the Default account as the Cloud Administrator and can view everything in the UI, that is, information pertaining to all accounts in the VxRack Neutrino system. A user with the admin role in the Sales account logs into the Sales account as an Account Administrator and can view information pertinent only to the Sales account.
Cloud and Account Administrators can use the UI to assign project-level roles to users, but users with project-level roles are unable to access the UI. Project-level users use the OpenStack Dashboard UI to perform their virtual cloud compute tasks. For example, an Account Administrator could assign a user the admin role in a project, making the user a Project Administrator. This Project Administrator could then log in to the project in the OpenStack Dashboard UI, but would be unable to log in to the VxRack Neutrino UI. Project-level roles are described in OpenStack project-scoped roles on page 48.
Administrator roles
Cloud Administrator and Account Administrator are the two VxRack Neutrino administrative roles.
Cloud Administrator
The Cloud Administrator administers the entire physical data center where the VxRack Neutrino OpenStack private cloud is set up. The Cloud Administrator is the highest level role and uses the VxRack Neutrino UI most of the time.
The Cloud Administrator has full access to the UI and can perform the following tasks:
l Performs the initial installation of the VxRack Neutrino OpenStack private cloud, which includes the base OS software and the Platform Service
l Adds nodes to the Cloud Compute Service, which supports instance and volume creation in OpenStack
l Creates and deletes accounts
l Manages users (local users and federated users associated with external identity providers like LDAP and AD)
l Manages user role assignments
l Changes quotas in OpenStack projects, for example, changes the default quotas for the maximum number of volumes or instances per project
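As a sketch of the quota task above, a Nova quota update (PUT /v2/os-quota-sets/{project_id} in the documented Compute API) takes a body like the following; the limit values shown are arbitrary examples, and whether the Cloud Administrator adjusts quotas through this API directly or through the UI is not specified here:

```python
# Hypothetical Nova quota-update body; the limits are arbitrary examples.
def quota_update_body(instances: int, cores: int, ram_mb: int) -> dict:
    """Body for PUT /v2/os-quota-sets/{project_id}."""
    return {"quota_set": {
        "instances": instances,  # max VM instances in the project
        "cores": cores,          # max vCPUs across the project
        "ram": ram_mb,           # max RAM in MB across the project
    }}

print(quota_update_body(20, 48, 131072))
```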
What the Cloud Administrator can see and manage in the VxRack Neutrino private cloud is determined by project-level roles. The Cloud Administrator must have the admin and openstack_admin roles in the admin project of the Default account to see and manage everything in the private cloud; this includes all resources within projects in every account in the private cloud. The initial admin user that is included with VxRack Neutrino at install time includes these roles.
For all new Cloud Administrators that are created in the VxRack Neutrino system, it is recommended that they be given the admin and openstack_admin roles in the admin project of the Default account, and the heat_stack_owner role if the new Cloud Administrator is expected to be working in OpenStack. The OpenStack project-level openstack_admin and heat_stack_owner roles are described in Table 12 on page 48.
Account Administrator
The Account Administrator manages an account and can access all projects within the account. The Account Administrator can create Project Administrators within the account and have them manage the individual projects. The VxRack Neutrino UI provides information at the account level.
In an enterprise scenario, an account represents a department, such as the Quality Engineering (QE) department, within a company. The Account Administrator would manage the users in the QE department. In a service provider scenario, an account represents a complete company, such as CompanyX. The CompanyX Account Administrator is a customer of the service provider and would manage the users within their own company.
The Account Administrator performs the following tasks:
- Creates and deletes projects within the account
- Manages users within the account (the account can have either local users or federated users that were added to the account by the Cloud Administrator)
- Manages user role assignments within the account
Note
Although the OpenStack Dashboard UI supports user management tasks such as creating and deleting users and adding or removing them from projects, Cloud and Account Administrators must use the VxRack Neutrino UI for all user management tasks.
An Account Administrator can always see and manage all users and projects in their account. An Account Administrator can also see and manage the resources (instances and volumes) within the projects in the account when the Cloud Administrator grants them the admin role in the Project Inherited Roles field at the time the Account Administrator user is created; this assigns the Account Administrator the admin role for all existing and future projects in the account.
If for some reason the Cloud Administrator does not select the admin role in the Project Inherited Roles field when the Account Administrator user is created, the Account Administrator can assign themselves an admin role in any project in their account where they want to see and access the project resources.
Note
When a Cloud or an Account Administrator assigns an account-level or a project-level role to a user in the VxRack Neutrino UI, the UI presents all the roles in the system, regardless of whether they apply to the user being created. However, the UI does not allow inappropriate role assignments.
Monitor roles
Cloud Monitor and Account Monitor are the two VxRack Neutrino monitor roles.
The Cloud Monitor and Account Monitor roles have read-only access to the Dashboard and Reports sections of the VxRack Neutrino UI at the system and account level, respectively. They can view aggregated system-level or account-level data in the Reports sections of the UI, depending on the scope of their role.
Cloud Monitor
In the UI Reports section, the Cloud Monitor can access all the monitoring and reporting information available in the system (health, performance, alerts, capacity, chargeback, inventory, license, and logs). The Cloud Monitor can view the Health, Node Allocation, Storage, System, and Alerts sections in the Dashboard.
Account Monitor
In the UI Reports section, the Account Monitor can access all the monitoring and reporting information available in the account (performance, alerts, capacity, chargeback, and inventory). The Account Monitor can view the Account section in the Dashboard, which lists the number of projects, users, and groups in the account, and the Alerts section, which lists all the alerts for the account.
Mapping VxRack Neutrino roles to OpenStack Keystone roles
Both the VxRack Neutrino management UI layer and the OpenStack environment use the OpenStack Keystone Identity Service to authenticate users and determine user roles.
For example, when a user is assigned an admin role in the VxRack Neutrino Sales account, the Account component in the VxRack Neutrino Platform Service works with the OpenStack Keystone Identity Service to determine that this user is now:
- Account Administrator in the VxRack Neutrino Sales account
- Domain Administrator in the OpenStack Sales domain
As shown in the following figure, a VxRack Neutrino Cloud Administrator maps to a Cloud Administrator in OpenStack.
Figure 19 Mapping of VxRack Neutrino administrator roles to OpenStack Keystone roles

[Figure 19 summary] In the VxRack Neutrino UI, the Cloud Admin holds the admin role in the Default account and three roles in the admin project of the Default account: admin, openstack_admin, and heat_stack_owner. In the Keystone Identity Service, this corresponds to the admin role in the default domain and in the admin project of the default domain; Keystone uses the Cloud Admin role to allow full access to and management of all domains and projects in the VxRack Neutrino UI. The Account Admin holds the admin role in any account other than the Default account (e.g., a "Sales" account); Keystone uses the Account Admin role to allow full access to and management of one domain and the projects within it in the VxRack Neutrino UI. There is no equivalent OpenStack role that maps to the VxRack Neutrino Account Admin role; an Account Admin can only log in to the OpenStack Dashboard UI if he or she has a role in a project (project admin or project member).
A VxRack Neutrino Account Administrator does not map to a role in OpenStack because access to OpenStack resources is based on projects, not domains/accounts. Only when VxRack Neutrino Account Administrators have a role in a project can they log in to the OpenStack Dashboard UI. For example, if a VxRack Neutrino Account Administrator had an admin role in a marketing project, the Account Administrator could log in to the marketing project as a Project Administrator.
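The role mapping described above can be summarized in a small lookup table. This is an illustrative sketch: the role names and scopes follow this guide, but the dictionary and function are hypothetical, not part of the product.

```python
# Illustrative mapping of VxRack Neutrino account-level roles to their
# Keystone-side scope, per the figures in this section. The table itself
# is hypothetical; the role names and scopes come from this guide.

ROLE_MAPPING = {
    # VxRack role: (Keystone-side scope, direct OpenStack Dashboard access)
    "Cloud Administrator":   ("admin role in the default domain and its admin project", True),
    "Account Administrator": ("admin role in one domain (account); no OpenStack equivalent", False),
    "Cloud Monitor":         ("no OpenStack equivalent (read-only, VxRack UI only)", False),
    "Account Monitor":       ("no OpenStack equivalent (read-only, VxRack UI only)", False),
}

def dashboard_access(vxrack_role, project_roles=()):
    """An Account Administrator (or monitor) can reach the OpenStack
    Dashboard only through a project-level role such as admin or _member_."""
    _, direct = ROLE_MAPPING[vxrack_role]
    return direct or bool(set(project_roles) & {"admin", "_member_"})

print(dashboard_access("Account Administrator"))             # False
print(dashboard_access("Account Administrator", ["admin"]))  # True
```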
As shown in the following figure, the VxRack Neutrino monitor role has no equivalent in OpenStack. It is irrelevant in OpenStack and is used only in the VxRack Neutrino UI to view reports.
Figure 20 Mapping of VxRack Neutrino monitor roles to OpenStack Keystone roles

[Figure 20 summary] The Cloud Monitor holds the monitor role in the Default account; Keystone uses the Cloud Monitor role to allow read-only access to all domains and projects in the VxRack Neutrino UI. The Account Monitor holds the monitor role in any account other than the Default account (e.g., a "Sales" account); Keystone uses the Account Monitor role to allow read-only access to one domain and the projects within it in the VxRack Neutrino UI. There is no equivalent OpenStack role that maps to the VxRack Neutrino Cloud and Account Monitor roles.
OpenStack project-scoped roles
The roles that can be assigned to users in OpenStack projects are described in the following table.
Table 12 OpenStack project roles

admin (Project Administrator) — can be assigned by: Cloud Administrator and Account Administrator
A Project Administrator manages the resources within a project. Resources within a project include OpenStack Cinder volumes, Nova instances, snapshots, and networks. A Project Administrator can work only within an OpenStack project. Tasks performed by a Project Administrator include:
- Create/delete Cinder volumes and attach them to Nova instances
- Launch/delete Nova instances
- Create volume and instance snapshots
- Create logical networks

_member_ (Project Member) — can be assigned by: Cloud Administrator and Account Administrator
A Project Member has no administrative privileges, but can complete the following tasks:
- Create/delete Cinder volumes and attach them to Nova instances
- Launch/delete Nova instances
- Create volume and instance snapshots

openstack_admin (OpenStack Administrator) — can be assigned by: Cloud Administrator only
An OpenStack Administrator has access to volumes and instances in ALL projects in OpenStack. The Cloud Administrator can grant this project-level role to a user in any project in any account/domain within the VxRack Neutrino system. After the Cloud Administrator assigns this role, the user has privileges similar to those of the Cloud Administrator, but limited to OpenStack services. The openstack_admin role acts like a cross-project, cross-domain role that allows the openstack_admin user to access and modify resources in all projects.
Note: Cloud Administrators that are created after the initial one at installation must receive openstack_admin privileges to have full Cloud Administrator access to all project resources.

heat_stack_owner (Heat stack owner) — can be assigned by: Cloud Administrator and Account Administrator
The heat_stack_owner role gives a user the ability to create and run Heat Orchestration Templates (HOT) in OpenStack. For example, a user named Joe with the heat_stack_owner role in an OpenStack project creates a template for a stack (a group of OpenStack resources) in his project. The template contains definitions for two instances, a private network, a subnet, and two network ports, which results in the project having two instances connected by a private network.

heat_stack_user (Heat stack user) — internal role used by the OpenStack Orchestration Service (Heat)
The OpenStack Orchestration Service (Heat) uses this internal role and automatically assigns it to internal users that the Service creates during stack deployment. By default, this role restricts API operations. VxRack Neutrino Cloud and Account Administrators should not assign this role to users working in OpenStack under normal operating conditions.

service (Service user) — internal role used by VxRack Neutrino
VxRack Neutrino uses this internal role and automatically assigns it to internal OpenStack service users (Ceilometer, Cinder, Nova, Neutron, and so on) in the Default account. VxRack Neutrino Cloud and Account Administrators should not assign this role to users working in OpenStack under normal operating conditions.
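The stack described in the heat_stack_owner example (two instances joined by a private network) could be expressed as a minimal HOT template. The fragment below is illustrative only: the resource names, image, flavor, and CIDR are assumed placeholders, not values shipped with VxRack Neutrino.

```yaml
# Minimal illustrative HOT template: two instances on a private network.
# Image/flavor names and the CIDR are hypothetical placeholders.
heat_template_version: 2015-10-15

resources:
  private_net:
    type: OS::Neutron::Net

  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: private_net }
      cidr: 10.0.0.0/24

  port_1:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }

  port_2:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }

  instance_1:
    type: OS::Nova::Server
    properties:
      image: cirros            # placeholder image name
      flavor: m1.small         # placeholder flavor
      networks: [ { port: { get_resource: port_1 } } ]

  instance_2:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.small
      networks: [ { port: { get_resource: port_2 } } ]
```

A user with the heat_stack_owner role could launch such a stack from the OpenStack Dashboard UI (Orchestration section), which results in the project containing two instances connected by the private network.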
Tasks performed by role
The tasks that can be performed in the VxRack Neutrino UI by each VxRack Neutrino role, and the tasks that can be performed in the OpenStack Dashboard UI by the project admin and member roles, are listed in the following table.
For a more detailed list of the OpenStack resource tasks that can be performed by the Cloud Administrator, Project Administrator, and Project Member roles, see the VxRack System with Neutrino 1.1 OpenStack Implementation Guide.
Table 13 Tasks performed by VxRack Neutrino roles and by project administrator and member roles

Cloud Admin, Account Admin, Cloud Monitor, and Account Monitor are VxRack Neutrino account-level roles (access to the VxRack Neutrino UI). Project Admin and Project Member are OpenStack project roles (access to the OpenStack Dashboard UI only).

| Task | Cloud Admin | Account Admin | Cloud Monitor | Account Monitor | Project Admin | Project Member* |
|---|---|---|---|---|---|---|
| Tenancy | | | | | | |
| Create accounts (tenants) | X | | | | | |
| Delete accounts | X | | | | | |
| User management | | | | | | |
| Create local users and assign them to projects | X (in all projects) | X (in all projects in an account) | | | | |
| Create/delete projects | X (in all accounts) | X (in one account) | | | | |
| Delete local users | X (in all accounts) | X (in one account) | | | | |
| Assign account-level and project-level roles to local users | X (in all accounts) | X (in one account) | | | | |
| Create local groups of users for better role management | X (in all accounts) | X (in one account) | | | | |
| Add external users to accounts via an Identity Provider (LDAP/AD) | X (in all accounts) | | | | | |
| Assign account-level and project-level roles to users who were added via an external Identity Provider and assign them to projects | X (in all accounts) | X (in one account) | | | | |
| Revoke roles from users and groups who were added into an account from an external Identity Provider | X (in all accounts) | X (in one account) | | | | |
| Compute/Storage | | | | | | |
| Upload operating system images to the OpenStack Glance container so they can be used to launch instances | X | X | | | X | X |
| Create non-default flavors (OpenStack virtual hardware templates) | X (in the OpenStack Dashboard UI, if the Cloud Admin belongs to a project) | | | | | |
| Create and run Heat Orchestration Templates (HOT); templates allow for the creation of most OpenStack resource types | X (in the OpenStack Dashboard UI, if the Cloud Admin has the heat_stack_owner role) | X (in the OpenStack Dashboard UI, if the Account Admin has the heat_stack_owner role) | | | | |
| Use HOT (use the resources defined for the appropriate OpenStack project) | X (in the OpenStack Dashboard UI, if the Cloud Admin has the heat_stack_user role) | X (in the OpenStack Dashboard UI, if the Account Admin has the heat_stack_user role) | | | | |
| Launch instances (OpenStack VMs) using the default flavors and images uploaded to the OpenStack Glance container | X (in the OpenStack Dashboard UI, if the Cloud Admin belongs to a project) | X (in the OpenStack Dashboard UI, if the Account Admin belongs to a project) | | | X | X |
| Provision instances with Cinder volumes backed by VxRack Neutrino block storage (create/delete volumes that can be attached/detached to/from an instance) | X (in the OpenStack Dashboard UI, if the Cloud Admin belongs to a project) | X (in the OpenStack Dashboard UI, if the Account Admin belongs to a project) | | | X | X |
| Perform snapshot-related operations on block storage and instances | X (in the OpenStack Dashboard UI, if the Cloud Admin belongs to a project) | X (in the OpenStack Dashboard UI, if the Account Admin belongs to a project) | | | X | X |
| Delete/stop/restart instances | X (in the OpenStack Dashboard UI, if the Cloud Admin belongs to a project) | X (in the OpenStack Dashboard UI, if the Account Admin belongs to a project) | | | X | X |
| Create/configure logical networks | X (in the OpenStack Dashboard UI, if the Cloud Admin belongs to a project) | | | | X | |
| Monitoring and reports | | | | | | |
| Get metering/chargeback information per account | X (in all accounts) | X (in one account) | X (in all accounts) | X (in one account) | | |
| Analyze the performance of different components of the system (nodes, compute instances, networks, and block storage) | X (in all accounts) | X (in one account) | X (in all accounts) | X (in one account) | | |
| Analyze the performance of compute instances and CPU/memory/storage/network utilization | X (in all accounts) | X (in one account) | X (in all accounts) | X (in one account) | | |
| Report near-real-time data on key metrics for hardware and services | X (in all accounts) | | X (in all accounts) | | | |
| View data on key metrics over configurable periods of time | X (in all accounts) | X (in one account) | X (in all accounts) | X (in one account) | | |
| Perform capacity planning and analyze trends | X (in all accounts) | X (in one account) | X (in all accounts) | X (in one account) | | |
| Store and search logs | X (in all accounts) | | X (in all accounts) | | | |
| Monitor the health and utilization of the infrastructure environment (nodes, disks, switches, CPU/memory utilization, temperature/voltage, availability, and SLA) | X | | X | | | |
| View alerts and perform alert actions (such as acknowledging or assigning alerts) | X (in all accounts) | X (in one account) | X (in all accounts) | X (in one account) | | |
| Configure alerts (enabling new alerts, configuring the alert console) | X | | | | | |
| Configure EMC Secure Remote Services (ESRS) for enabling support of the VxRack Neutrino system | X | | | | | |
| Run periodic health-check scripts | X | | | | | |
| Upgrade/Licensing | | | | | | |
| Download the latest software patches and upgrade both the fabric and the service components | X | | | | | |
| Upgrade/replace physical elements of the VxRack Neutrino system (bricks, nodes, disks) | X | | | | | |
| View license/subscription information for all components | X | | X | | | |
| Procure and apply new licenses for components/services | X | | | | | |
| Enable licenses for only those services that the customer needs | X | | | | | |

* A Project Member has the _member_ role in an OpenStack project.
CHAPTER 4
Get started with VxRack Neutrino
This chapter describes the general administrative workflow for setting up an account, logging in to the VxRack Neutrino UI for the first time, viewing the Dashboard UI, and adding nodes to the Cloud Compute Service.
- Account setup administrative workflow ............................ 56
- Log in to VxRack Neutrino ........................................ 57
- Add nodes to the Cloud Compute Service ........................... 58
- VxRack Neutrino Dashboard UI ..................................... 59
Account setup administrative workflow
The general workflow starts with a top-down approach, with the Cloud Administrator creating an account. (The Cloud Administrator role and other VxRack Neutrino roles are described in VxRack Neutrino account-scoped roles on page 44.) The Cloud Administrator adds users to the account and associates them with the account's initial project. After the Cloud Administrator associates users with the account's initial project, the Cloud Administrator may decide to assign OpenStack project-level roles to the users or to leave project-level role assignment to an Account Administrator. For more information on project-related roles, see OpenStack project-scoped roles on page 48.
The Cloud Administrator can then assign VxRack Neutrino account-level roles to users in the account. The first VxRack Neutrino role that the Cloud Administrator is likely to assign to a user is the admin role in the account, making that user the Account Administrator. The Account Administrator can perform all future tasks in that account. For example, the Account Administrator can create projects in the account, add users to those projects, and assign project-level roles to the users in the projects.
The general Cloud Administrator workflow is as follows:
Procedure
1. Create an account.
2. Add users and/or groups to this account. Users and groups can be added to an account in one of the following ways:
- adding them locally, or
- importing them from an external LDAP or AD identity provider.
Important: Local users/groups and users/groups from an external identity provider cannot be mixed in one account. At the time of account creation, the Cloud Administrator must decide if the users and groups are to be created locally or added from an external source. Once this decision is made, it cannot be changed. For more information on adding users, see Add users and groups to an account on page 66.
3. Associate users/groups with the account's initial project; this makes the initial project the users'/groups' primary project.
4. Assign OpenStack project-level roles to these users/groups in the primary project, if desired.
(It is not required that the Cloud Administrator assign project-level roles to users. The Cloud Administrator could decide to leave that task to an Account Administrator.)
5. Assign VxRack Neutrino account-level roles to users.
At a minimum, one user in the account must be assigned the admin role at the account level, making this person the Account Administrator.
6. At this point, the Cloud Administrator can decide to either:
- delegate all further account actions to the Account Administrator, or
- do all user/group/role/project management in the account.
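The steps above can be sketched as a small in-memory model. This is illustrative only: the classes are hypothetical, but the "local or external users, never mixed" constraint and the admin role name follow this guide.

```python
# Illustrative model of the account setup workflow. The classes are
# hypothetical; the rule that an account holds either local users or
# external (LDAP/AD) users, never both, comes from this guide.

class Account:
    def __init__(self, name, user_source):
        # user_source is fixed at account creation: "local" or "external"
        self.name = name
        self.user_source = user_source
        self.users = {}                       # username -> set of account-level roles
        self.projects = ["%s_initial" % name]  # the account's initial project

    def add_user(self, username, source):
        if source != self.user_source:
            raise ValueError("local and external users cannot be mixed "
                             "in one account")
        self.users[username] = set()

    def assign_account_role(self, username, role):
        self.users[username].add(role)

    def account_admins(self):
        return [u for u, roles in self.users.items() if "admin" in roles]

# Cloud Administrator workflow, steps 1, 2, and 5:
sales = Account("Sales", user_source="local")
sales.add_user("jdoe", source="local")
sales.assign_account_role("jdoe", "admin")  # jdoe is now the Account Administrator
print(sales.account_admins())
```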
The general VxRack Neutrino administrative workflow is shown in the following figure. The administrative workflow in any use case is determined by the number of tasks the Cloud Administrator chooses to delegate to the Account Administrator. The figure illustrates a common use case.
Figure 21 VxRack Neutrino administrator workflow

[Figure 21 summary] The Cloud Administrator creates accounts with either users from an external LDAP/AD identity provider or local users (for example, external users/groups are added to Account A from the identity provider, while locally created users/groups are added to Account B). The Cloud Admin assigns users/groups to each account's initial project, gives them OpenStack project roles, and assigns the admin role at the account level to a user in each account; this person is the Account Administrator. The Cloud Admin can also create more projects in each account, add existing users to projects, and assign users project roles. An Account Admin in an account with local users can create additional local users/groups and assign them roles; an Account Admin in an account with external users can assign roles to users/groups at the account and project level but cannot create new users/groups. The Account Admin assigns the admin role to a user/group in the account's initial project, making that person a Project Administrator, and can create more projects if desired, add existing users/groups to the projects, and assign them project roles. A Project Administrator works in the OpenStack Dashboard UI only and has no access to the VxRack Neutrino UI.
Log in to VxRack Neutrino
You must log in to the VxRack Neutrino UI so that you can add nodes to the Cloud Compute Service; the nodes provide the compute and storage capacity for the OpenStack private cloud.
Before you begin
The Platform Service must be installed on three nodes in the VxRack Neutrino system.
When you log in to VxRack Neutrino for the first time, you log in to the Default account (domain) as the admin user or Cloud Administrator.
Procedure
1. Type the virtual IP address or host name of the VxRack Neutrino system in your browser's address bar:
https://vxrackneutrino_virtual_ip or https://vxrackneutrino_virtual_hostname
2. Enter the account, username, and password.
- Account: Default
- Username: admin
- Password: my_password
The Dashboard view of the VxRack Neutrino UI displays (see VxRack Neutrino Dashboard UI on page 59 for more information). You are logged in to the UI as Cloud Administrator and you are associated with the admin project in the Default account. VxRack Neutrino automatically creates the admin project and the Default account as part of the Platform Service installation.
3. In the upper right corner of the UI, click the downward-facing arrow next to admin and select Logout to log out of the system.
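Because both the VxRack Neutrino UI and OpenStack authenticate through the Keystone Identity Service (with accounts mapping to Keystone domains, as described earlier), the account/username/password triplet entered at login corresponds to a Keystone v3 domain-scoped password authentication request. The sketch below is hypothetical: it only builds the standard Keystone v3 request body and does not represent a documented VxRack Neutrino endpoint.

```python
# Hypothetical sketch: the account/username/password entered in the UI map
# onto a Keystone v3 password-authentication payload (account = domain).
# No VxRack-specific endpoint is shown; this is the generic Keystone v3 body.
import json

def keystone_auth_payload(account, username, password):
    """Build a Keystone v3 domain-scoped password-auth request body."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": account},
                        "password": password,
                    }
                },
            },
            "scope": {"domain": {"name": account}},
        }
    }

body = keystone_auth_payload("Default", "admin", "my_password")
print(json.dumps(body, indent=2))
```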
Add nodes to the Cloud Compute Service
After logging in to VxRack Neutrino for the first time, the Cloud Administrator must add nodes to the Cloud Compute Service. Adding nodes to the Cloud Compute Service provides the virtual compute and storage capacity required for the creation/management of virtual machines (instances) and volumes in OpenStack.
Before you begin
You must have Cloud Administrator privileges.
There must be three existing nodes in the VxRack Neutrino system that are running the Platform Service; these are different nodes from the nodes that run the Cloud Compute Service.
Procedure
1. Click Infrastructure > Nodes.
2. On the Nodes page, click Manage > Add to Service.
3. On the Add Nodes to Service page, select the nodes you want to add to the Cloud Compute Service by clicking the Select button next to each node in the Actions column. You must select a minimum of three nodes.
4. Click Deploy Service.
A progress banner appears. The Cloud Compute Service deployment should take several minutes, depending on the number of nodes that you selected.
Results
After the nodes have been added to the Cloud Compute Service, Cloud Compute displays in the Service column next to each node on the Nodes page, verifying that these are now cloud compute nodes. You should also see the number of cloud compute nodes, as well as node and disk health, storage, and cloud compute capacity information in the Dashboard UI, as described in the following section.
VxRack Neutrino Dashboard UI
The Dashboard displays the overall VxRack Neutrino system status. Each tile displays a certain aspect of the system. The Dashboard view varies by VxRack Neutrino role.
- Cloud Administrator - has full access to the Dashboard UI, which provides an overview of the health, cloud compute, and storage resources in the system. The Cloud Administrator can see the Health, Node Allocation, Cloud Compute Service, Storage, System, and Alerts tiles. These tiles are described in the following sections.
- Account Administrator - can see the Account tile listing the number of projects, users, and groups in the account and can see the Alerts tile listing the alerts in the account.
- Cloud Monitor - can see the Health, Node Allocation, Storage, System, and Alerts tiles.
- Account Monitor - can see the Account tile listing the number of projects, users, and groups in the account and can see the Alerts tile listing the alerts in the account.
Health
The Health tile provides health status information on the VxRack Neutrino infrastructure (nodes, disks, networks, storage) and software services (Platform and Cloud Compute).
Table 14 Dashboard health icons
Node — when clicked, launches to the node health report in Reports > Health > Infrastructure > Nodes.
- Good: The node is online and available and functioning within normal parameters. This health indicator does not apply to the node disks.
- Suspect: The node can be contacted but its health is suspect. A node in this state cannot be added to the Cloud Compute Service or have the Platform Service transferred to it. The Cloud Administrator should call EMC Global Services to troubleshoot/fix the node's hardware.
- Degraded: The node can be contacted but its health is degraded. A node in this state cannot be added to the Cloud Compute Service or have the Platform Service transferred to it. The Cloud Administrator should call EMC Global Services to troubleshoot/fix the node's hardware.
- Severely degraded: The node can be contacted but its health is severely degraded. A node in this state cannot be added to the Cloud Compute Service or have the Platform Service transferred to it. The Cloud Administrator should call EMC Global Services to troubleshoot/fix the node's hardware.
- Unknown: The node cannot be contacted and might be defective. A node in this state cannot be added to the Cloud Compute Service or have the Platform Service transferred to it. The Cloud Administrator should call EMC Global Services to troubleshoot/fix the node's hardware.

Disk — when clicked, launches to the disk health report in Reports > Health > Infrastructure > Storage. Uses the same health icons as Node.

Network — when clicked, launches to the network health report in Reports > Health > Infrastructure > Network. Uses the same health icons as Node.

Storage — when clicked, launches to the Infrastructure > Storage page.
- Not installed: The storage health icons are the same as those listed below for the Platform and Cloud Compute Services, with the exception that the Not Installed icon may display next to Storage on the initial install of the Platform Service, before nodes have been added to the Cloud Compute Service. A single health icon indicates the health of the storage system, specifically the ScaleIO Meta Data Manager (MDM) three-node cluster. The MDMs are ScaleIO components that run on the three platform nodes, and they manage storage requests for the entire VxRack Neutrino ScaleIO-based storage system (for more information, refer to Meta Data Managers (MDM) on page 129).

Platform — when clicked, launches to the Platform component health report in Reports > Health > Infrastructure > Platform.
- Good: The service is available and functioning within normal parameters.
- Partially degraded: The service health is partially degraded. It requires further investigation by the Cloud Administrator and might require EMC Global Services support.
- Degraded: The service health is degraded. It requires further investigation by the Cloud Administrator and might require EMC Global Services support.
- Severely degraded: The service health is severely degraded. It requires further investigation by the Cloud Administrator and might require EMC Global Services support.
- Unknown: The service health cannot be detected. Health information for the Platform Service components cannot be retrieved. It requires further investigation by the Cloud Administrator and might require EMC Global Services support.

Cloud Compute — when clicked, launches to the Cloud Compute component health report in Reports > Health > Cloud Compute > Components. Uses the same health icons as the Platform Service.
Node allocation
A donut chart displays the used and unallocated (unused) nodes in the VxRack Neutrino system using colored segments. The colored segments in the donut chart are:
- Light gray: represents unallocated nodes. A mouse hover over this segment indicates the number of unallocated nodes, for example, Unallocated: 1.
- Dark blue: represents Platform Service nodes. A mouse hover over this segment indicates the number of platform nodes, for example, Platform: 3.
- Blue-gray: represents Cloud Compute nodes. A mouse hover over this segment indicates the number of cloud compute nodes, for example, Cloud Compute: 8.
The number of platform, cloud compute, and unallocated nodes in the VxRack Neutrino system is listed beneath the donut chart.
Note: When you first log in to the Dashboard, you will see only three used platform nodes; the remaining nodes will be designated as unallocated in the donut chart. There will not be any cloud compute nodes displayed because no nodes have been added to the Cloud Compute Service.
Cloud Compute Service
The Cloud Compute Service tile provides information about the OpenStack environment. Three donut charts display information about the number of instances (OpenStack virtual machines), the number of used/available virtual CPUs (vCPUs), and used/available compute memory. This tile also shows the Nova Used Storage, which is the total amount of block storage that the OpenStack instances are consuming. A more detailed description of the cloud compute view is provided in Overview tab on page 88.
When you first log in to the Dashboard, all the donut charts in this Cloud Compute Service tile are empty (gray) because no nodes have been added to the Cloud Compute Service and there are zero instances.
Instances
The total number of instances in the OpenStack environment is displayed as a number within a colored donut chart. The donut chart comprises colored segments in varying shades of blue; each colored segment represents an instance flavor type. A mouse hover over each colored segment indicates the flavor type and the number of instances of that particular flavor (for example, v4.medium: 6).
vCPUs
A donut chart displays the number of vCPUs being used out of the number of available vCPUs. The vCPUs in use are represented by the green segment in the donut chart; the light gray segment represents the number of unused vCPUs.
Memory
A donut chart displays the amount of memory that the OpenStack instances are using out of the total amount of available compute node memory. The total available compute node memory is the aggregate memory of the cloud compute nodes in the system. The green segment in the donut chart represents the memory used by instances, and the light gray segment represents the amount of unused cloud compute node memory. Depending on the VxRack Neutrino brick model, a node can have 128, 256, or 512 GB of memory.
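The aggregation above can be sketched as simple arithmetic; the node sizes and the used figure below are made-up examples (only the 128/256/512 GB node sizes come from the text):

```python
def memory_summary(node_memories_gb, used_gb):
    """Aggregate cloud compute node memory and split it into the used and
    unused amounts shown by the green and light gray segments."""
    total = sum(node_memories_gb)
    return {"total_gb": total, "used_gb": used_gb, "unused_gb": total - used_gb}

# Example: four cloud compute nodes; per the brick models, each node is
# 128, 256, or 512 GB. The counts and usage here are hypothetical.
summary = memory_summary([256, 256, 128, 512], used_gb=300)
print(summary["total_gb"])   # 1152 -> total available compute node memory
print(summary["unused_gb"])  # 852  -> the light gray segment
```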
Storage

The Storage tile provides information on block storage capacity use for cloud compute purposes via a donut chart, and IOPs/bandwidth information for the block storage system via a dynamic graph.
Donut Chart
A donut chart displays the spare, unused, protected (used), degraded, and failed block storage capacity for cloud compute use using colored segments. Clicking the System and Capacity buttons changes the view of the donut chart.
In the System view, the colored segments in the donut chart are:
l Dark gray: represents spare capacity. Spare capacity is the amount of capacity reserved for system use when recovery from failure is required. This capacity cannot be used for storage purposes. It is equivalent to 10 percent of the total amount of block storage on the cloud compute nodes. A mouse hover over this segment indicates the spare capacity, for example, Spare: 11.5 TB.
l Light gray: represents unused capacity. This is the additional storage available for cloud compute purposes. A mouse hover over this segment indicates the unused capacity, for example, Unused: 25.2 TB.
l Green: represents used protected capacity. Protected capacity is the quantity of capacity that is fully protected (primary and secondary copies of the data exist). A mouse hover over this segment indicates the protected capacity, for example, Protected: 53.1 TB.
l Yellow: represents degraded capacity. The capacity is available, but is not protected.
l Red: represents failed capacity. This is the amount of capacity that is unavailable (neither primary nor secondary copies exist).
In the Capacity view, the spare and unused capacity is represented the same way in the donut chart (dark gray and light gray segments). However, the used capacity is represented by a dark blue segment that shows thick-provisioned volume capacity and a light blue segment that shows thin-provisioned capacity. A mouse hover over a segment indicates the used capacity type, for example, Thick: 49.7 TB. Thick- and thin-provisioned volumes are defined in Table 19 on page 91.
The center of the donut chart displays the total block storage capacity on the cloud compute nodes. This does not represent the total amount of capacity available for volume allocation. The amount of capacity for volume allocation is significantly less than this number (less than half of this number) because of the way ScaleIO protects data in the block storage system.
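The relationships above (spare capacity is 10 percent of the raw total, and protected data keeps primary and secondary copies, so usable volume capacity is less than half the raw total) can be sketched as arithmetic. This is a rough estimate under those two stated assumptions, not ScaleIO's exact accounting; the 100 TB figure is illustrative:

```python
def capacity_estimate(raw_tb):
    """Rough block storage breakdown, assuming a 10 percent spare
    reservation and two-copy (primary + secondary) data protection."""
    spare = raw_tb * 0.10                 # reserved for failure recovery
    after_spare = raw_tb - spare
    usable_for_volumes = after_spare / 2  # every byte is stored twice
    return spare, usable_for_volumes

spare, usable = capacity_estimate(100.0)
print(spare)   # 10.0 TB held as spare
print(usable)  # 45.0 TB at most available for volume allocation
```

Note that 45 TB is indeed less than half of the 100 TB shown in the chart center, consistent with the caution above.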
IOPs/Bandwidth Graph
The dynamic graph next to the donut chart can be used to view real-time IOPs and bandwidth information. Clicking the IOPS button displays the IOPS performance statistics of the block storage system. Moving colored graph lines represent real-time data on total IOPs, read IOPs, and write IOPs for storage.
Clicking the Bandwidth button displays the throughput performance statistics of the block storage system. Moving colored graph lines represent real-time data on total bandwidth, read bandwidth, and write bandwidth in KB/s.
System
The System tile contains the following information.
l Version - indicates the Platform/Cloud Compute Service software version
l Memory - aggregate memory of all the nodes in the VxRack Neutrino system
l Raw Storage - aggregate block storage capacity of all the nodes in the VxRack Neutrino system
l ESRS - indicates the status of the system's configuration with EMC Secure Remote Support (ESRS). If the VxRack Neutrino system is not registered, the status displays as Not Configured. If the system is registered, but the ESRS call home feature is disabled, the status displays as Configured - CallHome Disabled. If the system is registered and the call home feature is enabled, it displays as Configured.
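The three ESRS status strings follow a simple two-flag rule; a minimal sketch of that mapping (the function name is ours, for illustration, not part of the product):

```python
def esrs_status(registered: bool, call_home_enabled: bool) -> str:
    """Map ESRS registration state to the status string shown in the System tile."""
    if not registered:
        return "Not Configured"
    if not call_home_enabled:
        return "Configured - CallHome Disabled"
    return "Configured"

print(esrs_status(False, False))  # Not Configured
print(esrs_status(True, False))   # Configured - CallHome Disabled
print(esrs_status(True, True))    # Configured
```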
Alerts
The Alerts tile lists notifications for VxRack Neutrino system components that might require the Cloud Administrator's attention. Each alert contains a brief description, timestamp, and severity icon. Clicking an alert in this pane opens a more detailed description of the alert in the Reports > Alerts section of the UI.
Table 15 Alert icon severity description
Icon Alert severity
Critical
Major
Minor
Information
For more information on alerts, refer to Alerts on page 152.
CHAPTER 5
Accounts
This chapter describes the tasks that can be performed in the Accounts section of the VxRack Neutrino UI.
l Create an account................................................................................................. 66
l Add users and groups to an account...................................................................... 66
l Edit an account..................................................................................................... 71
l Delete an account................................................................................................. 71
Create an account
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, click New Account.
The Accounts page opens.
3. On the General tab of the Accounts page, in the Account Name field, type the name of the account.
4. Optional. In the Description field, type a description of this account.
5. Click Save & Next.
After you finish
You must add users and/or groups to the account.
Add users and groups to an account

In the UI, two types of users and groups can be added to an account:
l local users and groups
l external users and groups from an LDAP or AD identity provider
Note
You cannot add a mixture of local and external users and groups to an account; you can either add local users and groups OR external users and groups from an LDAP or AD identity provider to an account.
In the UI, on the Accounts > account_name > Users/Groups page, you can add users through one of two options.
Option: Local Users & Groups
Description: Users come from a local source and will be saved to the OpenStack Keystone Identity Service that is part of VxRack Neutrino.
To add local users, see Add local users to an account on page 66.
To add local groups, see Add local groups to an account on page 69.

Option: External Users & Groups
Description: Users come from an external LDAP/AD server outside of VxRack Neutrino and will be saved to the LDAP/AD server.
To add external users and groups, see Add external users/groups from an LDAP/AD server to an account on page 70.
Add local users to an account
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. In VxRack Neutrino, on the Accounts > account_name > Users/Groups page, click Local Users & Groups > Local Users > New User.
The New User page opens.
The Cloud Administrator must add the Account Administrator first.
2. In the User Name field, type the user name.
3. In the optional Description and Email fields, type a description and the user's email as needed.
4. In the Password field, type the user's password and then type it again in the Confirm Password field.
5. Associate the user with the account's initial project by selecting the project_accountname in the Primary Project drop-down list.
For any user to see, access, or manage resources within projects (like instances and volumes), that user must be associated with a project (have a role in a project). It is recommended to associate a user with the account's initial project and give that user a role in the initial project, even if that user is not expected to work in OpenStack.
6. In the Primary Project Roles drop-down list, select a project role. For the Account Administrator, select the admin role for the project_accountname initial project. This makes the Account Administrator the Project Administrator in this project. Project-level roles are described in OpenStack project-scoped roles on page 48.
7. In the Account/Domain Roles drop-down list, select the Account Admin (admin) role.
8. In the Project Inherited Roles drop-down list, select a project role; this project role will be assigned to the user in all projects (existing and future) in this account. For the Account Administrator, if you want this user to have a member role in the existing project_accountname initial project and all projects that will be created in this account, select _member_.
9. Verify that the Enabled field is set to True. The user is enabled by default, meaning that the user will be able to authenticate to the account and projects within the account. If you do not want this user to access the account and its projects, set the Enabled field to False.
10. Select the Add Another checkbox and then click Save.
The Account Administrator user appears in the Local Users page.
11. Add any other users to the account. Either the Cloud Administrator or Account Administrator can add these users.
The following table provides examples of users with their roles that could be added to the account. An account most likely would not have four Account Administrators; the table shows four examples as a reference for the types of role combinations that could be assigned to an Account Administrator.
Table 16 Examples of possible local users with their account and project roles

Account Admin 1 (the user added in the above steps)
Account/domain role: admin
Primary project roles: admin
Project inherited roles: _member_
Roles in the primary project: admin, _member_
Roles in other projects in the account: _member_
Project visibility in the account in the VxRack Neutrino UI: Can see all project resources in all projects in the account.
Project visibility/access in the OpenStack Dashboard UI: Can see, access, and manage project resources in the primary project. Can see and access (but not manage) project resources in all non-primary projects (in the account).

Account Admin 2
Account/domain role: admin
Primary project roles: admin, heat_stack_owner
Roles in the primary project: admin, heat_stack_owner
Project visibility in the account in the VxRack Neutrino UI: Can see project resources in the primary project in the account.
Project visibility/access in the OpenStack Dashboard UI: Can see, access, and manage project resources in the primary project. Can create and run OpenStack Heat Orchestration Templates (HOT) in the primary project.

Account Admin 3
Account/domain role: admin
Primary project roles: admin
Roles in the primary project: admin
Roles in other projects in the account: admin role in Project 1
Project visibility in the account in the VxRack Neutrino UI: Can see project resources in the primary project and in Project 1 in the account.
Project visibility/access in the OpenStack Dashboard UI: Can see, access, and manage project resources in the primary project and in Project 1.

Account Admin 4
Account/domain role: admin
Primary project roles: openstack_admin
Roles in the primary project: openstack_admin
Project visibility in the account in the VxRack Neutrino UI: Can see project resources in ALL projects in the entire OpenStack private cloud.
Project visibility/access in the OpenStack Dashboard UI: Can see, access, and manage project resources in ALL projects in the entire OpenStack private cloud.

Account Monitor
Account/domain role: monitor
Primary project roles: monitor
Roles in the primary project: monitor
Project visibility in the account in the VxRack Neutrino UI: Can see all users, groups, and projects in the account, and account reports.
Project visibility/access in the OpenStack Dashboard UI: Cannot use the OpenStack Dashboard UI.

Project Admin 1
Primary project roles: admin
Project inherited roles: _member_
Roles in the primary project: admin, _member_
Roles in other projects in the account: _member_
Project visibility in the account in the VxRack Neutrino UI: Cannot use the UI.
Project visibility/access in the OpenStack Dashboard UI: Can see, access, and manage project resources in the primary project. Can see and access (but not manage) project resources in all non-primary projects (in the account).

Project Admin 2
Primary project roles: admin
Roles in the primary project: admin
Project visibility in the account in the VxRack Neutrino UI: Cannot use the UI.
Project visibility/access in the OpenStack Dashboard UI: Can see, access, and manage project resources in the primary project.

Project Member 1
Primary project roles: _member_
Roles in the primary project: _member_
Roles in other projects in the account: _member_ role in Project 1
Project visibility in the account in the VxRack Neutrino UI: Cannot use the UI.
Project visibility/access in the OpenStack Dashboard UI: Can see and access resources in both the primary project and Project 1.

Project Member 2
Primary project roles: _member_
Project inherited roles: _member_
Roles in the primary project: _member_
Roles in other projects in the account: _member_
Project visibility in the account in the VxRack Neutrino UI: Cannot use the UI.
Project visibility/access in the OpenStack Dashboard UI: Can see and access resources in all projects (in the account).
Add local groups to an account
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. In VxRack Neutrino, on the Accounts > account_name > Users/Groups page, click Local Groups > New Group.
2. On the Group Information tab of the New Group page:
a. In the Group Name field, type the name of the group.
b. In the optional Description field, type a description for the group as needed.
c. Select the appropriate account-level role for this group (if applicable) in the Account/Domain Roles drop-down list.
d. To give the group a project role in all projects (existing and future) in this account, select a project role in the Project Inherited Roles drop-down list. For example, if you want to give the group an admin role in all projects that will be created in this account, select admin.
3. Click the Group Members tab.
a. In the Available Users list, click the plus sign next to the users that you want to add to this group. They move to the Selected Users list.
b. Optional. If you need to add another group to the account, select the Add Another checkbox.
c. Click Save.
The new group appears on the Accounts > account_name > Users/Groups > Local Groups page.
4. Add other groups to the account as needed.
After you finish
If you did not assign the group any project inherited roles, you must add the group to a project and assign a project-level role to the group (see Add a group to a project on page 79).
Add external users/groups from an LDAP/AD server to an account
Before you begin
You must have Cloud Administrator privileges.
The maximum number of external users that can be added to VxRack Neutrino via the UI is 10,000 users.
Procedure
1. In VxRack Neutrino, on the Accounts > account_name > Users/Groups page, click External Users & Groups.
2. On the New External Provider page, type in values for the UI fields described in the following LDAP/AD settings table.
Table 17 LDAP/AD settings

Server URL: Full URL, including the protocol, address, and port. For example: ldap://localhost:389
User Bind Distinguished Name: LDAP/AD login user. For example: cn=admin,dc=example,dc=com
Password: Password of the LDAP/AD login user.
User Tree Distinguished Name: Specifies where to find the LDAP/AD users. For example: ou=users,dc=example,dc=com
Group Tree Distinguished Name: Specifies where to find the LDAP/AD groups. For example: ou=groups,dc=example,dc=com

Advanced settings:

Query Scope: Specifies the scope of the search. Valid values are One level or Subtree.
User Filter: Optional. Specifies an LDAP/AD search filter for users. For example: memberof=cn=Users,ou=workgroups,dc=example,dc=org
Group Filter: Optional. Specifies an LDAP/AD search filter for groups. For example: description=Marketing
User Class Name: Type of the user class. For example: inetOrgPerson. Note: The User Class Name field does not accept special characters (such as #&!).
Group Class Name: Type of the group class. For example: groupOfNames. Note: The Group Class Name field does not accept special characters (such as #&!).
User Attribute Name: Indicates the LDAP/AD attribute used to identify a user. For example: sAMaccountName
Group Attribute Name: Indicates the LDAP/AD attribute used to identify a group. Used for searching the directory by groups. For example: CN
3. Click Validate Connection.
VxRack Neutrino performs checks for proper LDAP/AD user authentication, connection status with the LDAP/AD server, and syntax in the LDAP/AD fields.
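The flavor of syntax checks mentioned above can be sketched with the standard library; this is an illustrative stand-in for, not a reproduction of, the product's validation, using the example values from Table 17:

```python
from urllib.parse import urlparse

def check_ldap_settings(settings):
    """Basic syntax checks on LDAP/AD settings: URL scheme and DN shape."""
    errors = []
    url = urlparse(settings["server_url"])
    if url.scheme not in ("ldap", "ldaps"):
        errors.append("Server URL must use the ldap:// or ldaps:// protocol")
    for field in ("user_bind_dn", "user_tree_dn", "group_tree_dn"):
        # Every comma-separated DN component should look like attr=value.
        if not all("=" in part for part in settings[field].split(",")):
            errors.append(f"{field} is not a valid distinguished name")
    return errors

settings = {
    "server_url": "ldap://localhost:389",
    "user_bind_dn": "cn=admin,dc=example,dc=com",
    "user_tree_dn": "ou=users,dc=example,dc=com",
    "group_tree_dn": "ou=groups,dc=example,dc=com",
}
print(check_ldap_settings(settings))  # [] -> all syntax checks pass
```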
4. Click Save.
New users/groups appear in the Users/Groups > External Users or External Groups page.
After you finish
After adding external users and groups to an account, you must add them to projects and assign them roles in the projects.
l Add a user to a project on page 78
l Add a group to a project on page 79
You must also assign them account-level roles as needed.
l Edit a user's account-level role or project inherited role on page 84
l Edit a group's account-level role or project inherited role on page 85
Edit an account
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account you want to edit, click Edit.
3. On the General tab of the Accounts > account_name page, a Cloud Administrator can edit the Account Name and Description fields. (An Account Administrator cannot edit these fields.)
4. Click the Users/Groups tab of the Accounts > account_name page to:
l Add a local user or group (see Add local users to an account on page 66 and Add local groups to an account on page 69).
l Add external users or groups from LDAP (see Add external users/groups from an LDAP/AD server to an account on page 70).
l Edit a user or group (see Edit a user on page 74 and Edit a group on page 75).
l Disable a user (see Disable a user on page 74).
l Delete a user or group (see Delete a user on page 75 and Delete a group on page 76).
Delete an account
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, locate the account you want to delete in the table. In the Actions column next to the account you want to delete, click the downward-facing arrow in the drop-down list and select Delete.
3. At the prompt, click Yes.
Results
During the deletion process, an Account Deactivated status appears beside the account on the Accounts page. When the account is deleted, all projects within it are deleted, and the account no longer appears in the list of accounts on the Accounts page.
CHAPTER 6
Users/Groups
This chapter describes the tasks that can be performed in the Accounts > account_name > Users/Groups page of the VxRack Neutrino UI.
l Add a user............................................................................................................. 74
l Edit a user............................................................................................................. 74
l Disable a user....................................................................................................... 74
l Delete a user......................................................................................................... 75
l Add a group.......................................................................................................... 75
l Edit a group.......................................................................................................... 75
l Delete a group.......................................................................................................76
Add a user
For information on adding local users to accounts, refer to Add local users to an account on page 66.
For information on adding external users from an LDAP/AD identity provider into an account, refer to Add external users/groups from an LDAP/AD server to an account on page 70.
For information on adding users to projects, refer to Add a user to a project on page 78.
Edit a user
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account where the user is located, click Edit.
3. On the Accounts > account_name page, click the Users/Groups tab.
4. On the Local Users tab of the Accounts > account_name > Users/Groups page, in the Actions column next to the user you want to edit, click Edit.
Note that only local users, not external users, can be edited.
5. Edit the following fields:
l User Name
l Description
l Password
l Email
l Account/Domain Roles
l Project Inherited Roles
l Enabled
6. Click Save.
The user is updated.
After you finish
If you want to edit the user's project-level role, see Edit a user's project-level role on page 84.
Disable a user

When you disable a user, that user still exists in the account but can no longer access projects.
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account where the user is located, click Edit.
3. On the Accounts > account_name page, click the Users/Groups tab.
4. On the Local Users tab of the Accounts > account_name > Users/Groups page, click the downward-facing arrow in the drop-down list in the Actions column next to the user you want to disable, and select Disable User.
Note: Cloud Administrators (other than the one created at installation) and Account Administrators can disable themselves. To reauthenticate, another Cloud Administrator must help them.
Delete a user
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Any user that is enabled, that is, can authenticate to projects, can be deleted.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account where the user is located, click Edit.
3. On the Accounts > account_name page, click the Users/Groups tab.
4. On the Local Users tab of the Accounts > account_name > Users/Groups page, click the downward-facing arrow in the drop-down list in the Actions column next to the user you want to delete, and select Delete User.
5. At the prompt, click Yes.
Note: Cloud Administrators (other than the one created at installation) and Account Administrators can delete themselves and cannot reauthenticate.
Add a group
For information on adding local groups to accounts, refer to Add local groups to an account on page 69.
For information on adding external groups to accounts, refer to Add external users/groups from an LDAP/AD server to an account on page 70.
For information on adding groups to projects, refer to Add a group to a project on page 79.
Edit a group
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account where the group is located, click Edit.
3. On the Accounts > account_name page, click the Users/Groups tab.
4. Click Local Groups.
5. In the Actions column next to the group you want to edit, click Edit.
6. On the Group Information tab, you can edit the following fields:
l Group Name
l Description
l Account/Domain Roles - Edit the group's account-level role (Account Admin (admin) or Account Monitor (monitor)), if applicable.
l Project Inherited Roles - Edit the group's project-level roles that are propagated in any existing projects and in future projects that are created within the account.
7. Click Save.
8. Click the Group Members button to add more members to the group by clicking the plus sign next to the member(s) you want to add in the Available Users list. You can remove members from the group by clicking the minus sign next to the member(s) you want to remove in the Selected Members list.
9. Click Save.
After you finish
If you would like to edit the group's project-level role(s), see Edit a group's project-level role on page 85.
Delete a group
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account where the group is located, click Edit.
3. On the Accounts > account_name page, click the Users/Groups tab.
4. Click Local Groups.
5. In the Actions column next to the group you want to delete, click Delete Group.
6. At the prompt, click Yes.
CHAPTER 7
Projects
This chapter describes the tasks that can be performed in the Accounts > account_name > Projects page of the VxRack Neutrino UI.
l Create project....................................................................................................... 78
l Add a user to a project.......................................................................................... 78
l Add a group to a project........................................................................................ 79
l Remove a user from a project................................................................................ 79
l Remove a group from a project.............................................................................. 80
l Edit a project........................................................................................................ 80
l View access roles in a project................................................................................ 80
l Disable a project................................................................................................... 81
l Delete project........................................................................................................81
Create project
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account in which you want to create a project, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. Click New Project.
5. On the Project Information tab of the New Project page, enter a Project Name and an optional Description.
6. Verify the Enabled field is set to True. The project is enabled by default, meaning that users can authenticate to it (when they are added later). If you want to create this project but do not want users to access it yet, set the Enabled field to False.
7. Perform one of the following actions:
l If you want to create another project, select the Add Another checkbox, then click Save.
l If you are only creating this project, click Save.
Results
A message at the top of the page indicates that the project has been successfully added to the account. The new project appears in the list of projects on the Projects page.
After you finish
You must create a private network for your newly created project.
Your newly created OpenStack project has a default security group that allows traffic between any instances in the same default security group. However, you must manually add rules to the default security group to allow SSH and ICMP traffic. (Refer to OpenStack documentation on how to add rules to the default security group: http://docs.openstack.org/user-guide/configure_access_and_security_for_instances.html.)
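The two rules called out above (SSH and ICMP) correspond to ingress rules shaped roughly like the following dictionaries. The field names mirror the OpenStack Networking security group rule schema; the wide-open remote prefix is illustrative only and would normally be restricted:

```python
# Hypothetical rule payloads for the project's default security group.
ssh_rule = {
    "direction": "ingress",
    "protocol": "tcp",
    "port_range_min": 22,             # SSH
    "port_range_max": 22,
    "remote_ip_prefix": "0.0.0.0/0",  # open to all; restrict in practice
}
icmp_rule = {
    "direction": "ingress",
    "protocol": "icmp",               # allows ping for reachability checks
    "remote_ip_prefix": "0.0.0.0/0",
}
print(ssh_rule["protocol"], icmp_rule["protocol"])  # tcp icmp
```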
To add users and/or groups to the project, refer to Add a user to a project on page 78.
Add a user to a project
Before you begin
You must have Cloud Administrator, Account Administrator, or openstack_admin privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account that contains the project in which you want to add the user, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. Locate the project in the list, and in the Actions column next to the project, click Edit.
5. On the Edit Project page, click Project Members.
6. In the Available Users list, click the plus sign next to the user's name.
The user moves to the Selected Users list.
7. In the Selected Users list, assign the appropriate project-level role(s) in the drop-down list next to the user's name.
8. Click Save.
A message at the top of the UI indicates the project is successfully updated.
Add a group to a project
Before you begin
You must have Cloud Administrator, Account Administrator, or openstack_admin privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account that contains the project in which you want to add the group, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. Locate the project in the list, and in the Actions column next to the project, click Edit.
5. On the Edit Project page, click the Project Groups tab.
6. In the Available Groups list, click the plus sign next to the group's name.
The group moves to the Selected Groups list.
7. In the Selected Groups list, assign the appropriate project-level role(s) in the drop-down list next to the group's name.
8. Click Save.
A message at the top of the UI indicates the project is successfully updated.
Remove a user from a project
Before you begin
You must have Cloud Administrator, Account Administrator, or openstack_admin privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account that contains the project where you want to remove the user, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. In the Actions column next to the project you want to edit, click Edit.
5. On the Edit Project page, click Project Members.
6. In the Selected Users list, click the minus sign next to the user's name.
7. Click Save.
Remove a group from a project
Before you begin
You must have Cloud Administrator, Account Administrator, or openstack_admin privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account that contains the project where you want to remove the group, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. In the Actions column next to the project you want to edit, click Edit.
5. On the Edit Project page, click Project Groups.
6. In the Selected Groups list, click the minus sign next to the group's name.
7. Click Save.
Edit a project
Before you begin
You must have Cloud Administrator, Account Administrator, or openstack_admin privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account that contains the project you want to edit, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. In the Actions column next to the project you want to edit, click Edit.
5. On the Project Information tab of the Edit Project page, you can edit the Project Name, Description, and Enabled fields. (An enabled project means that users can authenticate to it and access it. A disabled project means that users will not be able to authenticate to it.)
6. Click Save.
View access roles in a project
Before you begin
You must have Cloud Administrator, Account Administrator, or openstack_admin privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account that contains the project in which you want to view roles, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. In the Actions column next to the project in which you want to view roles, click View Access Roles.
5. On the Access Roles page:
a. Click the Project Members tab to see a list of users who have access to this project and their project roles and inherited roles.
b. Click the Project Groups tab to see a list of groups that have access to this project and their project roles and inherited roles.
The Project Roles column shows the roles that each user/group was assigned in the project when the user/group was created in the system. (This is the role that was assigned in the Primary Project Roles field when the user/group was created.)
The Inherited Roles column shows the roles that each user/group inherits in all projects in the account. (This is the role that was assigned to the user/group in the Project Inherited Roles field when the user/group was created.)
Disable a project
Before you begin
You must have Cloud Administrator, Account Administrator, or openstack_admin privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account that contains the project you want to disable, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. Locate the project in the list, click the downward-facing arrow in the Actions column, and select Disable Project from the drop-down list.
5. At the prompt, click Yes.
Results
Users cannot authenticate with this project; the project exists, but users cannot access it. Note that you do not have to disable a project in order to delete it.
Delete a project
Before you begin
You must have Cloud Administrator, Account Administrator, or openstack_admin privileges.
A project can be deleted while it is enabled, that is, while users can authenticate to it; you do not need to disable a project before deleting it.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account that contains the project you want to delete, click Edit.
3. On the Accounts > account_name page, click the Projects tab.
4. Locate the project in the list, click the downward-facing arrow in the Actions column, and select Delete Project from the drop-down list.
5. At the prompt, click Yes.
Results
The project, including all its resources, such as instances and volumes, is deleted.
CHAPTER 8
Roles
This chapter describes how to assign project-level and account-level roles to users and groups in the VxRack Neutrino UI.
- Edit a user's account-level role or project inherited role
- Edit a user's project-level role
- Edit a group's account-level role or project inherited role
- Edit a group's project-level role
Edit a user's account-level role or project inherited role
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account where the user is located, click Edit.
3. On the Accounts > account_name page, click the Users/Groups tab.
4. On the Local Users tab (or External Users tab if the user was added into the account externally), in the Actions column next to the user's name, click Edit.
5. On the Edit User page:
a. If you want to edit the user's account-level role, in the Account/Domain Roles field, select either the Account Admin (admin) role or the Account Monitor (monitor) role.
b. If you want to edit the user's project inherited roles, in the Project Inherited Roles field, select the project roles that you want this user to inherit in all existing and future projects in the account.
If you want the user to have access to only a subset of the projects in the account (as opposed to all of the projects in the account), assign the user role(s) at the project level instead (see Add a user to a project on page 78).
6. Click Save.
Edit a user's project-level role
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column, click Edit next to the account where the user is located.
3. On the Accounts > account_name page, click the Projects tab.
4. Locate the project in the list, and in the Actions column next to the project in which the user is located, click Edit.
5. On the Edit Project page, click Project Members.
6. In the Selected Users list, select the appropriate project roles from the drop-down list next to the user's name.
7. Click Save.
Edit a group's account-level role or project inherited role
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column next to the account where the group is located, click Edit.
3. On the Accounts > account_name page, click the Users/Groups tab.
4. On the Local Groups tab (or the External Groups tab if the group was added into the account externally), in the Actions column next to the group's name, click Edit.
5. On the Edit Group page:
a. If you want to edit the group's account-level role, in the Account/Domain Roles field, select either the Account Admin (admin) role or the Account Monitor (monitor) role.
b. If you want to edit the group's project inherited roles, in the Project Inherited Roles field, select the appropriate project roles that you want this group to inherit in all existing and future projects in the account.
6. Click Save.
Edit a group's project-level role
Before you begin
You must have Cloud Administrator or Account Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Accounts.
2. On the Accounts page, in the Actions column, click Edit next to the account where the group is located.
3. On the Accounts > account_name page, click the Projects tab.
4. Locate the project in the list, and in the Actions column next to the project in which the group is located, click Edit.
5. On the Edit Project page, click Project Groups.
6. In the Selected Groups list, select the appropriate project roles in the drop-down list next to the group's name.
7. Click Save.
CHAPTER 9
Cloud Compute Service
This chapter describes the views in the Services > Cloud Compute section of the VxRack Neutrino UI.
- Cloud compute introduction
- Overview tab
- Volumes tab
- Configuration tab
- OpenStack Dashboard tab
- Manage tab
Cloud compute introduction
In the VxRack Neutrino UI, you can access the Cloud Compute page by clicking Services > Cloud Compute in the navigation pane. The Cloud Compute page has five tabs: Overview, Volumes, Configuration, OpenStack Dashboard, and Manage. These tab views are discussed in the following sections.
Cloud Compute UI Visibility
The views that are available to VxRack Neutrino administrators on the Cloud Compute page are summarized in the following table.

VxRack Neutrino Administrator | Overview tab | Volumes tab | Configuration tab | OpenStack Dashboard tab | Manage tab
Cloud Administrator           | yes          | yes         | yes               | yes                     | yes
Account Administrator         | no           | yes         | no                | yes                     | no
Important: You must be associated with a project, that is, have a role in a project, to view information on the Cloud Compute page.
If you are an Account Administrator and do not have a role in any project, you will be unable to view volume information on the Cloud Compute page. To view the information on the Cloud Compute page, you must add yourself to at least one project and give yourself a role in that project (see Add a user to a project on page 78). Once you have a role in a project, you can select the project in the Project Scope drop-down list on the Cloud Compute page and view the volumes in that project.
Overview tab
The Overview tab on the Cloud Compute page provides the Cloud Administrator with a dashboard view of the cloud compute resources in the VxRack Neutrino system.
The page gives read-only information on the number of instances, the number of used and available vCPUs, the amount of used and available compute memory, the amount of used compute storage, and the versions of the OpenStack Mitaka components in the VxRack Neutrino system.
Instances
The total number of instances in the OpenStack environment is displayed as a number within a colored circle. The circle is composed of colored segments, each representing an instance flavor type. The flavor type and the number of instances of that particular flavor (for example, p3.large: 2) are indicated when you move the mouse over a colored segment. So, in this example, of the total instances in the OpenStack private cloud, two are instances of the p3.large flavor.
Instances are the running virtual machines within the OpenStack private cloud. An instance is created (launched) by users working in the OpenStack Dashboard UI. Users work within the context of a project and launch an instance by sourcing it from an OpenStack image and selecting a flavor.
OpenStack images are virtual machine templates and can be standard installation media such as ISO images. Images contain bootable file systems that are used to launch instances. Flavors are OpenStack virtual hardware templates that define sizes for RAM, disk, number of vCPUs, and so on. The disks (volumes) associated with instances are ephemeral, meaning that they effectively disappear when an instance is terminated. Persistent storage is provided for instances by attaching volumes to running instances. Persistent storage means that the volume outlives any other resource and is always available, regardless of the state of a running instance. For more information on persistent volumes, refer to Ephemeral and persistent volumes on page 133.
The VxRack Neutrino default installation provides the 28 flavors listed in the following table.
Table 18 Default OpenStack flavors

OpenStack flavor name | vCPUs | RAM* | Root disk** | Ephemeral disk*** | Storage pool for root and ephemeral disks****

General purpose. Use cases: small and mid-size databases, data processing tasks that require additional memory, caching collections of instances, and running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications.

v4.medium  | 1 | 4 GB  | 0 GB | 8 GB   | Performance (SSD)
v4.large   | 2 | 8 GB  | 0 GB | 32 GB  | Performance (SSD)
v4.xlarge  | 4 | 16 GB | 0 GB | 80 GB  | Performance (SSD)
v4.2xlarge | 8 | 32 GB | 0 GB | 160 GB | Performance (SSD)
v3.medium  | 1 | 4 GB  | 0 GB | 8 GB   | Capacity (HDD)
v3.large   | 2 | 8 GB  | 0 GB | 32 GB  | Capacity (HDD)
v3.xlarge  | 4 | 16 GB | 0 GB | 80 GB  | Capacity (HDD)
v3.2xlarge | 8 | 32 GB | 0 GB | 160 GB | Capacity (HDD)

Compute optimized. Use cases: high performance front-end collections of instances, web servers, batch processing, distributed analytics, high performance science and engineering applications, ad serving, massively multiplayer online (MMO) gaming, and video encoding.

p4.large   | 2  | 4 GB  | 0 GB | 32 GB  | Performance (SSD)
p4.xlarge  | 4  | 8 GB  | 0 GB | 80 GB  | Performance (SSD)
p4.2xlarge | 8  | 16 GB | 0 GB | 160 GB | Performance (SSD)
p3.large   | 2  | 4 GB  | 0 GB | 32 GB  | Capacity (HDD)
p3.xlarge  | 4  | 8 GB  | 0 GB | 80 GB  | Capacity (HDD)
p3.2xlarge | 8  | 16 GB | 0 GB | 160 GB | Capacity (HDD)
p3.4xlarge | 16 | 32 GB | 0 GB | 320 GB | Capacity (HDD)
p3.8xlarge | 32 | 64 GB | 0 GB | 640 GB | Capacity (HDD)

Memory optimized. Use cases: memory-optimized instances for high performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis, and larger deployments of SAP, Microsoft SharePoint, and other enterprise applications.

e4.large   | 2  | 16 GB  | 0 GB | 32 GB  | Performance (SSD)
e4.xlarge  | 4  | 32 GB  | 0 GB | 80 GB  | Performance (SSD)
e4.2xlarge | 8  | 64 GB  | 0 GB | 160 GB | Performance (SSD)
e3.large   | 2  | 16 GB  | 0 GB | 32 GB  | Capacity (HDD)
e3.xlarge  | 4  | 32 GB  | 0 GB | 80 GB  | Capacity (HDD)
e3.2xlarge | 8  | 64 GB  | 0 GB | 160 GB | Capacity (HDD)
e3.4xlarge | 16 | 128 GB | 0 GB | 320 GB | Capacity (HDD)
e3.8xlarge | 32 | 256 GB | 0 GB | 640 GB | Capacity (HDD)

Storage optimized. Use cases: NoSQL databases like Cassandra and MongoDB, scale-out transactional databases, data warehousing, Hadoop, and cluster file systems.

k3.xlarge  | 4  | 32 GB  | 0 GB | 800 GB   | Capacity (HDD)
k3.2xlarge | 8  | 64 GB  | 0 GB | 1,600 GB | Capacity (HDD)
k3.4xlarge | 16 | 128 GB | 0 GB | 3,200 GB | Capacity (HDD)
k3.8xlarge | 32 | 256 GB | 0 GB | 6,400 GB | Capacity (HDD)
*GB for RAM, root disk capacity, and ephemeral disk capacity is based on the binary system (base 2) of measurement. See Storage capacity units on page 19 for details.
**When an instance is launched with any one of the listed flavors, two ephemeral volumes are created (root and ephemeral disks). The root disk is the primary ephemeral volume that the base image is copied into; it stores the operating system. The 0 GB size of the root disk allows the root disk to take the size of the image used to launch the instance, with a minimum root disk size of 8 GB. (VxRack Neutrino block storage requires that volume sizes be in increments of 8 GB.) For example, when an instance is sourced from an image that is 870 MB in size, the root disk would be 870 MB, but then rounded up to the next multiple of 8, which is 8 GB. An instance sourced from an image that is 870 MB in size, with the v4.large flavor selected, would result in a launched instance with a root disk of 8 GB and an ephemeral disk of 32 GB.
***The secondary ephemeral volume, which stores data.
****Flavor names that contain the number 3 as the second character denote that the instance's storage will be provisioned out of the VxRack Neutrino capacity storage pool, which contains HDD storage devices. Flavor names that contain the number 4 as the second character denote that the instance's storage will be provisioned out of the performance storage pool, which contains SSD storage devices. Refer to Protection domains and storage pools on page 130 for more information on VxRack Neutrino storage pools.
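The root-disk sizing rule in the second footnote can be sketched as a quick calculation. This is an illustration only; the function name and the byte-based input are ours, not part of the product:

```python
import math

MB = 1024 ** 2  # binary (base 2) units, matching the table's capacity units
GB = 1024 ** 3

def root_disk_size_gb(image_size_bytes, increment_gb=8):
    """Round an image's size up to the next 8 GB multiple, with an
    8 GB minimum, per the root-disk footnote above."""
    size_gb = image_size_bytes / GB
    return max(increment_gb, math.ceil(size_gb / increment_gb) * increment_gb)

# An 870 MB image is rounded up to an 8 GB root disk; with the v4.large
# flavor, the instance also gets a 32 GB ephemeral disk.
print(root_disk_size_gb(870 * MB))  # 8
```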
vCPUs
A colored circle displays the number of virtual CPUs (vCPUs) that is being used out of the number of available vCPUs. The vCPUs in use are represented by the green portion of the circle, and the gray portion of the circle represents the number of unused vCPUs. A vCPU is the virtual processor of an OpenStack instance (virtual machine). vCPUs run on physical CPUs (cores), and OpenStack instances are each allocated a certain number of vCPUs, depending on the flavor selected at instance creation.
The number of vCPUs in the VxRack Neutrino private cloud is determined by the number of cloud compute node CPU cores in the system. Each cloud compute node can have 12, 16, or 20 CPU cores depending on the brick model. Each CPU core presents 2 logical cores because of hyper-threading, which means a node with 12 CPU cores actually presents 24 logical cores to the operating system on that node. The logical cores in VxRack Neutrino are the equivalent of vCPUs in OpenStack. For example, a node with 12 CPU cores (24 logical CPU cores) in VxRack Neutrino provides 24 vCPUs on that node in OpenStack. A VxRack Neutrino system that has a total of three cloud compute nodes with 12 CPU cores per node has a total of 72 available vCPUs in OpenStack (12 CPU cores per node x 2 (hyper-threaded) = 24 logical cores per node x 3 nodes = 72 logical cores = 72 vCPUs).
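The vCPU arithmetic above reduces to a simple product. A minimal sketch (the function name is ours):

```python
def total_vcpus(num_nodes, cores_per_node, threads_per_core=2):
    """vCPUs presented to OpenStack: each physical core provides two
    logical cores via hyper-threading, and each logical core is one vCPU."""
    return num_nodes * cores_per_node * threads_per_core

# Three cloud compute nodes with 12 CPU cores each, as in the example:
print(total_vcpus(3, 12))  # 72
```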
Memory
A colored circle displays the amount of memory that the OpenStack instances are using out of the total amount of available compute node memory. The total available compute node memory is the aggregate memory of the compute nodes in the system. The green portion of the circle represents the memory used by instances, and the gray portion of the circle represents the amount of unused compute node memory. Depending on the size of the VxRack Neutrino brick, a node can have 128, 256, or 512 GB of memory.
Nova used storage
The total quantity of block storage that the OpenStack instances are consuming is presented on the Cloud Compute page. This number is the storage consumed by the root and ephemeral disks of the running OpenStack instances.
OpenStack Mitaka components
A table lists the OpenStack Mitaka service components with their version numbers. These components comprise the OpenStack environment and are installed as part of the VxRack Neutrino Platform Service deployment. For more information on these components, refer to Platform Service on page 28.
Volumes tab
The Cloud Compute > Volumes tab provides Cloud and Account Administrators a read-only view of the volumes in the OpenStack private cloud.
In the OpenStack Dashboard UI, users create volumes and attach them to instances so they have the block storage capacity they need for cloud computing. Volumes are created using the OpenStack Cinder service, which is backed by VxRack Neutrino's ScaleIO storage. For more information on how to create and manage volumes in OpenStack, refer to http://docs.openstack.org/user-guide/dashboard_manage_volumes.html.
Each VxRack Neutrino administrator's project scope determines what that administrator will see in the Volumes tab. The Project Scope drop-down list on the Volumes tab shows the projects in which the administrator has access rights, and only the volumes associated with those projects display in the list of volumes on the Volumes tab. For example, the Project Scope drop-down list contains the admin project for a Cloud Administrator; this results in a listing of all the volumes associated with all the projects in the OpenStack private cloud. An Account Administrator might have several projects listed in the Project Scope drop-down list, and when they click each project, the volumes associated with that project display in the list on the Volumes tab. The list of volumes on the Volumes tab contains the information described in the following table.
Table 19 Volume parameters

Parameter | Description
Cinder Name | The name that the user gives to the volume when it is created in OpenStack.
Attached To | The name of the OpenStack instance (virtual machine) to which the volume is attached.
Capacity | The amount of storage capacity in GB. The minimum volume capacity is 8 GB. If you create a volume with a capacity that is not a multiple of 8, VxRack Neutrino will round up the volume size to the next multiple of 8. For example, if you create a volume in OpenStack with 10 GB of capacity, VxRack Neutrino will round up the volume size to 16 GB. The volume capacity shown in the Capacity column in the table on the Volumes tab is actually half the space that the volume requires in the VxRack Neutrino ScaleIO-based storage system. For example, a volume capacity might display as 24 GB in the Volumes tab (because that was the size of the volume when it was created in OpenStack), but this volume actually requires 48 GB of space in the VxRack Neutrino ScaleIO-based storage system, because ScaleIO stores two copies of the volume data. The total 48 GB of volume capacity is accounted for in the calculation of the used compute storage shown in the green protected storage segment in the Capacity donut in the Infrastructure > Storage > Overview page of the UI.
Storage Name | The VxRack Neutrino block storage (ScaleIO) UUID that is automatically generated when the volume is created in OpenStack. This ScaleIO UUID also appears in the Infrastructure > Storage > Frontend > Volumes view in the VxRack Neutrino UI (see Frontend tab on page 140).
Protection Domain | The protection domain (set of cloud compute nodes) from which the volume was provisioned. There are two types of protection domains: performance and capacity. The cloud compute nodes with SSDs are grouped in performance protection domains, for example, the Compute_Performance1 protection domain. The cloud compute nodes with HDDs are grouped in capacity protection domains, for example, the Compute_Capacity1 protection domain. For more information, see Protection domains and storage pools on page 130.
Storage Pool | The storage pool (set of physical storage devices in a protection domain) from which the volume was provisioned. There is one storage pool per protection domain. The SSD devices are grouped in performance storage pools, for example the Compute_Performance storage pool. The HDD devices are grouped in capacity storage pools, for example the Compute_Capacity storage pool. For more information, see Protection domains and storage pools on page 130.
Bandwidth | The rate of data transfer; throughput in bytes per second.
IOPS | Input/output operations per second.
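The Capacity row's two rules (round up to an 8 GB multiple for display; double it for raw ScaleIO consumption, since two copies are stored) can be sketched as follows. The function name is ours, for illustration only:

```python
def volume_capacity_gb(requested_gb, increment_gb=8, copies=2):
    """Return (displayed_gb, raw_gb): the size shown in the Volumes tab,
    rounded up to the next 8 GB multiple with an 8 GB minimum, and the
    raw space consumed in the ScaleIO back end, which stores two copies
    of the volume data."""
    displayed = max(increment_gb, -(-requested_gb // increment_gb) * increment_gb)
    return displayed, displayed * copies

print(volume_capacity_gb(10))  # (16, 32): a 10 GB request displays as 16 GB
print(volume_capacity_gb(24))  # (24, 48): a 24 GB volume consumes 48 GB raw
```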
Configuration tab
The Cloud Compute > Configuration tab lists the default configuration properties for the OpenStack cloud compute components (containers) in the VxRack Neutrino system. The following sections describe the default configuration properties for these components.
Aodh
The cc_aodh component contains the OpenStack Alarming (Aodh) service, which triggers alarms based on metrics and events collected from the OpenStack cloud.
The default configuration properties for the cc_aodh component are described in the following table. The default values for a subset of these properties can be configured using the VxRack Neutrino Cloud Compute Controller (C3) CLI; refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide. For more information on Aodh configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/telemetry/alarming_service_config_opts.html.
Table 20 Default Aodh configuration properties

Aodh property | Default value | Description
service_ip | vxrack_neutrino_virtual_IP (for example, 10.299.42.196) | VxRack Neutrino virtual IP address.
aodh_debug | False | Enables or disables debugging output in logs.
aodh_verbose | True | Enables or disables verbose log output (True sets INFO log level output).
service_user_pass | ldilZ4NW20lnlORoKfsK4ASed6XfRoUdRJHeR/HA92r63FZr8G3ALQJJKWNj3cSm | A randomly generated password used by the internal Aodh service user to authenticate to Keystone.
requires | ceilometer, mysql, rabbitmq | The cc_aodh component requires the cc_ceilometer, mysql-galera, and cc_rabbitmq components to run.
Ceilometer
The cc_ceilometer component contains the OpenStack Telemetry (Ceilometer) service, which collects and aggregates usage and performance data for the OpenStack cloud.
The default configuration properties for the cc_ceilometer component are described in the following table. The default values for a subset of these properties can be configured using the VxRack Neutrino Cloud Compute Controller (C3) CLI; refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide. For more information on Ceilometer configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/telemetry.html.
Table 21 Default Ceilometer configuration properties

Ceilometer property | Default value | Description
cpu_interval | 600 | Polling interval in seconds to collect use and performance data on CPUs.
min_instances | 2 | Used in the OpenStack Heat template to indicate the minimum number of instances that should be created. It is a property of the Heat template, not the ceilometer.conf file.
ceilometer_debug | False | Enables or disables debugging output in logs.
metering_secret | ecipass | Secret key for signing telemetry messages.
ceilometer_verbose | True | Enables or disables verbose log output (True sets INFO log level output).
service_ip | vxrack_neutrino_virtual_IP (for example, 10.299.42.196) | VxRack Neutrino virtual IP address.
disk_interval | 600 | Polling interval in seconds to collect use and performance data on disks.
network_interval | 600 | Polling interval in seconds to collect use and performance data on networks.
service_user_pass | ITQ9lT4gFL2IlHUzPiwFbNSX09r8m5PIigri/2sOQ6gG17+ZaBIWnUqP1Q0806/D | A randomly generated password used by the internal Ceilometer service user to authenticate to Keystone.
meter_interval | 600 | Metering interval in seconds to query a meter source.
requires | nova-controller | The cc_ceilometer component requires the cc_nova_controller component to run.
Cinder
The cc_cinder_controller component contains the OpenStack Block Storage (Cinder) service and manages the persistent block storage used by OpenStack compute instances. Cinder manages the creation, attachment, and detachment of volumes to instances. Cinder volumes created in OpenStack use the ScaleIO block storage provided by VxRack Neutrino.
The default configuration properties for the cc_cinder_controller component are described in the following table. The default values for a subset of these properties can be configured using the VxRack Neutrino C3 CLI; refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide. For more information on Cinder configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/block-storage/block-storage-sample-configuration-files.html#cinder-conf.
Table 22 Default Cinder configuration properties

Cinder property | Default value | Description
max_age | 0 | Number of seconds between subsequent usage refreshes.
quota_snapshots | 10 | Number of volume snapshots allowed per project.
quota_backup_gigabytes | 1000 | Total amount of storage, in gigabytes, allowed for backups per project.
use_default_quota_class | True | Enables or disables use of default quota class with default quota.
quota_consistencygroups | 10 | Number of consistency groups allowed per project.
quota_gigabytes | 1000 | Total amount of storage, in gigabytes, allowed for volumes and snapshots per project.
ecs_enabled | False | Set this to True only for VxRack Neutrino configurations where an EMC Elastic Cloud Storage (ECS) appliance is connected for object storage.
service_ip | vxrack_neutrino_virtual_IP (for example, 10.299.42.196) | The VxRack Neutrino virtual IP address.
reservation_expire | 86400 | Number of seconds until a reservation expires.
quota_backups | 10 | Number of volume backups allowed per project.
cinder_verbose | True | Enables or disables verbose log output (True sets INFO log level output).
volume_type | SCALEIO | Default volume type to use.
cinder_debug | False | Enables or disables debugging output in logs (True sets DEBUG log level output).
quota_volumes | 10 | Number of volumes allowed per project.
service_user_pass | 4dz44YpH1IsS68ATxFbF6oZUfr1WXVLetFnhw/7WCuNidxBZWqDczAFwrpJb1kZX | A randomly generated password used by the internal Cinder service user to authenticate to Keystone.
requires | nova-controller | The cc_cinder_controller component requires the cc_nova_controller component to run.
Glance
The cc_glance_controller component contains the OpenStack Image (Glance) service; it stores virtual machine images and maintains a catalog of available images. These images are used to launch or create new instances in OpenStack.
The default configuration properties for the cc_glance_controller component are described in the following table. The default values for a subset of these properties can be configured using the VxRack Neutrino Cloud Compute Controller (C3) CLI; refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide. For more information on Glance configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/image-service/sample-configuration-files.html#glance-api-conf.
Table 23 Default Glance configuration properties

Glance property | Default value | Description
glance_debug | False | Enables or disables debugging output in logs.
service_ip | vxrack_neutrino_virtual_IP (for example, 10.299.42.196) | VxRack Neutrino virtual IP address.
glance_verbose | False | Enables or disables verbose log output.
service_user_pass | EkoFHLVXfvMs9yB016pjmP76ildHsOsAJLk/jdFmdWUcPtGRYqYg/8PaJcqeZnC+ | A randomly generated password used by the internal Glance service user to authenticate to Keystone.
requires | mysql, rabbitmq | The cc_glance_controller component requires the mysql_galera and cc_rabbitmq components to run.
glance_redundancy_enabled | True | Enables or disables the ability of Glance servers to store their images on shared storage. This property should always be set to True.
Heat
The cc_heat_controller component contains the OpenStack Orchestration (Heat) service and orchestrates applications in the VxRack Neutrino OpenStack private cloud using templates, through a native OpenStack REST API. It manages the lifecycle of infrastructure and applications within the OpenStack cloud.
The default configuration properties for the cc_heat_controller component are described in the following table. The default values for a subset of these properties can be configured using the VxRack Neutrino Cloud Compute Controller (C3) CLI; refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide. For more information on Heat configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/orchestration.html.
Table 24 Default Heat configuration properties

Heat property | Default value | Description
heat_debug | False | Enables or disables debugging output in logs.
heat_verbose | True | Enables or disables verbose log output (True sets INFO log level output).
service_user_pass | RkEyYfTF4+CruJdP9R2/LN3vMbhl3ixaiE0TKoL/oPfgvspwM7LbBF7vJmymoiuQ | A randomly generated password used by the internal Heat service user to authenticate to Keystone.
requires | nova-controller, cinder-controller, neutron-controller | The cc_heat_controller component requires the cc_nova_controller, cc_cinder_controller, and cc_neutron_controller components to run.
service_ip | vxrack_neutrino_virtual_IP (for example, 10.299.42.196) | VxRack Neutrino virtual IP address.
Horizon
The cc_horizon component contains the OpenStack Dashboard UI for all OpenStack components.
The default configuration properties for the cc_horizon component are described in the following table. The default values for a subset of these properties can be configured using the VxRack Neutrino Cloud Compute Controller (C3) CLI; refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide. For more information on Horizon configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/dashboard.html.
Table 25 Default Horizon configuration properties
Horizon property Default value Description
simple_ip_management False Enables or disables the ability to simplify IP assignment fordeployments with multiple floating IP networks. This propertyshould not be enabled.
service_ip vxrack_neutrino_virtual_IP (for example, 10.299.42.196) VxRack Neutrino virtual IP address.
Cloud Compute Service
96 VxRack System 1000 with Neutrino 1.1 Administrator Guide
debug_mode False Enables or disables debugging output in logs.
service_user_pass 26OmW6rRiCI7is/yMne+0iD7Ufw98Pc0ObYtEouY3sfoetDYCwLEIm9H8GJPoBJA A randomly generated password used by the internal Horizon service user to authenticate to Keystone.
requires nova-controller The cc_horizon component requires the cc_nova_controller component to run.
password_retrieve False Enables or disables a user's ability to retrieve a forgotten password from Horizon.
Memcached

The cc_memcached component contains the open source memcached caching system, which the cc_horizon component uses to optimize OpenStack Dashboard UI performance by alleviating database load.
The default configuration properties for the cc_memcached component are described in the following table. For more information on memcached configuration properties, refer to https://github.com/memcached/memcached/wiki#configuration.
Table 26 Default memcached configuration properties
Memcached property Default value Description
port 11211 The port on the platform node where the cc_memcached component runs.
service_ip vxrack_neutrino_virtual_IP (for example, 10.299.42.196) VxRack Neutrino virtual IP address.
Neutron

The cc_neutron_controller component contains the OpenStack Networking (Neutron) service. It creates the networking dependencies of an OpenStack virtual machine (instance) and creates/manages OpenStack virtual networks.
The default configuration properties for the cc_neutron_controller component are described in the following table. The default values for a subset of these properties can be configured using the VxRack Neutrino Cloud Compute Controller (C3) CLI; refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide. For more information on OpenStack Neutron configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/networking.html.
Table 27 Default Neutron configuration properties
Neutron property Default value Description
neutron_verbose True Enables or disables verbose log output (True sets INFO log level output).
Cloud Compute Service
Memcached 97
Table 27 Default Neutron configuration properties (continued)
Neutron property Default value Description
bootstrap_floating_ip_block for example, 10.252.21.0/26 The floating IP block to be used for this VxRack Neutrino deployment.
service_ip vxrack_neutrino_virtual_IP (for example, 10.299.42.196) VxRack Neutrino virtual IP address.
neutron_l3_ha True Enables or disables VxRack Neutrino high availability. This property should never be disabled.
external_dns_servers for example, 137.0.0.1 A comma-separated list of DNS servers external to the OpenStack private cloud. These servers should be reachable from the VxRack Neutrino platform nodes.
snat_enabled True Enables or disables Source Network Address Translation (SNAT) on routers in the OpenStack private cloud. SNAT is enabled on routers by default.
deployment_cidr A Classless Inter-Domain Routing (CIDR) block that contains all the VxRack Neutrino platform nodes in the current deployment.
service_user_pass 3pKvmP5+KlRgMbsNWISJ3Au7joC9tvGcO91uEdWiAGA6CoQncr7pixrSK/RluWOd A randomly generated password used by the internal Neutron service user to authenticate to Keystone.
requires mysql, rabbitmq The cc_neutron_controller component requires the mysql-galera and cc_rabbitmq components to run.
neutron_debug False Enables or disables debugging output in logs (True sets DEBUG log level output).
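Both bootstrap_floating_ip_block and deployment_cidr use standard CIDR notation. A quick way to sanity-check that a node or floating IP address falls inside a given block is Python's standard ipaddress module. This is only a sketch; the helper name and the addresses below are illustrative, not deployment defaults:

```python
import ipaddress

def in_cidr(address: str, cidr: str) -> bool:
    """Return True when the address falls inside the CIDR block."""
    return ipaddress.ip_address(address) in ipaddress.ip_network(cidr)

# Illustrative values only; substitute the deployment's actual CIDR and IPs.
print(in_cidr("10.252.21.17", "10.252.21.0/26"))  # True: .17 is inside .0-.63
print(in_cidr("10.252.21.64", "10.252.21.0/26"))  # False: .64 is past the /26
```

The same check applies to verifying that all platform node addresses fall inside deployment_cidr.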
Neutron network

The cc_neutron_network component provides the routing gateway for the virtual tenant networks in OpenStack.
The default configuration properties for the cc_neutron_network component are described in the following table. For more information on OpenStack Neutron network configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/networking.html.
Table 28 Default Neutron network configuration properties
Neutron network property Default value Description
requires neutron-controller The cc_neutron_network component requires the cc_neutron_controller component to run.
service_ip vxrack_neutrino_virtual_IP (for example, 10.299.42.196) VxRack Neutrino virtual IP address.
Nova compute

The cc_nova_compute component contains the OpenStack Nova compute engine. This component runs on all cloud compute nodes in the VxRack Neutrino system.
The default configuration properties for the cc_nova_compute component are described in the following table. For more information on Nova configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/compute/config-options.html.
Table 29 Default Nova Compute configuration properties
Nova compute property Default value Description
use_custom_flavors true Enables or disables the use of flavors other than the default OpenStack flavors included with VxRack Neutrino.
volume_type SCALEIO Volume type used by OpenStack instances. By default, volumes created in OpenStack use the ScaleIO storage provided by VxRack Neutrino.
libvirt_type KVM Libvirt domain type. Libvirt is the virtualization library used to manage the hypervisor layer.
service_user_pass 2caAI/Ogo7XzzJJ4xxlET2hc0fV5UXilatZaiHLHZg6WpGkD+R/jcRSENg6cBqg0 A randomly generated password used by the internal Nova compute service user to authenticate to Keystone.
requires nova-controller, neutron-controller The cc_nova_compute component requires the cc_nova_controller and cc_neutron_controller components to run.
network_type NEUTRON Type of network used by OpenStack virtual machines (instances).
Nova

The cc_nova_controller component contains the OpenStack Compute (Nova) service. It is a cloud computing fabric controller designed to manage and automate pools of computer resources. In the OpenStack private cloud, the cc_nova_controller component uses the KVM hypervisor technology.
The default configuration properties for the cc_nova_controller component are described in the following table. The default values for a subset of these properties can be configured using the VxRack Neutrino Cloud Compute Controller (C3) CLI; refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide. For more information on Nova configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/compute/config-options.html.
Table 30 Default Nova configuration properties
Nova property Default value Description
bandwidth_poll_interval 600 Interval in seconds to poll network bandwidth usage information. Set to -1 to disable. Set to 0 to run at the default rate.
Cloud Compute Service
Nova compute 99
nova_debug False Enables or disables debugging output in logs.
block_device_allocate_retries_interval 3 Waiting time interval in seconds between block device allocation retries on failures.
libvirt_type KVM Hypervisor type.
block_device_allocate_retries 60 Number of times Nova retries while waiting for a volume it created in Cinder to become available when booting from volume.
quota_ram 51200 Megabytes of instance RAM allowed per project.
service_user_pass zwj50ik6BxSLo65fxaThIUKRH08kGxPrXqA9gA8sWZm1p53M9xXmG A randomly generated password used by the internal Nova service user to authenticate to Keystone.
network_type NEUTRON Type of network used by OpenStack virtual machines(instances).
port 8774 Platform node port where the cc_nova_controller component is running.
load-balanced true Enables or disables load balancing.
quota_security_groups 10 Number of security groups per project.
quota_security_group_rules 20 Number of security rules per security group.
quota_server_groups 10 Number of server groups per project.
quota_injected_files 5 Number of injected files allowed.
quota_metadata_items 128 Number of metadata items allowed per instance.
quota_floating_ips 10 Number of floating IPs allowed per project.
scheduling_filter RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, DifferentHostFilter, SameHostFilter, JsonFilter, SimpleCIDRAffinityFilter Scheduling filters that are used by the OpenStack Nova service to determine on which host (cloud compute node) an instance should launch. Additional filters can be included in a comma-separated list. For more information on OpenStack scheduling filters, see http://docs.openstack.org/mitaka/config-reference/compute/scheduler.html.
quota_injected_file_content_bytes 10240 Number of bytes allowed per injected file.
cpu_allocation_ratio 2.0 Virtual CPU to logical CPU allocation ratio.
service_ip vxrack_neutrino_virtual_IP (for example, 10.299.42.196) VxRack Neutrino virtual IP address.
quota_instances 10 Number of instances allowed per project.
enable_network_quota False Enables or disables quota checking for tenant networks.
nova_verbose True Enables or disables verbose log output (True sets INFO log level output).
quota_cores 20 Number of instance cores allowed per project.
quota_key_pairs 100 Number of key pairs per user.
quota_server_group_members 10 Number of servers per server group.
requires mysql, rabbitmq The cc_nova_controller component requires the mysql-galera and cc_rabbitmq components to run.
ram_allocation_ratio 1.0 Virtual RAM to physical RAM allocation ratio, which affects all RAM filters.
quota_injected_file_path_length 255 Length of injected file path.
quota_fixed_ips -1 Number of fixed IP addresses allowed per project (this property should be at least the number of instances allowed).
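The cpu_allocation_ratio and ram_allocation_ratio properties determine how far the Nova scheduler overcommits a node. The arithmetic can be sketched as follows; the helper name and the node values are illustrative, not part of the product:

```python
def schedulable_capacity(logical_cores: int, ram_mb: int,
                         cpu_ratio: float = 2.0, ram_ratio: float = 1.0):
    """vCPUs and RAM (MB) the scheduler treats as available on one node,
    using the default cpu_allocation_ratio (2.0) and ram_allocation_ratio (1.0)."""
    return int(logical_cores * cpu_ratio), int(ram_mb * ram_ratio)

# A node with 24 logical cores and 256 GB RAM, using the default ratios:
vcpus, ram_mb = schedulable_capacity(24, 262144)
print(vcpus, ram_mb)  # 48 262144
```

With the default ratios, CPU is overcommitted 2:1 while RAM is not overcommitted at all.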
RabbitMQ

The cc_rabbitmq component contains the open source rabbitmq message broker software, which provides the Advanced Message Queuing Protocol (AMQP) layer for all the OpenStack components.
The default configuration properties for the cc_rabbitmq component are described in the following table. For more information on OpenStack rabbitmq configuration properties, refer to http://docs.openstack.org/mitaka/config-reference/orchestration/orchestration_rpc.html.
Table 31 Default RabbitMQ configuration properties
RabbitMQ property Default value Description
service_ip vxrack_neutrino_virtual_IP (for example, 10.299.42.196) VxRack Neutrino virtual IP address.
bootstrapped true Indicates if the rabbitmq cluster (the three platform nodes running the cc_rabbitmq component) has been bootstrapped.
cookie 7ad464e56d4d24beee83f24c744eaa17 Random cookie file that is generated when the cc_rabbitmq containers are started up. The 3-node cluster uses this cookie to authenticate communication between the nodes.
user_pass YtRFETsc9Nuc4MhsgR9T8CF87ETM5ygIZfLzef24jwKq3R4OyQvc8eqvbyZdKxlM A randomly generated password used by the internal rabbitmq user to authenticate to Keystone.
cluster_members for example: rabbit@platformnode1, rabbit@platformnode2, rabbit@platformnode3 Names of the three platform nodes running the cc_rabbitmq component, in the format rabbit@<platform_node_name>.
user_name openstack The user name used by the OpenStack services to connect to rabbitmq.
port 5672 RabbitMQ broker port on the node where the cc_rabbitmq component is running.
ScaleIO

The scaleio_sds component provides access to the block storage on the storage devices of the cloud compute node where this component is installed. The ScaleIO data server (SDS) manages the storage capacity of a single node. It provisions the storage for the volumes created in OpenStack.
The default configuration properties for the scaleio_sds component are described in the following table.
Table 32 Default ScaleIO configuration properties
ScaleIO property Default value Description
sdc_binary /opt/emc/scaleio/drv_cfg The path to where the ScaleIO SDC configuration binary (drv_cfg) is located.
service_pools [u'Compute_Performance1:Compute_Performance', u'Compute_Capacity1:Compute_Capacity'] List of default storage pools. This list is static; it does not change regardless of what types or how many storage pools are in the VxRack Neutrino system. This list includes two storage pools (Compute_Performance and Compute_Capacity) and the protection domains to which they belong (Compute_Performance1 and Compute_Capacity1, respectively). A storage pool is a set of storage devices in a protection domain. For more information, see Protection domains and storage pools on page 130.
server_certificate_path /tmp/keytool_crt.pem The path to a certificate used by ScaleIO.
service_name Compute_Performance1 Default protection domain. This value is static; it does not change regardless of what types or how many protection domains are in the VxRack Neutrino system. A protection domain is a set of nodes. For more information, see Protection domains and storage pools on page 130.
host_disk_path /dev/disk/by-id The path on hosts where ScaleIO volumes appear.
provisioning_type thickprovisioned Volumes can be thick provisioned (space is pre-allocated when the volume is created) or thin provisioned (space is only consumed when data is written to the volume). The default is thick provisioned.
verify_server_certificate False Configures the client to check the server certificate while connecting.
unmap_volume_before_deletion False Ensures that the volume is not mapped to any SDC before deletion, because in OpenStack, a volume can be deleted automatically when terminating instances. When this property is set to True, a volume will be unmapped from any SDC before deletion (an error in an unmap operation will be ignored if the volume is not mapped). When this property is set to False, no unmap operation will be performed before deletion.
host 10.299.42.196 VxRack Neutrino virtual IP address.
storage_pool_name Compute_Performance Default storage pool. This value is static; it does not change regardless of what types or how many storage pools are in the VxRack Neutrino system.
protection_domain_name Compute_Performance1 Default protection domain. This value is static; it does not change regardless of what types or how many protection domains are in the VxRack Neutrino system. A protection domain is a set of nodes. For more information, see Protection domains and storage pools on page 130.
user admin Cloud Administrator user.
storage_pools Compute_Performance1:Compute_Performance Default storage pool. This value is static; it does not change regardless of what types or how many storage pools are in the VxRack Neutrino system. This value lists the default storage pool (Compute_Performance) and the protection domain to which it belongs (Compute_Performance1).
force_delete True Enables or disables the ability to delete all ScaleIO data immediately.
round_volume_capacity True Provides the ability to create or extend a volume to a size that is not a multiple of 8 GB. (The granularity of ScaleIO volumes is in multiples of 8 GB.) When set to True, every request is rounded up and generates a warning message that is written to a log stating the requested and actual size of the volume. When set to False, every request for a volume size that is not a multiple of 8 will fail and generate a specific error message.
password klv4/VK6H6IQvXJzmclfdY/FITqunQQi8pAtzJnO/ZI= Cloud Administrator password.
port 445 Port on the cloud compute node where the scaleio_sds component is running.
container_disk_path /var/scaleio/dev/disk/by-id The path in the container where ScaleIO volumes appear.
cache_size 10 Size in MB of the lookup cache stored by the ScaleIO library.
cache_enabled True Enables or disables caching of system queries and lookups by the ScaleIO library.
secure_password True Enables or disables encryption of the password defined in the configuration file.
client /opt/emc/scaleio/drv_cfg The path to where the ScaleIO SDC configuration binary (drv_cfg) is located.
OpenStack Dashboard tab

The OpenStack Dashboard tab on the Cloud Compute page enables VxRack Neutrino Cloud and Account Administrators to launch directly into the OpenStack Dashboard UI with single sign-on functionality. Once in the OpenStack Dashboard UI, the Cloud and/or Account Administrator can perform any OpenStack operation, such as launching instances or creating volumes, as long as they have a role in a project.
Manage tab

The Cloud Administrator accesses the Manage tab on the Cloud Compute page only when circumstances require that the entire VxRack Neutrino system be shut down and then restarted. The system shutdown/restart process is required in scenarios where the system has to be moved, a power upgrade to the datacenter is needed, or a single rack requires aggregation switches for an expansion rack. The system shutdown/restart process is documented in Appendix B of the VxRack System with Neutrino 1.1 Hardware Guide. Do not use the options in the Manage tab without first reviewing the steps outlined in the shutdown/restart procedure.
The Manage tab has two drop-down options: Activate Service and Inactivate Service. During the system shutdown process, the Cloud Administrator selects Inactivate Service to stop all the instances (VMs) running on the cloud compute nodes and to inactivate the Cloud Compute Service running on the cloud compute nodes before shutting down all the nodes in the system. Similarly, as part of the system restart procedure, the Cloud Administrator powers on the nodes in the system, and then selects Activate Service to activate the Cloud Compute Service on all cloud compute nodes in the system.
Note
Do not select either the Inactivate Service or Activate Service option without following the system shutdown/restart instructions outlined in the VxRack System with Neutrino 1.1 Hardware Guide.
CHAPTER 10
Nodes
This chapter describes the information that can be viewed and the tasks that can be performed in the Infrastructure > Nodes section of the VxRack Neutrino UI.
l Node information................................................................................................ 106
l Node actions.......................................................................................................113
Node information

Node information can be viewed by the Cloud Administrator in the Infrastructure > Nodes page. Details of each node can be viewed by clicking the node name in the Nodes page.
Nodes view

The Nodes page, which provides general information on the nodes, bricks, and racks in the VxRack Neutrino system, is visible only to the Cloud Administrator. From this page, the Cloud Administrator can perform actions on the nodes, such as adding or removing the Platform/Cloud Compute Service to/from a particular node, or shutting down a node to perform maintenance activities.
This page contains a list of the rack, brick, and node infrastructure that underlies the VxRack Neutrino system. The icons in the table on the Nodes page denote the following infrastructure objects.
Table 33 Infrastructure icons
Icon Denotes:
Rack
Performance node (node with SSDs)
Capacity node (node with HDDs)
By default, the Nodes page lists all the nodes in the VxRack Neutrino system. However, the Cloud Administrator can use the filter on the Nodes page to view nodes by service (Cloud Compute, Platform, or Unallocated) or by storage type (Performance or Capacity).
The following sections describe the information provided in the table for each node and rack in the VxRack Neutrino system. At the brick level, the only information displayed on the Nodes page is the brick model, for example p816 (see Brick on page 15 for more information on brick specifications).
Health
Node health can be good, suspect, degraded, severely degraded, or unknown, as described in the following table. Note that node health and storage device health are reported separately. For example, a node can be in good health even if a storage device has degraded health. Storage device health can be viewed in the node details Disks tab on page 111 and in the Infrastructure > Storage > Backend > Capacity Health view.
Table 34 Node health icons
Icon Health Description
Good The node is online and available and functioning within normal parameters (but its disks might not be).
Suspect The node can be contacted and its health is suspect. A node in this state cannot be added to the Cloud Compute Service or have the Platform Service transferred to it. The Cloud Administrator should call EMC Global Services to troubleshoot/fix the node's hardware.
Degraded The node can be contacted but its health is degraded. A node in this state cannot be added to the Cloud Compute Service or have the Platform Service transferred to it. The Cloud Administrator should call EMC Global Services to troubleshoot/fix the node's hardware.
Severely degraded The node can be contacted but its health is severely degraded. A node in this state cannot be added to the Cloud Compute Service or have the Platform Service transferred to it. The Cloud Administrator should call EMC Global Services to troubleshoot/fix the node's hardware.
Unknown The node cannot be contacted and might be defective. A node in this state cannot be added to the Cloud Compute Service or have the Platform Service transferred to it. The Cloud Administrator should call EMC Global Services to troubleshoot/fix the node's hardware.
Health icons are shown next to each node and rack under the HW Health column in the table on the Nodes page. The health of each node is aggregated next to each rack listed in the table. For example, if there is one rack with three bricks (a total of 12 nodes), and 11 nodes have good health and one has severely degraded health, both health icons display next to the rack.
Status
Node status indicates the configuration state of the software and services on a node, and is independent of the health of a node. A node has an Operational status even when the node suffers a hardware failure, or if the Platform/Cloud Compute Service malfunctions.
There are three types of status: resting, transitional, and active.
l Resting: The node will remain in this status until told to do otherwise.
l Transitional: VxRack Neutrino is waiting for an event before moving the node to a resting status.
l Active: VxRack Neutrino is actively managing a transition from an active status towards a resting status.
Each node can have one of the following statuses in the Status column on the Nodes page, as described in the following table.
Table 35 Node status states
Status (as displayed in UI) Status type Description
Deleting Transitional A node might appear in this status temporarily until the node disappears completely. This status prevents use or discovery of the node during the deletion process.
Modifying Transitional Temporarily blocks other operations on the node while the system is in flux. For example, this status appears during disk removal operations. The node will return to the previous status when the operation is complete.
Operational Resting The node is in an operational state. If it is a platform or cloud compute node, the Platform Service or Cloud Compute Service is configured to run on the node. If it is an unallocated node, it is online and functioning normally.
Removing Service Transitional The node is being removed from the Cloud Compute Service. This might take some time if there is a lot of data on the storage devices that needs to be moved to devices on other cloud compute nodes. To check if the node is being removed from the Cloud Compute Service properly, click Infrastructure > Storage > Backend > Capacity Health. You should see that the node is in a Removing state and that the data is being rebalanced off the node.
Remove Service Failed Resting A failure occurred during transfer of the Platform Service between nodes. If the transfer succeeds in installing the Platform Service on the new node, but does not uninstall the Platform Service from the old node, the old node is placed into the Remove Service Failed status. The new node has an Operational status. The old node can be recovered as an operational unallocated node by reimaging the node.
Resuming Active VxRack Neutrino is actively managing a transition to the Operational status (or, in a failure, back to Suspended).
Resume Failed Resting Resuming the Platform or Cloud Compute Service on a node failed and might have left the node in an indeterminate state; therefore, the node is in neither Suspended nor Operational status. Suspend, resume, and removing/transferring the service from the node might be attempted from this status. A failed Resume or a failed Transfer can leave a node in the Resume Failed status. Recovery to Operational can be attempted by doing a Resume on that node. If that operation fails or is inappropriate, try a Suspend on the node to get it into the Suspended status. The Resume action might have failed because the node is unhealthy and needs replacement or repair.
Service Suspended Resting Services were suspended on the node, but the remaining work to suspend the node failed. Suspend the node again to get to Suspended status, or resume the node to get to Operational status.
Suspended Resting The Platform/Cloud Compute Service is configured down (quiesced) on the node.
Suspending Active VxRack Neutrino is actively managing a transition to the Suspended status or, in the case of failure, back to Operational.
Suspend Failed Resting Suspending the Platform or Cloud Compute Service on a node failed and might have left the node in an indeterminate state; therefore, the node is in neither Suspended nor Operational status. Suspend, resume, and removing/transferring the service from the node might be attempted from this status. A second Suspend can be tried on the node to get the node into Suspended status, or a Resume can be tried to put the node back to Operational. The Suspend action might have failed because the node is unhealthy and needs replacement or repair.
Transfer Away Active The Platform Service on this node is being transferred away from this node to another node.
Transfer To Transitional The Platform Service is being transferred to this node from another node.
Unknown Resting The status of the node could not be determined.
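For operators scripting against the node status strings reported by the UI, the status-to-type mapping above can be captured directly. This is a sketch; the dictionary transcribes Table 35, and the helper name is illustrative:

```python
# Status-type mapping transcribed from Table 35 (Node status states).
STATUS_TYPE = {
    "Deleting": "transitional",
    "Modifying": "transitional",
    "Operational": "resting",
    "Removing Service": "transitional",
    "Remove Service Failed": "resting",
    "Resuming": "active",
    "Resume Failed": "resting",
    "Service Suspended": "resting",
    "Suspended": "resting",
    "Suspending": "active",
    "Suspend Failed": "resting",
    "Transfer Away": "active",
    "Transfer To": "transitional",
    "Unknown": "resting",
}

def is_settled(status: str) -> bool:
    """True for resting statuses: the node stays there until acted upon."""
    return STATUS_TYPE.get(status) == "resting"

print(is_settled("Operational"))  # True
print(is_settled("Suspending"))  # False
```

A script polling node status could, for example, wait until is_settled returns True before taking the next action.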
Service
The Service column denotes whether a node has the Platform Service, Cloud Compute Service, or no service running on it (that is, unallocated). The icons in the table on the Nodes page denote the following services.
Table 36 Service icons
Icon Denotes:
Platform Service node
Cloud Compute Service node
Unallocated node
The VxRack Neutrino system requires a minimum of three nodes running the Platform Service (platform nodes) and three nodes running the Cloud Compute Service (cloud compute nodes).
Capacity
The total raw storage capacity is displayed for each node in the Capacity column. The raw storage capacity for each node is aggregated into the total raw storage capacity for the rack. The total raw storage capacity value of the rack(s) in the VxRack Neutrino system is displayed at the top of the Nodes page.
There are several important things to consider about VxRack Neutrino raw storage capacity:
l The total raw storage capacity for a node is not the same as the total storage capacity available for cloud compute use. A portion of the raw capacity on a node is available for cloud compute use. For example, a performance cloud compute node that has 1.5 TB of raw capacity might have 1.3 TB of that capacity available for cloud compute use (because the remaining 200 GB is allocated to operating system requirements). Information on total storage available for cloud compute use and consumption of cloud compute storage is presented in the Infrastructure > Storage > Backend > Capacity Usage view of the UI.
l Similarly, the total raw storage capacity of the VxRack Neutrino system is not the same as the total storage capacity available for cloud compute use. The total raw storage capacity number that is displayed at the top of the Nodes page represents all storage in the system; this is all the storage on platform nodes, cloud compute nodes, and unallocated nodes (including the storage that is used by the operating system on the nodes, and the storage used for caching on capacity cloud compute nodes). The total storage available for cloud compute use can be viewed in the Free for Volume Allocation (Net) field in the Infrastructure > Storage > Frontend > Volumes view.
l Storage capacity in the VxRack Neutrino UI is reported using the binary system (base 2) of measurement, which calculates 1 GB as 1,073,741,824 bytes. Storage device manufacturers measure capacity using the decimal system (base 10), so 1 gigabyte (GB) is calculated as 1 billion bytes. For example, a performance node with four 400 GB SSDs has 1,600 GB (or 1.6 TB) of capacity using the decimal system of measurement. But since VxRack Neutrino uses the binary system of measurement, the 1.6 TB node appears as 1.5 TB on the Nodes page.
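The binary-versus-decimal difference in the last point is simple arithmetic; a short sketch (the function name is illustrative):

```python
DECIMAL_GB = 10**9  # manufacturers: 1 GB = 1,000,000,000 bytes
BINARY_TB = 2**40   # VxRack Neutrino UI: 1 TB = 1,099,511,627,776 bytes

def decimal_gb_as_binary_tb(gb: float) -> float:
    """Convert manufacturer-rated GB to the TB value the UI reports."""
    return gb * DECIMAL_GB / BINARY_TB

# Four 400 GB SSDs: 1.6 TB decimal shows as roughly 1.5 TB in the UI.
print(round(decimal_gb_as_binary_tb(4 * 400), 1))  # 1.5
```

The same conversion explains why any manufacturer-rated capacity appears smaller on the Nodes page.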
Memory
The physical memory in GB is displayed for each node in the Memory column. Depending on the brick model, a node can have 128, 256, or 512 GB of memory. The node memory values are aggregated into values for each rack in the system. The total memory for the rack(s) in the VxRack Neutrino system is displayed at the top of the Nodes page. The memory values that display in the UI use the binary system (base 2) of measurement.
Cores
The number of physical cores (CPUs) is listed for each node in the Cores column. Depending on the brick model, a node can have 12, 16, or 20 physical cores. Next to the number of physical cores is a number in parentheses. The number in parentheses is the number of logical cores for the node due to hyper-threading. By default, the number of logical cores for a node is double the number of its physical cores. For example, a value of 12 (24) in the Cores column means that the node has 12 physical cores and 24 logical cores. A node with 12 physical cores will actually present 24 logical cores to the operating system on that node. OpenStack sees these 24 logical cores as 24 virtual CPUs (vCPUs).
The number of physical and logical cores per node is aggregated into values for each rack in the system. The total number of physical and logical cores in the VxRack Neutrino system is displayed at the top of the Nodes page.
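The Cores column arithmetic can be sketched as follows; the helper is illustrative, and it assumes the default behavior described above (hyper-threading doubles the physical core count):

```python
def cores_display(physical: int, hyperthreading: bool = True) -> str:
    """Format a Cores column entry: physical cores with logical in parentheses."""
    logical = physical * 2 if hyperthreading else physical
    return f"{physical} ({logical})"

# 12 physical cores present 24 logical cores, which OpenStack sees as 24 vCPUs.
print(cores_display(12))  # 12 (24)
```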
Node details view

The node details view provides the Cloud Administrator with detailed information on a specific node in the VxRack Neutrino system. It is accessed by clicking a node name in the Nodes page.
The specific node you are viewing is indicated at the top of the page in this format: Nodes / <rack name> - <brick serial number> - <node name>. The node hardware health and status are also displayed at the top of the page along with any service the node is running (Cloud Compute, Platform, or Unallocated).
The upper section of this page contains two tiles: Hardware and Network.
The lower section of this page contains three tabs: Disks, Components, and Details.
These areas of the node details view are described in the following sections.
Hardware tile
The Hardware tile provides the following information:
l ID: The universally unique identifier (UUID) associated with the physical hardware of the node (for example: 95ecdf80-64b9-11e4-906e-00163566263e).
l Capacity: The total raw storage capacity in TB of the node.
l Memory: The physical memory in GB of the node.
l Cores: The number of physical cores (CPUs) the node contains. The number of logical cores the node contains is displayed in parentheses next to the number of physical cores.
Network tile
The Network tile provides the following information:
Nodes
110 VxRack System 1000 with Neutrino 1.1 Administrator Guide
l FQDN: The fully qualified domain name of the node. (for example: lehi-rack1.lab.acme.com)
l Internal IPv4: The node's IPv4 address within the internal VxRack Neutrino management LAN.
l External IPv4: The node's IPv4 address for the public network.
l IPMI IPv4: The node's IPv4 address for Remote Management Module (RMM) access.
Disks tab
This lower section of the node details view provides information on the disks that are connected to the node, and the storage devices that are associated with the disks. The Disks tab contains information on the name, health, capacity, and applicable actions that can be performed on each disk/storage device of a node. When the Cloud Administrator clicks a disk or storage device in the list, the information on the selected disk/device populates the detail tile to the right of the list.
Each performance node contains four solid state disks (four 400 GB disks or four 800 GB disks) and each capacity node contains 24 disks (two SSD disks and 22 HDD disks). On each performance or capacity node, there is also a 30 GB SATA DOM disk where the /boot partition and operating system reside. This boot disk is a disk on motherboard (DOM) flash drive with a SATA interface that is plugged into the node's motherboard.
Disks are designated by the disk icon. Each disk in the VxRack Neutrino system has at least one partition, which might correspond to the whole disk. A disk partition is called a storage device. Storage devices are designated by the storage device icon. The disks and storage devices that are running the operating system display with the operating system icon. The storage devices that are used for ScaleIO cloud compute storage display with the ScaleIO icon. The storage devices on capacity nodes that are used for caching reads display with the read flash cache icon. Read flash caching maximizes the performance of the slower HDD devices within the system. For more information on node disk architecture and caching, see Disk architecture on page 127.
The disk and storage device information that displays per node in the Disks tab is summarized in the following table.
Table 37 Disk and storage device information displayed on the Disks tab

Performance node (total disks: 5; total storage devices: 5):
l SSD disk 1 of 4: 2 storage devices (OS and ScaleIO); disk capacity* 372.50 GB or 745.1 GB
l SSD disks 2 through 4: 1 storage device each (ScaleIO); disk capacity* 372.50 GB or 745.1 GB
l DOM disk: no storage devices (N/A); disk capacity* 29.8 GB

Capacity node (total disks: 25; total storage devices: 25):
l SSD disk 1 of 2: 2 storage devices (OS and RFcache)
l SSD disk 2 of 2: 1 storage device (RFcache)
l HDD disks (22): 1 storage device each (ScaleIO); disk capacity* 1.6 TB
l DOM disk: no storage devices (N/A); disk capacity* 29.8 GB

Possible disk/storage device health** values: Good, Suspect, Degraded, Severely degraded, Unknown.
*Capacity as shown in the VxRack Neutrino UI uses the binary system (base 2) of measurement. See Storage capacity units on page 19 for more information.
**Disk/storage device health icon descriptions are similar to those described for node health in Table 34 on page 106.
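The binary capacities in the table follow from converting each disk's decimal (base 10) size to binary (base 2) units. A quick sketch of the conversion:

```python
def to_binary_gb(decimal_gb):
    """Convert a decimal (base 10) GB figure to binary (base 2) GB, as shown in the UI."""
    return decimal_gb * 10**9 / 2**30

print(round(to_binary_gb(400), 1))   # 372.5 -- a 400 GB SSD
print(round(to_binary_gb(800), 1))   # 745.1 -- an 800 GB SSD
print(round(to_binary_gb(1800), 1))  # 1676.4 -- a 1.8 TB HDD (about 1.6 TB binary)
```

Small remaining differences between the computed values and the UI figures come from partition overhead.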
Actions
In the event that disks need to be replaced, actions on disks and their associated storage devices will be required. Procedures on how to replace disks (customer replaceable units, or CRUs) are described in the VxRack System with Neutrino 1.1 Hardware Guide. Actions that the Cloud Administrator can perform on disks and storage devices via the Disks tab vary depending on the type of node and disk.
l On an unallocated node: The Cloud Administrator can delete one or more disks by clicking the Delete Disk button next to the disk. (On unallocated nodes, actions can only be taken on disks, not storage devices.)
l On a cloud compute node: The Cloud Administrator can remove a ScaleIO or RFcache storage device from the Cloud Compute Service by clicking the Remove from Service button next to the storage device. This removes the selected storage device(s) from a shared pool of storage (for more information, refer to Storage pool on page 130). If a storage device does not have a Remove from Service button next to it in the Actions column, the storage device cannot be removed from the Cloud Compute Service because it is being used for operating system requirements. After all the storage devices are removed from the Cloud Compute Service on a particular disk, that disk can be deleted. The Cloud Administrator can delete a disk by clicking the Delete Disk button next to the disk. If a disk does not have a Delete Disk button next to it in the Actions column, the disk cannot be deleted because it is being used for operating system requirements.
Note
After a disk has been replaced, the Cloud Administrator must manually add its storage device(s) to the Cloud Compute Service by clicking the Add to Service button next to the device(s).
l On a platform node: No actions can be taken on its disks or storage devices.
Components tab
The Components tab in the lower section of the node details view lists the components, that is, the containers or processes that are running on the node.
Components are designated by the component icon. Components are listed with their health status, version number, and the service each is part of (Cloud Compute Service or Platform Service). Clicking Cloud Compute or Platform in the Service column next to a particular component opens the Infrastructure > Components page, where the complete list of Cloud Compute and Platform Service components can be viewed. Refer to Introduction to Platform and Cloud Compute Service components on page 144 for more information.
Details tab
The Details tab in the lower section of the node details view provides a list of node specifications, including rack name, brick and node model and serial numbers, and the VxRack Neutrino base OS software version running on the node.
Node actions
The Cloud Administrator can perform various actions on nodes from the Nodes page. The following table provides a summary.
Table 38 Node actions in the Nodes page

Each action is listed with the node types it applies to:
l Add Service: unallocated nodes. Refer to Add a node to the Cloud Compute Service on page 113.
l Remove Service: cloud compute nodes. Refer to Remove a node from the Cloud Compute Service on page 114.
l Suspend: cloud compute, platform, and unallocated nodes. Refer to Suspend a node on page 115.
l Resume: cloud compute, platform, and unallocated nodes. Refer to Resume a node on page 116.
l Reboot: cloud compute, platform, and unallocated nodes. Refer to Reboot a node on page 116.
l Shutdown: cloud compute, platform, and unallocated nodes. Refer to Shutdown a node on page 117.
l Power On: cloud compute, platform, and unallocated nodes. Refer to Power on a node on page 118.
l Power Off: cloud compute, platform, and unallocated nodes. Refer to Power off a node on page 118.
l Reset: cloud compute, platform, and unallocated nodes. Refer to Reset a node on page 119.
l Transfer: platform nodes. Refer to Transfer a platform node on page 120.
l Delete: unallocated nodes. Refer to Delete a node on page 121.
Add a node to the Cloud Compute Service
After a new brick is added to a rack, when a second rack is added to the VxRack Neutrino system, or after a node has been replaced, the Cloud Administrator can add additional nodes to the Cloud Compute Service. The additional nodes increase the cloud compute and storage capacity available in the OpenStack private cloud.
Before you begin
You must have Cloud Administrator privileges.
There must be unallocated node(s) in the VxRack Neutrino system to add to the Cloud Compute Service.
Procedure
1. On the Nodes page, click the downward-facing arrow on the Manage button and select Add to Service.
The Add Nodes to Service page appears with a list of unallocated nodes that can be added to the Cloud Compute Service.
2. In the Actions column next to the node(s) or rack(s) that you want to add to the Cloud Compute Service, click the Select button.
When you click the Select button next to a rack, all the unallocated nodes in all the bricks within that rack are automatically selected.
3. Click Deploy Service.
Results
The selected node is added to the Cloud Compute Service. The Nodes page shows that the selected node is running the Cloud Compute Service and is no longer unallocated.
Remove a node from the Cloud Compute Service
Normally, the Cloud Administrator does this to reduce the number of nodes in the Cloud Compute Service if cloud computing demands are lower than expected in the OpenStack private cloud. However, the Cloud Administrator might also do this to allow a node to be replaced for maintenance reasons. EMC Global Services personnel are required onsite for node maintenance replacement, but this action can be performed by the Cloud Administrator working together with EMC Global Services personnel.
Before you begin
You must have Cloud Administrator privileges.
There must be a minimum of three nodes running the Cloud Compute Service in the VxRack Neutrino system.
There must be enough unused storage capacity in the VxRack Neutrino ScaleIO storage system to rebalance the existing volume data when a node is removed from the Cloud Compute Service.
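This capacity precondition amounts to checking that the data held on the outgoing node fits in the free space that remains. An illustrative sketch, not the product's actual check (real ScaleIO rebalancing also accounts for protection copies and spare capacity):

```python
def can_remove_node(total_capacity_gb, used_gb, node_capacity_gb):
    # After removal, the pool shrinks by the node's capacity; the rebuild
    # must be able to fit all currently used data into what remains.
    remaining_capacity = total_capacity_gb - node_capacity_gb
    return used_gb <= remaining_capacity

print(can_remove_node(total_capacity_gb=12000, used_gb=7000, node_capacity_gb=3000))  # True
print(can_remove_node(total_capacity_gb=12000, used_gb=9500, node_capacity_gb=3000))  # False
```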
If there are instances running on the node you want to remove from the Cloud Compute Service, you must migrate the instances off the node to another node. Otherwise, when you remove the node from the Cloud Compute Service, all instances on the node are terminated and lost. To migrate instances off the node before you remove it, do the following:
1. On the Cloud Compute page in the VxRack Neutrino UI, click the OpenStack Dashboard tab to launch the OpenStack Dashboard UI.
2. In the OpenStack Dashboard UI, click System > Hypervisors.
3. In the Hypervisor table, locate the node you are going to remove and look in the Instances column to see the number of instances running on the node. If the number is greater than 0, you must migrate the instances.
4. To migrate an instance, click System > Instances.
5. In the Instances table, locate the host/node you are going to remove from the Cloud Compute Service, and in the Actions column next to that host/node, select Live Migrate Instance from the drop-down list.
6. In the Live Migrate window, select a New Host and click Live Migrate Instance.
7. Repeat steps 5 and 6 for each instance that is running on the host/node you want to remove.
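The migration steps above amount to a simple drain loop. The following Python is illustrative only (hypothetical helper names); the real work is done through the OpenStack Dashboard as described:

```python
def drain_node(node, instances_by_host, pick_new_host):
    """Live-migrate every instance off `node` (sketch with hypothetical helpers)."""
    moved = []
    for instance in list(instances_by_host.get(node, [])):
        target = pick_new_host(node)  # choose a New Host, as in step 6
        instances_by_host[node].remove(instance)
        instances_by_host.setdefault(target, []).append(instance)
        moved.append((instance, target))
    return moved

hosts = {"node1": ["vm-a", "vm-b"], "node2": []}
print(drain_node("node1", hosts, lambda node: "node2"))  # [('vm-a', 'node2'), ('vm-b', 'node2')]
print(hosts["node1"])  # []
```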
Procedure
1. On the Nodes page, click the downward-facing arrow on the Manage button and select Remove from Service.
The Remove Nodes from Service page appears with a list of nodes running the Cloud Compute Service.
2. In the Actions column next to the nodes or racks from which you want to remove the Cloud Compute Service, click the Select button.
When you click the Select button next to a rack, all the cloud compute nodes in all the bricks within that rack are automatically selected.
3. Leave the Force option set to the default False setting.
Note
Set the Force option to True only if you want to remove node(s) from the Cloud Compute Service regardless of their current state (for example, even if they are unreachable). Using the Force option is not recommended.
4. Click Remove Service.
Results
The selected node is removed from the Cloud Compute Service. The Nodes page shows that the selected node is now unallocated, and no longer running the Cloud Compute Service.
Suspend a node
Suspending a node quiesces all activity in the containers that are running on that node, and all VMs (instances) running on that node are stopped. Nodes are suspended prior to a maintenance operation, such as replacing memory on a node. You must suspend a node so it is in a suitable state to shut down.
Before you begin
You must have Cloud Administrator privileges and work with EMC Global Services personnel.
You must suspend a platform/cloud compute/unallocated node before shutting it down, rebooting it, resetting it, or powering it off.
Procedure
1. On the Nodes page, in the Actions column next to the platform/cloud compute node that you want to suspend, select Suspend from the drop-down list.
The Suspend Node page appears.
2. In the Reason field, type the reason you are suspending this node.
3. Leave the Force option set to the default False setting.
Note
Set the Force option to True only if you want the Suspend Node operation to complete regardless of the current state of the node and without allowing it to quiesce its components. Using the Force option is not recommended.
4. Click Suspend.
Results
A message indicates that the node is in the Suspended status.
To bring a node back from Suspended status, select Resume in the Actions column to return it to an operational state (Operational status); see Resume a node on page 116.
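The node statuses involved in these actions form a small state machine. A sketch based on the transitions described in this chapter (suspend, resume, reboot, shut down, power on):

```python
# Node status transitions as described in this chapter (illustrative sketch).
# Reboot and Shutdown both require the node to be Suspended first.
TRANSITIONS = {
    ("Operational", "Suspend"): "Suspended",
    ("Suspended", "Resume"): "Operational",
    ("Suspended", "Reboot"): "Operational",
    ("Suspended", "Shutdown"): "Offline",
    ("Offline", "Power On"): "Operational",
}

def apply_action(status, action):
    # Unknown combinations leave the status unchanged in this sketch.
    return TRANSITIONS.get((status, action), status)

print(apply_action("Operational", "Suspend"))  # Suspended
print(apply_action("Suspended", "Resume"))     # Operational
```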
Resume a node
This action moves a node out of Suspended status into Operational status.
Before you begin
You must have Cloud Administrator privileges.
The node must have a Suspended status.
The same Cloud Administrator who suspended the node must resume the node.
Procedure
1. On the Nodes page:
l To resume a single node: In the Actions column next to the cloud compute/platform node you want to resume, select Resume from the drop-down list. The Resume Node page appears.
l To resume multiple nodes: In the Bulk Actions drop-down list, select Resume. The Bulk Resume Nodes page appears with a list of nodes that are available to be resumed. Click the Select button next to each node you want to resume.
2. In the Reason field, type the reason that you are resuming this node.
3. Leave the Force option set to the default False setting.
Note
Set the Force option to True only if you want the Resume Node operation to complete regardless of the current state of the node and without allowing it to resume its components in an orderly fashion. Using the Force option is not recommended.
4. Click Resume.
Results
A message indicates that the node has been successfully transitioned from a Suspended status to an Operational status.
Reboot a node
The Cloud Administrator reboots nodes only during troubleshooting.
Before you begin
You must have Cloud Administrator privileges.
You must suspend the platform/cloud compute/unallocated node before rebooting it. See Suspend a node on page 115 for more information.
Procedure
1. On the Nodes page:
l To reboot a single node: In the Actions column next to the cloud compute/platform/unallocated node you want to reboot, select Reboot from the drop-down list. The Reboot Node page appears.
l To reboot multiple nodes: In the Bulk Actions drop-down list, select Reboot. The Bulk Reboot Nodes page appears with a list of nodes that are available to be rebooted. Click the Select button next to each node you want to reboot.
2. If you want the node ID LED on while the node is being rebooted, set the LED field to ON.
3. Leave the Force option set to the default False.
Note
Set the Force option to True only if you want the Reboot Node operation to complete regardless of the current state of the node. Using the Force option is not recommended.
4. Click Reboot.
Results
A message indicates that the selected node has successfully rebooted, and the node has a status of Operational in the Nodes page.
Shut down a node
Working with EMC Global Services personnel, the Cloud Administrator shuts down a node to perform maintenance and repair activities, such as replacing memory on a node, or to troubleshoot certain issues. EMC Global Services personnel are required onsite for all node maintenance and repair activities (other than disk replacement).
Before you begin
You must have Cloud Administrator privileges.
You must suspend the platform/cloud compute/unallocated node before shutting it down. See Suspend a node on page 115 for more information.
Procedure
1. On the Nodes page:
l To shut down a single node: In the Actions column next to the cloud compute/platform/unallocated node you want to shut down, select Shutdown from the drop-down list. The Shutdown Node page appears.
l To shut down multiple nodes: In the Bulk Actions drop-down list, select Shutdown. The Bulk Shutdown Nodes page appears with a list of nodes that are available to be shut down. Click the Select button next to each node you want to shut down.
2. If you want the node ID LED on during the shutdown process and until the node is online, set the LED field to ON. This action is recommended when replacing memory on the node.
3. Click Shutdown.
Results
A message indicates that the selected node has been successfully shut down and the node has a status of Offline on the Nodes page.
Power on a node
The Cloud Administrator powers on nodes that were shut down for maintenance.
Before you begin
You must have Cloud Administrator privileges.
The selected nodes must have been either powered off or shut down.
Procedure
1. On the Nodes page:
l To power on a single node: In the Actions column next to the cloud compute/platform/unallocated node you want to power on, select Power On from the drop-down list. The Power On Node page appears.
l To power on multiple nodes: In the Bulk Actions drop-down list, select Power On. The Bulk Power On Nodes page appears with a list of nodes that are available to be powered on. Click the Select button next to each node you want to power on.
2. If you want the node ID LED on while the node is being powered on, set the LED field to ON.
3. Click Power On.
Results
A message indicates that the selected node has been successfully powered on and the node has a status of Operational on the Nodes page.
Power off a node
This action allows the Cloud Administrator to power off selected unallocated nodes (nodes that are not running the Cloud Compute or Platform Service). Powering off a node is more abrupt than a shutdown, and the node is powered off regardless of its state. Use the power off option with caution, and only on unallocated nodes.
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the Nodes page:
l To power off a single node: In the Actions column next to the unallocated node you want to power off, select Power Off from the drop-down list. The Power Off Node page appears.
l To power off multiple nodes: In the Bulk Actions drop-down list, select Power Off. The Bulk Power Off Nodes page appears with a list of nodes that are available to be powered off. Click the Select button next to each node you want to power off.
2. If you want the node ID LED on during the power-off process and until the node is online, set the LED field to ON.
3. Set the Reimage field:
l If you want to reimage the disk with the operating system that is connected to this node, select True. The Reimage field should only be set to True if you are replacing a system disk (a disk with the operating system on it) and want to reimage the system disk that is connected to this node. Reimaging the system disk reloads the base OS software on the node when the node is powered back on.
l If you do not need to replace a system disk, select False.
4. Click Power Off.
Results
Powering off the node can take several minutes. A message indicates that the selected node has been powered off and the node has a status of Offline on the Nodes page. The System Power button on the front panel of the node indicates Off.
Reset a node
This action allows the Cloud Administrator to power down a node and then power it back on in a single action. The Cloud Administrator uses this action only on unallocated nodes in troubleshooting scenarios, and only after trying to reboot the node. The reset node action is abrupt because it powers the node off and on without regard to its state. For this reason, Cloud Administrators should reset nodes with caution.
Before you begin
You must have Cloud Administrator privileges.
You must first suspend the unallocated node before resetting it (see Suspend a node on page 115 for more information).
Procedure
1. On the Nodes page:
l To reset a single node: In the Actions column next to the unallocated node you want to reset, select Reset from the drop-down list. The Reset Node page appears.
l To reset multiple nodes: In the Bulk Actions drop-down list, select Reset. The Bulk Reset Nodes page appears with a list of nodes that are available to be reset. Click the Select button next to each node you want to reset.
2. If you want the node ID LED on while the node is being powered off and until the node is online, set the LED field to ON.
3. Set the Reimage field:
l If you want to reimage the disk with the operating system that is connected to this node, select True. The Reimage field should only be set to True if you are replacing a system disk (a disk with the operating system on it) and want to reimage the system disk that is connected to this node. Reimaging the system disk reloads the base OS software on the node when the node is powered back on.
l If you do not need to replace a system disk, select False.
4. Click Reset.
Results
A message indicates that the selected node has been reset and the node has a status of Operational on the Nodes page.
Transfer a platform node
This action moves the Platform Service from one node to another within a brick, from one brick to another, or from one rack to another. As hardware is added to the system, the Cloud Administrator uses this action to distribute the three platform nodes across different bricks or racks. The Cloud Administrator might also transfer a platform node if a node or operating system disk fails. In such a failure, complete the transfer as soon as possible to maintain the highest level of availability for the Platform Service. Only one platform node can be transferred at a time.
Before you begin
You must have Cloud Administrator privileges.
Note
If a platform node must be transferred for node/disk maintenance purposes, the Cloud Administrator works together with EMC Global Services personnel to perform the maintenance.
An unallocated node to which the source platform node can be transferred must exist in the VxRack Neutrino system.
Procedure
1. On the Nodes page, in the Actions column next to the platform node you want to transfer, select Transfer from the drop-down list.
The Transfer Node page appears with a list of unallocated nodes to which the source platform node can be transferred.
2. In the Select Destination area of the Transfer Node page, click the Select button in the Actions column next to the destination platform node.
3. Leave the Force option set to the default False setting.
Note
Set the Force option to True only if you want the Transfer Node operation to complete regardless of the current state of the source platform node and without allowing the Platform Service to transfer its components in an orderly fashion. Using the Force option is not recommended.
4. Click Transfer.
Results
A message indicates the transfer from the source platform node to the destination platform node.
Delete a node
This action allows the Cloud Administrator to remove an unallocated node (a node that is not running the Cloud Compute or Platform Service) from the VxRack Neutrino system. The Cloud Administrator deletes a node when it needs to be replaced, or when a brick needs to be removed from the VxRack Neutrino system.
Before you begin
You must have Cloud Administrator privileges.
Note
For node replacement, the Cloud Administrator works with EMC Global Services personnel to replace the node.
An unallocated node(s) must exist in the VxRack Neutrino system. You cannot delete a node that has the Cloud Compute or Platform Service running on it.
If you want to delete a node that has the Cloud Compute Service running on it, you must first remove the node from the service (refer to Remove a node from the Cloud Compute Service on page 114).
If you want to delete a node that has the Platform Service running on it, you must first transfer the Platform Service to a different node (refer to Transfer a platform node on page 120).
Procedure
1. On the Nodes page, in the Actions column next to the unallocated node(s) you want to delete, select Delete from the drop-down list.
The Delete Node page appears with a warning that deleting the node(s) will remove them from the VxRack Neutrino inventory.
2. Click Delete.
Results
A message indicates that the node has been deleted. The node is no longer visible on the Nodes page. The node ID LED will be on and the node will be powered off.
CHAPTER 11
Networks
This chapter describes the information that can be viewed in the Infrastructure > Network section of the VxRack Neutrino UI.
l Networks.............................................................................................................124
Networks
The Cloud Administrator can view network information on the Infrastructure > Network page.
The Network page is visible only to the Cloud Administrator and provides an overall view of the VxRack Neutrino network information. The network IP addresses shown on this page were configured during the initial operating system and Platform Service installation. This page is divided into three sections: Storage, System, and a list of network names within the rack(s).
The Storage section lists the block storage virtual IP addresses for the primary, secondary, and tiebreaker meta data managers (refer to Meta Data Managers (MDM) on page 129 for more information). These virtual IP addresses are used internally to provide highly available storage.
The System section lists the IP address range for the Floating IP Block. These customer-defined IP addresses can be assigned to virtual machines in the OpenStack environment. This section also lists the external IP address and fully qualified domain name (FQDN) of the rack.
For each rack in the VxRack Neutrino system, networks are listed by name with their IP address and type (MGMT or DATA). A management network is a private VLAN management network associated with the 1 GbE switches in the rack, and a data network is the public customer LAN data network associated with the 10 GbE switches in the rack.
CHAPTER 12
Storage
This chapter provides an overview of VxRack Neutrino block storage (ScaleIO) and explains the content that is visible to Cloud Administrators in the Infrastructure > Storage section of the VxRack Neutrino UI.
l Introduction to VxRack Neutrino storage..............................................................126
l Disk architecture................................................................................................. 127
l ScaleIO components........................................................................................... 128
l Protection domains and storage pools................................................................ 130
l Volumes..............................................................................................................133
l ScaleIO limitations..............................................................................................136
l Storage UI page...................................................................................................137
Introduction to VxRack Neutrino storage
The content in this section is directed at Cloud Administrators who manage the compute and storage capacity in VxRack Neutrino. The Cloud Administrator must ensure that sufficient storage capacity is available for the cloud compute needs of the users working in the OpenStack environment. The Storage page of the VxRack Neutrino UI (Infrastructure > Storage) is meant to inform the Cloud Administrator of overall block storage capacity. If storage thresholds are close to being exceeded, the UI alerts the Cloud Administrator to add more nodes to the system.
VxRack Neutrino storage uses EMC ScaleIO block storage technology. In the VxRack Neutrino system, the available storage on the devices attached to nodes running the Cloud Compute Service is automatically grouped together in shared storage pools based on node type (refer to Protection domains and storage pools on page 130 for more information).
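The grouping by node type can be pictured like this. An illustrative sketch with made-up device names, not the product's implementation:

```python
def group_into_pools(devices):
    """Group ScaleIO storage devices into shared pools by node type (sketch)."""
    pools = {}
    for dev in devices:
        pools.setdefault(dev["node_type"], []).append(dev["name"])
    return pools

devices = [
    {"name": "sdb1", "node_type": "performance"},
    {"name": "sdc1", "node_type": "performance"},
    {"name": "sdd1", "node_type": "capacity"},
]
print(group_into_pools(devices))
# {'performance': ['sdb1', 'sdc1'], 'capacity': ['sdd1']}
```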
Note
Cloud Administrators should use the Storage page only to view the used and available block storage capacity for cloud compute consumption. The page is meant to be a troubleshooting tool that EMC Global Services can use to assist Cloud Administrators in resolving storage issues. Changing the default storage configuration or performing any storage configuration tasks on the Storage page is not recommended. For example, creating volumes to provision OpenStack instances should be done in the OpenStack Dashboard UI on the Project > Compute > Volumes page, not on the VxRack Neutrino Storage page.
VxRack Neutrino ScaleIO storage is provided by the nodes running the Cloud Compute Service, that is, the cloud compute nodes. The cloud compute nodes run the scaleio-sds component/container, which is part of the Cloud Compute Service. The SDS (ScaleIO Data Server) allows ScaleIO to use the storage on the node's storage devices (refer to ScaleIO Data Server (SDS) on page 129 for more information). The ScaleIO storage devices provide the storage for volumes created in OpenStack. There are two types of volumes in OpenStack: ephemeral volumes are created when an instance is created, and persistent volumes are created by users to store the data for that instance (refer to Ephemeral and persistent volumes on page 133 for more information). There is also a persistent volume used by the OpenStack Glance component to store cloud compute images. All OpenStack volumes are provisioned through the ScaleIO storage devices residing on the cloud compute nodes, as shown in the following figure.
Figure 22 OpenStack consumption of VxRack Neutrino ScaleIO block storage

[Figure labels: a user creates a persistent volume (an instance data volume) and attaches it to an instance; the OpenStack Glance volume stores the operating system images used to create instances; ephemeral volumes are created along with instances; all volumes created in OpenStack use ScaleIO block storage, provided by the ScaleIO devices on the performance nodes running the Cloud Compute Service.]
A key benefit of VxRack Neutrino ScaleIO storage is its elasticity. If more compute power or storage is required, the VxRack Neutrino Cloud Administrator can simply add more nodes, or remove nodes if required. Adding or removing resources in VxRack Neutrino triggers an automatic rebalance/rebuild of the data within the remaining devices. The rebalance/rebuild process simplifies VxRack Neutrino storage management by making the environment more dynamic and flexible. The VxRack Neutrino software immediately responds to the changes, rebalancing the storage distribution and achieving a layout that optimally suits the new configuration. You can grow block storage in small or large increments.
When the Cloud Administrator adds a new node to the Cloud Compute Service, the new disks are automatically added to a storage pool. A rebalance automatically kicks off to redistribute the data across all available disks within the same storage pool. Similarly, when the Cloud Administrator removes a node from the Cloud Compute Service, its disks are automatically removed from the storage pool and a rebuild operation is initiated. A rebuild automatically recreates the missing data across the remaining disks in the same storage pool, thus keeping capacity balanced within a storage pool.
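The rebalance described above can be sketched as redistributing data chunks evenly across whatever disks the pool currently contains. This is illustrative only; ScaleIO's actual placement also maintains protection copies:

```python
def rebalance(chunks, disks):
    """Spread data chunks evenly across the current set of disks (sketch)."""
    layout = {d: [] for d in disks}
    for i, chunk in enumerate(chunks):
        # Round-robin assignment keeps the per-disk load balanced.
        layout[disks[i % len(disks)]].append(chunk)
    return layout

chunks = [f"chunk{i}" for i in range(6)]
print(rebalance(chunks, ["disk1", "disk2"]))           # layout before a disk is added
print(rebalance(chunks, ["disk1", "disk2", "disk3"]))  # layout after a disk is added
```

Adding a third disk shrinks each disk's share from three chunks to two, which mirrors how a new node's disks immediately absorb part of the pool's data.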
Disk architecture
VxRack Neutrino ScaleIO block storage uses the storage on the storage devices that reside on the disks connected to each node in the Cloud Compute Service. The performance and capacity nodes in the VxRack Neutrino system contain different types of disks, as described in the following sections.
Performance node
Four SSD disks are connected to each performance cloud compute node:
- One SSD disk contains the operating system and has two storage devices (partitions): one is a 200 GB storage device that contains the operating system, and the other storage device is 200 GB or 600 GB, depending on the p-series brick model.
- Three disks each have a single storage device, which is either 400 GB or 800 GB, depending on the p-series brick model.
Therefore, as shown in the following figure, the four disks connected to each performance cloud compute node have a total of five storage devices; four storage devices are available for ScaleIO storage and one storage device contains the operating system.
Figure 23 ScaleIO storage devices on a performance cloud compute node
(The figure shows a performance cloud compute node with four SSD disks: the OS disk carries a 200 GB operating system partition plus a 200 GB ScaleIO storage device, and the other three SSDs each carry a 400 GB ScaleIO storage device.)
Capacity node
Two SSD disks and 22 HDD disks are connected to each capacity cloud compute node:
- One SSD disk contains the operating system and has two storage devices (partitions): one is a 200 GB storage device that contains the operating system, and the other is a 200 GB storage device that is used for read caching (see ScaleIO read flash cache on page 128 for more information). The second SSD disk has one 800 GB storage device and is also used for caching. The purpose of the two SSD cache storage devices is to accelerate reads of the HDD devices. The SSD disks are not available for storage in the capacity cloud compute node.
- 22 HDD disks each have a single storage device with 1.8 TB of storage capacity.
The following figure shows that the 24 disks connected to each capacity cloud compute node have a total of 25 storage devices; 22 storage devices are available for ScaleIO storage, one storage device contains the operating system, and two storage devices are used for caching purposes.
Figure 24 ScaleIO storage devices on a capacity cloud compute node
(The figure shows a capacity cloud compute node with two SSDs providing the 200 GB operating system partition and the 200 GB and 800 GB cache devices, and 22 HDDs each providing a 1.8 TB ScaleIO storage device.)
ScaleIO read flash cache
The VxRack Neutrino ScaleIO block storage on each capacity cloud compute node uses a read flash cache (RFcache) feature that allocates space on two storage devices for caching reads. The read flash caching maximizes the performance of the slower HDD devices within the system. ScaleIO RFcache is a read-only caching strategy designed for high-workload systems that have a read-heavy workflow. It does not provide for caching of write-heavy workflows.
ScaleIO components
The VxRack Neutrino block storage architecture includes the following ScaleIO software components:
ScaleIO Data Client (SDC)
The SDC is a ScaleIO component that exposes ScaleIO volumes as block devices to the application that resides on the same node where the SDC is installed.
Application IO requests go through the SDC on the cloud compute node to the appropriate ScaleIO Data Servers (SDSs), where the storage for servicing the application IO request resides. The SDC is installed on all nodes running the Cloud Compute Service and the Platform Service. The SDCs on cloud compute nodes expose volumes to the virtual machines in OpenStack. The SDCs on the platform nodes are used to initiate the creation and provide location information about the OpenStack Glance image volume; they are not used to access any other volumes or application data.
ScaleIO Data Server (SDS)
The SDS is a ScaleIO component that provides access to the block storage on the storage devices of the cloud compute node where the SDS is installed. The SDS manages single-node capacity and is installed on all nodes in the Cloud Compute Service. The SDS owns local storage that contributes to the ScaleIO storage pools. An SDS runs on every cloud compute node that contributes some or all of its local storage space (HDDs, SSDs) to the aggregated pool of storage within the ScaleIO virtual SAN. The role of the SDS is to perform the back-end IO operations as requested by an SDC.
Meta Data Manager (MDM)
The MDM is a ScaleIO component that contains the metadata to configure and monitor the VxRack Neutrino block storage system. The three-node MDM cluster services the requests to manage storage, stores all of the cluster configuration data, handles errors and failures, and initiates system rebuild/rebalance tasks. When the Platform Service is deployed, two MDMs are deployed on two different platform nodes: one MDM is assigned the primary role and the other is secondary. The MDM uses an active/passive methodology with a tiebreaker component, where the primary node is active and the secondary is passive. A tiebreaker component, which is deployed on a third platform node, arbitrates between the two MDMs and can decide which is primary and which is secondary. The three-node MDM cluster manages the entire VxRack Neutrino block storage system.
ScaleIO Gateway
The Gateway is a ScaleIO component that acts as the front-end to the primary MDM. The ScaleIO Gateway interacts on behalf of the VxRack Neutrino Services' REST API requests to manage volumes and monitor performance. It is the primary HTTP/HTTPS REST endpoint used by OpenStack to initiate ScaleIO storage commands. The ScaleIO Gateway resides on all three platform nodes.
ScaleIO Controller
The ScaleIO Controller is a VxRack Neutrino component that manages ScaleIO administration. This component manages adding and removing nodes from ScaleIO, creates new storage pools when necessary, and participates in various maintenance operations. The ScaleIO Controller resides on all three platform nodes.
The MDMs and the Gateway elements of ScaleIO are part of the Platform Service, but the cloud compute nodes provide the storage for ScaleIO. The ScaleIO block storage software components are installed on nodes and create a virtual SAN layer exposed to the applications residing on the nodes, as shown in the following diagram. The ScaleIO storage uses the disks on the nodes and the LAN infrastructure to create a virtual SAN that has all of the benefits of external storage.
Figure 25 VxRack Neutrino ScaleIO components on nodes
(The figure shows VxRack Neutrino p-series bricks. The three platform nodes host the primary, secondary, and tiebreaker MDMs (scaleio-mdm component), the gateways (scaleio-gateway component), and SDCs. Each performance cloud compute node runs an application, an SDC, and an SDS (scaleio-sds component) with its ScaleIO devices. All nodes are joined in a virtual SAN layer, with volumes mapped through the SDCs to the applications.)
Protection domains and storage pools
VxRack Neutrino block storage also includes protection domains and storage pools.
Protection domain
A protection domain consists of a set of nodes. A node can belong to only one protection domain at any one time. By VxRack Neutrino policy, a node placed in a ScaleIO protection domain is allocated to a single service such as Cloud Compute.
Storage pool
A storage pool is a set of physical storage devices in a protection domain. Each storage device belongs to one (and only one) storage pool. When a volume is created, it spans all the ScaleIO storage devices within a storage pool. A storage pool can only contain a single type of disk; SSDs and HDDs cannot be grouped together in the same storage pool.
When nodes are added to the Cloud Compute Service, the unused storage of each node is added to a protection domain. Within the protection domain, a storage pool is automatically created and the unused storage devices connected to the compute nodes are automatically grouped into this storage pool. There is a 1:1 mapping of storage pool to protection domain.
There are two possible types of protection domains/storage pools in VxRack Neutrino: performance and capacity. As shown in the following figure, SSD devices from performance nodes are automatically grouped in the performance protection domain/storage pool, and HDD devices from capacity nodes are automatically grouped in the capacity protection domain/storage pool. Depending on the number of nodes in the system, there can be multiple performance or capacity protection domains/storage pools (refer to Storage pool node limits on page 131 for more information).
The figure also shows how VxRack Neutrino block storage uses a sliced distributed volume layout across the nodes in a storage pool that results in a highly scalable parallel I/O system (refer to Volumes on page 133 for more information).
Figure 26 Performance and capacity protection domains and storage pools
(The figure shows the Cloud Compute Service with two protection domains. Protection domain 1 groups the SSDs of six performance nodes into a performance storage pool, with volumes 1-3 sliced across the SDSs of those nodes. Protection domain 2 groups the HDDs of six capacity nodes into a capacity storage pool, with volumes 4-6 sliced across the SDSs of those nodes.)
In VxRack Neutrino v1.1, all accounts and projects use pools of shared storage from the capacity and performance storage pools. Individual nodes cannot be grouped into customized storage pools and protection domains for individual accounts. In other words, storage cannot be allocated to specific accounts. The VxRack Neutrino v1.1 behavior is that compute and storage nodes are shared across accounts.
Storage pool node limits
Performance storage pool
The number of performance storage pools is determined by the number of performance nodes in the p-series bricks that have been added to the Cloud Compute Service. Each performance storage pool must have a minimum of 3 performance nodes and a maximum of 48 performance nodes. In a 4-rack system, the maximum number of performance nodes available for the Cloud Compute Service is 177 nodes, as shown in the following figure.
Figure 27 Performance storage pool node limits
(The figure shows nodes 1-48 in Performance Storage Pool 1, nodes 49-96 in Performance Storage Pool 2, nodes 97-144 in Performance Storage Pool 3, and nodes 145-177 in Performance Storage Pool 4.)
VxRack Neutrino automatically groups performance nodes into performance storage pools based on the 3-node minimum and 48-node maximum per storage pool. For example, if a Cloud Administrator adds 60 performance nodes at once to the Cloud Compute Service, VxRack Neutrino will automatically create two performance storage pools.
However, there are some considerations to bear in mind when storage pool limits are crossed. As an example, consider the scenario where there are 48 performance cloud compute nodes in the system and they are part of one performance storage pool. Now the Cloud Administrator wants to add two more performance nodes to the Cloud Compute Service and attempts to do so in the VxRack Neutrino UI. However, the Cloud Administrator receives a UI error message that says a minimum of three performance nodes is required to create an additional storage pool. In this scenario, the Cloud Administrator could remove one node from the existing storage pool. Then the Cloud Administrator could add three nodes at once to the system to create a second performance storage pool. There would then be 47 nodes in the first storage pool, and 3 nodes in the second storage pool.
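The pool-sizing rules above (3-node minimum and 48-node maximum for performance pools; 3 and 12 for capacity pools) can be sketched as a small helper. The function name and error behavior are illustrative only; VxRack Neutrino performs this grouping internally and surfaces violations as UI error messages rather than exceptions.

```python
def group_into_pools(node_count, min_nodes, max_nodes):
    """Split node_count nodes into storage pools of at most max_nodes each,
    rejecting any split that would leave a pool below min_nodes."""
    pools = []
    remaining = node_count
    while remaining > 0:
        size = min(remaining, max_nodes)
        if size < min_nodes:
            raise ValueError(
                f"a minimum of {min_nodes} nodes is required "
                "to create an additional storage pool")
        pools.append(size)
        remaining -= size
    return pools

# 60 performance nodes -> two pools, as in the example above.
print(group_into_pools(60, min_nodes=3, max_nodes=48))   # [48, 12]
# 30 capacity nodes -> three pools (12 + 12 + 6).
print(group_into_pools(30, min_nodes=3, max_nodes=12))   # [12, 12, 6]
# Adding 2 nodes beyond a full 48-node pool fails, matching the UI error:
# group_into_pools(50, min_nodes=3, max_nodes=48)  # raises ValueError
```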
Capacity storage pool
The number of capacity storage pools is determined by the number of capacity nodes in the i-series bricks that have been added to the Cloud Compute Service. Each capacity storage pool must have a minimum of 3 capacity nodes and a maximum of 12 capacity nodes. In a 4-rack system, the maximum number of capacity nodes available for the Cloud Compute Service is 44 nodes, as shown in the following figure.
Figure 28 Capacity storage pool node limits
(The figure shows nodes 1-12 in Capacity Storage Pool 1, nodes 13-24 in Capacity Storage Pool 2, nodes 25-36 in Capacity Storage Pool 3, and nodes 37-44 in Capacity Storage Pool 4.)
VxRack Neutrino automatically groups capacity nodes into capacity storage pools based on the 3-node minimum and 12-node maximum per storage pool. For example, if a Cloud Administrator adds 30 capacity nodes at once to the Cloud Compute Service, VxRack Neutrino will automatically create three capacity storage pools.
However, there are some considerations to bear in mind when storage pool limits are crossed. As an example, consider the scenario where there are 24 capacity cloud compute nodes in the system and they are part of two capacity storage pools; each storage pool has 12 capacity nodes. Now the Cloud Administrator wants to add one more capacity node to the Cloud Compute Service and attempts to do so in the VxRack Neutrino UI. However, the Cloud Administrator receives a UI error message that says a minimum of three capacity nodes is required to create an additional storage pool. In this scenario, the Cloud Administrator could remove two nodes from one of the existing storage pools. Then the Cloud Administrator could add three nodes at once to the system to create a third capacity storage pool. There would then be 3 capacity storage pools in the system, consisting of 12, 10, and 3 nodes respectively.
Volumes
In the VxRack Neutrino system, the term volume refers to a logical volume that is distributed in 1 MB chunks across the physical storage devices within a storage pool. Users create volumes in the OpenStack Dashboard UI to provide storage for instances. The volumes are backed by the ScaleIO storage that the VxRack Neutrino cloud compute nodes provide.
Ephemeral and persistent volumes
There are two types of volumes that are created in OpenStack: ephemeral and persistent. An ephemeral volume is used by an instance and disappears when the instance is terminated. A persistent volume is attached to an instance, but it remains intact when the instance disappears. Both ephemeral and persistent volumes use the ScaleIO block storage provided by VxRack Neutrino.
When you create, or launch, an instance using the OpenStack Dashboard UI, two ephemeral volumes are created for that instance. For example, if you create an instance with the p4.large flavor, two volumes are created: an 8 GB volume and a 32 GB volume. The 8 GB volume is the primary ephemeral disk (referred to as a root disk in OpenStack) that stores the operating system, and the 32 GB volume is the secondary ephemeral disk (referred to as the ephemeral disk in OpenStack).
In addition to the ephemeral volumes automatically created at instance creation, users can create persistent volumes to attach to instances by using the OpenStack Cinder (block storage) service. These volumes are backed by VxRack Neutrino ScaleIO storage and they are persistent: they can be detached from one instance and re-attached to another, and the data remains intact. When a volume is attached to an instance, it is exposed locally to the applications as a block device.
In the VxRack Neutrino OpenStack private cloud, when ephemeral and persistent volumes are created they are defined as thick by default, but you can configure volumes to be thin provisioned using the OpenStack Dashboard UI. When a volume is thick provisioned, it means that the space is pre-allocated when the volume is created. On thin-provisioned volumes, space is only consumed when data is written to the volume. For information on how to configure thin-provisioned volumes in OpenStack, refer to the VxRack System with Neutrino 1.1 OpenStack Implementation Guide.
The following table summarizes the characteristics of volumes in the OpenStack and VxRack Neutrino UIs.
Table 39 Summary of volume characteristics in OpenStack and VxRack Neutrino
Ephemeral volumes:
- In the OpenStack Dashboard UI, referred to as the root disk (primary ephemeral volume that contains the operating system and the application image) and the ephemeral disk (secondary ephemeral volume).
- How created: automatically, when a user launches an instance in OpenStack.
- Exists when the instance is terminated: No.
- Backed by VxRack Neutrino ScaleIO storage: Yes.
- Thick-provisioned by default: Yes.
- In the VxRack Neutrino UI, referred to as: Volume.

Persistent volumes:
- In the OpenStack Dashboard UI, referred to as: Persistent.
- How created: the user creates it in OpenStack (see Table 40 on page 134).
- Exists when the instance is terminated: Yes.
- Backed by VxRack Neutrino ScaleIO storage: Yes.
- Thick-provisioned by default: Yes.
How volumes are created in OpenStack
The following table describes the four ways to create volumes in OpenStack.
Table 40 Volume creation methods in OpenStack
Create an instance from an image (creates two ephemeral volumes). In the OpenStack Dashboard UI:
1. Click Project > Compute > Instances > Launch Instance.
2. In the Instance Boot Source field, select Boot from Image.

Create a volume and attach it to instance(s) (creates one persistent volume). In the OpenStack Dashboard UI:
1. To create a volume, click Project > Compute > Volumes > Create Volume.
2. To attach the volume to an instance, click Project > Compute > Volumes > Manage Attachments.

Create a volume from an image (creates one persistent volume). When you source an instance from that image and then launch the instance, you have a bootable persistent volume. When you delete the instance, you still have a persistent volume with the instance data on it, and you can boot an instance from that volume. In the OpenStack Dashboard UI:
1. Click Project > Compute > Instances > Launch Instance.
2. In the Instance Boot Source field, select Boot from Image (creates a new volume).
3. Create a volume by entering the Device Size and Device Name for the volume.

Create a volume from a snapshot (creates one persistent volume). In the OpenStack Dashboard UI:
1. Click Project > Compute > Volumes > Create Volume.
2. In the Volume Source field, select Volume and select the snapshot from the list.

Note
A volume created from a snapshot in OpenStack displays as a snapshot in the VxRack Neutrino UI.
For more information on how to create and manage volumes and instances in the OpenStack Dashboard UI, refer to the OpenStack End User Guide.
Volumes created in OpenStack display in the VxRack Neutrino UI on the Services > Cloud Compute > Volumes tab and in the Frontend tab Volume view of the Infrastructure > Storage page.
How volumes are provisioned from storage pools
Ephemeral volumes
When a user creates an instance in OpenStack, the flavor that the user picks to create the instance determines which type of storage pool the ephemeral volumes are provisioned from (see Table 18 on page 89 for a list of flavors). For example, if a user creates an instance with the v3.xlarge flavor, the ephemeral volumes are provisioned out of one of the capacity storage pools. If a user creates an instance with the v4.medium flavor, the ephemeral volumes are provisioned out of one of the performance storage pools. Flavor names that contain the number 3 as the second character denote that the instance's storage will be provisioned out of a capacity storage pool, and flavor names that contain the number 4 as the second character denote that the instance's storage will be provisioned out of a performance storage pool.
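The flavor-naming convention can be captured in a small helper. The function name is hypothetical; VxRack Neutrino performs this mapping internally, and the sketch encodes only the second-character rule described above.

```python
def pool_type_for_flavor(flavor_name):
    """Return the storage pool type implied by a VxRack Neutrino flavor name,
    where the second character encodes the pool: 3 = capacity, 4 = performance."""
    if len(flavor_name) < 2 or flavor_name[1] not in ("3", "4"):
        raise ValueError(f"unrecognized flavor name: {flavor_name}")
    return "capacity" if flavor_name[1] == "3" else "performance"

print(pool_type_for_flavor("v3.xlarge"))   # capacity
print(pool_type_for_flavor("p4.large"))    # performance
```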
Persistent volumes
When a user creates a persistent volume in OpenStack via Cinder, the volume type (Capacity or Performance) determines which type of storage pool the volume is provisioned from. Note that an instance's ephemeral volumes could be provisioned out of the performance storage pool (because it was created using the p4.large flavor), but a user could create a persistent volume with a volume type of capacity to attach to that instance. In this example, the instance's ephemeral volumes are using storage from a performance storage pool, but the persistent volume attached to it is using storage from a capacity storage pool.
Once the volume type is selected to create the storage for the volume, all the data chunks of the volume are stored in the storage devices belonging to that storage pool. Every data chunk is mirrored; that is, each data chunk is stored on the storage of two different nodes, as shown in the following figure. Volume data is kept within the boundaries of a storage pool. A given volume belongs to only one storage pool, and a given storage pool belongs to only one protection domain. Distributing a volume over all the nodes in a storage pool ensures the highest and most stable and consistent performance possible, as well as the rapid recovery and redistribution of data.
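The two-copies-on-different-nodes rule can be sketched as follows. The round-robin pairing below is an illustration of the mirroring constraint only, not ScaleIO's real distribution logic, and the node names are invented for the example.

```python
import itertools

def place_chunks(num_chunks, nodes):
    """Assign each 1 MB chunk a primary and a mirror copy on two different
    nodes, cycling through the pool round-robin. Requires len(nodes) >= 2."""
    placements = []
    cycle = itertools.cycle(range(len(nodes)))
    for _ in range(num_chunks):
        primary = next(cycle)
        mirror = (primary + 1) % len(nodes)   # always a different node
        placements.append((nodes[primary], nodes[mirror]))
    return placements

nodes = ["node1", "node2", "node3", "node4"]
for chunk_id, (p, m) in enumerate(place_chunks(4, nodes), start=1):
    print(f"chunk {chunk_id}: primary on {p}, mirror on {m}")
# Every chunk lands on two different nodes, so a single node failure
# never takes out both copies of a chunk.
```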
Figure 29 Even distribution of volume data chunks across devices in the performance storage pool
(The figure shows volume 1 and volume 2, each split into chunks 1-4, distributed evenly across the ScaleIO devices of the SDSs on eight performance cloud compute nodes in the performance protection domain/storage pool. Each chunk is stored on the devices of two different nodes.)
When volumes are created in OpenStack, VxRack Neutrino evaluates all storage pools in the Cloud Compute Service and determines which to use. The evaluation filters out storage pools that are not of the requested type (that is, if capacity is requested it will filter out performance storage pools) as well as storage pools with insufficient usable capacity to support the request. Once it has narrowed down the list of storage pools, it uses a round robin technique to select the storage pool. The round robin technique ensures that volumes are distributed in a balanced manner across storage pools and eliminates hot spots.
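The filter-then-round-robin selection can be sketched as follows. The pool records, field names, and function name are invented for the example; the actual evaluation is internal to VxRack Neutrino.

```python
import itertools

def choose_pool(pools, requested_type, needed_gb, rr_counter=itertools.count()):
    """Filter pools by type and free capacity, then pick one round-robin.
    The default counter is created once at definition time, so successive
    calls deliberately share it and rotate across candidates."""
    candidates = [p for p in pools
                  if p["type"] == requested_type and p["free_gb"] >= needed_gb]
    if not candidates:
        raise RuntimeError("no storage pool can satisfy the request")
    return candidates[next(rr_counter) % len(candidates)]

pools = [
    {"name": "perf-1", "type": "performance", "free_gb": 500},
    {"name": "perf-2", "type": "performance", "free_gb": 900},
    {"name": "cap-1",  "type": "capacity",    "free_gb": 4000},
]
# Successive performance requests alternate between perf-1 and perf-2.
print(choose_pool(pools, "performance", 100)["name"])  # perf-1
print(choose_pool(pools, "performance", 100)["name"])  # perf-2
# A 600 GB request filters out perf-1, which lacks the capacity.
print(choose_pool(pools, "performance", 600)["name"])  # perf-2
```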
ScaleIO limitations
This section highlights the limitations of VxRack Neutrino ScaleIO block storage.
Volume size is in 8 GB increments
ScaleIO volume size is limited to a basic granularity of 8 GB. If volume size is not a multiple of 8, the size is rounded up. For example, a request to create a persistent volume of 100 GB creates a volume of 104 GB. In OpenStack, the volume is displayed with its requested size (100) instead of its actual size (104). A corresponding warning message is printed to the Cinder log.
This limitation also applies when you create a volume from an image in OpenStack. For example, a request to create a volume from an image that is 220 MB in size results in a volume of 8 GB.
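The rounding rule reduces to a one-line calculation; the helper name is illustrative, not a VxRack Neutrino API.

```python
import math

def scaleio_actual_size_gb(requested_gb):
    """Round a requested volume size up to ScaleIO's 8 GB granularity
    (minimum 8 GB)."""
    return max(8, math.ceil(requested_gb / 8) * 8)

print(scaleio_actual_size_gb(100))    # 104 (as in the example above)
print(scaleio_actual_size_gb(80))     # 80  (already a multiple of 8)
print(scaleio_actual_size_gb(0.215))  # 8   (a 220 MB image)
```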
Used storage capacity in VxRack Neutrino is double the size of a volume created in OpenStack
An important consideration for storage planning is that due to the way ScaleIO provides data protection by mirroring data chunks, only 50 percent of the raw storage capacity shown in the VxRack Neutrino UI is actually available for usage. For example, a 56 GB volume created in OpenStack actually occupies 112 GB of storage in VxRack Neutrino. The 56 GB volume displays as a 56 GB volume on the Services > Cloud Compute > Volumes and Infrastructure > Storage > Frontend > Volume pages in the VxRack Neutrino UI. However, the actual 112 GB capacity that the volume requires is accounted for in the Capacity donut in the Storage page. For example, in the following screenshot, there is 7.8 TB of available capacity in the VxRack Neutrino block storage system. After the 56 GB volume is created in OpenStack, the light gray segment of the Capacity donut would show 7.69 TB of available capacity (7.8 TB - 0.112 TB = 7.69 TB).
Figure 30 Raw storage capacity in the Storage page
The following table shows how used capacity is determined when a user creates an 80 GB and a 10 GB volume in OpenStack.
Table 41 How VxRack Neutrino used capacity is calculated
                                        80 GB volume created    10 GB volume created
                                        in OpenStack            in OpenStack
Thick-provisioned persistent volume     160 GB *                32 GB **
Thin-provisioned persistent volume      2 MB                    2 MB

* 80 GB is divisible by 8, so ScaleIO does not round up the volume size; used capacity in VxRack Neutrino is 80 x 2 = 160 GB.
** 10 GB is not divisible by 8, so ScaleIO rounds the volume size up to 16 GB; used capacity in VxRack Neutrino is 16 x 2 = 32 GB.

Note: 1 MB is the minimum size of a thin-provisioned volume; 8 GB is the minimum size of a thick-provisioned volume.
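The used-capacity figures above follow from combining the 8 GB rounding with mirroring. This helper reproduces the table's thick-provisioned values; the function name is illustrative, not a VxRack Neutrino API, and the thin case models only a freshly created volume with no data written.

```python
import math

def used_capacity_gb(requested_gb, thin=False):
    """Raw capacity consumed in VxRack Neutrino for an OpenStack volume.
    Thick volumes are rounded up to 8 GB granularity and mirrored (x2);
    a freshly created thin volume consumes only its mirrored 1 MB minimum."""
    if thin:
        return 2 / 1024          # 2 MB, expressed in GB
    actual = max(8, math.ceil(requested_gb / 8) * 8)
    return actual * 2

print(used_capacity_gb(80))   # 160 (80 is a multiple of 8; 80 x 2)
print(used_capacity_gb(10))   # 32  (10 rounds up to 16; 16 x 2)
```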
Storage UI page
This section describes the three tabs (Overview, Frontend, and Backend) on the Storage page of the VxRack Neutrino UI.
Frontend storage refers to the SDCs and their mapped volumes. This storage is called frontend because the SDCs interact directly with the applications running on the cloud compute node, and the volumes provide the elastic block storage needed to run the applications on the node. Backend storage refers to the SDSs and their associated storage devices that are grouped into the performance storage pool, as shown in the following diagram.
Figure 31 Front-end and back-end block storage
(The figure shows four cloud compute nodes. Front-end storage: each node runs an SDC with its mapped volumes serving the node's applications. Back-end storage: each node runs an SDS whose device partitions, for example /dev/sda2, /dev/sdb1, and /dev/sdc1, are grouped into the performance storage pool.)
Overview tab
The tiles on the Overview tab provide a visual overview of storage system status. The default view is for the entire storage system, but you can configure this view to present information for a specific protection domain by selecting a protection domain in the Entire System drop-down list. You can then drill down and select a view for a specific storage pool in that protection domain by selecting the storage pool in the Entire Protection Domain drop-down box.
The tiles on the Overview tab are dynamic, and contents are refreshed at 10-second intervals. The following table describes the tiles shown on the Overview tab.
Table 42 Tiles on the Overview tab of the Storage page
Tile Description
Capacity Displays a donut chart with colored segments that show the spare, protected, failed, and unused block storage capacity for cloud compute use. The values are written next to the colored segments. The colored segments in the donut chart are:
- Dark gray: represents spare capacity. Spare capacity is the amount of capacity that is reserved for system use, when recovery from failure is required. This capacity cannot be used for storage purposes. VxRack Neutrino automatically adjusts the spare capacity percent for each storage pool, based on what is required to survive a single node failure without data loss or data unavailability.
- Green: represents protected capacity. Protected capacity is the used storage capacity that is fully protected (primary and secondary copies of the data exist).
- Red: represents failed capacity. This is the amount of capacity that is unavailable (neither primary nor secondary copies exist).
- Light gray: represents unused capacity. This additional storage is available for cloud compute purposes. However, only half of the unused capacity shown in the light gray segment is available for volume allocation, since ScaleIO requires double the volume size to protect data. So if there is 70.0 TB of unused capacity in the system, 35.0 TB is available for volume allocation (see Used storage capacity in VxRack Neutrino is double the size of a volume created in OpenStack on page 136). The amount of capacity available for volume allocation can be viewed in the Free for Volume Allocation (Net) field in the Storage > Frontend > Volumes view.
The center of the donut chart displays the total block storage capacity on the cloud compute nodes. Note that this does not represent the total amount of capacity available for volume allocation.
The inner edge of the donut chart displays the amount of capacity used to store snapshots. The arc displays the total amount of available data. The filled (bronze) part represents the capacity used by original data volumes, and the hollow (outlined) part represents the capacity used for snapshot volumes. This displays the ratio of snapshot usage. Storage capacity details, such as the amount of storage the system is using for thick-provisioned volumes, thin-provisioned volumes, and snapshots, can be viewed in the Storage > Backend tab by clicking the VxRack Neutrino system object in the table, and clicking Capacity in the details pane that displays next to the table. The Backend tab on page 140 provides more detailed information on snapshot usage.
Note
Storage capacity in the VxRack Neutrino UI is reported using the binary system (base 2) of measurement, which calculates 1 GB as 1,073,741,824 bytes (see Storage capacity units on page 19 for more information).
Bandwidth Displays the bandwidth performance statistics of the block storage system. Summarizes the reads, writes, and totals of throughput.
IOPS Displays the IOPS performance statistics of the block storage system. Summarizes the reads, writes, and totals of IOPS.
Volumes Displays the number of volumes mapped to SDCs, the number of volumes defined across the system, and the free capacity. The large number in the center of this tile is the number of volumes that are mapped to SDCs. Volumes mapped to SDCs on cloud compute nodes are exposed to the specified cloud compute node, creating a block device that can be used by instances residing on that node. Defined volumes are the total number of volumes that have been created in the system, but not necessarily mapped to an SDC.
The amount of free capacity shown on this tile is the maximum amount that can be used for creating a new volume. This amount takes into account how much raw data is needed for maintaining RAID 1 and system spares. Note that the number of volumes and the total capacity include snapshots.
SDCs Displays the number of SDCs (platform and cloud compute nodes) in the system. The large number in the center is the number of SDCs connected to the MDM. The defined number includes all SDCs defined in the system (some of which may be disconnected from the MDM).
Protection Domains Displays the number of protection domains and the number of storage pools in the system. There is one storage pool per protection domain.
SDSs Displays the number of SDSs (cloud compute nodes) in the system and the number of storage devices defined in storage pools.
Storage
Overview tab 139
Management Displays the status of the three-node MDM cluster. When you hover your mouse pointer over this tile, a tooltip displays the IP addresses used by the MDM cluster. Actives = 3/3 means that 3 of the 3 MDM cluster members are active (that is, the primary, secondary, and tiebreaker MDMs are all in a normal state). Replicas = 2/2 means that the primary and secondary MDMs are in a normal state (the primary and secondary MDMs are the replica MDMs).
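The capacity arithmetic behind these tiles, base-2 unit reporting (noted for the Capacity tile) and the RAID 1 mirroring overhead reflected in the Volumes tile's free-capacity figure, can be sketched roughly as follows. This is an illustrative approximation only, not the product's accounting logic; the spare_fraction value is an assumed placeholder, not a product default.

```python
def bytes_to_gb_base2(num_bytes):
    """Convert a raw byte count to GB as the UI reports it
    (binary/base-2: 1 GB = 2**30 = 1,073,741,824 bytes)."""
    return num_bytes / 2**30

def free_volume_capacity(raw_free_bytes, spare_fraction=0.1):
    """Rough free capacity available for new volumes: RAID 1 keeps two
    copies of every block, and some raw capacity is reserved as spares.
    spare_fraction is an illustrative assumption, not a product value."""
    return raw_free_bytes * (1 - spare_fraction) / 2

raw_free = 4 * 2**40                              # 4 TiB of raw free space
print(bytes_to_gb_base2(raw_free))                # 4096.0 GB (base 2)
print(bytes_to_gb_base2(free_volume_capacity(raw_free)))  # roughly 1843.2 GB
```

The sketch illustrates why the free capacity shown on the Volumes tile is well under half the raw free space: mirroring halves it, and spares reduce it further.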
Frontend tab
The Frontend tab provides a tabular view with detailed information on SDCs (platform and cloud compute nodes), volumes, and snapshots. This page gives the Cloud Administrator information on how volumes are mapped to SDCs, that is, which platform or cloud compute node a particular volume is mapped to. When a volume is mapped to an SDC, the platform or cloud compute node has access to the volume and exposes it locally to applications as a standard block device.
Click any object in the table to find more information about that object. The detailed information appears in the pane beside the table.
You can select different table views in the View drop-down list: select SDC, Snapshot, or Volume to arrange the information in the table by SDC, snapshot, or volume.
l SDC - Arranging by SDC lists the platform and cloud compute nodes in the system. Information on the node IP address, MDM connection status, and the number of mapped volumes is presented for each SDC (node).
l Snapshot - Arranging by snapshot lists the volumes and snapshots in a storage pool. Information on the number of snapshots per volume, SDC volume mapping, total volume capacity, and snapshot used capacity is presented for each volume and snapshot.
l Volume - Arranging by volume lists the volumes and snapshots in a storage pool. Information on the size, type (thick, thin, or snapshot), number of mapped SDCs (nodes), and data obfuscation (by default, volume data is not obfuscated) is presented for each volume and snapshot.
Note
The volume or snapshot name shown in the Volume table view is the UUID that ScaleIO automatically generates for the volume or snapshot when a user creates it in OpenStack. To find the OpenStack volume name that corresponds to the ScaleIO UUID shown in the Volume view, navigate to the Cloud Compute > Volumes page in the UI and match the ScaleIO UUID in the Storage Name column with the OpenStack volume name in the Cinder Name column. The ScaleIO UUID can also be found in OpenStack: in the OpenStack Dashboard UI, navigate to the OpenStack volume name on the Project > Compute > Volumes page; when you click the volume, the volume details page shows the ScaleIO UUID as the native_vol_name parameter.
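The cross-referencing described in this note amounts to a simple lookup, sketched below. The rows and field names are invented for illustration ("storage_name" stands in for the Storage Name column, "cinder_name" for the Cinder Name column); in practice the values come from the UI pages named above.

```python
# Invented example rows pairing a ScaleIO UUID with its OpenStack name.
cloud_compute_volumes = [
    {"cinder_name": "db-volume-01", "storage_name": "a1b2c3d4e5f60001"},
    {"cinder_name": "web-volume-02", "storage_name": "a1b2c3d4e5f60002"},
]

def openstack_name_for(scaleio_uuid, rows):
    """Return the OpenStack (Cinder) volume name that matches a ScaleIO
    UUID, or None if no row matches."""
    for row in rows:
        if row["storage_name"] == scaleio_uuid:
            return row["cinder_name"]
    return None

print(openstack_name_for("a1b2c3d4e5f60002", cloud_compute_volumes))  # web-volume-02
```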
Backend tab
The Backend tab provides detailed information on the following items:
Storage
140 VxRack System 1000 with Neutrino 1.1 Administrator Guide
l Overall block storage system
l Protection domains
l SDSs (cloud compute nodes) in the protection domains
l Storage pools
l Storage devices in the storage pools
Click any object in the table to find more information about that object. The detailed information appears in the pane beside the table. For example, you can click a capacity protection domain in the table, and click Read Flash Cache in the pane next to the table to view information on the cache size, percent of cache used, and read hit rate for that protection domain.
You can select different table views in the View drop-down list. For example, the Overview table view provides a general overview of the capacity and health of block storage objects. Information on the total capacity, capacity in use, bandwidth, and IOPS is presented for the block storage objects.
The Configuration table view provides an overview of the number of objects per type in the block storage system with their capacity. This view lets you determine the amount of free capacity available for creating an additional volume. Information on the total capacity, SDSs, devices, storage pools, volumes, and free capacity for volume allocation is presented for the block storage objects.
You can arrange the information in the table by either:
l SDS (Actions > Arrange by SDSs)
l Storage Pool (Actions > Arrange by Storage Pools)
Arranging by SDS presents a list of the cloud compute nodes within each protection domain. Clicking the right-facing arrow next to each SDS node opens a list of its storage devices and the storage pool to which they belong.
Arranging by storage pool displays the storage pool in each protection domain. Clicking the right-facing arrow next to a storage pool opens a list of all the SDS nodes and their devices in the storage pool.
You can also filter the information presented in the table by using the Filter options. You can enter text in the search box and, in the drop-down box next to it, select By Protection Domain, By SDS, By Storage Pool, or By Device, and then select Show Filtered in the Filter drop-down box to display your filtered results in the table. You can also click block storage objects in the table and then select Show Filtered in the Filter drop-down list to display information for only those selected objects.
CHAPTER 13
Components
This chapter describes the Platform Service and Cloud Compute Service components that can be viewed in the Infrastructure > Components section of the VxRack Neutrino UI.
l Introduction to Platform and Cloud Compute Service components.......................144
l Platform Service components..............................................................................145
l Cloud Compute Service components................................................................... 147
Introduction to Platform and Cloud Compute Service components
The Platform and Cloud Compute Services are composed of multiple open source software components, which VxRack Neutrino integrates into a converged platform of hardware and system software. The term components refers to the Docker containers that are running on the nodes. Docker is a software technology that packages application code and operating system binaries in a single entity: the container. Docker itself is a self-sufficient runtime for Linux containers.
The Cloud Administrator can view the components and the nodes on which they run on the Infrastructure > Components page of the VxRack Neutrino UI. Only the Cloud Administrator can view the Components page. This page is useful in troubleshooting scenarios where the Cloud Administrator wants to quickly view which nodes a particular component is running on, or the software version of a particular component. The Components page is divided into two tabs, Platform and Cloud Compute, which the Cloud Administrator uses to view the components that comprise the Platform Service and Cloud Compute Service, respectively. The following figure illustrates which components run on platform and cloud compute nodes.
Figure 32 Component map for Platform and Cloud Compute nodes
[Figure: shows which components run on which node type. Platform nodes run the OpenStack service controllers, OpenStack services, load balancers, messaging and caching components, database components, monitoring/logging components, internal orchestration/deployment components, and block storage management components, including scaleio-mdm* and scaleio-tiebreaker**. Cloud compute nodes run the OpenStack Nova cloud compute service (cc_nova_compute), the block storage (ScaleIO) data server component (scaleio-sds), the logging component (logcourier), the network routing component (ospf-usvc), and the system health component (consul). The individual components are listed in the tables that follow.]
* The scaleio-mdm component runs on two different platform nodes.
** The scaleio-tiebreaker component runs on a third platform node (scaleio-mdm and scaleio-tiebreaker never run on the same platform node).
Platform Service components
On the Platform tab, the Platform Service components are listed and designated by the component icon. Each Platform Service component is listed with its name, health status, and software version. Under each Platform Service component, the nodes running that component are listed. The nodes are listed by their FQDN and are represented by a Platform Service icon or a Cloud Compute Service icon. The Cloud Administrator can view more detailed information on a particular node by clicking the node FQDN, which opens the Node details view on page 110. The following table lists the components that comprise the Platform Service.
Table 43 Platform Service components
Platform Service component Runs on: Description
account platform node Authenticates users in VxRack Neutrino accounts and determines their associated roles. The account component is based on the OpenStack Keystone Identity Service.
bedrock platform node Automates the install/upgrade of the VxRack Neutrino system. Contains Ansible automation scripts (playbooks) for install, upgrade, and add/remove node operations.
c3 platform node Manages the cloud compute infrastructure. The Cloud Compute Controller (c3) component is a VxRack Neutrino API used by internal Cloud Compute subcomponents to orchestrate services or manage internal configuration. It can also be used by external systems to query the VxRack Neutrino cloud compute system.
component-registry platform node Keeps track of which components have been deployed on which nodes.
consul platform and cloud compute nodes Stores information about the health of the VxRack Neutrino components. This information is used by the component-registry component.
db-controller platform node Acts as an interface between the MySQL common database and the components that access it. The db-controller component creates, lists, and deletes databases for other VxRack Neutrino components to keep the MySQL database secure.
elasticsearch platform node Stores the processed, time-stamped log entries received from the logstash Platform Service component and then indexes the logs based on the fields provided by logstash.
esrs platform node Provides remote monitoring, diagnosis, and repair through EMC Secure Remote Services (ESRS), which implements VPN access between the VxRack Neutrino data center and EMC Customer Service. Used for alerting EMC Customer Service personnel when attention is required. EMC personnel use ESRS for remote access and debugging of customer equipment.
etcd-service platform node Stores information about the state of the VxRack Neutrino system.
haproxy platform node Balances TCP traffic.
keystone platform node Authenticates and authorizes VxRack Neutrino account users using the OpenStack Keystone Identity Service.
kibana platform node Takes the analyzed data held in the elasticsearch Platform Service component and presents it in the VxRack Neutrino Reports UI as graphs and charts. The kibana component is the web visualization front end for logging data; it presents an aggregated view of logs in the UI.
license platform node Uploads license files.
logcourier platform and cloud compute nodes Collects logs from all of the components in the VxRack Neutrino system.
logstash platform node Parses the log entries collected from multiple logcourier components, extracts information (log level, client IP, class name, category, and so on), processes it into a common format, and then pushes it to the elasticsearch analytics datastore.
mnr-allinone platform node Collects metrics, detects when metric values indicate a failure or other condition that warrants attention, and then issues alerts. Monitors the VxRack Neutrino system and aggregates information into reports.
mysql-galera platform node Stores data as the VxRack Neutrino common database.
nginx platform node Balances both HTML and API traffic. All management traffic is routed through the nginx load balancer.
node-inventory platform node Maintains a list of the nodes in a VxRack Neutrino system, tracks which nodes are allocated to either the Platform or Cloud Compute Services, and maintains certain aspects of the status of the node (for example, whether it is Operational or Offline).
ospf-usvc platform and cloud compute nodes Allows VxRack Neutrino components to declare virtual IP (VIP) and route information that needs to be configured into the L3 switch using the Open Shortest Path First (OSPF) protocol. This component contains the open source Quagga OSPF process and a VxRack Neutrino internal REST API.
platform-controller platform node Installs the subcomponents associated with the Platform Service.
registry platform node Stores and distributes Docker images.
scaleio-controller platform node Manages the ScaleIO block storage system administration. This component manages adding and removing nodes from ScaleIO, creates new storage pools when necessary, and participates in various maintenance operations. For more information on the block storage system, refer to Introduction to VxRack Neutrino storage on page 126.
scaleio-gateway platform node Monitors performance of the block storage system and manages volumes.
scaleio-mdm platform node Services the requests to manage storage. When the Platform Service is deployed, a primary and a secondary Meta Data Manager are deployed on different platform nodes. Each contains the metadata to configure and monitor the VxRack Neutrino ScaleIO block storage system.
scaleio-sds cloud compute node Provides access to the block storage on the storage devices of the cloud compute node where this component is installed. The ScaleIO data server (SDS) manages the storage capacity of a single node.
scaleio-tiebreaker platform node Arbitrates between the two MDMs and can decide which is primary and which is secondary. This tiebreaker MDM is deployed on a third platform node, separate from the primary and secondary MDMs.
ui platform node Provides an aggregated view in the VxRack Neutrino UI.
upgrade platform node Orchestrates upgrade operations. Contains an internal REST API for downloading and performing upgrades.
Cloud Compute Service components
On the Cloud Compute tab, the components that comprise the Cloud Compute Service are listed and designated by the component icon. Each Cloud Compute Service component is listed with its name, health status, and software version. Under each Cloud Compute Service component, the nodes running that component are listed. The nodes are listed by their FQDN and are represented by a Platform Service icon or a Cloud Compute Service icon. All Cloud Compute Service components run on platform nodes, except the cc_nova_compute component, which runs on cloud compute nodes. The Cloud Administrator can view more detailed information on a particular node by clicking the node FQDN, which opens the Node details view on page 110. The following table lists the components that comprise the Cloud Compute Service.
Table 44 Cloud Compute Service components
Cloud Compute Service component Runs on: Description
cc_aodh platform node Contains the OpenStack Alarming (Aodh) API. Provides alarms and notifications based on metrics and events collected by the OpenStack Ceilometer API.
cc_ceilometer platform node Contains the OpenStack Ceilometer telemetry API. Provides metering, usage reporting, and billing services to individual users of the VxRack Neutrino OpenStack private cloud.
cc_cinder_controller platform node Contains the OpenStack Cinder block storage API. Manages volumes and orchestrates volume creation and attachment operations.
cc_glance_controller platform node Contains the OpenStack Glance image service API and the image registry. Images refers to virtual copies of hard disks. Glance allows these images to be used as templates when deploying new virtual machine instances.
cc_heat_controller platform node Contains the OpenStack Heat orchestration API. Allows users to store the requirements of a cloud application in a template file that defines what resources are necessary for that application. The template file helps manage the infrastructure needed for a cloud-native application to run.
cc_horizon platform node Contains the OpenStack Dashboard UI for all the OpenStack components.
cc_memcached platform node Contains the open source memcached caching system, used by the cc_horizon component to optimize OpenStack Dashboard UI performance by alleviating database load.
cc_neutron_controller platform node Contains the OpenStack Neutron networking API. Creates the networking dependencies of a virtual machine and creates/manages virtual networks. Ensures that all OpenStack components can communicate with one another.
cc_neutron_network platform node Provides the routing gateway for the virtual tenant networks. Encapsulation/decapsulation of VXLAN packets occurs on the platform node where this component is installed, and the packets can be forwarded to the physical network.
cc_nova_compute cloud compute node Contains the OpenStack Nova compute engine; a node that has this container running on it is running the VxRack Neutrino Cloud Compute Service and is a cloud compute node.
cc_nova_controller platform node Contains the OpenStack Nova compute API. Manages virtual machines (instances) to handle computing tasks and determines which host a virtual machine is placed on when a request to spawn a virtual machine is made.
cc_rabbitmq platform node Contains the open source RabbitMQ message broker software, which provides the Advanced Message Queuing Protocol (AMQP) layer for all the OpenStack components.
CHAPTER 14
Reports
This chapter describes the reports that can be viewed in the Reports section of the VxRack Neutrino UI and describes how to customize reports.
l Overview............................................................................................................. 150
l Reports summary................................................................................................ 150
l Health................................................................................................................. 151
l Performance .......................................................................................................151
l Alerts.................................................................................................................. 152
l Capacity..............................................................................................................160
l Chargeback.........................................................................................................161
l Inventory.............................................................................................................163
l License............................................................................................................... 164
l Logs.................................................................................................................... 164
l Playbook logs......................................................................................................165
l Customize columns in report tables.................................................................... 165
l Toggle the legend in report tables....................................................................... 166
l Set a reporting period..........................................................................................166
l Export a report.................................................................................................... 168
Overview
Use the Reports section of the VxRack Neutrino UI to see alerts on health, performance, and capacity issues, troubleshoot failures using centralized logs, diagnose performance and capacity problems, forecast resources, and compute chargeback/showback.
Reports summary
The VxRack Neutrino Cloud Administrator and Cloud Monitor can see a total of nine reports in the Reports section of the VxRack Neutrino UI: health, performance, alerts, capacity, chargeback, inventory, license, logs, and playbook logs. The VxRack Neutrino Account Administrator and Account Monitor see a subset of reports in the VxRack Neutrino UI: performance, alerts, capacity, chargeback, and inventory.
The following table describes the reports available in the VxRack Neutrino UI.
Table 45 List of reports shown in the VxRack Neutrino UI
Report name Description
Health Up-to-the-minute heat maps and availability reports for nodes, disks, switches, and services, plus hardware sensor data.
Performance System-wide information on CPU and memory utilization, IOPS, and throughput for nodes and services, and utilization and packet information for network switches.
Alerts Active and historical alerts by severity, category, service, and account. Use the gear icon to configure alert triggers and clearances.
Capacity Used and available resources, both system-wide and by account. You can also forecast resource usage by running a simulation.
Chargeback Information on services and resource charges, per account and per service. You can configure costs on the default service level, and add new service levels.
Inventory Service-specific information such as instance data for the Cloud Compute Service.
License Information on your entitlements for Cloud Compute Service CPU cores and storage, and how many are currently consumed.
Logs Visualization of log entries by service and category, per node. Highly customizable.
Playbook logs Lists ansible-playbook execution logs, which show verbose output from playbook runs for install/upgrade operations and operations performed on nodes, including adding/removing nodes from the Cloud Compute Service.
Health
The health reports contain up-to-the-minute heat maps and availability reports for nodes, disks, switches, and services, plus hardware sensor data.
Table 46 Summary of health reports
Health reports Description Role required
Infrastructure Nodes Heat maps (average over past 15 minutes), summary listings (average over past week), and node details. Cloud Administrator or Cloud Monitor
Storage Heat maps (average over past 15 minutes), summary listings (average over past week), and disk details. Cloud Administrator or Cloud Monitor
Network Heat maps (average over past 15 minutes), summary listings (average over past week), and switch details. Cloud Administrator or Cloud Monitor
Platform Heat maps (average over past 15 minutes), summary listings (average over past week), and component details. Cloud Administrator or Cloud Monitor
Environmental Sensor readings and health. Cloud Administrator or Cloud Monitor
Cloud Compute Nodes Heat maps (average over past 15 minutes), summary listings (average over past week), and details of node health. Cloud Administrator or Cloud Monitor
Components Heat maps (average over past 15 minutes), summary listings (average over past week), and details of component health. Cloud Administrator or Cloud Monitor
Performance
Performance reports include system-wide information on CPU and memory utilization, IOPS and throughput for nodes and services, and utilization and packet information for network switches. For each account, reports include resource utilization and performance metrics such as IOPS and bandwidth.
Table 47 Summary of performance reports
Performance reports Description Role required
Infrastructure Storage Disk read and write IOPS and bandwidth Cloud Administrator or Cloud Monitor
Network Switch CPU utilization, memory utilization, and port details Cloud Administrator or Cloud Monitor
Platform - Nodes CPU utilization, memory utilization, disk IOPS, disk and network throughput Cloud Administrator or Cloud Monitor
Platform - Components Container CPU and memory utilization, IOPS and throughput Cloud Administrator or Cloud Monitor
Cloud Compute Nodes CPU utilization, memory utilization, storage IOPS, network rate Cloud Administrator or Cloud Monitor
Instances Instances, CPU and memory utilization, storage IOPS, network utilization Account Administrator, Account Monitor, Cloud Administrator, or Cloud Monitor
CPU utilization CPU utilization for instances Account Administrator, Account Monitor, Cloud Administrator, or Cloud Monitor
Memory utilization Memory utilization for cloud compute nodes Account Administrator, Account Monitor, Cloud Administrator, or Cloud Monitor
Accounts <Specific> account Instances, and CPU, memory, storage, and network utilization for the selected account Cloud Administrator
Alerts
These reports include active and historical alerts by severity, category, type, and account. Use the gear icon to configure when alerts are triggered and cleared.
Table 48 Summary of alert reports
Alerts reports Description Role required to view alerts Role required to configure alerts
Active alerts Alerts with a state of active appear initially in this report. Cloud Administrator, Cloud Monitor, Account Administrator, or Account Monitor Cloud Administrator or Account Administrator
Historical alerts All active and inactive alerts over time appear in this report.
Accounts Active and historical alerts, by account.
Modify alert definitions
Alert definitions configure the data workflow (filtering, operations, conditions) that determines how alerts are triggered or cleared.
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Reports > Alerts.
2. On the Alerts page, click the gear icon.
3. On the Alert Definitions page, click the alert type folder icon for the alert that you want to modify. For example, if you want to modify a node alert, click the folder icon with the Nodes name next to it.
The Nodes folder opens with a list of configurable node alerts.
4. Click the alert name.
5. On the alert definition configuration page, modify the alert definition by refining the filter settings and click Save. An example follows.
Example
This section provides an example of how a Cloud Administrator can modify an alert definition by refining the alert filter settings.
In this example, the Cloud Admin wants to modify the High Memory Utilization alert definition for instances so that alerts are triggered only for instances in the Finance account in projects beginning with rev. (The default alert definition triggers high memory utilization alerts for all instances in the VxRack Neutrino system.)
Procedure
1. On the Alerts page of the VxRack Neutrino UI, the Cloud Admin clicks the gear icon and, on the Alert Definitions page, drills down into the Services > Cloud Compute > Instances folder, right-clicks the High Memory Utilization alert, and selects Configure as shown below. (The Cloud Admin can also simply click the alert name.)
Note
If an alert is disabled, you must first enable it before you can configure it. You can enable an alert by right-clicking it and selecting Enable.
2. The alert definition configuration page opens for this alert as shown in the following screenshot.
The default alert filter is for everything, but the Cloud Admin can change this by right-clicking in the Everything box. This brings up the following right-click menu options:
l Refine - Allows the Cloud Admin to refine/modify the existing filter. The Cloud Admin can do this by using a wizard, with an expression, or by a device, component, metric type, and so on.
l Edit Expression - Allows the Cloud Admin to directly edit/create an expression in the box. For example, to trigger alerts only for a specific host (host is 10.282.35.29), he or she could type the expression host=='10.282.35.29'. When typing in the box, delete the asterisk that is shown in the box by default.
l Invert - Inverts the existing filter (creates the opposite of the filter). For example, for the filter host is 10.282.35.29, clicking Invert changes the filter to host is not 10.282.35.29, so alerts are triggered for all hosts in the system except host 10.282.35.29.
l Remove - Deletes the existing filter.
3. In this example, the Cloud Admin refines the default everything filter by selecting Refine > using a wizard.... The Cloud Admin wants to filter the alerts for a specific account named Finance, so the Cloud Administrator begins typing Finance in the box, and the Finance (property = domainnm) field automatically appears. The Cloud Admin selects this field and clicks OK. The account name corresponds to the domainnm (domain name) property in the VxRack Neutrino reporting logic, since a VxRack Neutrino account maps to a single OpenStack domain.
4. The Cloud Admin clicks OK and the filter displays as domainnm is Finance, as shown in the following screenshot.
5. The Cloud Admin can build a complex filter by chaining the filters together with And or Or operators. This is done by right-clicking the existing filter in the box, selecting And > using a wizard..., and then selecting the appropriate parameter. In this example, the Cloud Admin selects the projname field, the starts with operator, and enters rev as text.
The final result is a refined filter that triggers High Memory Utilization alerts for instances in projects that start with rev in the Finance account.
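The boolean logic of the refined filter built in this example can be sketched as follows. This is illustrative only: the product evaluates the filter in its own reporting engine, not in Python, and the dictionary fields below simply mirror the domainnm and projname properties named above.

```python
def alert_matches(instance):
    """Return True when an instance satisfies the example's refined
    filter: account (domainnm) is Finance AND project (projname)
    starts with rev."""
    return (instance["domainnm"] == "Finance"
            and instance["projname"].startswith("rev"))

print(alert_matches({"domainnm": "Finance", "projname": "revenue-q3"}))  # True
print(alert_matches({"domainnm": "Finance", "projname": "testing"}))     # False
```

Chaining with Or instead of And would correspond to replacing `and` with `or` in the expression.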
Settable alerts
Table 49 Alerts that can be set by the Cloud Administrator in the VxRack Neutrino UI Reports section
Category Name Description Generates ESRS call-home event?
Bricks Power Supply Failure Alerts when power supply errors are detected. Yes
Disks Direct-Attached Disk Health Alerts when disk health indicates errors or failure. Yes
Disk-On-Module Disk Health Alerts when disk health indicates errors or failure. Yes
Licensing License Expiration Alerts when the license is near expiration or has expired. Yes
License Violation (Cores) Alerts when the allocated cores exceed the licensed limit. No
License Violation (Storage) Alerts when the allocated storage capacity exceeds the licensed limit. No
Nodes BMC Unavailable Alerts when the BMC interface is unavailable. Yes
Correctable DIMM Error Alerts when excessive correctable memory errors are detected. Yes
High CPU Utilization Alerts when high CPU utilization is detected. No
High File System Utilization Alerts when high file system utilization is detected. No
Machine Check Exception Alerts when a machine-check exception is detected. Yes
Node Reboot Alerts when the node has rebooted. No
Node Unavailable Alerts when the node is not reachable. Yes
NTP Service Alerts when the node hardware clock is not synchronized with its NTP time server. No
Processor CATERR Alerts when a CATERR is detected. Yes
Processor IERR Alerts when an IERR is detected. Yes
System Fan Alerts when fan speed is out of range. Yes
Uncorrectable DIMM Error Alerts when an uncorrectable memory error is detected. Yes
Nodes > Temperature | Parameters are brick specific, and include temperature alerts for physical brick components such as the LAN NIC, baseboard CPU, IO module, and power supply. | Alerts when a temperature sensor is out of range. See the UI for the complete list. | No
Services > Cloud Compute > Components | API Endpoint Availability | Alerts when an API endpoint is unreachable. | Yes
Services > Cloud Compute > Components | RabbitMQ Cluster Member Mismatch | Alerts when RabbitMQ detects a cluster member mismatch between configured and member nodes. | Yes
Services > Cloud Compute > Components | RabbitMQ Member Nodes Mismatch | Alerts when RabbitMQ detects a nodes mismatch between configured and member nodes. | Yes
Services > Cloud Compute > Components | RabbitMQ Overflowing Queue | Alerts for RabbitMQ overflowing queues without consumers. | Yes
Services > Cloud Compute > Components | RabbitMQ Partition Size | Alerts when RabbitMQ has partitions. | Yes
Services > Cloud Compute > Components | RabbitMQ Queue Size | Alerts for RabbitMQ queue message count. | Yes
Services > Cloud Compute > Components | RabbitMQ Resource Utilization | Alerts for RabbitMQ file descriptors, memory, processes, and socket utilization. | Yes
Services > Cloud Compute > Components | RabbitMQ Triggered Alarms | Alerts for RabbitMQ memory or disk alarms. | Yes
Services > Cloud Compute > Components | RabbitMQ xinetd Status | Alerts for RabbitMQ container xinetd service status. | Yes
Services > Cloud Compute > Instances | Availability, High CPU Utilization, High Memory Utilization | Alerts on low instance availability, high CPU utilization, and high memory utilization. | No
Services > Cloud Compute > Nodes | Availability, High CPU Utilization, High Disk Utilization, High Memory Utilization | Alerts on low hypervisor availability, high CPU utilization, high disk utilization, and high memory utilization. | No
Software Components | Docker Container Availability | Alerts when a Docker container is down. | Yes
Software Components | Docker Containers CPU Utilization | Alerts when a Docker container has high CPU utilization. | No
Software Components | Docker Containers Memory Utilization | Alerts when a Docker container has high memory utilization. | No
Software Components | Docker Containers Swap Utilization | Alerts when a Docker container has high swap utilization. | No
Software Components | Docker Data Space | Alerts when Docker data space usage is high. | Yes
Software Components | Docker Metadata Space | Alerts when Docker metadata space usage is high. | Yes
Software Components | Docker Service Availability | Alerts when a Docker service is down. | Yes
Software Components | Process Availability | Alerts when an application is down on a node. | Yes
Software Components | Process Availability By Count | Alerts when an application is down across all nodes in the cluster. (This alert is based on the count of running processes across the cluster and only applies to the dnsmasq and glance-registry processes.) | Yes
Software Components > Galera | Cluster ConfID | Alerts if mismatches in the cluster ConfID value are detected. | Yes
Software Components > Galera | Cluster Node | Alerts if mismatches in the cluster Node value are detected. | Yes
Software Components > Galera | Cluster Size | Alerts if mismatches in cluster size are detected. | Yes
Software Components > Galera | Cluster Status | Alerts if mismatches in cluster status are detected. | Yes
Software Components > Galera | Database Connection | Alerts if unable to connect over the specified period.
Software Components > M&R Health | Backend File Count | Alerts when the backend has too many .tmp files, which might indicate a database problem. | No
Software Components > M&R Health | Backend Processing Delay | Alerts when processing delay is too high, which might indicate a database problem. | No
Software Components > M&R Health | Component Error Count | Alerts when a component produces errors, which might indicate a configuration problem. | No
Software Components > M&R Health | Load Balancer Error Count | Alerts when the load balancer produces errors, which might indicate a configuration problem. | No
Storage | Storage Pool Utilization | Alerts when storage pool utilization exceeds the threshold. | No
Switches | Switch FRU Failure | Alerts when a switch FRU component fails. | Yes
Switches | Switch Port Down | Alerts when a switch port is down. | Yes
Switches | Switch Port High Discard Rate | Alerts when switch port incoming discards are too high. | Yes
Switches | Switch Port High Error Rate | Alerts when the switch port error rate is too high. | Yes
Switches | Switch Port High Queue Drop Rate | Alerts when switch port outgoing discards are too high. | Yes
Switches | Switch Port High Utilization | Alerts when switch port utilization is too high. | Yes
Switches | Switch Unavailable | Alerts when a switch is unavailable. | Yes
Actions for managing alerts

Users with the Cloud Administrator or the Account Administrator role can mark alerts as acknowledged, and take ownership of alerts. These actions are convenient, optional ways to track, organize, and resolve alerts.

Table 50 Administrator options for managing alerts (each action requires the Cloud Administrator or Account Administrator role)

Action | Description
Acknowledge | Mark the alert acknowledged. When a MOMENTARY type of alert is acknowledged, the state of the alert changes from ACTIVE to INACTIVE; INACTIVE alerts are no longer reported in the Active Alerts report. When a DURABLE type of alert is acknowledged, it remains ACTIVE and remains on the All Alerts report and related subset of reports with Yes in the ACK column. Also, ownership is automatically assigned to the user who acknowledged it.
Assign | Set the owner field to another user.
Force-Close | Set any alert to INACTIVE.
Release ownership | Release the ownership of an acknowledged alert so that another Administrator can own it.
Take ownership | Take ownership of an alert without affecting the acknowledgment. An alert can have only one owner at a time; another user with Administrator privileges cannot take ownership of an alert that is already owned. To transfer an alert, the current owner must release ownership, log out, and log in again, after which another user can take ownership.
Unacknowledge | Undo an acknowledge action. The alert state returns to ACTIVE.
Send alerts through email

You can configure VxRack Neutrino to send alerts to a list of recipients through email.
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Reports > Alerts.
2. On the Alerts page, click the gear icon.
3. On the Alert Definitions page, click the Email Settings button at the bottom of the page.
4. On the SMTP Settings tab, enter values in the From address (email address from which alerts will originate), SMTP host, SMTP port, and SMTP authentication mode fields. Click Save.
5. On the Global Context tab, specify email recipients globally by typing the names of the intended recipients, separated by commas, in the Email Recipients field. Click Save.
Capacity

Capacity reports show used and available resources, both system-wide and by account. You can also forecast resource usage by running a simulation.

Capacity reports are at Reports > Capacity in the UI. You need the Cloud Administrator or Cloud Monitor role to see capacity reports for all accounts, and the aggregate reports. With the Account Monitor or Account Administrator role, you can get capacity reports for your account only.
Table 51 Summary of capacity reports

Role required: Cloud Administrator or Cloud Monitor (sees capacity reports for all accounts and aggregate reports); Account Monitor or Account Administrator (sees capacity reports for their own account).

Capacity report | Description
Infrastructure > Storage | Disk model raw capacity, free and used capacity, number of disks by type, used and free capacity over time, cloud compute capacity over time, platform capacity over time.
Infrastructure > Network | Port utilization per data switch port.
Infrastructure > Platform | CPU, storage, and memory usage trends and forecasts per platform node.
Cloud Compute > Compute Nodes Current | CPU, storage, and memory usage trends and forecasts per individual cloud compute node and in aggregate per cloud compute node.
Cloud Compute > Capacity Planning | Trends and forecasts for cloud compute nodes.
Cloud Compute > Flavors | Per flavor: instances over-provisioned and instances over-used, based on CPU and memory utilization.
Cloud Compute > Images | Heat map of running images, list of images, image utilization forecast.
Cloud Compute > Reclaimable Instances | Disk space (GB): running versus shutdown instances, instances shut down, volumes.
Cloud Compute > Reclaimable Volumes | Disk space (GB): used versus unattached volumes, unattached volumes, and snapshots.
Accounts > <Specific> Account | Flavors, images, reclaimable instances, and reclaimable volume reports per individual account.
Simulate adding instances

Run a simulation to learn the amount of memory, disk, and vCPUs that a given number of virtual machines will need over a period of time. The simulation is at the system level.
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the VxRack Neutrino navigation pane, click Reports > Capacity.
2. On the Capacity page, click the Cloud Compute tab.
3. Click Capacity Planning.
4. Under Resource Pools, right-click Compute or (Unassociated) and select Simulate Adding Instances.
5. In the Name field, type the name for your simulation.
6. In the Flavor field, select an instance flavor from the list.
7. In the Number of Instances field, type the number of instances for this simulation.
8. In the Date of Change field, select a date in the future on the calendar.
9. (Optional) In the Comments field, type your comments, and then click OK.
Results
The simulation completes and you can observe the CPU, memory, and storage trend reports that are generated. You can modify a simulation by selecting it from the table at the bottom of the Capacity Planning page.
Chargeback

The chargeback reports provide information on compute, network, and storage resource costs, per account and per project. You can configure costs using the default compute/storage service levels, and add new service levels.

Chargeback reports are at Reports > Chargeback in the VxRack Neutrino UI. You need the Cloud Administrator or Cloud Monitor role to see chargeback reports for all accounts, and aggregate reports. With the Account Administrator or Account Monitor role, you can get chargeback reports for your account only.

The Accounts tab on the Chargeback page lists all accounts in the VxRack Neutrino system with the total resource cost of each account for the current month, previous month, and last year.

Clicking an account in the list opens the Chargeback > Accounts > <account_name> page, which displays the account chargeback report. This report breaks out the account's cost by resource type (CPU, Memory, Network, Storage), and by project.

Clicking a resource type cost displays information on how that cost was calculated. For example, clicking the Storage resource type displays information such as the service level used to determine the cost (Default), the current month usage (700 GB), the cost per GB ($0.12/month), the cost for the current month ($84), and the cost for the previous month ($65). In the Actions column, you can click a link to view the usage history of the resource type (for example, View Storage Usage History).

On the Chargeback > Accounts > <account_name> page, clicking a project in the list of project costs associated with the account breaks out costs as follows:
l Cost by resource type - CPU, Memory, Network, Storage
l Instances - Compute Service Level (Default or user-defined), Storage Service Level (Performance or Capacity), Flavor, Image, Cost for Current Month, Cost for Previous Month
l Volumes - Volume Name, Provisioning Type (Thin or Thick), Service Level (Performance or Capacity), Current Month Usage, Cost per GB, Cost for Current Month, Cost for Previous Month
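The storage cost example above (700 GB at $0.12 per GB per month, for $84) is straightforward rate-times-usage arithmetic. A minimal sketch — the figures are the sample values from the example, not product defaults, and the helper name is hypothetical:

```python
# Illustrative chargeback arithmetic; rate and usage are the sample figures
# from the Storage example above, not product defaults.
def monthly_cost(usage_gb, rate_per_gb_month):
    """Cost for the month: allocated GB times the service-level rate, in dollars."""
    return round(usage_gb * rate_per_gb_month, 2)

cost = monthly_cost(700, 0.12)  # the Storage example: 700 GB at $0.12/GB-month
```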
Edit chargeback service level settings

VxRack Neutrino chargeback reporting uses service levels to calculate resource costs. Cloud Administrators can configure the VxRack Neutrino out-of-the-box compute and storage service levels to reflect the appropriate cost structure for compute, storage, and network resources in the system, or they can define additional customized service levels.
Before you begin
This operation requires the Cloud Administrator role in VxRack Neutrino.
Procedure
1. On the VxRack Neutrino navigation pane, click Reports > Chargeback.
2. On the Chargeback page, click the gear icon to open the Groups Management page.
3. To edit the Default compute service level:
a. On the Groups Management > Compute tab, click the Edit button.
b. Edit the settings in the Default compute service level that you want to modify. Editable settings are:
l Network cost per GB transferred
l Memory cost per GB allocated per hour
l CPU cost per vCPU per hour
4. To edit the Capacity or Performance storage service levels:
a. On the Groups Management page, click the Storage tab.
b. Click the storage service level you want to modify, either Capacity or Performance.
The Capacity service level defines storage costs for the capacity nodes that contain HDD disks.

The Performance service level defines storage costs for the performance nodes that contain SSD disks.
c. Click the Edit button and edit the storage cost per GB allocated per month.
5. Select service level members by adding one or more rules. The rules determine which properties (device types and names, component types and names, images, flavors, storage pools, projects, and domains/accounts) belong to the service level, based on the selection criteria that you set up. You can also set up advanced rules, which let you define service level membership based on any property.
6. Click Show Members to see which devices/components/images/flavors/storage pools/projects/accounts qualify for membership. Modify the rules if necessary.
7. Click Save.
Export and import chargeback rule definitions

Cloud Administrators can export and import chargeback rule definitions. The exported files can be useful as a backup or for importing rules to a new setup.
Before you begin
This operation requires the Cloud Administrator role in VxRack Neutrino.
Procedure
1. On the VxRack Neutrino navigation pane, click Reports > Chargeback.
2. On the Chargeback page, click the gear icon to open the Groups Management page.
3. At the bottom of the Groups Management page (on either the Compute or Storage tabs), click Export Groups or Import Groups.
Option | Description
Export Groups | For the Default compute service level, exports the rule definitions to the file Groups Management_Compute_yyyy_mm_dd_hh-mm.zip in the browser download directory. The zip file contains compute-service-levels.csv and PTF-Compute-Chargeback.xml. For the storage service levels, exports the rule definitions to the file Groups Management_Storage_yyyy_mm_dd_hh-mm.zip in the browser download directory. The zip file contains storage-service-levels.csv and PTF-Storage-Chargeback.xml.
Import Groups | Import an existing zip file of rules settings. The zip file that you import must contain the files compute-service-levels.csv and PTF-Compute-Chargeback.xml, or storage-service-levels.csv and PTF-Storage-Chargeback.xml. Examine files from an exported groups zip file to see the required file structure.
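Before importing, it can help to confirm that an archive contains one of the required file pairs. A minimal sketch, assuming you inspect the archive's file names; the helper name is hypothetical and this is not part of the product:

```python
import zipfile  # used in the usage comment below to read an archive's names

# Required file pairs for Import Groups, per the table above.
REQUIRED_PAIRS = [
    {"compute-service-levels.csv", "PTF-Compute-Chargeback.xml"},
    {"storage-service-levels.csv", "PTF-Storage-Chargeback.xml"},
]

def has_required_pair(names):
    """True when the file-name list contains at least one complete required pair."""
    present = set(names)
    return any(pair <= present for pair in REQUIRED_PAIRS)

# Usage (hypothetical path):
#   has_required_pair(zipfile.ZipFile("export.zip").namelist())
```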
Inventory

Inventory reports provide service-specific information such as instance data for the Cloud Compute Service. The report for Cloud Administrators also contains system-wide and per-account listings of logical and hardware resources, including bricks, nodes, switches, containers, etc.
Table 52 Summary of system inventory reports

Inventory report | Description | Role required
Infrastructure > Nodes | Bricks and nodes by model, and list of nodes with node-specific information such as the service running on the node, number of CPUs, memory, and filesystem/CPU/memory utilization | Cloud Administrator or Cloud Monitor
Infrastructure > Network | Switches by model, VLANs, and list of switches with switch-specific information such as IP address, model, number of ports, and availability | Cloud Administrator or Cloud Monitor
Infrastructure > Platform | Docker images for platform components with the number of nodes and containers where they are running | Cloud Administrator or Cloud Monitor
Cloud Compute > Instances | Cloud compute instances by power state, flavor, image, and attributes | Account Administrator, Account Monitor, Cloud Administrator, or Cloud Monitor
Cloud Compute > Networks | OpenStack network connectivity and projects | Account Administrator, Account Monitor, Cloud Administrator, or Cloud Monitor
Cloud Compute > Image Summary | Docker images for cloud compute components with the number of nodes and containers where they are running | Account Administrator, Account Monitor, Cloud Administrator, or Cloud Monitor
Accounts > <Specific> Account | Number of instances by power state, flavor, and image per account | Cloud Administrator or Cloud Monitor
License

These reports provide information on your entitlements for Cloud Compute Service CPU cores and storage, and the number of cores and storage quantity currently consumed.

Table 53 Summary of licensing reports

License report | Role required
Entitlements (service, type, entitlement, storage/cores consumed, start and end dates) | Cloud Administrator or Cloud Monitor
Cloud compute CPU cores, number of entitlements graphed with number consumed | Cloud Administrator or Cloud Monitor
Cloud compute storage, number of entitlements graphed with number consumed | Cloud Administrator or Cloud Monitor
Logs

These reports contain customizable visualizations of logs collected from all VxRack Neutrino components.

On the Logs page, the logs reports are presented through Kibana, an open source data visualization platform. Refer to the Kibana documentation on www.elastic.com for information on generic features such as adding visualizations, managing dashboards, using filters, searching, and the like. The following table describes the log reports available on the Logs page.
Table 54 Summary of logs reports

Logs report | Description | Default visualizations | Role required
Discover | Explore logs interactively. By default, discovers all log entries. | Histogram: count of log entries per 30 minutes. Documents table: log entries that match the search query. | Cloud Administrator or Cloud Monitor
Dashboard | Summary of log activity during the last 24 hours. Time period and refresh period are configurable. | Logs over time: count of log entries every 30 minutes. Logs severity: count of log entries by severity. Logs category: count of log entries by category (unknown, performance, audit, error, configuration, capacity, availability). Logs by type: platform versus cloud compute nodes. VxRack Neutrino components: count of log entries by container/component. Logs for platform nodes and services: for each platform node, count of log entries by container/component. | Cloud Administrator or Cloud Monitor
Playbook logs

The Playbook Logs page lists ansible-playbook runs. VxRack Neutrino uses ansible-playbook runs to perform installation and upgrade steps, and to perform node modifications when actions are taken on nodes, such as adding or removing nodes from the Cloud Compute Service, powering nodes on or off, resuming nodes, and so on. The Playbook Logs page lists the playbooks by name, the status of the playbook execution, and the creation and update timestamp of the playbook. When the Cloud Administrator clicks a playbook execution log in the list, it shows verbose output from the playbook, which can be extremely useful for troubleshooting.
Customize columns in report tables

In the tables displayed in the Reports pages of the VxRack Neutrino UI, you can change which columns are displayed, and how columns are sorted.
Before you begin
You must have Cloud Administrator, Cloud Monitor, Account Administrator, or Account Monitor privileges.
Procedure
1. Hover over the top right of any table in the Reports section of the UI (except Logs), and click the Customize Table Columns icon.

For example, on the Alerts page, hover over the top right of the Active Alerts table, and click the icon.
2. From the Table Customization dialog, select the columns you want to display, and the columns to sort by.
3. Click Apply.
Toggle the legend in report tables

In the graphs and treemaps displayed in the Reports pages of the VxRack Neutrino UI, you can toggle the legend on or off.
Before you begin
You must have Cloud Administrator, Cloud Monitor, Account Administrator, or Account Monitor privileges.
Procedure
1. Hover over the top right of any graph or treemap in the Reports section of the UI (except Logs), and click the Toggle legend icon.

For example, on the Health > Infrastructure > Nodes page, in the top right of the Node Health treemap, click the icon.
Results
The legend displays below the treemap indicating the values for the treemap colors.
Set a reporting period

Reports have default reporting periods, but you can change the reporting period to control the time covered in a report. You can also block out time ranges, such as known maintenance periods that would skew metrics.
Before you begin
You must have Cloud Administrator, Cloud Monitor, Account Administrator, or Account Monitor privileges.
Procedure
1. In the top right of any of the Reports pages (except for the Logs and Playbook Logs pages), click the Display icon to show the settings that control the reporting period.
2. Specify how frequently you want the values collected for your reports. From the Display <x> values drop-down list, select 1 hour, 1 day, 1 week, or real-time.
Real-time values are the actual collected values. Aggregated values (1 hour, 1 day, and 1 week) are the compound of several real-time values. Aggregated values improve report performance and save disk space. Aggregated values are created and updated as real-time values are collected. This way, aggregated values are always current and precise.
3. Specify how you want a value aggregated over its collection time. For an aggregated value (1 hour, 1 day, and 1 week), from the Using x aggregation drop-down list, select one of average, min, max, sum, last (the last value), count (the count of received values), or last timestamp.
4. Specify the reporting time range in the Time Range Quick Switch drop-down lists.
a. In the first drop-down list, select previous, last, or current. This drop-down list specifies when the reporting period starts and ends.
Examples are shown in the following table.

b. In the second drop-down list, select the reporting duration. The drop-down list presents a list of possible durations.
The custom option lets you specify an exact span of time, such as 1M2w (1 month and 2 weeks) or 1h45m (1 hour and 45 minutes).
The calendar value lets you specify an exact date/time range.

c. To exclude days and hours from the metrics, click the small calendar icon next to the second drop-down list. For example, you can exclude metrics collected for 11:00 to 11:59 PM every Monday.
5. Click Apply.
The reporting period and collection frequency of the data are reflected in the reports shown in the Reports pages of the VxRack Neutrino UI.
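The custom duration syntax in step 4b (for example 1M2w or 1h45m) can be modeled with a small parser. This is an illustrative sketch only: the unit letters are inferred from the two examples (plus d for days as an assumption), a month is assumed to be 30 days, and the product's actual parsing rules may differ.

```python
import re

# Minutes per unit letter; the 30-day month and the d unit are assumptions
# for illustration, inferred from the examples 1M2w and 1h45m.
UNIT_MINUTES = {"M": 30 * 24 * 60, "w": 7 * 24 * 60, "d": 24 * 60, "h": 60, "m": 1}

def span_minutes(span):
    """Total minutes in a custom span such as '1M2w' or '1h45m'."""
    parts = re.findall(r"(\d+)([Mwdhm])", span)
    return sum(int(count) * UNIT_MINUTES[unit] for count, unit in parts)

span_minutes("1h45m")  # 1 hour and 45 minutes -> 105 minutes
```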
Table 55 Examples of previous, last, and current time selection settings

Value | Explanation
previous | The reporting period starts and ends in the past. Examples:
  Previous hour: If the time is currently 10:15 AM, metrics are for 9:00 AM to 10:00 AM.
  Previous week: If today is Thursday, March 12, metrics are for the previous week, Monday, March 2 to Sunday, March 8, inclusive.
  Previous month: If today is March 12, metrics are for the previous month, February 1 to February 28, inclusive.
last | The reporting period starts at the appropriate interval counting back from the current time, and ends at the current time. Examples:
  Last hour: If the time is currently 10:15 AM, metrics are for 9:15 AM to 10:15 AM.
  Last week: If today is Thursday, March 12, metrics are for Thursday to Thursday, March 5 to March 12.
  Last month: If today is March 12, metrics are for February 12 to March 12, inclusive.
current | The reporting period starts in the past and ends in the future, and includes the current point in time. (Metrics are for a partial reporting period.) Examples:
  Current hour: If the time is currently 10:15 AM, metrics are for 10:00 AM to 11:00 AM.
  Current week: If today is Thursday, March 12, metrics are for Monday to Sunday, March 9 to March 15, inclusive.
  Current month: If today is March 12, metrics are for March 1 to March 31, inclusive.
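The previous/last/current semantics in Table 55 are plain date arithmetic. An illustrative sketch of the hourly case (the hourly_window helper is hypothetical, not a product API), using the table's 10:15 AM example:

```python
from datetime import datetime, timedelta

def hourly_window(now, mode):
    """Start and end of the hourly reporting window for 'previous', 'last', or 'current'."""
    top = now.replace(minute=0, second=0, microsecond=0)  # top of the current hour
    if mode == "previous":   # the full hour before the current one
        return top - timedelta(hours=1), top
    if mode == "last":       # the 60 minutes ending right now
        return now - timedelta(hours=1), now
    if mode == "current":    # the (partial) hour containing now
        return top, top + timedelta(hours=1)
    raise ValueError(mode)

now = datetime(2015, 3, 12, 10, 15)  # Thursday, March 12, 10:15 AM
# previous -> 9:00-10:00, last -> 9:15-10:15, current -> 10:00-11:00
```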
Export a report

You can export any report to csv, xls, pdf, and other popular formats.
Before you begin
You must have Cloud Administrator, Cloud Monitor, Account Administrator, or Account Monitor privileges.
Procedure
1. In the top right of any of the Reports pages (except for the Logs page), click the Export icon.
2. In the Export dialog, click an output format in the list. The report is generated in the user's default download location.
Some output types have optional configuration settings.
CHAPTER 15
License, ESRS, upgrade, and security settings
This chapter describes how to upload a license, register for EMC Secure Remote Services (ESRS), perform upgrade tasks, and generate/upload security certificates in the Settings section of the VxRack Neutrino UI.

l License ............................................................................... 172
l EMC Secure Remote Support .............................................. 173
l Upgrade VxRack Neutrino software ..................................... 175
l Security certificates ............................................................ 177
License

Cloud Administrators can view license information in the License page of the VxRack Neutrino UI (Settings > License). They can also upload new VxRack Neutrino licenses and check existing license status, entitlements, or expiration dates.
Obtain license
Before you begin
To generate the VxRack Neutrino license files, you must have received a VxRack Neutrino License Authorization Code (LAC) email from EMC that contains the LAC and a license activation link.
Procedure
1. Go to EMC Software Licensing Central at https://licensing.emc.com by clicking the license activation link provided in the VxRack Neutrino LAC email that is sent to EMC customers after purchasing VxRack Neutrino.
2. In EMC Software Licensing Central, follow the on-screen instructions and enter the LAC.
3. Generate and download the license files.
4. Upload the license files in the New License page of the VxRack Neutrino UI (see Upload license on page 172).
Upload license
Before you begin
You must have Cloud Administrator privileges.
The VxRack Neutrino license files must have been downloaded from EMC Software Licensing Central at https://licensing.emc.com and stored where the Cloud Administrator can access them.
Procedure
1. Click Settings > License > New License.
2. Click the Browse button to navigate to your local copy of the license file.
3. Click the Upload button to add the licenses.
View license
Before you begin
You must have Cloud Administrator privileges.
The License page displays the following information in the license list:
l Name: License names for the installed licenses in the VxRack Neutrino system. There are two VxRack Neutrino licenses: CLOUDCOMPUTE_CORES and CLOUDCOMPUTE_STORAGE. VxRack Neutrino licenses are purchased in terms of cores (physical CPUs) and terabytes (TB) of storage. The amount of cores and TB of storage a customer is entitled to is determined by the types of nodes that the customer has selected at ordering time.
l Status: Licensed or Expired
l Type: Permanent, Subscription, or Evaluation
l Entitlement: For storage licenses, describes the maximum storage licensed (for example, 300 TB). For the cloud compute core license, describes the maximum number of cores licensed (for example, 100 cores).
l Start Date: License activation date
l End Date: License expiration date
EMC Secure Remote Support

EMC Secure Remote Support (ESRS) is the recommended way for EMC technical support to receive notification of potential system issues. Additionally, ESRS enables authorized EMC personnel to troubleshoot system errors remotely by analyzing logs. Cloud Administrators can view ESRS information on the ESRS page in the VxRack Neutrino UI (Settings > ESRS). On the ESRS page, they can configure ESRS settings and register the VxRack Neutrino system with EMC Support.

The ESRS page shows whether the VxRack Neutrino system is connected to ESRS. If it is connected, a Connected status with a green checkmark displays in the top right of the page. If it is not connected, a Not Connected status with a red x displays in the top right of the page.
Register with ESRS
Before you begin
You must have Cloud Administrator privileges.
The VxRack Neutrino license key must be installed.
ESRS must be installed and connected to EMC.
Procedure
1. Click Settings > ESRS, and then click the Register tab.
2. On the Register EMC Secure Remote Service page, enter the following ESRS settings:
l Gateway IP Address (IP address of the ESRS Virtual Edition gateway server)
l Gateway Port (port on the ESRS Virtual Edition gateway server)
l Username
l Password
Use your customer support.emc.com credentials for the Username and Password fields.
3. Click Validate Settings.
VxRack Neutrino checks to ensure that the ESRS parameters entered are valid and that the ESRS connection can be established.
4. Click Save.
Edit ESRS settings
Before you begin
You must have Cloud Administrator privileges.
The VxRack Neutrino system must be connected to the ESRS gateway.
Procedure
1. Click Settings > ESRS, and then click the Edit tab.
2. On the Edit EMC Secure Remote Service page, edit the following ESRS settings:
l Gateway IP Address (IP address of the ESRS Virtual Edition gateway server)
l Gateway Port (port on the ESRS Virtual Edition gateway server)
l Username
l Password
3. Click Validate Settings.
VxRack Neutrino checks to ensure that the ESRS parameters entered are valid and that the ESRS connection can be established.
4. Click Save.
Disable call home
Before you begin
You must have Cloud Administrator privileges.
The VxRack Neutrino system must be connected to the ESRS gateway.
Cloud Administrators can use this feature to temporarily disable ESRS call home alerts. They should use it during planned maintenance activities, or during troubleshooting scenarios that require taking nodes offline, to prevent flooding ESRS with unnecessary alerts.
Procedure
1. Click Settings > ESRS, and then click the Disable Call Home tab.
A warning appears and displays the following message: This disables call homes on critical alerts and will affect EMC's ability to support the product properly. Would you like to proceed?
2. Click Yes.
On the ESRS page, the Call Home status displays as Disabled.
Deregister from ESRS
Before you begin
You must have Cloud Administrator privileges.
The VxRack Neutrino system must be connected to the ESRS gateway.
Procedure
1. Click Settings > ESRS, and then click the Deregister tab.
A warning appears and displays the following message: Product will not be able to call home to EMC with any critical alerts and may affect supportability. Are you sure you want to deregister ESRS?
2. Click Yes.
Upgrade VxRack Neutrino software

The Cloud Administrator can view upgrade information on the Upgrade page of the VxRack Neutrino UI. The Cloud Administrator can use this page to monitor newly released software versions, set EMC Online Support credentials to access software updates, and launch updates. When updates are available, the software displays an appropriate message and the Cloud Administrator can download the upgrade.
Before you begin
You must have Cloud Administrator privileges.
The Cloud Administrator must have an EMC Online Support account to download updates from https://support.emc.com/downloads.
Only patch software releases for the Services and Base OS software can be upgraded by a Cloud Administrator in the UI. Major, minor, and service pack upgrades (for example, an upgrade from VxRack Neutrino 1.0.0.1 to 1.1) are performed via the CLI and require EMC Global Services personnel.
Procedure
1. Click Settings > Upgrade to view the Upgrade page. The following upgrade information appears:
l Version: Lists software version history and currently installed version.
l Name: Describes the software update. For example, Neutrino 1.0.0.1 OS Upgrade or Neutrino 1.0.0.1 Services.
l Type: Services or Base OS
Option    Description
Services  A Services patch upgrade updates all the containerized software for the Platform and Cloud Compute Services.
Base OS   A Base OS patch upgrade updates all the non-containerized software.
2. In the Actions column, click the appropriate button:
l Click the Download button next to the upgrade package listed to download it from https://support.emc.com/downloads. To download an upgrade package, you must first set your EMC Online Support credentials; see Set EMC Online Support credentials on page 175.
l Click the Upgrade button next to the upgrade package listed to perform the upgrade.
l Click the Remove button next to an invalid upgrade package to remove it from your VxRack Neutrino system.
Set EMC Online Support credentials
Before you begin
You must have Cloud Administrator privileges.
The Cloud Administrator must have an EMC Online Support account to access VxRack Neutrino software update packages from https://support.emc.com/downloads.
To download update packages from support.emc.com, the Cloud Administrator must enter the Online Support credentials on the Upgrade Settings page.
Procedure
1. On the Upgrade page, click the Settings tab.
2. On the Upgrade Settings page, enter the following EMC Online Support credentials:
l EMC Online Support User Name
l EMC Online Support password
l Proxy Hostname
l Proxy Port
l Proxy Username
l Proxy Password
If you are not using a proxy server to access the update packages, leave the proxy fields empty.
3. Click Save.
Upload upgrade package and perform upgrade
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. On the Upgrade page, click the Upload Package tab.
2. On the Upload Upgrade Package page, click Browse and navigate to the local copy of the upgrade package you want to upload.
3. Click Upload.
After the upload completes, the uploaded package appears in the list of packages on the Upgrade page.
4. Click the Upgrade button in the Actions column to perform the upgrade.
What to expect after a Base OS patch upgrade
When you patch the Base OS in the VxRack Neutrino UI, all nodes are suspended, upgraded, and then rebooted serially. The length of time required for all nodes to be upgraded depends on the number of nodes in the VxRack Neutrino system.
If no orchestration is in place in the OpenStack environment, you must restart the OpenStack instances.
In the OpenStack Dashboard UI, click Project > Compute > Instances, and in the Actions column, click Start beside each instance.
What to expect after a Services patch upgrade
A Services upgrade usually takes approximately 40 minutes to complete, and some VxRack Neutrino functionality may not be available as containers are restarted.
If only the Platform Service is being upgraded, there will be no impact to instances running in the OpenStack environment.
If the Services upgrade includes an upgrade to the Cloud Compute Service, then access to data on the existing instances will be disrupted during the upgrade.
After upgrading the Cloud Compute Service, you must manually restart the instances. In the OpenStack Dashboard UI, click Project > Compute > Instances, and in the Actions column, click Start beside each instance.
Security certificates
Cloud Administrators can generate two types of security certificates in the Certificates page of the VxRack Neutrino UI: the OpenStack Keystone Identity Service (Keystone) certificate and the Platform Service (NGINX) certificate. They can also upload a custom private key/public key certificate pair for the Platform Service to use.
During the initial VxRack Neutrino Platform Service installation, an NGINX private key/public key certificate pair is automatically generated. This certificate key pair is used by the NGINX load balancer component on the platform nodes to provide the main virtual IP address endpoint used by VxRack Neutrino administrators for HTTPS access. All VxRack Neutrino management traffic is encrypted and routed through the NGINX load balancer component. The NGINX component balances both HTML and API traffic.
The Keystone private key/public key certificate pair is also automatically generated at install time. The private key is used by Keystone to sign each new authentication token. The internal VxRack Neutrino components that interact with Keystone for authentication validate incoming tokens by using the public key to verify the signature.
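The sign/verify pattern described above can be sketched with standard openssl commands. This is only an illustration of the mechanism, not the actual Keystone implementation; the file names and payload are hypothetical.

```shell
# Generate a throwaway self-signed certificate and private key.
# File names here are illustrative, not paths Keystone uses.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ks.key -out ks.pem -days 365 -subj "/CN=keystone-demo" 2>/dev/null
printf '{"token":"demo"}' > token.json
# The private key signs the token payload...
openssl dgst -sha256 -sign ks.key -out token.sig token.json
# ...and any component holding the public key can verify the signature.
openssl x509 -in ks.pem -pubkey -noout > ks.pub
openssl dgst -sha256 -verify ks.pub -signature token.sig token.json
```

The last command exits successfully only if the signature matches, which is the same check the internal components perform on each incoming token.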
Note
The system-generated certificates outlined here expire in five years. If you choose to use this method (versus importing your own certificates), you will need to rotate the certificates before the expiration date. This does not occur automatically.
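Because rotation is manual, it helps to know how to read a certificate's expiration date. The sketch below generates a throwaway pair with a five-year lifetime (matching the behavior described above) and prints its expiry; the file names are illustrative, not the actual VxRack Neutrino certificate locations.

```shell
# Demonstration only: create a self-signed pair valid for 1825 days
# (five years), then read its notAfter (expiration) date.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo.key -out demo.pem -days 1825 -subj "/CN=neutrino-demo" 2>/dev/null
openssl x509 -enddate -noout -in demo.pem
```

The second command prints a line of the form notAfter=<date>, which you can compare against the current date when planning certificate rotation.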
Generate security certificates
Before you begin
You must have Cloud Administrator privileges.
Procedure
1. Click Settings > Security to view the Certificates page.
2. On the Certificates page, click Generate Signing Key to generate the Keystone self-signed certificate.
3. On the Certificates page, click Generate Certificate to generate a new NGINX private key/public key certificate pair.
This is the certificate that the administrator's browser will be presented with when attempting to log in to the system using the VxRack Neutrino virtual IP address. If you want to upload your own certificate for NGINX to use, refer to Upload a custom NGINX security certificate on page 177.
Upload a custom NGINX security certificate
Before you begin
l You must have Cloud Administrator privileges.
l Note that as of the publication date of this document:
n The certificate file requires an extension of .pem.
n The key file requires an extension of .key.
Procedure
1. Click Settings > Security to view the Certificates page.
2. On the Certificates page, click Upload Certificate.
3. On the Upload NGINX Certificate page, click the Browse button to navigate to your local private key/public key certificate pair.
The certificate file must have an extension of .pem, and the key file must have an extension of .key.
4. Click Upload.
The NGINX load balancer component restarts on the platform nodes. This requires that all VxRack Neutrino administrators reload their VxRack Neutrino UI sessions.
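Before uploading a custom pair, it is worth confirming that the .pem certificate and .key private key actually belong together, since a mismatched pair will prevent NGINX from serving HTTPS after the restart. A common openssl check is to compare the RSA modulus digests of both files. The pair generated below is only for demonstration; substitute your own files.

```shell
# Demonstration pair; in practice, point these commands at your own
# .pem and .key files before uploading them.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout upload.key -out upload.pem -days 365 -subj "/CN=upload-demo" 2>/dev/null
# A certificate and key match when their modulus digests are identical.
cert_md5=$(openssl x509 -noout -modulus -in upload.pem | openssl md5)
key_md5=$(openssl rsa -noout -modulus -in upload.key | openssl md5)
test "$cert_md5" = "$key_md5" && echo "certificate and key match"
```

If the two digests differ, the files are not a matching pair and should not be uploaded together.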