Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt
DESCRIPTION
Paul Brook and Michael Holzerland, Dell
TRANSCRIPT
Page 1
Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Michael Holzerland
Paul Brook
Twitter: @paulbrookatdell
Page 2
Agenda:
• Introduction: Inktank & Dell
• Dell Crowbar: automation, scale
• Best practices with a Ceph cluster
• Best practices for networking
• Whitepapers
• Crowbar demo
Page 3
Dell is a certified reseller of Inktank Services, Support and Training.
• Need to access and buy Inktank services & support?
• Inktank 1-year subscription packages
– Inktank Pre-Production subscription
– Gold (24*7) subscription
• Inktank Professional Services
– Ceph Pro services Starter Pack
– Options for additional days of services
• Ceph training from Inktank
– Inktank Ceph100 Fundamentals Training
– Inktank Ceph110 Operations and Tuning Training
– Inktank Ceph120 Ceph and OpenStack Training
Page 4
Dell OpenStack Cloud Solution
[Diagram: hardware (HW), software (SW), and operations (OPS) layers: "Crowbar", CloudOps software, services & consulting, and a reference architecture]
Page 5
Components Involved
http://docs.openstack.org/trunk/openstack-compute/admin/content/conceptual-architecture.html
Page 6
Data Center Solutions
Crowbar
Page 7
4) Complementary Products
[Diagram: Dell "Crowbar" Ops Management spanning the full stack]
• Core Components & Operating Systems
• Cloud Infrastructure
• Physical Resources
• APIs, User Access, & Ecosystem Partners
Page 8
Barclamps! Automated and simple installation
[Diagram: numbered barclamps (#1 through #9) deploy the OpenStack components: Nova (compute nodes, scheduler, controller), Quantum, Cinder with a block device back end (SAN/NAS/DAS), Swift (proxy plus store nodes, min. 3 nodes), Glance, Keystone, the Dashboard (UI), RabbitMQ, and a database, each exposing its API]
Page 9
Crowbar Landing Page
• http://crowbar.github.io/
Page 10
Best Practices
Page 11
Object Storage Daemons (OSD)
• Allocate sufficient CPU cycles and memory per OSD
– 2 GB of memory and 1 GHz of AMD or Xeon CPU cycles per OSD
– Hyper-Threading can be used on Xeon Sandy Bridge and up
• Use SSDs as dedicated journal devices to improve random latency
– Some workloads benefit from separate journal devices on SSDs
– Rule of thumb: 6 OSDs per 1 SSD
• No RAID controller; just JBOD
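These rules of thumb are easy to turn into a quick sizing check. Below is a minimal Python sketch (mine, not from the deck) that applies the 2 GB / 1 GHz per OSD and 6-OSDs-per-journal-SSD guidelines to a candidate node:

```python
# Sizing sketch for an OSD node, applying the rules of thumb above:
#   2 GB RAM and 1 GHz of CPU cycles per OSD, one journal SSD per 6 OSDs.
# Illustrative only; real requirements depend on workload.
import math

def osd_node_requirements(num_osds: int) -> dict:
    return {
        "ram_gb": 2 * num_osds,                  # 2 GB per OSD
        "cpu_ghz": 1 * num_osds,                 # 1 GHz of cycles per OSD
        "journal_ssds": math.ceil(num_osds / 6), # 6 OSDs per SSD
    }

# Example: the 12-drive R720XD configuration shown later in this deck
print(osd_node_requirements(12))
# {'ram_gb': 24, 'cpu_ghz': 12, 'journal_ssds': 2}
```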
Page 12
Ceph Cluster Monitors
• Best practice to deploy the monitor role on dedicated hardware
– Not resource intensive but critical
– Using separate hardware ensures no contention for resources
• Make sure monitor processes are never starved for resources
– If running a monitor process on shared hardware, fence off resources
• Deploy an odd number of monitors (3 or 5)
– Need to have an odd number of monitors for quorum voting
– Clusters < 200 nodes work well with 3 monitors
– Larger clusters may benefit from 5
– Main reason to go to 7 is to have redundancy in fault zones
• Add redundancy to monitor nodes as appropriate
– Make sure the monitor nodes are distributed across fault zones
– Consider refactoring fault zones if needing more than 7 monitors
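The odd-number guidance follows from quorum arithmetic: a strict majority of monitors must agree, so adding a fourth monitor to a set of three raises the quorum size without letting the cluster survive any extra failures. A short sketch (my own, not from the deck) makes this concrete:

```python
# Quorum arithmetic for Ceph monitors: a strict majority must be up.
# Even counts raise the quorum threshold without tolerating more
# failures than the next-smaller odd count, so deploy 3, 5, or 7.

def quorum_size(monitors: int) -> int:
    return monitors // 2 + 1  # strict majority

def failures_tolerated(monitors: int) -> int:
    return monitors - quorum_size(monitors)

for n in range(1, 8):
    print(f"{n} monitors: quorum={quorum_size(n)}, "
          f"tolerates {failures_tolerated(n)} failure(s)")
# 3 and 4 monitors both tolerate 1 failure; 5 and 6 both tolerate 2.
```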
Page 13
Potential Dell Server Hardware Choices
• Rackable Storage Node – Dell PowerEdge R720XD or R515
– Intel Xeon E5-2603v2 or AMD C32 platform
– 32 GB RAM
– 2x 400 GB SSD drives (OS and optionally journals)
– 12x 4 TB SATA drives
– 2x 10GbE, 1x 1GbE, IPMI
• Bladed Storage Node – Dell PowerEdge C8000XD (disk) with PowerEdge C8220 (CPU)
– 2x Xeon E5-2603v2 CPUs, 32 GB RAM
– 2x 400 GB SSD drives (OS and optionally journals)
– 12x 4 TB NL-SAS drives
– 2x 10GbE, 1x 1GbE, IPMI
• Monitor Node – Dell PowerEdge R415
– 2x 1 TB SATA
– 1x 10GbE
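As a rough capacity check, each storage node above carries 12x 4 TB data drives. The sketch below (my own arithmetic, assuming the common Ceph default of 3x replication) estimates raw versus usable capacity for a small cluster of such nodes:

```python
# Rough capacity estimate for a cluster of the 12x 4 TB nodes above.
# Assumes 3x replication; erasure coding or other replica counts
# change the usable figure.

DRIVES_PER_NODE = 12
DRIVE_TB = 4
REPLICA_COUNT = 3  # assumption: common Ceph default

def cluster_capacity_tb(nodes: int):
    raw = nodes * DRIVES_PER_NODE * DRIVE_TB
    return raw, raw / REPLICA_COUNT

raw, usable = cluster_capacity_tb(4)  # e.g. a 4-node starter cluster
print(f"raw: {raw} TB, usable: {usable:.0f} TB")  # raw: 192 TB, usable: 64 TB
```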
Page 14
Configure Networking within the Rack
• Each pod (e.g., a row of racks) contains two spine switches
• Each leaf switch is redundantly uplinked to each spine switch
• Spine switches are redundantly linked to each other with 2x 40GbE
• Each spine switch has three uplinks to other pods with 3x 40GbE
[Diagram: nodes in each rack connect over 10GbE links to a high-speed top-of-rack (leaf) switch; each leaf switch uplinks over 40GbE links to two high-speed end-of-row (spine) switches, which in turn connect to other rows (pods)]
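This topology fixes the oversubscription ratio at each leaf: node-facing downlink bandwidth versus uplink bandwidth to the spines. A small sketch (my own, with an assumed node count per rack) shows the arithmetic:

```python
# Leaf-switch oversubscription: 10GbE node downlinks vs. 40GbE uplinks
# to the two spine switches (assuming one uplink per spine, per the
# topology above). The node count per rack is a hypothetical value.

NODES_PER_RACK = 16  # assumption; depends on rack layout
DOWNLINK_GBPS = 10   # one 10GbE link per node to the leaf
UPLINKS = 2          # one 40GbE uplink to each spine switch
UPLINK_GBPS = 40

down = NODES_PER_RACK * DOWNLINK_GBPS
up = UPLINKS * UPLINK_GBPS
print(f"downlinks: {down} Gb/s, uplinks: {up} Gb/s, "
      f"oversubscription {down / up:.1f}:1")
# downlinks: 160 Gb/s, uplinks: 80 Gb/s, oversubscription 2.0:1
```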
Page 15
Networking Overview
• Plan for low latency and high bandwidth
• Use 10GbE switches within the rack
• Use 40GbE uplinks between racks
• One option: Dell Force10 S4810 switches with port aggregation, plus Force10 S6000 switches at the 40GbE aggregation level
Page 16
Whitepapers!
Page 17
Questions? - Demo -