Cloud Storage Introduction (Ceph)
TRANSCRIPT
[Slide 1]
Cloud Storage Introduction
Sept, 2015
[Slide 2]
CLOUD STORAGE INTRO
Software-Defined Storage
[Slide 3]
Storage Trend
> Data size and capacity
– Multimedia contents
– Big demo binaries, detailed graphics/photos, audio and video, etc.
> Data functional needs
– Different business requirements
– More data-driven processes
– More applications with data
– More e-commerce
> Data backup for a longer period
– Legislation and compliance
– Business analysis
[Slide 4]
Storage Usage
> Tier 0 (1-3%): Ultra High Performance
> Tier 1 (15-20%): High-value, OLTP, Revenue Generating
> Tier 2 (20-25%): Backup/Recovery, Reference Data, Bulk Data
> Tier 3 (50-60%): Object, Archive, Compliance Archive, Long-term Retention
[Slide 5]
Software-Defined Storage
> High extensibility: distributed over multiple nodes in a cluster
> High availability: no single point of failure
> High flexibility: API, block device, and cloud-supported architecture
> Pure software-defined architecture
> Self-monitoring and self-repairing
[Slide 6]
Sample Cluster
[Slide 7]
Why use Cloud Storage?
> Very high ROI compared to traditional hardware storage solution vendors
> Cloud-ready and S3 supported
> Thin provisioning
> Remote replication
> Cache tiering
> Erasure coding
> Self-managing and self-repairing with continuous monitoring
[Slide 8]
Other Key Features
> Supports clients on multiple OSes
> Data encryption on physical disks (more CPU needed)
> On-the-fly data compression
> Practically unlimited extensibility
> Copy-on-write (clone and snapshot)
> iSCSI support (VMs, thin clients, etc.)
[Slide 9]
WHO IS USING IT?
Showcases of Cloud Storage
[Slide 10]
EMC, Hitachi, HP, IBM
NetApp, Dell, Pure Storage, Nexsan
Promise, Synology, QNAP, Infortrend, ProWare, Sans Digital
[Slide 11]
Who is doing Software-Defined Storage?
[Slide 12]
Who is using Software-Defined Storage?
[Slide 13]
HOW MUCH?
What if we use Software-Defined Storage?
[Slide 14]
EPIA-M920
Form factor: 40mm x 170mm x 170mm
CPU: fanless 1.6GHz VIA Eden X4
RAM: 16G DDR3-1600 Kingston
Storage: Toshiba Q300 480G mSATA (read 550MB/s, write 520MB/s)
LAN: Gigabit LAN (Realtek RTL8111G) x 2
Connectivity: USB 3.0 x 4
Price: $355 (USD) + NTD 2,500 (16G RAM)
[Slide 15]
HTPC AMD (A8-5545M)
Form factor: 29.9mm x 107.6mm x 114.4mm
CPU: AMD A8-5545M (up to 2.7GHz, 4M cache, 4 cores)
RAM: 8G DDR3-1600 Kingston (up to 16G SO-DIMM)
Storage: mS200 120G mSATA (read 550MB/s, write 520MB/s)
LAN: Gigabit LAN (Realtek RTL8111G)
Connectivity: USB 3.0 x 4
Price: NTD 6,980
[Slide 16]
Enclosure
Form factor: 215(D) x 126(W) x 166(H) mm
Storage: supports all brands of 3.5" SATA I/II/III hard disk drives, up to 4 x 8TB = 32TB
Connectivity: USB 3.0 or eSATA interface
Price:
– NTD 3,000 + NTD 9,200 x 3 = NTD 30,600 (3 x 8TB HDD = 24TB); 24TB x 3 nodes = 72TB
– NTD 3,000 + NTD 3,300 x 3 = NTD 12,900 (3 x 3TB HDD = 9TB); 9TB x 3 nodes = 27TB
[Slide 17]
VIA EPIA-M920
> Node = NTD 14,000
> 512G SSD x 2 = NTD 10,000
> 3TB HDD x 3 + enclosure = NTD 12,900 (9TB)
> ~30TB total; NTD 36,900 x 3 nodes = NTD 110,700
> About the same as one year of Amazon cloud storage for 40TB
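The per-node and cluster totals above can be checked with a couple of lines of shell arithmetic (all figures in NTD, taken straight from the slides):

```shell
# Recompute the VIA EPIA-M920 build cost from the slide's figures (NTD).
node=14000        # EPIA-M920 board + 16G RAM
ssd=10000         # 512G SSD x 2
hdd=12900         # enclosure + 3 x 3TB HDD (9TB raw)
per_node=$((node + ssd + hdd))
cluster=$((per_node * 3))
echo "per node: ${per_node}, 3-node cluster: ${cluster}"   # 36900, 110700
```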
[Slide 18]
AMD (A8-5545M)
> Node = NTD 6,980
> 3TB HDD x 4 + enclosure = NTD 16,200 (12TB)
> 36TB total; NTD 23,180 x 3 nodes = NTD 69,540
> About half of one year of Amazon cloud storage for 40TB
[Slide 19]
QUICK 3-NODE SETUP
Demo: basic setup of a small cluster
[Slide 20]
CEPH Cluster Requirements
> At least 1 MON
> At least 3 OSDs
– At least 15GB per OSD
– Journal is best placed on an SSD
[Slide 21]
ceph-deploy
> Passwordless SSH keys need to be copied to all cluster nodes
> On each node, the ceph user needs sudo (root) permission
> ceph-deploy new <node1> <node2> <node3>
– Creates all the new MONs
> A ceph.conf file will be created in the current directory for you to build your cluster configuration
> Each cluster node should have an identical ceph.conf file
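A minimal ceph.conf for a 3-node demo cluster might look like the sketch below; the node names, IP addresses, and values are placeholder assumptions, not taken from the deck (the fsid is generated for you by `ceph-deploy new`):

```ini
[global]
fsid = <cluster-uuid>                 ; written by "ceph-deploy new"
mon initial members = node1, node2, node3
mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 2             ; 2 replicas on a small demo cluster
osd journal size = 1024               ; MB; journal on SSD as noted above
```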
[Slide 22]
OSD Prepare and Activate
> ceph-deploy osd prepare <node1>:/dev/sda5:/var/lib/ceph/osd/journal/osd-0
> ceph-deploy osd activate <node1>:/dev/sda5
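Applied to all three demo nodes, the two commands above can be wrapped in a small loop. Node names and device/journal paths are the placeholders from the slide; `DRY_RUN=1` (the default here, since this is only a sketch) prints each command instead of running it:

```shell
# Prepare and activate one OSD per node, with the journal on the SSD path.
# DRY_RUN=1 (default) just echoes each command; unset it on a real admin host.
: "${DRY_RUN:=1}"
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

i=0
for node in node1 node2 node3; do
  run ceph-deploy osd prepare "$node:/dev/sda5:/var/lib/ceph/osd/journal/osd-$i"
  run ceph-deploy osd activate "$node:/dev/sda5"
  i=$((i + 1))
done
```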
[Slide 23]
Cluster Status
> ceph status
> ceph osd stat
> ceph osd dump
> ceph osd tree
> ceph mon stat
> ceph mon dump
> ceph quorum_status
> ceph osd lspools
[Slide 24]
Pool Management
> ceph osd lspools
> ceph osd pool create <pool-name> <pg-num> <pgp-num> <pool-type> <crush-ruleset-name>
> ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it
> ceph osd pool set <pool-name> <key> <value>
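As a worked example of choosing <pg-num>, the rule of thumb from the Ceph docs is pg_num ≈ (OSDs x 100) / replicas, rounded up to the next power of two. The sketch below applies it to this 3-OSD demo; the pool name and replica count are assumptions, and `DRY_RUN=1` (default) only prints the final command:

```shell
# Pick a pg_num with the (OSDs * 100) / replicas rule of thumb,
# rounded up to the next power of two.
osds=3
replicas=2
target=$((osds * 100 / replicas))     # 150 for this demo cluster
pg=1
while [ "$pg" -lt "$target" ]; do pg=$((pg * 2)); done
echo "pg_num=$pg"

: "${DRY_RUN:=1}"
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }
run ceph osd pool create ssd "$pg" "$pg" replicated
```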
[Slide 25]
CRUSH Map Management
> ceph osd getcrushmap -o crushmap.out
> crushtool -d crushmap.out -o decom_crushmap.txt
> cp decom_crushmap.txt update_decom_crushmap.txt
> crushtool -c update_decom_crushmap.txt -o update_crushmap.out
> ceph osd setcrushmap -i update_crushmap.out
> crushtool --test -i update_crushmap.out --show-choose-tries --rule 2 --num-rep=2
> crushtool --test -i update_crushmap.out --show-utilization --num-rep=2
> ceph osd crush show-tunables
[Slide 26]
RBD Management
> rbd --pool ssd create --size 1000 ssd_block
– Creates a ~1G RBD in the ssd pool (--size is in MB)
> rbd map ssd/ssd_block (on the client)
– It should show up as /dev/rbd/<pool-name>/<block-name>
> Then you can use it like any block device
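Once mapped, the device behaves like any local disk. A sketch of formatting and mounting it on the client; the filesystem and mount point are example choices, and `DRY_RUN=1` (default) only prints the commands:

```shell
# Format and mount the mapped RBD on the client.
: "${DRY_RUN:=1}"
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

run rbd map ssd/ssd_block                      # appears as /dev/rbd/ssd/ssd_block
run mkfs.ext4 /dev/rbd/ssd/ssd_block           # ext4 chosen only as an example
run mkdir -p /mnt/ssd_block
run mount /dev/rbd/ssd/ssd_block /mnt/ssd_block
```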
[Slide 27]
Demo Block Usage
> It could be a QEMU/KVM rbd client for VMs
> It could also be an NFS/CIFS server (but you need to consider how to provide HA on top of that)
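For the QEMU/KVM case, QEMU talks to RBD through librbd, so no kernel `rbd map` is needed. The sketch below follows the `rbd:pool/image` syntax from the Ceph/QEMU docs, reusing the demo's ssd pool; `vm_disk` is a hypothetical image name, and `DRY_RUN=1` (default) only prints the commands:

```shell
# Create an RBD image through librbd and boot a KVM guest from it.
: "${DRY_RUN:=1}"
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

run qemu-img create -f raw rbd:ssd/vm_disk 10G   # image is created inside the pool
run qemu-system-x86_64 -m 1024 -drive format=raw,file=rbd:ssd/vm_disk
```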