lustre quick cheatsheet
MDS - metadata server
MDT - metadata target (the disk)
OSS - object storage server
OST - object storage target (the disk)
MGS - management server
- holds cluster configuration and exports the mount point to clients
- lustre clients contact it to mount the file system
- can be part of the MDT (commonly done, it is low overhead)
- can serve multiple file systems
modprobe -v lustre loads lustre
lsmod | grep lustre shows if the kernel modules are loaded
lustre_rmmod removes the lustre kernel modules
servers need:
- lustre kernel modules
- ldiskfs kernel modules
- lustre user space "stuff"
- a special version of e2fsprogs
clients need a kernel that matches the client kernel modules and the base lustre user space rpm
User space commands:
mkfs.lustre
lctl
lfs
mount.lustre
tune2fs
tunefs.lustre
by default one inode takes up 4k - consider this when sizing the MDT
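To see what that 4k-per-inode figure implies, a back-of-the-envelope calculation (the 100-million-file count is just an illustrative assumption):

```shell
# rough MDT sizing at ~4 KB per inode (illustrative file count, not a recommendation)
files=100000000                           # expected number of files
echo "$(( files * 4 / 1024 / 1024 )) GB"  # 4 KB each -> ~381 GB, before headroom
```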
Things get much more complex if you are using IB rather than FC; in that case you need to get all of your IB setup working first. You can use an IP-over-IB TCP network for lnet traffic; that seems to work well.
==============
don't forget to put /etc/modprobe.d/lustre.conf in place on all your machines so lnet knows which interface to use
mkfs examples:
MDS:
mkfs.lustre --fsname=lustre --mdt --mgs /dev/yourfs
(this will include the MGS in the file system)
just mount with:
mount -t lustre /dev/yourfs /your/mountpoint
this usually does not get put in /etc/fstab
OSS:
mkfs.lustre --fsname=lustre --ost --mgsnode=IP_of_MDS@tcp /dev/yourdevice (if using ethernet)
mount:
mount -t lustre /dev/yourdevice /your/mountpoint
this usually does not get put in /etc/fstab
client:
they usually get theirs in /etc/fstab:
IP_of_MDS@tcp:/lustre /mnt/lustre/yourdir lustre defaults,_netdev 0 0
(flock is often added as an option if MPI is involved)
if you mount by hand:
mount -t lustre IP_of_MDS@tcp:/lustre /mnt/lustre/yourdir
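If flock is needed (e.g. for MPI-IO), it can be passed as a mount option when mounting by hand; a sketch using the same placeholder names as above:

```shell
# mount by hand with flock enabled (placeholder IP/paths as above)
mount -t lustre -o flock IP_of_MDS@tcp:/lustre /mnt/lustre/yourdir
```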
=======================
/etc/modprobe.d/lustre.conf looks like:
options lnet networks="tcp1(eth1)" or "o2ib0", etc.
multiple lnets can be used for load balancing
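A sketch of what a multi-network lustre.conf might look like (the interface names are assumptions; adjust to your hardware):

```shell
# /etc/modprobe.d/lustre.conf -- hypothetical node with both ethernet and IB
options lnet networks="tcp1(eth1),o2ib0(ib0)"
```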
========================
lfs is used to set up striping
first you need to set up your OST pools using lctl commands
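The pool setup uses lctl's pool_* subcommands, run on the MGS; the pool name and OST range below are hypothetical:

```shell
# create a pool and add OSTs to it (run on the MGS node)
lctl pool_new lustre.mypool           # new pool "mypool" in filesystem "lustre"
lctl pool_add lustre.mypool OST[0-1]  # add the first two OSTs
lctl pool_list lustre                 # list the pools in the filesystem
```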
by default files are striped in 1MB stripes
lfs getstripe and lfs setstripe are the commands used
lfs setstripe:
--count is the number of OSTs to stripe across (-1 means use all available)
--size is the stripe size in multiples of 64k (default is 1MB)
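Putting the two commands together, a sketch (the directory path and values are assumptions):

```shell
# stripe new files in this directory across 2 OSTs with a 4MB stripe size
lfs setstripe --count 2 --size 4M /mnt/lustre/yourdir
# files created under it inherit the layout; check with:
lfs getstripe /mnt/lustre/yourdir
```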
=============
This shows the mounts in the pool:
lfs df -h
UUID                 bytes   Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  130.4G  459.6M  122.5G     0%    /lustre[MDT:0]
lustre-OST0000_UUID  146.7G  1.4G    137.8G     0%    /lustre[OST:0]
lustre-OST0001_UUID  146.7G  459.6M  138.8G     0%    /lustre[OST:1]
filesystem summary:  293.3G  1.9G    276.6G     0%    /lustre
=================================
Good discussion of stripe alignment:
http://www.nics.tennessee.edu/io-tips
========================================
LLNL's IO guide - lots of good info: https://computing.llnl.gov/LCdocs/ioguide/
=========================================
From NASA, good info on striping: http://www.nas.nasa.gov/hecc/support/kb/Lustre-Basics_224.html