bpug mcollective 20140624
DESCRIPTION
Slides of the Belgian Puppet User Group Meetup "Something About MCollective", held on the 24th of June 2014. The source of these slides can be found at https://github.com/witjoh/BPUG_MCollective
TRANSCRIPT
Belgian Puppet Users Group
Something About MCollective
24th of June 2014
Hosted by Telenet/Hostbasket
Lochristi - Belgium
Agenda
Agenda
Orchestration & MCollective
Hands-on - Setting up MCollective
The 'mco' command
MCollective Agents
And what about Tomorrow ?
Orchestration
Orchestration on Wikipedia
Orchestration describes the automated arrangement, coordination, and management of complex computer systems, middleware, and services.
Orchestration in human language?
A parallel job execution system
MCollective
Marionette Collective:
Framework
Uses publish/subscribe middleware
Very scalable (from small to huge clusters)
Broadcast paradigm (the network is the only source of truth)
No central database - no complex naming convention
Simple command line tools
Extremely pluggable
Community extensions available
MCollective
Built upon existing middleware
Uses existing authentication/authorisation models
Uses existing clustering techniques
Uses existing routing/network isolation methods
Marionette Collective
Pluggable core
Middleware (STOMP compliant)
Authorisation
Serialisation
Data sources (Chef & Puppet supported + Facter [community])
MCollective as transport (e.g. a central service inventory system)
MCollective - Components
Overview - Components
The MCollective Server - mcollectived
The MCollective Client
MCollective Middleware Overview
Inside The MCollective Middleware
Middleware Choices
ActiveMQ - preferred
Best tested
Performance is great
Powerful and flexible security features
Scalable by clustering
Pain in the #$@% to configure
Detailed docs on docs.puppetlabs.com
Connector is shipped with MCollective
RabbitMQ
Not as well tested as ActiveMQ
Not documented at docs.puppetlabs.com
Connector is shipped with MCollective
Generic Stomp Connector (Deprecated)
Custom Connector Plugins
Getting dirty hands
Vagrant boxes
What we need
CentOS Vagrant box images
puppetlabs vagrant boxes
CentOS 6.5 64-bit nocm
CentOS 6.5 32-bit nocm
Minimal centos6.5 vagrant box
CentOS minimal 64-bit version
CentOS minimal 32-bit version
My Vagrantfile with bridged networking (with the puppetlabs CentOS 6.5 nocm box)
Vagrantfile (showoff download link)
Vagrant setup
Based on the Vagrantfile from the previous slide.
Only one ActiveMQ server (running on my laptop)
Only the ':johan' image is needed.
mkdir -p bpug_vagrant/puppet ; cd bpug_vagrant (puppet = shared folder)
Download the Vagrantfile
Used domainname = koewacht.net
Change johan to 'yourname' (should be unique)
Adjust box_url (e.g. file://'downloaded box file')
Adjust memory settings (currently 1GB)
starting the vagrant box :
vagrant up 'yourname'
Having trouble -- shout !!
logging into your box
vagrant ssh 'yourname'
sudo -i
Info we need
The setup
One central ActiveMQ server (already up and running)
Many MCollective nodes
Your virtual boxes ...
Server role
Client role
Bridged mode, so we can see each others node
Installation done by hand
Info we need beforehand
The IP address of the ActiveMQ server (DHCP based)
The passwords for configuration files :
client: 29l6wD2mIzbLpbp4GMnUzchHp2XWpKk8N8dcxXCnDRU=
server: 04BpZofasX1dDexFsqZcgfM1tkC4VCGI6hoziWMu7zw=
Pre-shared key: Gw8nclOGn1YiIMvEAxgeZ7jrL1ErCdZZXm2e7JX2S4o=
( keys are generated with : $ openssl rand -base64 32 )
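All three secrets can be generated in one go; a minimal sketch (the variable names are mine, and any 32-byte base64 string works just as well):

```shell
# Generate three independent secrets: the client password, the server
# password, and the MCollective pre-shared key. 32 random bytes
# base64-encode to a 44-character string, like the keys shown above.
client_pw=$(openssl rand -base64 32)
server_pw=$(openssl rand -base64 32)
psk=$(openssl rand -base64 32)
printf 'client: %s\nserver: %s\npsk: %s\n' "$client_pw" "$server_pw" "$psk"
```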
Requirements
We are using packages from the Puppetlabs repos
MCollective clients/servers
Working NTP
Ruby
1.8.7 / 1.9.3
2.0.0 not supported yet
1.9.0/1.9.2 will fail
Ruby stomp gem 1.2.2 or later
MCollective 2.5.0 or later
5MB disk
256 MB ram
Requirements Continued
Middleware Broker
500 MB ram
Messaging middleware :
ActiveMQ 5.8 with stomp connectorRabbitMQ 2.8 with stomp connector
Disk Space for Middleware server : 15MB
Some CPU & Network capacity (+2 connections per server)
Platforms in the Puppetlabs repo
RHEL 5|6|7
Fedora 19 - 20
Debian Lucid|Precise|Saucy|Sid|Squeeze|Trusty|Wheezy
Installing the packages
Installing the Puppetlabs repos
osfamily == RedHat
$ sudo yum install http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
$ sudo yum install http://yum.puppetlabs.com/puppetlabs-release-fedora-20.noarch.rpm
osfamily == Debian
$ wget http://apt.puppetlabs.com/puppetlabs-release-sid.deb
$ sudo dpkg -i puppetlabs-release-sid.deb
$ sudo apt-get update
(replace sid with your version)
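The per-osfamily choice above can be captured in a small helper; a sketch (the function name is mine, and the el-6/sid package choices are hard-coded to the ones on this slide):

```shell
# Return the Puppet Labs release package matching an osfamily,
# using the same URLs as the manual commands above.
pick_release_pkg() {
  case "$1" in
    RedHat) echo "http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm" ;;
    Debian) echo "http://apt.puppetlabs.com/puppetlabs-release-sid.deb" ;;
    *)      echo "unsupported osfamily: $1" >&2; return 1 ;;
  esac
}

pick_release_pkg RedHat   # echoes the el-6 rpm URL
```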
Installing ActiveMQ - How I Did It
Installing the package
On osfamily == RedHat
$ sudo yum install activemq
$ sudo chkconfig activemq on
On osfamily == Debian
$ sudo apt-get install activemq
$ sudo sysv-rc-conf activemq on
ActiveMQ Configuration
The /etc/activemq/activemq.xml
Line numbers correspond to the downloadable activemq.xml file
Enable Purging the Broker
35 <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" useJmx="true" schedulePeriodForDestinationPurge="60000">
Disable producerFlowControl & memory cleanup
50 <destinationPolicy>
51   <policyMap>
52     <policyEntries>
53       <!-- MCollective generally expects producer flow control to be turned off. -->
54       <policyEntry topic=">" producerFlowControl="false" memoryLimit="1mb"/>
55       <!-- MCollective will generate many single-use reply queues,
56            which should be garbage-collected after five minutes to conserve memory. -->
57       <policyEntry queue="*.reply.>" gcInactiveDestinations="true" inactiveTimoutBeforeGC="300000"/>
ActiveMQ Configuration - continued
The /etc/activemq/activemq.xml
Define logins for clients and servers in the simpleAuthenticationPlugin
104 <simpleAuthenticationPlugin>
105   <users>
106     <authenticationUser username="client" password="29l6wD2mIzbLpbp4GMnUzchHp2XWpKk8N8dcxXCnDRU=" groups="servers,clients,everyone"/>
107     <authenticationUser username="server" password="04BpZofasX1dDexFsqZcgfM1tkC4VCGI6hoziWMu7zw=" groups="servers,everyone"/>
108   </users>
109 </simpleAuthenticationPlugin>
ActiveMQ Configuration - continued
The /etc/activemq/activemq.xml
Define permissions for clients and servers in the authorizationPlugin
110 <authorizationPlugin>
111   <map>
112     <authorizationMap>
113       <authorizationEntries>
114         <authorizationEntry queue=">" write="admins" read="admins" admin="admins" />
115         <authorizationEntry topic=">" write="admins" read="admins" admin="admins" />
116         <authorizationEntry queue="mcollective.>" write="clients" read="clients" admin="clients" />
117         <authorizationEntry topic="mcollective.>" write="clients" read="clients" admin="clients" />
118         <authorizationEntry queue="mcollective.nodes" read="servers" admin="servers" />
119         <authorizationEntry queue="mcollective.reply.>" write="servers" admin="servers" />
120         <authorizationEntry topic="mcollective.*.agent" read="servers" admin="servers" />
121         <authorizationEntry topic="mcollective.registration.agent" write="servers" read="servers" admin="servers" />
122         <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
123       </authorizationEntries>
124     </authorizationMap>
125   </map>
126 </authorizationPlugin>
ActiveMQ Configuration - continued
The /etc/activemq/activemq.xml
Transports - Only one transport should be enabled
156 <transportConnectors>
157   <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613"/>
158 </transportConnectors>
Disable web console (commented out)
170 <!-- disabled for security reasons
171 <import resource="jetty.xml"/>
172 -->
Fire it up - and check
$ service activemq start
$ netstat -an | grep 61613
$ tail -200f /var/log/activemq/activemq.log | less
In the real world
Adjust firewall (port 61613)
SELinux and equivalents
Installing MCollective Servers
Installing the package
osfamily == RedHat
$ sudo yum install mcollective
$ sudo chkconfig mcollective on
osfamily == Debian
$ sudo apt-get install ruby-stomp mcollective
$ sudo sysv-rc-conf mcollective on
MCollective Server Configuration
/etc/mcollective/server.cfg
(based on the downloadable server.cfg)
user and password are also defined in activemq.xml on messaging server
6  plugin.activemq.pool.size = 1
7  plugin.activemq.pool.1.host = activemq.koewacht.net
8  plugin.activemq.pool.1.port = 61613
9  plugin.activemq.pool.1.user = server
10 plugin.activemq.pool.1.password = 04BpZofasX1dDexFsqZcgfM1tkC4VCGI6hoziWMu7zw=
The pre-shared key from earlier slides
17 # Security provider
18 securityprovider = psk
19 plugin.psk = Gw8nclOGn1YiIMvEAxgeZ7jrL1ErCdZZXm2e7JX2S4o=
Check the libdir directory
22 libdir = /usr/libexec/mcollective
Fire it up - and verify
$ service mcollective start
$ netstat -an | grep 61613
tcp 0 0 192.168.10.223:50737 192.168.10.231:61613 ESTABLISHED
The MCollective Client
Installing the MCollective Client Package
osfamily == RedHat
$ sudo yum install mcollective-client
osfamily == Debian
$ sudo apt-get install mcollective-client
Configuring the MCollective Client
(based on the downloadable client.cfg)
user and password are also defined in activemq.xml on messaging server
3 connector = activemq
4 plugin.activemq.pool.size = 1
5 plugin.activemq.pool.1.host = activemq.koewacht.net
6 plugin.activemq.pool.1.port = 61613
7 plugin.activemq.pool.1.user = client
8 plugin.activemq.pool.1.password = 29l6wD2mIzbLpbp4GMnUzchHp2XWpKk8N8dcxXCnDRU=
9 plugin.activemq.heartbeat_interval = 30
The pre-shared key from earlier slides
17 # Security provider
18 securityprovider = psk
19 plugin.psk = Gw8nclOGn1YiIMvEAxgeZ7jrL1ErCdZZXm2e7JX2S4o=
Check the libdir directory
22 libdir = /usr/libexec/mcollective
Testing the Setup so Far
Testing the Basic Setup
The MCollective Ping Test
low level query
[vagrant@johan ~]$ mco ping
activeMQ.koewacht.net time=176.15 ms
johan.koewacht.net time=185.95 ms
Troubleshooting
Are the passwords & users/groups correct?
Middleware server: activemq.xml
MCollective server.cfg
MCollective client.cfg
Networking
check for port 61613
MCollective Command Line Client
Introduction mco command-line client
Connector
Clients use 2 plugins
Connector plugin (connection to middleware)
ActiveMQ
Security plugin (sign & optionally encrypt data)
PSK (pre-shared key)
Same connectors on all MCollective components (clients/servers/middleware)
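The PSK provider does not encrypt traffic; it signs each request so that receivers can check the sender knew the shared key. A conceptual sketch of the idea (illustrative only - this is not MCollective's actual wire format):

```shell
# Both client and servers hold the same pre-shared key. Hashing
# payload+key yields a signature the receiver can recompute and
# compare; without the key the signature cannot be forged.
psk="Gw8nclOGn1YiIMvEAxgeZ7jrL1ErCdZZXm2e7JX2S4o="
payload='{"agent":"rpcutil","action":"ping"}'
sig=$(printf '%s%s' "$payload" "$psk" | md5sum | awk '{print $1}')
echo "signature: $sig"
```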
Introduction mco command-line client
Inventory
builtin plugin
gathers info about MCollective server
Server configuration
Server stats
Available plugins
Configuration classes
Facts (aka Facter)
Introduction mco command-line client
Inventory - example run
$ mco inventory heliotrope
Inventory for heliotrope:
  Server Statistics:
    Version: 2.5.0
    Start Time: Mon Apr 14 03:11:12 -0700 2014
    Config File: /etc/mcollective/server.cfg
    Collectives: mcollective
    Main Collective: mcollective
    Process ID: 1334
    Total Messages: 16
    Messages Passed Filters: 13
    Messages Filtered: 3
    Expired Messages: 0
    Replies Sent: 12
    Total Processor Time: 38.56 seconds
    System Time: 128.22 seconds
Agents: discovery rpcutil
Data Plugins: agent fstat
Configuration Management Classes: No classes applied
Facts: No facts known
Inventory continued
Custom output format
ruby script
Pass it with the --script option
inventory do
  format "%20s %8s %10s %-20s"
  fields { [ identity, facts["architecture"], facts["operatingsystem"], facts["operatingsystemrelease"] ] }
end
$ mco inventory --script inventory.mc
geode       x86_64  CentOS  6.4
sunstone    amd64   Ubuntu  13.10
heliotrope  x86_64  CentOS  6.5
Discovery
mc plugin
built in
defined in client.cfg (mc plugin)
13 # Use auto-discovery
14 default_discovery_method = mc
sends broadcast queries
mco plugin doc mc
flatfile plugin
list of hostnames from file
mco plugin doc flatfile
Discovery
flatfile plugin
$ cat /path/to/hostlist
fireagate
heliotrope
$ mco rpc rpcutil ping --disc-method flatfile --disc-option /path/to/hostlist
Discovering hosts using the flatfile method .... 2
* [ ============================================================>] 2 / 2
heliotrope
  Timestamp: 1385012042
fireagate
  Timestamp: 1385012044
Finished processing 2 / 2 hosts in 146.13 ms
mco rpc rpcutil makes a direct call to the RPC API without using a dedicated client application.
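Building such a host list is a one-liner; a sketch using the hostnames from this slide (the /tmp path is illustrative):

```shell
# flatfile discovery expects exactly this format: one hostname per line.
printf '%s\n' fireagate heliotrope > /tmp/hostlist
cat /tmp/hostlist
```

Point --disc-option at this file, as in the mco rpc command above.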
MCollective's filters
Can be used on all MCollective commands
$ mco help <command>
Host Filters
  -W, --with FILTER                Combined classes and facts filter
  -S, --select FILTER              Compound filter combining facts and classes
  -F, --wf, --with-fact fact=val   Match hosts with a certain fact
  -C, --wc, --with-class CLASS     Match hosts with a certain config management class
  -A, --wa, --with-agent AGENT     Match hosts with a certain agent
  -I, --wi, --with-identity IDENT  Match hosts with a certain configured identity
MCollective filters - examples
$ mco find --with-identity /i/
$ mco find --with-identity /^web\d/
$ mco find --with-class webserver
$ mco find --with-fact operatingsystem=CentOS
$ mco find --with-agent package
Filters require the mc discovery plugin.
Flatfile discovery only supports the identity filter.
MCollective combined filters
Types of combined filters
Puppet Classes & Facter facts
$ mco ping --with "/^web\d/ operatingsystem=CentOS"
Select filter
combination of
Facts and Classes
Boolean logic ( AND - OR - NOT|! )
$ mco ping --select "operatingsystem=CentOS and /nameserver/"
$ mco ping --select "operatingsystem=CentOS and !environment=dev"
$ mco ping --select "( /httpd/ or /nginx/ ) and is_virtual=true"
CentOS hosts named web followed by a number.
Ping only CentOS hosts which have the nameserver class applied to them.
Ping every CentOS host which isn't in the dev environment.
Match virtualized hosts with either the httpd or nginx Puppet class applied to them.
Add limitations to MCollective commands
Limit option
Control how many servers get the request
--one
--limit (a count, or a percentage of the matching servers)
$ mco ping --limit 15
$ mco ping --one --with-fact operatingsystem=CentOS
$ mco ping --limit 5 --with-class webserver
$ mco ping --limit 33% --with-class webserver
Fifteen servers of any type
Only one CentOS server
Five servers which have the webserver Puppet class applied to them
One third of the servers which have the webserver Puppet class applied to them
Add limitations to MCollective commands
Batch option
Controls how many servers receive the request per batch
Controls time between batches
$ mco ping --batch 5 --batch-sleep 30 --with-fact country=de
$ mco package upgrade sudo --batch 10 --batch-sleep 20
Ping batches of five German servers every 30 seconds
Upgrade sudo in batches of ten servers spaced twenty seconds apart
Controlling mco command output
--json
output in json format
--no-progress
Suppress the status bar
--verbose
Discovery timing
Full RPC statistics
Facts
Key/value pairs in the server inventory
Facter generates the facts
Installing Facter
osfamily == RedHat
$ sudo yum install facter
osfamily == Debian
$ sudo apt-get install facter
Facter facts & MCollective
Configure /etc/mcollective/server.cfg
30 # facter
31 factsource=yaml
32 plugin.yaml=/etc/mcollective/facts.yaml
Generate a facts.yaml file
$ facter -y > /etc/mcollective/facts.yaml
optionally add a crontab
$ cat /etc/cron.d/facts.sh
*/30 * * * * root facter -y > /etc/mcollective/facts.yaml
(entries in /etc/cron.d need the user field)
restart mcollective
$ mco inventory nodename
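To see what mcollectived will pick up without waiting for Facter or cron, you can hand-write a minimal facts file; a sketch (the /tmp path and fact values are illustrative - facter -y emits the same flat key: value shape):

```shell
# A minimal facts.yaml in the flat format the yaml fact source reads.
cat > /tmp/facts.yaml <<'EOF'
operatingsystem: CentOS
operatingsystemrelease: 6.5
architecture: x86_64
EOF
# Read one fact back the simple way
awk -F': ' '$1 == "operatingsystem" { print $2 }' /tmp/facts.yaml   # prints CentOS
```

Write the real file to /etc/mcollective/facts.yaml (the plugin.yaml path above) and restart mcollectived.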
MCollective & Puppet Classes
Only works with Puppet
Puppet agents :
Writes classes.txt in $statedir (/var/lib/puppet/state)
The agent node runs an MCollective server
puppet agent --configprint classfile
Must match classesfile in /etc/mcollective/server.cfg
We can simulate puppet classes by faking a classes.txt in /etc/mcollective/classes.txt
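A minimal fake classes file can be created like this; a sketch (written to /tmp so it runs without root - the real path is /etc/mcollective/classes.txt, and the class names here are made up):

```shell
# One Puppet class per line, the same format Puppet writes to classes.txt.
printf '%s\n' webserver nameserver > /tmp/classes.txt
cat /tmp/classes.txt
```

Point classesfile in server.cfg at the file and restart mcollectived; mco find --with-class webserver should then match the node.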
MCollective Agents
Extending MCollective
MCollective Agents
Connector
Agents use 2 plugins
Connector plugin (connection to middleware)
ActiveMQ
Security plugin (sign & optionally encrypt data)
PSK (pre-shared key)
Same connectors on all MCollective components (clients/servers/middleware)
Agent Parts
Agent part (servers)
DDL (servers & clients)
Client part (clients)
Common part (servers & clients)
MCollective Agent - Installing from Packages
From the Puppetlabs Repositories
Install on every MCollective Server
Many community MCollective Agents (eg. github)
osfamily == RedHat
$ sudo yum install mcollective-filemgr-agent
$ sudo yum install mcollective-nettest-agent
$ sudo yum install mcollective-package-agent
$ sudo yum install mcollective-service-agent
osfamily == Debian
$ sudo apt-get install mcollective-filemgr-agent
$ sudo apt-get install mcollective-nettest-agent
$ sudo apt-get install mcollective-package-agent
$ sudo apt-get install mcollective-service-agent
MCollective Agent - Inside the Package
[vagrant@johan ~]$ rpm -ql mcollective-package-common-4.3.0-1.el6.noarch
/usr/libexec/mcollective/mcollective/agent/package.ddl
/usr/libexec/mcollective/mcollective/util/package
/usr/libexec/mcollective/mcollective/util/package/base.rb
/usr/libexec/mcollective/mcollective/util/package/packagehelpers.rb
/usr/libexec/mcollective/mcollective/util/package/puppetpackage.rb
[vagrant@johan ~]$ rpm -ql mcollective-package-agent-4.3.0-1.el6.noarch
/usr/libexec/mcollective/mcollective/agent/package.rb
[vagrant@johan ~]$ rpm -ql mcollective-package-client-4.3.0-1.el6.noarch
/usr/libexec/mcollective/mcollective/application/package.rb
MCollective Agent - The Components
The DDL file
DDL = Data Description Language
Defines the remote methods
Describes the input format
Describes the generated output
metadata
author
version
license
...
Used for Validating Input
If you stick to code convention
MCollective Agent - The Components
MCollective Agent DDL Example
[vagrant@johan ~]$ cat /usr/libexec/mcollective/mcollective/agent/package.ddl
metadata :name        => "package",
         :description => "Install and uninstall software packages",
         :author      => "R.I.Pienaar",
         :license     => "ASL 2.0",
         :version     => "4.3.0",
         :url         => "https://github.com/puppetlabs/mcollective-package-agent",
         :timeout     => 180
requires :mcollective => "2.2.1"
["install", "update", "uninstall", "purge"].each do |act|
  action act, :description => "#{act.capitalize} a package" do
    input :package,
          :prompt      => "Package Name",
          :description => "Package to #{act}",
          :type        => :string,
          :validation  => :shellsafe,
          :optional    => false,
          :maxlength   => 90
    output :output,
           :description => "Output from the package manager",
           :display_as  => "Output"
    output :epoch,
           :description => "Package epoch number",
           :display_as  => "Epoch"
    ...
MCollective Agent - The Components
The Agent Plugin
Installed on all MCollective servers
Uses the DDL for metadata & initialization
Defines Agent Actions
Action : individual tasks the agent can do
MCollective Agent - The Components
The Client
Installed only on MCollective clients
Provides access to agents and actions
Also uses the DDL
eg. input validation....
Clients - Agents - DDL are strongly coupled
MCollective Client Help[vagrant@johan ~]$ mco help plugin package
MCollective Plugin Application
Usage: mco plugin package [options] <directory>
       mco plugin info <directory>
       mco plugin doc <plugin>
       mco plugin doc <type/plugin>
       mco plugin generate agent <pluginname> [actions=val,val]
       mco plugin generate data <pluginname> [outputs=val,val]
info    : Display plugin information including package details.
package : Create all available plugin packages.
doc     : Display documentation for a specific plugin.
Application Options
  -n, --name NAME                Plugin name
      --postinstall POSTINSTALL  Post install script
      --preinstall PREINSTALL    Pre install script
      --revision REVISION        Revision number
  ....
  -h, --help                     Display this screen
The Marionette Collective 2.5.2
[vagrant@johan ~]$
And What About Tomorrow
This is not the end,
Just the beginning
Delve much deeper into MCollective
Read, Read and Read even more
Experiment as much as you can
Secure your MCollective Infrastructure
Authentication connector
Tuning your ActiveMQ
Puppetlabs Docs on ActiveMQ & MCollective
Manage your MCollective infrastructure with Puppet
Puppetlabs MCollective Module on the Forge
Learning MCollective puppet module
Great for getting more insight into managing MCollective with Puppet
References
Wikipedia - Orchestration (computing)
PuppetLabs MCollective online docs
Introduction to orchestration using MCollective - Pieter Loubser
Introduction to MCollective - R.I. Pienaar
MCollective Installed. And now ? - Thomas Gelf
Learning MCollective - Jo Rhet (O'Reilly)
This Presentation on Github
??? Questions ???
Thank You For Attending
- - -
Thanks go to our Host for Tonight
Telenet/Hostbasket
Do not forget the coolest T-Shirt