SLES9 AutoYaST example Whitepaper Report
prepared for
Customer
deploying_suse_linux_using_autoyast v1-24.pdf
Deploying SUSE Linux in the data center
September 30, 2007
www.novell.com
Disclaimer Novell, Inc. makes no representations or warranties with respect to the contents or use of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose.
Trademarks Novell is a registered trademark of Novell, Inc. in the United States and other countries.
* All third-party trademarks are property of their respective owner.
Copyright 2007 Novell, Inc. All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the express written consent of Novell, Inc.
Novell, Inc., 1800 South Novell Place, Provo, UT 84606, USA
Novell Nederland BV, Consulting Services Netherlands, Barbizonlaan 25, Capelle aan den IJssel
PO 85024
3009 MA Rotterdam
Netherlands
Prepared By Robert Zondervan
SLES9 AutoYaST example Whitepaper Report
February, 2006
Statement of Work: NL-31xx-xxx
Customer Master Agreement Number: Client nr
Consultants: Robert Zondervan, Yan Fitterer
Revision History

Version  Date              Editor            Revisions
0.1-0.8  10 Jan – 4 Feb 06 Robert Zondervan  Continuing progress reports
0.84     6 February 2006   Yan Fitterer      Checked content and English
0.9      6 February 2006   Robert Zondervan  Added Yan's revisions
1.0      11 February 2006  Robert Zondervan  Added SAN boot
1.1      12 February 2006  Robert Zondervan  Removed customer information
1.1      15 February 2006  Robert Zondervan  Added Rules and Classes summary
1.2      28 April 2006     Robert Zondervan  Added LDAP server screen dumps
1.23     18 June 2007      Robert Zondervan  Correction to LDAP in autoyast xml
1.24     30 Sep 2007       Robert Zondervan  Added comment section in autoyast xml
Reviewer Record

Version  Date             Reviewer's Name  Reviewer's Role (Peer/Team Leader/Project Manager)
0.1      10 January 2006  Yan Fitterer     Peer
0.22     12 January 2006  Yan Fitterer     Peer
0.4      25 January 2006  Yan Fitterer     Peer
0.5      26 January 2006  Yan Fitterer     Peer
0.7      1 February 2006  Yan Fitterer     Peer
0.8      4 February 2006  Yan Fitterer     Peer
0.9      6 February 2006  Yan Fitterer     Peer
Final Approval

Version  Date  Approved By  Approver's Role (Peer/Team Leader/Project Manager)
NA       NA    NA           NA
Contents

1 Executive Summary.................................................................1
General design...........................................................................1
2 Document introduction.............................................................2
First four weeks..........................................................................2
3 Deployment Infrastructure ........................................................3
Hardware..................................................................................3
IBM Blade Center...................................................3
HP Blade Center....................................................5
IBM Blade Center issues...........................................7
HP Blade Center issues............................................8
4 AutoYaST..............................................................................9
Build Servers........................................................9
RDP...................................................................9
ZLM...................................................................9
AutoYaST............................................................9
GRUB.................................................................9
Mapping Components to Servers.....................................................10
AutoYaST Control System.............................................................11
Rules and Classes.................................................11
Merging.............................................................11
AutoYaST Server Setup................................................................14
Partitioning and RAID setup..........................................................14
AutoYaST Server installation settings...............................................18
AutoYaST server Software.......................................18
Post Server installation settings................................18
AutoYaST Post Server Setup..........................................................19
Create an extra YaST installation source...........................................21
Troubleshoot the extra installation source...................22
Add an extra package to the installation source............22
Manual installation mode using a network source ................................22
Phase 1 linuxrc....................................................22
Phase 2 YaST.......................................................23
Example manual installation....................................23
Automated installation mode........................................................24
AutoYaST template build.............................................................25
Partitioning scheme..............................................25
Changing the partition size......................................27
Changes to the template........................................27
Adding a script....................................................28
Checking the XML template syntax............................28
5 Updates and patches ..............................................................29
IP address for the YOU server........................................................29
Internet access.........................................................................30
YOU server setup.......................................................................30
Change the YOU package directory............................................31
YOU client setup.......................................................................31
Deploying patches......................................................................32
ZENworks Linux Management server...........................................33
6 Monitoring ..........................................................................34
Nagios server and plug-in setup......................................................34
Nagios configuration...................................................................35
Host Status and Service Status..................................35
Customer Nagios actions.........................................36
Starting and stopping the Nagios daemon..........................................37
7 LDAP authentication...............................................................38
LDAP configuration on the server....................................................38
Create a certificate....................................................................41
LDAP configuration on the clients...................................................46
Create LDAP Users and Groups.......................................................46
Test the LDAP connection.............................................................47
8 Connecting to the SAN............................................................48
EVMS......................................................................................49
SAN storage status checking commands............................................53
SAN boot.................................................................................54
Changing the blade BIOS.........................................54
Changing the Fiber Channel Adapter BIOS....................56
9 Conclusion...........................................................................60
Contact List............................................................................61
Appendix...............................................................................62
AutoYaST template 1..................................................................62
AutoYaST template 2..................................................................69
AutoYaST template 3..................................................................69
Customized files........................................................................69
/etc/init.d/boot.local on the AutoYaST server..............69
/etc/apache2/conf.d/inst_server.conf.......................70
Nagios custom setup files.............................................................71
nagios.cfg..........................................................71
checkcommands.cfg..............................................72
cgi.cfg..............................................................73
timeperiods.cfg...................................................73
services.cfg........................................................74
misccommands.cfg...............................................75
resource.cfg.......................................................76
hostgroups.cfg.....................................................76
escalations.cfg....................................................76
contactgroups.cfg.................................................76
dependencies.cfg.................................................77
hosts.cfg............................................................77
contacts.cfg.......................................................78
Installation sources....................................................................80
Building the installation source tree with integrated Service Pack.......80
Installation source tip............................................82
Sharing the installation source with Apache.................83
Installation source on client....................................84
Extras....................................................................................84
Supported modules...............................................84
Installing an RPM plus dependencies from the command line.............84
Checking and setting the NIC speed...........................84
SLES9 in VMware..................................................85
References.........................................................85
Table of Figures

Illustration 1: IBM BladeCenter Management Module............................4
Illustration 2: HP iLO Blade Module.................................................5
Illustration 3: Build System Components.........................................10
Illustration 4: Components to Servers mapping.................................10
Illustration 5: Rules and Classes Merging.........................................13
Illustration 6: Screen dump of the correct hardware RAID configuration .15
Illustration 7: Example installation source file called order..................21
Illustration 8: Example installation source file called instorder.............21
Illustration 9: Example directory.yast file.......................................21
Illustration 10: Example products file............................................21
Illustration 11: Example extra installation source tree.......................22
Illustration 12: Partitioning part of the AutoYaST XML answer file.........27
Illustration 13: YOU server configuration file /etc/apache2/conf.d/you_server.conf............................................31
Illustration 14: YOU client configuration in the XML file......................31
Illustration 15: Available Nagios packages in YaST.............................34
Illustration 16: Automatically selected Nagios dependency packages......34
Illustration 17: Example Nagios monitor screen................................35
Illustration 18: LDAP server enabling.............................................38
Illustration 19: LDAP server TLS Settings........................................39
Illustration 21: LDAP server Import Certificate.................................40
Illustration 22: LDAP server database creation.................................40
Illustration 23: CA Management <Alt-C>.........................................41
Illustration 24: New Root CA.......................................................41
Illustration 25: Root CA password.................................................42
Illustration 26: Root CA creation summary......................................42
Illustration 27: Enter CA.............................................................43
Illustration 28: Root CA..............................................................43
Illustration 29: Add Server Certificate............................................44
Illustration 30: LDAP Certificate creation........................................44
Illustration 31: LDAP Certificate summary.......................................45
Illustration 32: Finish LDAP certificate...........................................45
Illustration 33: Used LDAP client configuration in the XML control file....46
Illustration 34: SAN and LUN architecture.......................................48
Illustration 35: EVMS Volumes......................................................50
Illustration 36: EVMS Regions.......................................................50
Illustration 37: EVMS Containers...................................................51
Illustration 38: EVMS DOS Segments..............................................51
Illustration 39: EVMS Disks..........................................................52
Illustration 40: EVMS Plugins.......................................................52
Illustration 41: Output of multipath -l during a failed path test.............54
Illustration 42: <F9> BIOS...........................................................55
Illustration 43: Standard Boot Order (IPL).......................................55
Illustration 44: Boot Controller Order............................................56
Illustration 45: QLogic <Ctrl-Q> Configuration Settings.......................57
Illustration 46: QLogic Host Adapter Settings...................................58
Illustration 47: QLogic Selectable Boot Options................................59
Illustration 48: /etc/init.d/boot.local on the AutoYaST server...............70
Illustration 49: /etc/apache2/conf.d/inst_server.conf........................70
Illustration 50: Example installation sources....................................80
Illustration 51: Example installation source lines on client..................84
Table of Tables

Table: IBM HS20 8843 Blades on 172.21.101.62.................................4
Table: HP BL20 pG3 Blades...........................................................6
Table: Zones, subnetmasks and gateways, dns 171.21.99.200...............6
Table: Partitioning scheme on the AutoYaST server...........................17
Table: File systems resizing on line/off line.....................................27
Table: Contact list....................................................................61
1 Executive Summary
Customer requested Novell Consulting to deploy three applications in the SUSE Linux environment. These applications are:
• Web application 1
• Web application 2
• Web application 3
General design
The project is owned by Customer, and Customer has requested Novell Consulting to work on the following tasks.
Deployment Infrastructure:
• Determining and creating a core build
• Setting up the AutoYaST installation server (YaST: Yet another Setup Tool)
• Creating AutoYaST templates for the applications
Updates and patches:
• Setting up a YOU server for updating the servers
Monitoring:
• Setting up open source Nagios for operating system and hardware monitoring
Extras:
• Centralized LDAP authentication and client deployment
• EVMS Mirrored and multipathed SAN disks configuration
• Diskless boot from the SAN without using PXE
High availability design and integration into the existing management infrastructure are not part of this stage of the Quick Hosting Proof of Concept.
The optional integration and use of the Rapid Deployment Pack and of AutoYaST Rules and Classes are briefly introduced.
SLES9 AutoYaST example - Whitepaper Report. Revision: September 30, 2007, deploying_suse_linux_using_autoyast v1-24.pdf
2 Document introduction
This document describes the main steps of the first four weeks of the Deploying SUSE Linux project.
First four weeks
The first four weeks will be used to set up the general Linux infrastructure environment and to bring Web application 1 live. In the fourth week we plan to start building the Web application 2 instance, including Apache, Tomcat and JBoss.
2 Novell Consulting Example
3 Deployment Infrastructure
The AutoYaST server is usually used to perform 'scripted' deployments with a minimal OS build, common for all hosts. Updates and applications can be deployed using an update server such as YaST Online Update (YOU) or ZENworks Linux Management.
Customer wants three Linux builds in this stage of the Linux deployment:
1. OS + Apache
2. OS + Apache + Tomcat
3. OS + Apache + Tomcat + JBoss
Nagios will be included in the OS build to provide monitoring functionality. Customer requested that the internal disk(s) be under Logical Volume Management (LVM).
Hardware
The two blade servers (one IBM and one HP) are located in the secure data center. The scope of the POC includes the hardware platform selection for Quick Hosting. Novell personnel have no physical access to the data center, so we use remote control of the blade center, the blades and the hosts. The remote control of the blade center uses a Java-based browser on the client to control the blade center and the inserted blades. Once the installations are finished, we use Secure Shell (SSH) for remote administration of the hosts.
IBM Blade Center
Blade number four on the IBM blade center (64-bit Xeon CPU, 3.2 GHz, 2 GB memory, IP 172.21.101.62) is the AutoYaST server. The IBM HTTP BladeCenter Management Console can make any blade the owner of the media tray (CD-ROM). We insert and leave the SLES9 SP3 64-bit boot CD in the media tray and can boot any blade from it at any time. For security reasons and because of firewall limitations, PXE boot is not allowed.
HS20  Purpose              CPU nr  Mem [GB]  Zone               Hostname (.is.cuscorp.net)  IP address
1     Application 2        1       4         ZF1-AE SEG         cusFBGH                     171.21.67.32
2     Application 3        2       8         DMZ2               cusFBCY                     171.21.251.202
3     Installation tests   2       4         ZF1-AE SEG         cusFBH5                     171.21.67.33
4     AutoYaST+YOU+Nagios  1       2         ZF1-AE SEG         cusFBES / cusFBL1           171.21.67.30, alias: 67.37 (YOU)
5     former YOU           1       2         ZF1-AE SEG         cusFBFV                     171.21.67.31
      Application 1                          DMZ intern 'MENS'                              171.21.
      Application 1                          DMZ extern 'MENS'                              171.21.
      Application 3                          DMZ2                                           171.21.
Table: IBM HS20 8843 Blades on 172.21.101.62
The IBM blades each have two NICs; eth0 was used. ZF1-AE SEG: subnet mask 255.255.255.128, gateway 171.21.67.126, DNS 171.21.99.200. DMZ2 is on 100 Mbps, the rest on 1 Gbps.
Illustration 1: IBM BladeCenter Management Module
HP Blade Center
The HP Blade Center uses one extra IP address per BL20 pG3 blade for remote control over HTTP. This IP address is the integrated Lights-Out (iLO) address.
Unlike the IBM blade center, the HP blade center, on 172.21.101.65, does not have a CD-ROM drive. The iLO HTTP management connection can connect to a local CD-ROM drive or an ISO file on the controlling workstation. The SLES9 SP3 ISO file can be obtained from the installation source tree on the AutoYaST server, cusfbes, in the directory /data1/instserver/isos/sles9/x86_64/.
The ISO files are also shared by the Apache server and are available from a browser at the following URL: http://suse.cus.nl/sles_isos/
Each HP blade has four NICs; eth0 was used.
Illustration 2: HP iLO Blade Module
BL20  Purpose and iLO address  CPU   Mem [GB]  Zone        Hostname (.is.cuscorp.net)  IP address
1     cusshop, 172.21.105.149  AMD   2         ZF1-AE SEG  cusfbj7                     171.21.67.35
2     JBoss, 172.21.105.150    XEON  4         ZF1-AE SEG  cusfbk6                     171.21.67.36
Table: HP BL20 pG3 Blades
zone             subnetmask       default gateway  vlan
ZF1-AE SEG       255.255.255.128  171.21.67.126    69
DMZ2             255.255.255.192  171.21.251.193
DMZ intern mens  255.255.255.192  171.21.60.???
DMZ extern mens  255.255.255.12   172.21.122.12
Table: Zones, subnetmasks and gateways, dns 171.21.99.200
IBM Blade Center issues
Issue 1
IBM blade 1 (4 GB) occasionally freezes during POST, just after the Broadcom firmware and before the LSI firmware. Blade 2 boots correctly. The first blade uses firmware 1.05; the fourth uses 1.03.
Issue 2
This issue concerns the IBM blades in general. The remote control session regularly loses keyboard input or repeats keys. This happens from Windows IE with Java 1.4.2 and with Firefox on Linux.
Selecting the <Ctrl> and <Alt> buttons and then pressing the <spacebar> usually reconnects the keyboard to the remote control session. On occasion, a full restart of the browser was necessary.
Issue 3
The Remote Disk feature did not show any options.
Solution: The workstation needs write permissions to a disk and the ability to dynamically register a Java DLL. It did not work from a Knoppix CD or an NT4 workstation, but it does work on Windows XP.
Issue 4
The Ethernet ports on the IBM blades connect at 1 Gbps only; they are not capable of connecting to 100 Mbps or 10 Mbps ports, while some (DMZ) switches are only capable of 100 Mbps. This was confirmed by the IBM service technician (Roel), who was on-site to resolve the issues. There is no quick and cheap solution at this moment for the DMZ blades, but the AutoYaST blade can (temporarily) use 1 Gbps (set up by Ron).
Action: Customer has to decide on the implications of this restriction with regard to Quick Hosting. Issue 4 prevented the use of blades where Customer could not provide 1 Gbps ports (IBM blade 2 in the DMZ2 zone).
Issue 5
The hardware RAID configuration could not be set on blades 3 and 5 (and possibly on other blades as well). The symptom is that only one disk could be added to the array; the second disk did not have the Yes option in the “Array Disk” column. Both disks could be used as independent disks.
Solution: After removing the RAID configuration, saving, and rebooting, the RAID configuration could be set up correctly.
Issue 6
During the network setup stage of the installation process (i.e. initial configuration of the network interface, to enable retrieval of install system and software packages), the blades fail to assign the IP address.
Possible cause: The ports of the Cisco Systems switches go briefly off-line just after the link is initially brought up. Novell has already encountered this issue with other customers running HP hardware on Cisco switches. It would appear that trunking should be switched off and portfast enabled.
During the POC we encountered the same issue and solution for the HP blades.
Solution: Customer used the following commands on the Cisco switch:
set spantree portfast port(s) enable
set port channel portspec mode off
HP Blade Center issues
Issue 1
During installation of blade 1 from a Customer NT4 workstation remote control connection was lost and could not be restored. Connection to the iLO could not be established either.
Solution: A manual physical reset of the blade did not solve the problem. The blade was confirmed faulty and had to be replaced.
4 AutoYaST
System Components
Build Servers
Build servers are at the core of the build system. Each build unit is conceptually made up of an AutoYaST Control System server and an AutoYaST Installation Server. An optional ZLM or RDP server can be used. The RDP server is hosted in a Windows 2003 environment, whilst the two AutoYaST and ZLM servers can be hosted on a single SLES9 SPx server.
RDP
Novell Consulting integrated the Altiris Rapid Deployment Pack (RDP) for another customer. The RDP system provided the user interface to the build process via the “Deployment Server Console”. Each target server is identified by a “computer” object in the RDP console, and build operations are represented by “jobs”. The specifics of the build are chosen by adjusting the configuration of the jobs.
ZLM
A ZENworks Linux Management server (ZLM) can integrate scheduled 'work to do', for different Linux flavors and groups, using centralized administration, such as:
● PXE enabled AutoYaST and Kickstart installations
● Imaging
● Application package installations
● Updates (internal 'Satellite Server' used as subscription channel)
● Desktop Policy enforcements
● Remote scripts
● Remote Control
● Inventory Management
AutoYaST
AutoYaST is the SLES scripted build engine. All computing processes are run directly on the build target. The build configuration is retrieved from the Control Server, and the installation media from the Installation Server. All configuration is held in a set of XML files, and the final configuration of each build is computed by a complex merge process.
GRUB
GRUB is a boot manager. When AutoYaST integration with RDP is needed, GRUB provides the link between RDP and AutoYaST. The RDP server images and configures the target system with a temporary GRUB boot partition, which kicks off the AutoYaST process.
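A temporary GRUB entry of the kind described could look roughly as follows. This is a sketch only: the kernel and initrd paths, the server name "buildserver" and the autoyast URL are assumptions for illustration, not taken from an actual RDP setup.

```shell
# Hypothetical GRUB (menu.lst) entry that kicks off an AutoYaST build.
# All paths and the buildserver URLs below are illustrative only.
# title AutoYaST install
#     root (hd0,1)
#     kernel /linux install=http://buildserver/sles9/ autoyast=http://buildserver/config/sles9sp2/
#     initrd /initrd
```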
Mapping Components to Servers
The Control Server and Installation Server are simple HTTP repositories serving static content. The Control Server holds the build configuration system, which is a set of XML files specifying the details of the build. The Installation Server is a repository containing the contents of the SLES9 installation media.
The ZLM server can also be scaled to more than one server, and can be integrated with the SLES9 AutoYaST server(s). The ZLM server can provide PXE services integrated with ZENworks for Desktops.
A Windows 2003 system can be configured when integration with RDP is required; the RDP server provides PXE services and management of the various RDP processes.
Illustration 3: Build System Components
Illustration 4: Components to Servers mapping
AutoYaST Control System
Rules and Classes
The Control System can use two advanced AutoYaST features:
● Rules
● Classes
Classes are used to achieve the design goals of “minimal system” and “manageability”. Using classes where appropriate ensures that configuration data is not duplicated unnecessarily.
Rules are used mainly for future expansion, possibly as part of scaling into live deployment (the “scalability” design principle). Rules also provide a structured way to customize the build according to hardware parameters.
Use of rules by the AutoYaST process is triggered by giving the “autoyast” kernel command line parameter a directory URL, as opposed to a file URL:
http://172.26.248.155/config/sles9sp2/
Appending a forward slash ensures the URL is treated as pointing to a directory. When AutoYaST detects a directory URL, it will try to retrieve the rules.xml file from a rules subdirectory and apply the rules-handling logic to it:
http://172.26.248.155/config/sles9sp2/rules/rules.xml
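On the boot prompt, the difference between the two URL forms might look as follows. This is a sketch: the IP address and path follow the example URL above, and autoinst.xml is an assumed file name, not one from the POC.

```shell
# Kernel command-line sketches; not executable as-is.
# File URL - a single control file, no rules processing:
#   autoyast=http://172.26.248.155/config/sles9sp2/autoinst.xml
# Directory URL (note trailing slash) - triggers rules/rules.xml handling:
#   autoyast=http://172.26.248.155/config/sles9sp2/
```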
Novell Consulting recommends NOT using YaST's AutoYaST module to edit the live XML control files. The module is very useful for identifying the specific entities and values for any particular configuration parameter, but it does not currently support rules, and classes are only partly supported.
The control XML files should NOT be edited with other specialized XML editors either, as the order in which entities appear in the file is very important, and XML editors can reorder the entities in the edited files. The control XML files should be edited with a standard text editor and validated to ensure the XML is well-formed. The xmllint Linux tool is useful for that purpose.
It is not useful to validate the XML control files against their DTD, as the DTD is slightly out of date on SLES9, and files that are invalid when checked for DTD conformance can be correctly parsed and used by the AutoYaST engine.
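A quick well-formedness check with xmllint might look like this. The file name and contents are illustrative only, not a real control file from the POC:

```shell
# Create a minimal throwaway profile and check that it is well-formed XML.
# File name and contents are illustrative only.
cat > /tmp/example-profile.xml <<'EOF'
<?xml version="1.0"?>
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <install/>
</profile>
EOF
# --noout suppresses output; xmllint exits non-zero on a parse error.
xmllint --noout /tmp/example-profile.xml && echo "well-formed"
```

Note that xmllint only checks that the XML is well-formed here; as explained above, DTD validation is deliberately skipped.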
Merging
The merging and processing of rules and classes is broadly a three-step process:
1. Process Rules
2. Merge Profiles
3. Merge Classes
When processing the rules files, AutoYaST evaluates each <rule> entry and verifies if the condition matches. If it does, the file referred to by the <profile> tag is selected for merging (added to the “merge stack”). The process is repeated for each rule.
The end result is a list of profiles (“merge stack”) that will go through the first merge process.
The stack of profiles to be merged is processed from first to last, in the order they appeared in the rules file. The second file is merged into the first, then each following file in the stack is merged with the result of the previous merge.
The result of the profiles merge process will then be checked for classes. If any classes are found, a new merge list is compiled, and a new merge process is initiated. This new merge process is identical to the profiles merge process.
For any single entity that is included in two (or more) merged files the final value is the value in the last file to be merged.
For list values (partitions, packages, users, etc.), the merge process does a positional replacement (as opposed to an inclusive merge or a straight replacement). Because of this, it is not possible to have list entries merged.
For all such entities, care should be taken to ensure they appear in one file only in the merge stacks.
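The "last merged file wins" behaviour for single entities can be sketched with a much-simplified stand-in, where plain key=value files take the place of XML profiles (the real AutoYaST merge operates on XML trees, not flat files):

```shell
# Simplified illustration only: key=value pairs stand in for XML entities.
printf 'timezone=UTC\nkeyboard=us\n' > /tmp/profile_a
printf 'timezone=Europe/Amsterdam\n' > /tmp/profile_b
# Merge first-to-last: a later file's value overrides an earlier one
# for the same key, while keys unique to either file are kept.
cat /tmp/profile_a /tmp/profile_b \
  | awk -F= '{v[$1]=$2} END {for (k in v) print k "=" v[k]}' \
  | sort
```

Running this prints keyboard=us and timezone=Europe/Amsterdam: the timezone from /tmp/profile_b, the last file merged, has replaced the one from /tmp/profile_a.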
It is not necessary to use rules and classes, but they are often used to keep installations manageable and scalable.
Classes can be used throughout the control file structure, but are especially useful in three areas:
● Package selection
● Partitioning of the system disk
● Network configuration
The following diagram gives a visual representation of the rules processing, and merge processes:
Illustration 5: Rules and Classes Merging
AutoYaST Server Setup
The AutoYaST server was built using the following:
• A Customer (Windows) workstation. This machine is used for Internet access and remote access to the blade servers. The IP address of the workstation was 171.21.238.189.
• A Live Linux boot CD (e.g. Knoppix 4 or SUSE 10) to access the AutoYaST and blade servers from the Customer workstation and to connect to a USB hard disk containing the SLES9 SP3 64bit installation source tree.
• The SLES9 SP3 64bit installation source tree software for two purposes:
1. To install the AutoYaST server from a network source instead of inserting 9 CD-ROMs (in the bunker).
2. To be stored on the AutoYaST server after it is installed.
Building the install source tree, which was on the USB disk, is described in the Appendix. The installation source consists of 9 ISO files (6 SLES + 3 SP CDs), which can be downloaded from the Novell Customer Care portal, a directory structure, and a set of text files. The install source tree on the USB disk was copied to an HTTP server (cusfaj4.is.cuscorp.net 171.21.93.87), and the source was also shared from the Customer workstation with Apache. The Apache configuration file is in the Appendix.
(The firewall must accept a connection from the blade to the workstation with the USB disk or to the cusfaj4 HTTP server.)
• An inserted SLES9 SP3 64bit boot CD per blade center.
• Booting the AutoYaST blade from the SP3 CD with the following parameters (or using the <F2> and <F3> keys from the menu):
install=http://171.21.238.189/sles9/ textmode=1
• The used IP address, 171.21.238.189, is from the Apache server (on the Customer workstation using Knoppix Live Linux CD boot) attached to the external USB hard disk with the prepared installation source tree (see Appendix).
• Fixed IP address for the AutoYaST blade 4: 171.21.67.30/25, gateway 171.21.67.126, DNS 171.21.99.200. Because there is no DHCP server, there is a small timeout before linuxrc switches to manual network configuration. The network configuration source has to be confirmed twice, but there are no issues. If the network interface has not been activated by the switch soon enough, the installation source might fail on the first attempt, and a second attempt may be needed.
• The setup of the first server, in this case the AutoYaST server, is done manually using the YaST interface. The setup of extra and new servers is done by an unattended installation using an XML file as input. Changing the setup such as package selection and partitioning is done by changing the XML file.
Partitioning and RAID setup
As per Customer specification, there are 2 hardware mirrored disks on every blade. The hardware mirroring configuration can be altered by pressing <Ctrl-C> to enter
LSI BIOS during the server POST. Press <Enter> to select the only adapter. The current RAID arrays may need to be erased using the <F2> key. If that is the case, the blade requires a reboot before the new array can be created (possibly a bug).
Enter the <RAID Properties> menu and create a new array. The two disks should have the [Array Disk?] field set to Yes (see illustration). Use the <Delete> option to erase the old data and create a multi-disk array. The following illustration shows the configuration provided by the IBM rental company on blade 4:
Illustration 6: Screen dump of the correct hardware RAID configuration
ReiserFS and Ext3 were both used as file systems on the AutoYaST server for demonstration purposes. Customer would prefer a common file system across all their Linux distributions. XFS would be attractive, but it is only packaged, not supported, by the other distributions.
LVM is the chosen volume manager, providing abstraction of the physical storage layout from the file system and allowing online resizing of (Reiser) volumes. EVMS is available, but not (yet) supported by AutoYaST in SLES9 (it is supported in SLE10). The LVM physical extent size is 4M (the default), which limits the maximum logical volume size to 256 GB (65,536 extents of 4 MB).
There is a maximum of four DOS partitions. The four DOS partitions could be all primary or three primaries and one extended DOS partition. The extended partition can have a maximum of 16 logical DOS partitions on SCSI (and a maximum of 63 on IDE).
Only one volume group is required; two volume groups were created for POC demo purposes. There is no advantage in creating more than one volume group where only one disk is available.
The 74GB disk is divided into 4 DOS partitions: 3 primaries (/dev/sda1, sda2, and sda3 holding the rootvg volume group) and the extended partition. The extended partition contains one logical DOS partition, which is consumed by the datavg volume group and divided into logical volumes. There is still free space in the volume group datavg and in the extended partition. The free space can be assigned online to extend the logical volumes.
The drawback of having two volume groups when there is only one disk available is that, to preserve some space allocation flexibility, free space must be left unpartitioned, and DOS partitions must be created and assigned to one or the other of the two volume groups to make use of the free space. Maximum flexibility and ease of administration is achieved where each storage device is assigned to no more than one volume group.
The following partitions were used on the AutoYaST server:
mount point | file system | size | LVM volume group | LVM logical volume | device | comment
n/a swap 2GB - - /dev/sda1 1-2 times RAM. Is only used for potential memory dump, not for paging (memory shortage is not recommended on a production server). The location is at the beginning of the disk, because incorrect MBR rewriting will potentially only corrupt the swap partition. More swap partitions can be added to the system afterwards.
/boot ext3 100MB - - /dev/sda2 Kernel partition. Separate volume to make fsck available after system crash. Must not be in LVM container to prevent boot issues. Ext3 only for testing (Reiser is default).
/ reiser 1GB rootvg rootlv /dev/rootvg/rootlv Requires write permissions at boot time.
/home reiser 1GB rootvg homelv /dev/rootvg/homelv Home directories of all users except root (root uses /root). Is only used for personal settings at login time.
/usr reiser 3GB rootvg usrlv /dev/rootvg/usrlv Applications. Logical volume for future read only SAN purposes. (Historically based on mounting /usr over NFS.)
/tmp reiser 2GB rootvg tmplv /dev/rootvg/tmplv Together with home directory the only location where users can write by default.
/opt reiser 500MB rootvg optlv /dev/rootvg/optlv Application software. Not used by SLES software.
/var reiser 2.4GB rootvg varlv /dev/rootvg/varlv Used for log files and other files growing in size.
/data1 reiser 15GB datavg data1lv /dev/datavg/data1lv Extra volume group for future SAN location.
/srv reiser 1GB datavg srvlv /dev/datavg/srvlv Used by web & ftp servers to store their data.
Table: Partitioning scheme on the AutoYaST server
AutoYaST Server installation settings
The following installation options were changed from the default:
• Timezone Europe, Netherlands.
• Hardware clock: Set to local time instead of UTC.
• Boot manager GRUB instead of LILO, because GRUB has more troubleshooting options such as an interactive prompt. GRUB or LILO could both be the default boot loader depending on the chosen setup during installation.
The package selection for the AutoYaST server was done manually. The other servers will use automatic selection from an unattended answer file (XML). The package selection is based on one of the initial choices (Minimal, Full server, ...) plus the Detailed Selection option where extra packages can be selected.
AutoYaST server Software
The Minimal installation option was selected to help harden the server. This also means no X Window System, and the default run level was 3. Using the Search menu (<Alt-S>) the following packages were added:
• xntp
• xntp-doc
• apache2-worker
• apache2-prefork (required for AppArmor)
• apache2-doc
• rsync
• nagios (5 packages)
• emacs-x11 (vi alternative)
• evms-gui (evms is in the base install, evms-gui provides the graphical interface)
• sysstat
• findutils-locate (On Customer request. updatedb is niced in the cron directory by default)
• mkisofs (On Customer request)
• quota (On Customer request)
• wget (Used by the XML templates to download files and a command line utility to check web availability, connectivity.)
Apart from using the Minimal installation, the following packages were deselected to help harden the system:
• rsh
• finger
Post Server installation settings
The first stage of the installation procedure ends with the copying and installation of all the selected packages (this took about five minutes), followed by a reboot. In the second stage the following configurations were done:
• Activation of USB and the network card(s)
• Root password setup (passwd)
• ISDN card detection (may be skipped)
• It is important to change the default network card configuration. Changing the network card at this point is the preferred method to change the default hostname from linux to the required FQDN, as the server name will be used later in the process to create server certificates. Changing the name after certificate creation would require deploying new certificates to every service already using the old certificate.
• Certificate creation (may be skipped)
• LDAP client configuration
• Unprivileged user account generation: cuspoc/passwd
• Release notes are displayed
• Graphic card detection
AutoYaST Post Server Setup
After installing the server, the following actions were taken to setup the AutoYaST service:
• Copied the 9 SUSE installation ISO files to the server to build the installation source tree. The ISOs were copied to /data1/instserver/isos/sles9/x86_64/
The 9 ISO files are described in the Appendix.
• Copied and checked the md5sums of the ISO files. To verify the integrity of the installation source tree, check the ISOs with the following command from the directory containing them:
md5sum *
The result of the md5sum command has to be identical to the officially published md5sums of the ISOs, which are available on the official Novell download site.
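Checking every image against the published list can also be scripted with `md5sum -c`. This sketch uses a throwaway file and a locally generated checksum list in place of the real ISOs and the official list:

```shell
# Sketch: verify images against a checksum list (the file name and checksum
# list here are placeholders; use the real list from the Novell download site).
mkdir -p /tmp/md5demo && cd /tmp/md5demo
printf 'example payload' > CD1.iso
md5sum CD1.iso > published.md5   # stands in for the published checksum list
md5sum -c published.md5          # reports "CD1.iso: OK" when the copy is intact
```

`md5sum -c` exits non-zero if any file fails verification, which makes it easy to use in scripts.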
• Modified GRUB to enlarge the maximum number of loop devices. This was done by adding max_loop=32 to the kernel line in the GRUB configuration file /boot/grub/menu.lst
• Modified the auto execute file /etc/init.d/boot.local of the server to mount the ISO files in the proper directories in the installation source tree. Another option would have been to add the loop ISO mounts in the file /etc/fstab. Both files are used as common practice for administrators to modify the boot procedure. A third option would have been to create a script in /etc/init.d/ to be able to mount and umount the ISO files in a more or less standard manner. The boot.local file is in the Appendix.
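A boot.local fragment along these lines performs the loop mounts (the actual file is in the Appendix; the ISO file names and mount points below are illustrative assumptions, not the real ones):

```sh
# Fragment in the style of /etc/init.d/boot.local -- loop-mount the product
# and service pack ISOs into the installation source tree.
ISODIR=/data1/instserver/isos/sles9/x86_64
TREE=/data1/instserver/sles9/x86_64/sp3
mount -o loop,ro "$ISODIR/SLES-9-x86-64-CD1.iso" "$TREE/SUSE-SLES-Version-9/CD1"
mount -o loop,ro "$ISODIR/SLES-9-SP-3-x86-64-CD1.iso" "$TREE/SUSE-SLES-9-Service-Pack-Version-3/CD1"
```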
• Created an Apache alias, (/sles9_64) to publish the installation source tree by placing a configuration file, e.g. inst_server.conf in the directory /etc/apache2/conf.d/
The file inst_server.conf is in the Appendix. This file can be automatically created by yast2, Misc, Installation server.
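A sketch of such an alias configuration is shown below; the actual inst_server.conf is in the Appendix, and the document root path here is an assumption based on the source tree layout used in this POC:

```apache
# Sketch of /etc/apache2/conf.d/inst_server.conf (path assumed)
Alias /sles9_64 /data1/instserver/sles9/x86_64/
<Directory /data1/instserver/sles9/x86_64/>
    Options +Indexes +FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
```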
• Created an Apache alias (/config) to publish the AutoYaST XML template files in /data1/confserver/ by placing a configuration file, e.g. conf_server.conf, similar to inst_server.conf, in the same directory /etc/apache2/conf.d/
• Created an Apache alias (/cus) to publish the Customer specific files in /data1/cus/ by placing a configuration file, e.g. cus_server.conf similar to inst_server.conf in the same directory /etc/apache2/conf.d/
• Started the Apache server, e.g.: /etc/init.d/apache2 start
• (To restart the Apache server and reread the configuration file: /etc/init.d/apache2 reload)
• Added Apache to the list of servers started at boot time:
chkconfig --level 35 apache2 on
• Checked that the Apache server automatically starts at boot time:
chkconfig --list | grep -i apache
• Checked that the Apache aliases were correctly created. This can be done from the command line with one of the following commands:
w3m http://cus.suse.nl/sles9_64
wget http://cus.suse.nl/cus
curl http://cus.suse.nl/config
• Changed the installation source of the AutoYaST server to point at its own installation source instead of using the USB disk on the Customer workstation. This can be achieved by yast2, Software, Change Source of Installation.
• Changed the installation source of the AutoYaST server to add an extra RPM repository. The RPM repository is a new subdirectory in the installation tree, where Customer can place their own and extra packages. (See below)
Create an extra YaST installation source
First, the installation source was set up according to the steps detailed in the “YaST Source” section of:
http://portal.suse.com/sdb/en/2004/02/yast_instsrc.html
Secondly, the additional installation source was integrated to the AutoYaST repository. This was done by editing the files order and instorder in the yast/ subdirectory of the root of the installation source tree. The order file has the cus-Packages at the top to indicate to YaST that the cus-Packages have precedence over packages with the same name in the other directories:
/cus-Packages	/cus-Packages
/SUSE-SLES-9-Service-Pack-Version-3/CD1	/SUSE-SLES-9-Service-Pack-Version-3/CD1
/SUSE-SLES-Version-9/CD1	/SUSE-SLES-Version-9/CD1
/SUSE-CORE-Version-9/CD1	/SUSE-CORE-Version-9/CD1
Illustration 7: Example installation source file called order
The instorder file has the cus-Packages at the bottom in order to indicate that the cus packages should be installed after all others (this is required so that the rpm scripts work as required):
/SUSE-SLES-9-Service-Pack-Version-3/CD1
/SUSE-SLES-Version-9/CD1
/SUSE-CORE-Version-9/CD1
/cus-Packages
Illustration 8: Example installation source file called instorder
Thirdly, directory.yast files were created in several directories with the following command: ls -Ah > directory.yast
EXTRA_PROV
RPMS
content
directory.yast
media.1
setup
Illustration 9: Example directory.yast file
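The per-directory creation of directory.yast files can be scripted; the directory list in this sketch is illustrative and should be adapted to the real source tree:

```shell
# Sketch: (re)create directory.yast in each directory that needs one.
mkdir -p /tmp/srctree/media.1 /tmp/srctree/setup/descr
for d in /tmp/srctree /tmp/srctree/media.1 /tmp/srctree/setup/descr; do
    # the redirection creates directory.yast first, so the file lists itself,
    # matching the example above
    ( cd "$d" && ls -Ah > directory.yast )
done
```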
Lastly, the products file was created in the media.1 directory. The file contained a single line:
/ cus-Packages
Illustration 10: Example products file
The following directory tree is an example of the complete extra installation source directory tree. The EXTRA_PROV file is empty:
data1/instserver/sles9/x86_64/sp3/cus-Packages/
data1/instserver/sles9/x86_64/sp3/cus-Packages/RPMS/
data1/instserver/sles9/x86_64/sp3/cus-Packages/RPMS/src/
data1/instserver/sles9/x86_64/sp3/cus-Packages/RPMS/src/jboss4-4.0_jbas.src.rpm
data1/instserver/sles9/x86_64/sp3/cus-Packages/RPMS/i586/
data1/instserver/sles9/x86_64/sp3/cus-Packages/RPMS/i586/jboss4-4.0_jbas.i586.rpm
data1/instserver/sles9/x86_64/sp3/cus-Packages/directory.yast
data1/instserver/sles9/x86_64/sp3/cus-Packages/content
data1/instserver/sles9/x86_64/sp3/cus-Packages/setup/
data1/instserver/sles9/x86_64/sp3/cus-Packages/setup/descr/
data1/instserver/sles9/x86_64/sp3/cus-Packages/setup/descr/directory.yast
data1/instserver/sles9/x86_64/sp3/cus-Packages/setup/descr/packages
data1/instserver/sles9/x86_64/sp3/cus-Packages/setup/descr/packages.DU
data1/instserver/sles9/x86_64/sp3/cus-Packages/setup/descr/packages.en
data1/instserver/sles9/x86_64/sp3/cus-Packages/media.1/
data1/instserver/sles9/x86_64/sp3/cus-Packages/media.1/directory.yast
data1/instserver/sles9/x86_64/sp3/cus-Packages/media.1/media
data1/instserver/sles9/x86_64/sp3/cus-Packages/media.1/products
data1/instserver/sles9/x86_64/sp3/cus-Packages/EXTRA_PROV
Illustration 11: Example extra installation source tree
Troubleshoot the extra installation source
Some troubleshooting of the AutoYaST process in relation to the extra installation source can be done by reading the Apache error log, e.g.:
tail -f /var/log/apache2/error_log
The Apache error log lists all the files that AutoYaST requested but that were not present on the server. Not every file AutoYaST requests is strictly necessary, so some of these errors can be ignored.
Add an extra package to the installation source
After adding extra packages in the RPMS/src/ and/or RPMS/i586/ directories, the following command must be executed to update the required package description tree of the installation source:
create_package_desc -d RPMS -x EXTRA_PROV
The working directory for this command is cus-Packages/
Manual installation mode using a network source
The AutoYaST XML auto answer template file can be generated from an already installed system. It will use the already installed system settings as an example. The test blade is manually installed to generate an example XML template file.
The installation process uses two phases.
Phase 1 linuxrc
Phase 1, after booting from the SP CD1, runs linuxrc. The linuxrc process sets up the installation environment and kicks off YaST, e.g.:
• Install type (cd, network, local)
• Network configuration during installation (not the final system)
• ssh or vnc
• loading of kernel modules
• select YaST environment (XML path)
Phase 2 YaST
Phase 2, after booting from the SP CD1, first retrieves the YaST system from the chosen installation source, then runs YaST, which configures the installed system. During phase 2 an automatic reboot initiates the final stage.
Both phases 1 and 2 have a manual and an automatic mode.
Example manual installation
The next installation example uses the automatic mode for linuxrc and the manual mode for YaST. Boot the blade from the SLES9 SP3 CD1, which is always inserted in the IBM blade center, or mount the local ISO file on the HP blade.
Use the <F2> key to change to Text Mode, the <F3> key to tell the installation program to use an HTTP installation source.
The Boot options line can be used to suppress the questions asked during the installation.
<F3> Server: suse.cus.nl (171.21.67.30)
<F3> Directory: /sles9_64/sp3
Alias sles9_64 is set in /etc/apache2/conf.d/inst_server.conf
Example Boot option:
netsetup=-dhcp
The option netsetup=1 or netsetup=-dhcp will disable the time consuming automatic probing for a DHCP server and will ask for the IP setup immediately.
Alternatively, the following command line can be used to specify the installation source in place of the F3 key:
netsetup=-dhcp install=http://suse.cus.nl/sles9_64/sp3
Automated installation mode
The following steps are required to install a new blade:
• Copy one of the template XML files from /data1/confserver/ on suse.cus.nl and rename the file to hostname.xml.
• Edit the hostname.xml file. Change the hostname and IP settings to reflect that of the system to be installed.
• Connect to the blade center (IBM) or iLO address (HP) of the blade from an up-to-date Java enabled browser.
• Connect the CD-ROM or the SLES9 SP CD1 ISO file to the blade using the remote control feature of the blade center web site interface. *) PXE
• Check if the RAID mirror is configured (IBM: <Ctrl-C>, HP: Read screen).
• Make sure the machine boots from CD. This is default on an empty blade, but not on an installed blade. Use <F12> or change the boot order in the BIOS of the blade if necessary.
• Enter the following kernel parameters at the boot prompt and select Installation in the menu.
If the unattended installation is required, then the following parameters are required on the Boot options command line:
netsetup=-dhcp autoyast=http://suse.cus.nl/config/hostname.xml
Use the <F2> key to change to Text Mode, the <F3> key to tell the installation program to use an HTTP installation source.
The Boot options line can be used to suppress the questions asked during the installation.
<F3> Server: suse.cus.nl (171.21.67.30)
<F3> Directory: /sles9_64/sp3
Alias sles9_64 is set in /etc/apache2/conf.d/inst_server.conf
The boot command line may not exceed 255 characters. The <F2> and <F3> keys effectively add (invisible) command line arguments. The following Boot options command line replaces the need for the <F2> and <F3> keys:
netsetup=-dhcp install=http://suse.cus.nl/sles9_64/sp3 textmode=1 autoyast=http://suse.cus.nl/config/hostname.xml
The AutoYaST server has a DNS alias suse.cus.nl pointing to cusfbes.is.cusnet.corp, because it shortens the boot command line used for automated installations.
*) PXE If PXE boot is required, the following services are needed (Ch. 4.3 Booting from the Network in the SLES9 Administration and Installation Guide):
● Enable TFTP using YaST and the syslinux package
● Enable DHCP with the parameter filename “pxelinux.0”;
AutoYaST template build
Customer's established way of working with automated unattended installations is as follows:
• A script is used to fill in the machine specific entries such as network settings and services to install. This is done by asking about 20 questions.
• A host-specific configuration file is generated by the script.
• At boot time of the new host, the path to the custom host unattended answer file is typed in, together with the host IP, netmask, gateway and optionally a name server.
During this POC we used a similar method, but without the custom script. The script part was done by copying a template and manually changing the host-specific settings in the file. The XML file for every host will be saved on the HTTP installation source on the AutoYaST server, as well as on the target system in /var/adm/autoinstall/cache/
The first build basically includes the minimal operating system plus the Apache server. Any installation option not mentioned in the XML file will use the defaults; the XML is only used for changing the defaults.
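The copy-and-edit step above can be scripted. In this sketch the template name, the placeholder strings inside it, and the target values are all hypothetical; the real templates live in /data1/confserver/:

```shell
# Sketch: generate a host-specific answer file from a template.
mkdir -p /tmp/confserver
cat > /tmp/confserver/template.xml <<'EOF'
<hostname>TEMPLATE_HOSTNAME</hostname>
<ipaddr>TEMPLATE_IP</ipaddr>
EOF
# substitute the host-specific values (names and IP are examples)
sed -e 's/TEMPLATE_HOSTNAME/blade05/' \
    -e 's/TEMPLATE_IP/171.21.67.35/' \
    /tmp/confserver/template.xml > /tmp/confserver/blade05.xml
```

This reproduces the effect of Customer's question-driven script in a single substitution pass.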
Partitioning scheme
We used the following example partitioning scheme in the initial AutoYaST control file:
<partitioning config:type="list">
  <drive>
    <device>/dev/rootvg</device>
    <is_lvm_vg config:type="boolean">true</is_lvm_vg>
    <lvm2 config:type="boolean">true</lvm2>
    <partitions config:type="list">
      <partition>
        <filesystem config:type="symbol">reiser</filesystem>
        <format config:type="boolean">true</format>
        <lv_name>homelv</lv_name>
        <mount>/home</mount>
        <size>1GB</size>
      </partition>
      <partition>
        <filesystem config:type="symbol">reiser</filesystem>
        <format config:type="boolean">true</format>
        <lv_name>optlv</lv_name>
        <mount>/opt</mount>
        <size>512MB</size>
      </partition>
      <partition>
        <filesystem config:type="symbol">reiser</filesystem>
        <format config:type="boolean">true</format>
        <lv_name>rootlv</lv_name>
        <mount>/</mount>
        <size>1GB</size>
      </partition>
      <partition>
        <filesystem config:type="symbol">reiser</filesystem>
        <format config:type="boolean">true</format>
        <lv_name>tmplv</lv_name>
        <mount>/tmp</mount>
        <size>1GB</size>
      </partition>
      <partition>
        <filesystem config:type="symbol">reiser</filesystem>
        <format config:type="boolean">true</format>
        <lv_name>usrlv</lv_name>
        <mount>/usr</mount>
        <size>3GB</size>
      </partition>
      <partition>
        <filesystem config:type="symbol">reiser</filesystem>
        <format config:type="boolean">true</format>
        <lv_name>varlv</lv_name>
        <mount>/var</mount>
        <size>1GB</size>
      </partition>
    </partitions>
    <pesize>4M</pesize>
    <use>all</use>
  </drive>
  <drive>
    <device>/dev/sda</device>
    <partitions config:type="list">
      <partition>
        <filesystem config:type="symbol">swap</filesystem>
        <format config:type="boolean">true</format>
        <mount>swap</mount>
        <partition_id config:type="integer">130</partition_id>
        <size>1GB</size>
      </partition>
      <partition>
        <filesystem config:type="symbol">ext3</filesystem>
        <format config:type="boolean">true</format>
        <mount>/boot</mount>
        <partition_id config:type="integer">131</partition_id>
        <size>100MB</size>
      </partition>
      <partition>
        <lvm_group>rootvg</lvm_group>
        <partition_id config:type="integer">142</partition_id>
        <partition_nr config:type="integer">3</partition_nr>
        <region config:type="list">
          <region_entry config:type="integer">275</region_entry>
          <region_entry config:type="integer">1567</region_entry>
        </region>
        <size>12879986689</size>
      </partition>
    </partitions>
    <use>all</use>
  </drive>
</partitioning>
Illustration 12: Partitioning part of the AutoYaST XML answer file
Changing the partition size
Any new design of the partition schema can easily be applied to new systems by editing the XML file. A complete example XML file used for the builds is in the Appendix.
The preferred method for resizing existing partitions is the menu system yast2, System, LVM.
Resizing from the command line requires two steps. To extend, first grow the volume with lvextend, then grow the file system with resize_reiserfs. Shrinking must be performed in the reverse order: file system first, then the volume.
File system   extend     shrink
ReiserFS      on line    off line
ext3          off line   off line
Table: File systems resizing on line/off line
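As a command-line sketch of the two steps (volume name and sizes are examples; the commands require root and are shown for illustration only):

```sh
# Extend: grow the logical volume first, then the ReiserFS (online)
lvextend -L +500M /dev/rootvg/homelv
resize_reiserfs /dev/rootvg/homelv

# Shrink: reverse order, file system first (off line for ReiserFS), then the volume
umount /home
resize_reiserfs -s -500M /dev/rootvg/homelv
lvreduce -L -500M /dev/rootvg/homelv
mount /home
```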
Changes to the template
Creation of the base XML file is done by yast2, Misc, Autoinstallation, Tools menu, Create Reference Control File. The individual resources to be used to create the base XML file have to be selected. After retrieving the settings, they can be changed in the subsections to the left of the window. The final version of the basic XML settings can be saved via the File menu.
Tip: when changes are needed for a custom template file, the aforementioned method is recommended only as a tool to retrieve the syntax of the changes. The recommended way to edit production AutoYaST control files is with a simple text editor; although edits can be made through the YaST Autoinstallation module, an editor gives better control over the exact changes.
The changed template file 1 is in the Appendix. Some changes that are written into the XML file are:
• Security permissions template is applied
• Encrypted root password
• Exclusion of the yast2-you-server package, because it creates an apache conf file in the conf.d/ subdirectory.
• Addition of scripts
Adding a script
YaST, Misc, Autoinstallation, has a menu to create a pre-script, post-script and a chrooted environment. The scripts are added to the template file, which is printed in the appendix.
The chroot script was used to implement Customer specific requirements. Some objectives of the script are:
• Redirect the output of the script commands to a log file.
• Change the legal herald, the Welcome message, in the /etc/issue, /etc/issue.net and /etc/motd files
• Add a file /.function
• Add authorized keys
• Add the Customer get_description.sh system
Checking the XML template syntax
The following command will check the changed XML file for syntax errors:
xmllint --noout <target.xml>
The command will give no output for a correct XML file.
Extra documentation on creating AutoYaST XML files is available at http://www.suse.de/~ug
5 Updates and patches
The YaST Online Update (YOU) server can be added to the same server as the AutoYaST server. Sometimes this is not done, because the YOU server requires Internet access, which can be considered a security issue in high-security ('paranoid') environments.
IP address for the YOU server
The YOU service is added to the AutoYaST server using its own IP address. This IP address will get Internet access in the firewall rules. The use of a dedicated IP address for YOU will make future movement of the YOU to another server possible without changing the existing firewall rules or the configuration of all existing servers.
We must make sure that YOU uses only its assigned IP address to retrieve patches from the Novell update servers.
This can be achieved by making curl bind to the assigned IP address; curl is used by YOU to fetch the updates from the Internet.
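One way to do this is with curl's interface option in a curlrc file; this sketch assumes YOU's curl invocation honours root's ~/.curlrc, which should be verified in a given setup:

```
# ~/.curlrc entry forcing curl (and thus YOU's downloads) to bind to
# the dedicated YOU address used in this setup
interface = 171.21.67.37
```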
Another method is to add an ip route command to force a specific source IP address to be used for a specific destination. Any script file containing the command can be placed in an auto execute directory. The auto execute directory /etc/sysconfig/network/if-up.d/ is part of the normal ifup command and will execute all script files in this directory at system boot and when the ifup command is issued. The following custom script is used:
The file if-up.d/custom.routes:
#!/bin/bash
ip route add default via 171.21.67.126 src 171.21.67.30
ip route add 63.76.115.0/24 via 171.21.67.126 src \
  171.21.67.37
The file if-down.d/custom.routes used by the ifdown command is not required, because the routes are removed automatically when the interface is taken down.
To check the current routes, the following command can be used:
ip route list
The IP network address 63.76.115.0 belongs to the Novell update servers. The address 171.21.67.126 is the gateway in the ZF1-AE SEG Zone. The files must be executable, e.g.:
chmod +x custom.routes
Internet access
The YOU server requires Internet access (Customer firewall rule) over port 443 to the Novell Update servers, which currently have the following IP addresses:
• you.novell.com 63.76.115.39 (YOU server)
• update.novell.com 63.76.115.15 (ZLM server)
The Novell update servers are currently in the network 63.76.115.0/24. There are no known plans to change the Novell IP addresses, but that is not a 100% guarantee.
The POC Novell Account credentials for the Internet YOU portal account are:
• Userid: cuspoc
• Password: passwd
YOU server setup
The YaST Online Update server is configured by the YaST module you_server.
Because the underlying SLES server used four installation sources (cus, SP3, SLES and CORE) from our installation source tree, we have to remove the extra installation sources for the YaST Online Update module:
• To prevent the YOU server from searching for nonexistent packages and generating a timeout error, we removed the extra Installation Source entries 'cus' and 'SP3' in the YaST module you_server. The YOU server can only retrieve patches for 'CORE' and 'SLES', because those are the only channels available on the Novell update server; the CORE and SLES channels already include every patch from every Service Pack level.
• To prevent the YOU clients from searching for nonexistent packages and generating timeout errors, we created one empty subdirectory under /var/lib/YaST2/you/mnt/ on the YOU server:
i586/update/cus-Custom-Packages/1/patches/directory.3/
We traced this directory name by looking in the Apache2 access_log file.
In the YOU server interface we used the sync now button to download all the patches since September 2004, about 945MB at the time of download. The setup of the download schedule is done by using the YaST interface. Customization of the schedule can be done by changing the crontab file /etc/cron.d/yast2-you-server:
23 20 * * * root /var/lib/YaST2/you/syncfile
After enabling the YOU server by YaST, an Apache instance is automatically created. The HTTP share is available for YOU clients using the following url: http://suse.cus.nl/YOU
# httpd configuration file for YOU server included by httpd.conf:
<IfDefine you_server>
Alias /YOU /var/lib/YaST2/you/mnt/
<Directory /var/lib/YaST2/you/mnt/>
    Options +Indexes
    IndexOptions +NameWidth=*
    Order allow,deny
    Allow from all
</Directory>
</IfDefine>
Illustration 13: YOU server configuration file /etc/apache2/conf.d/you_server.conf
Because this configuration file uses the IfDefine directive, the you_server variable had to be defined. The following line in /etc/sysconfig/apache2 is changed to add the you_server flag:
APACHE_SERVER_FLAGS="inst_server you_server"
Change the YOU package directory
The YOU server always saves the downloaded patches in the directory /var/lib/YaST2/you/mnt/. We moved the patches to the /data1/you/ directory to avoid dealing with two growing volumes. A symbolic link from /var/lib/YaST2/you/mnt/ to the new directory would be one solution, but we chose a bind mount, because some applications do not follow symlinks. This choice is purely to show a less-used but valid approach within the scope of the POC. The following line was added to /etc/fstab:
/data1/you /var/lib/YaST2/you/mnt none bind
YOU client setup
The YOU clients must use the internal YOU server instead of one of the SUSE mirrors. The configuration of the YOU clients requires two steps:
• Disable the Internet search for SUSE mirrors via a change in /etc/sysconfig/onlineupdate. This change is executed by the XML control file during installation:
<sysconfig config:type="list">
  <sysconfig_entry>
    <sysconfig_key>YAST2_LOADFTPSERVER</sysconfig_key>
    <sysconfig_path>/etc/sysconfig/onlineupdate</sysconfig_path>
    <sysconfig_value>no</sysconfig_value>
  </sysconfig_entry>
</sysconfig>
Illustration 14: YOU client configuration in the XML file
• Point to the internal YOU server by changing the file /etc/youservers. This is done by adding a script to the XML template file:
echo 'http://suse.cus.nl/YOU' >> "$chrootbase/etc/youservers" 2>>"$log"
SLES9 AutoYaST example - Whitepaper Report 31Revision: September 30, 2007, deploying_suse_linux_using_autoyast v1-24.pdf
Deploying patches
Patches have the following priorities:
• Security (high)
• Recommended (normal)
• Optional (low)
When a patch is published by Novell, the customer's IT department should investigate whether the patch needs to be installed. If the patch is required, it has to be tested in the various environments before deployment to production. Only after successful testing should the patch be approved for deployment. This safe method is already in use at Customer. Novell sends e-mail alerts to subscribers when patches are published for YOU.
The YOU client can deploy patches in two ways:
• Method 1: The patches for the installed RPMs are installed automatically. This could require a reboot, and there would be a risk of breaking the system.
• Method 2: The patches are manually selected on every host using the YaST Online Update module. This is the preferred method when the YOU server is used. Manual selection enables the installation of only tested and approved patches. The testing and approval of the patches is usually done by the customer's IT department.
Testing, approving and manually selecting and installing patches is the preferred method in a production environment when YOU is used. A patch can be selected from the YaST Online Update menu or from the command line, e.g.:
online_update -S patch-1088
The patch list can be checked by starting a dry run, e.g.:
online_update -sdV
ZENworks Linux Management server
Manually installing tested and approved patches with YOU on many hosts is very time consuming, and therefore not recommended.
One of the features of the Novell ZENworks Linux Management (ZLM) server is centralized automatic patch and RPM deployment to selected hosts.
Patches are deployed using one or more channels, and every host can subscribe to one or more channels. After testing and approving a patch, the IT department loads it into the channel(s) on the ZLM server.
All subscribers to the channel(s) pick up and install the patches automatically, provided they have the original RPM installed and have not already been patched.
The ZLM server is installed on a SUSE Linux Enterprise Server and could also serve as the AutoYaST server.
6 Monitoring
Nagios server and plug-in setup
The following rpm packages are available in the SUSE Linux Enterprise Server 9 SP3 edition via YaST, Software, Install and Remove Software:
The plug-in packages and the Nagios network monitor are installed on every monitored server. The nagios-www monitor packages are required only on the central monitoring host(s). Depending on the base system, the extra required packages are installed automatically.
Illustration 15: Available Nagios packages in YaST
Illustration 16: Automatically selected Nagios dependency packages
Nagios configuration
Documentation can be found at http://nagios.org/docs and at the localhost web site (http://localhost/nagios/).
The main Nagios object definition configuration file is /etc/nagios/nagios.cfg; this file references the other configuration files in the same directory.
The log file is located at /var/log/nagios/nagios.log and the syslogger is enabled by default.
We added the Nagios packages to the build of every AutoYaST template, but in the limited POC setup we only configured and started the Nagios server on the AutoYaST server. This central Nagios server monitored the other servers by checking the hosts with the check_tcp plugin (port 22) and the services with the check_ntp plugin (as an example) over SSH. This complies with the Customer firewall policies used in the DMZs.
The limited setup was chosen to quickly provide Customer with an open source monitoring framework, which can later be tuned and expanded. We added mail warnings for an account called Tivoli. The mail is sent to this account, but is not yet forwarded to a Customer mail address.
In addition, a notify-tivoli command was defined in the misccommands.cfg file, as an example of a possible method of forwarding messages to Tivoli via the wpostemsg Tivoli utility.
Host Status and Service Status
Every monitored host must have at least one host configuration and one service configuration. Every host can have multiple service checks.
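As an illustration of that requirement, a minimal host plus service definition could look as follows. This is a sketch, not the POC's actual configuration: the host name and address are taken from the AutoYaST template in the Appendix, while the contact group and the exact directive set are assumptions.

```
# hosts.cfg — one entry per monitored blade (sketch)
define host {
    host_name             cusfbh5
    alias                 AutoYaST POC blade
    address               171.21.67.33
    check_command         check-host-alive
    max_check_attempts    3
    notification_interval 60
    notification_period   24x7
    notification_options  d,u,r
}

# services.cfg — at least one service check per host (sketch)
define service {
    host_name             cusfbh5
    service_description   SSH
    check_command         check_tcp!22
    max_check_attempts    3
    normal_check_interval 5
    retry_check_interval  1
    check_period          24x7
    notification_interval 60
    notification_period   24x7
    notification_options  w,u,c,r
    contact_groups        admins
}
```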
Illustration 17: Example Nagios monitor screen
Some files in /etc/nagios/ that require configuration before the initial installation works without errors are:
• nagios.cfg
• checkcommands.cfg
• cgi.cfg
• timeperiods.cfg
• services.cfg
• misccommands.cfg
• resource.cfg
• hostgroups.cfg
• escalations.cfg
• contactgroups.cfg
• dependencies.cfg
• hosts.cfg
• contacts.cfg
• ...
Nagios setup can be very time consuming. We chose the approach of disabling all pre-configured entities in the configuration files and then adding entries for the necessary items. The customized Nagios setup files can be found in the Appendix.
To enable successful monitoring via ssh, both the Nagios server and the monitored hosts must be configured to enable silent automated ssh sessions with key-based authentication:
The monitored hosts needed to be 'known hosts' of the Nagios server. The known host keys were manually imported into the /etc/ssh/known_hosts file of the Nagios server. There is no default mechanism for automatically adding new hosts' identities to the Nagios server each time a new system is deployed.
One change was made to the ssh client configuration: in the file /etc/ssh/ssh_config, the GlobalKnownHostsFile option was given the value /etc/ssh/known_hosts.
In addition, the monitored servers must have the public key of the Nagios server. The public and private keys were stored in /data1/nagios.key and /data1/nagios/nagios.key.pub during the base setup of this POC. This key could be distributed together with the cus public authorized_keys in the AutoYaST installation.
Customer Nagios actions
• The GlobalKnownHostsFile /etc/ssh/known_hosts should be (automatically) supplied with all the monitored host keys.
• The Nagios public key should be (automatically) appended to all the .ssh/authorized_keys files
• Nagios can be extended with lots of configuration options.
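A hedged sketch of the key handling: the /tmp path below is only for illustration (the POC stored the pair under /data1/), and ssh-keyscan is our suggestion for automating the known_hosts population, not part of the original setup:

```shell
# Generate a passphrase-less key pair for the automated Nagios checks
ssh-keygen -t rsa -N "" -f /tmp/nagios.key -q

# On the Nagios server: collect a new host's key, e.g.
#   ssh-keyscan cusfbh5 >> /etc/ssh/known_hosts
# On each monitored host: authorize the Nagios public key, e.g.
#   cat nagios.key.pub >> ~nagios/.ssh/authorized_keys
```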
Starting and stopping the Nagios daemon
Nagios is installed on every server, but in our POC setup we only start Nagios on the AutoYaST server. Nagios can be started and stopped using the following command:
/etc/init.d/nagios start | stop | reload
The rcnagios command can do the same, but is SUSE specific.
The configuration file of the monitoring web site http://localhost/nagios/ is /etc/httpd/nagios.conf. Note that the trailing slash (/) in the URL is required, and the site depends on a running Apache:
/etc/init.d/apache2 start | stop | reload
The default security setting 'allow from' of the web site allows only localhost access, but requires no authentication. By default, no monitoring screen can be accessed from the Web interface. We added the authentication requirement by changing the Apache configuration and the Nagios configuration file nagios.cfg. We added a Nagios user called nagiosadmin (password: passwd) for authentication with the following command:
htpasswd2 -c /etc/nagios/httpwd.users nagiosadmin
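The Apache side of the change could look like the fragment below. This is a sketch: the directory /usr/share/nagios is an assumption about where the nagios-www package installs the site; only the AuthUserFile matches the password file created above.

```
<Directory "/usr/share/nagios">
    AuthType Basic
    AuthName "Nagios Access"
    AuthUserFile /etc/nagios/httpwd.users
    Require valid-user
</Directory>
```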
There is no X graphical interface on the server. The web site can be shown with the localhost restriction still intact by using ssh port redirection on the workstation, e.g.:
ssh -L 8000:127.0.0.1:80 -N [email protected]
The local redirected port is 8000. The destination address on the remote machine is 127.0.0.1 port 80. The remote machine is 172.21.67.30. Putty.exe on Windows is able to do similar port redirection. The following URL (including the last /) is used to display the Nagios web site on the workstation using the example SSH port redirection:
http://localhost:8000/nagios/
The Nagios server boot script is placed into the correct run level directory using the chkconfig command.
Surviving a reboot:
chkconfig --level 35 nagios on
chkconfig --level 35 apache2 on
The insserv command has the same effect, but is SUSE specific.
7 LDAP authentication
The creation of a new LDAP server was requested by Customer to provide a central repository of user accounts for the POC, as connecting to the existing Customer LDAP servers was not possible for various operational reasons.
LDAP configuration on the server
We used the following procedure to manually setup the standard SLES9 packaged OpenLDAP server on the AutoYaST server:
• If TLS support is required, the first step is to get a signed or self-signed certificate and private key for the LDAP server. This can be done by requesting one from the official Customer Certificate Authority or by creating one using the SLES9 LDAP server. The passphrase must be disabled, as the LDAP server must be able to start automatically at boot. The creation of the certificate is described in the next section.
• The second step is to install the OpenLDAP server via YaST. The required openldap2 package will be installed automatically. Start yast2, Network Services, LDAP Server. We enabled starting the server without registering with SLP services. Use <Alt>-<C> Configure to enter step 3.
Illustration 18: LDAP server enabling
• The third step is to activate TLS in the Global Settings menu.
Choose Select Certificate in the TLS Settings to import the certificate.
Choose OK and select the Certificate file (.pem), the Key file (.pem) and the CA file (.pem). If the SLES CA certificate from the example is used, the directory is /var/lib/CAM/root-cus_CA (see next figure).
Illustration 19: LDAP server TLS Settings
Illustration 20: LDAP server import
• The fourth step is to create the LDAP database with the Add Database menu; the pre-configured POSIX attributes are added automatically. The following settings were used:
• Base DN: ou=suse,o=cus,c=nl
• Root DN: cn=admin
• Password: passwd
• Append base DN
• Database directory: /var/lib/ldap/cus_LDAP_DB
Make sure that the directory exists before entering OK.
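For example (a sketch; the ownership line assumes slapd runs as the ldap user, as on a stock SLES9, and is skipped if that user does not exist yet):

```shell
# Create the database directory before confirming the YaST dialog
mkdir -p /var/lib/ldap/cus_LDAP_DB
chmod 700 /var/lib/ldap/cus_LDAP_DB
chown ldap:ldap /var/lib/ldap/cus_LDAP_DB 2>/dev/null || true  # assumes slapd runs as 'ldap'
```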
Illustration 21: LDAP server Import Certificate
Illustration 22: LDAP server database creation
Create a certificate
The SLES9 Certificate Authority can be used to create the certificate for the LDAP server. The CA is created during system installation or can be created with the yast2, Security and Users, CA Management.
We used the following settings:
• CA Name: ....cus_CA
• Common Name: cusCA
• Organization: cus
• OU: -
• Loc.: -
• State: -
Illustration 23: CA Management <Alt-C>
Illustration 24: New Root CA
• Next, Password: passwd
• Next
The CA is created in the directory: /var/lib/CAM/root-cus_CA
Illustration 25: Root CA password
Illustration 26: Root CA creation summary
• Select the Enter CA-button
• Choose Certificates, and <Alt>-<A>, Add Server Certificate:
Illustration 27: Enter CA
Illustration 28: Root CA
• Common Name: LDAP_CERT
• Next: Password: passwd, Validity: 3600 days (365 is default), Next and Create
Illustration 29: Add Server Certificate
Illustration 30: LDAP Certificate creation
Select Finish several times to quit the certificate creation.
The created certificate uses a passphrase, which should be removed if the certificate is for the LDAP server daemon. The passphrase was removed with the following command:
openssl rsa -in keyname.key -out ldap.key
Note that you will be asked for the passphrase one last time. The keyname.key file is in the subdirectory keys/ of /var/lib/CAM/root-cus_CA/.
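The round trip can be sketched as follows. The throwaway key and the pass:passwd arguments are only for illustration; -passin merely replaces the interactive passphrase prompt:

```shell
# Create an encrypted throwaway key standing in for keyname.key
openssl genrsa -aes128 -passout pass:passwd -out keyname.key 2048
# Strip the passphrase, as done for the LDAP server key
openssl rsa -in keyname.key -passin pass:passwd -out ldap.key
# Verify: the new key now loads without any passphrase prompt
openssl rsa -in ldap.key -check -noout
```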
Illustration 31: LDAP Certificate summary
Illustration 32: Finish LDAP certificate
LDAP configuration on the clients
The LDAP client is available once the LDAP packages (nss_ldap and pam_ldap) are installed. The LDAP package configuration can be done using YaST, but to enable the automatic creation of local home directories the PAM configuration files have to be changed.
At the request of Customer, the AutoYaST server was not configured as an LDAP client, but we added the automatic configuration of the LDAP client and of the PAM stack in the template XML files for all the other blades:
<ldap>
  <base_config_dn>ou=ldapconfig,dc=suse,dc=cus,dc=nl</base_config_dn>
  <bind_dn>cn=Administrator,dc=suse,dc=cus,dc=nl</bind_dn>
  <create_ldap config:type="boolean">true</create_ldap>
  <file_server config:type="boolean">true</file_server>
  <ldap_domain>dc=suse,dc=cus,dc=nl</ldap_domain>
  <ldap_server>171.21.67.30</ldap_server>
  <ldap_tls config:type="boolean">true</ldap_tls>
  <ldap_v2 config:type="boolean">false</ldap_v2>
  <member_attribute>member</member_attribute>
  <nss_base_group>ou=people,dc=suse,dc=cus,dc=nl</nss_base_group>
  <nss_base_passwd>ou=people,dc=suse,dc=cus,dc=nl</nss_base_passwd>
  <nss_base_shadow>ou=people,dc=suse,dc=cus,dc=nl</nss_base_shadow>
  <pam_password>crypt</pam_password>
  <start_ldap config:type="boolean">true</start_ldap>
</ldap>
Illustration 33: Used LDAP client configuration in the XML control file
The following script was added to the XML file. It enables automatic creation of the local home directory upon login, if the home directory does not exist:
# Configure PAM to create home dirs on LDAP logins
LINE=`egrep -m1 -n '^session' "$chrootbase/etc/pam.d/sshd" | cut -d: -f1` 2>>"$log"
sed -ie "${LINE}i\session required pam_mkhomedir.so skel=/etc/skel umask=0022" "$chrootbase/etc/pam.d/sshd" 2>>"$log"
LINE=`egrep -m1 -n '^session' "$chrootbase/etc/pam.d/login" | cut -d: -f1` 2>>"$log"
sed -ie "${LINE}i\session required pam_mkhomedir.so skel=/etc/skel umask=0022" "$chrootbase/etc/pam.d/login" 2>>"$log"
Create LDAP Users and Groups
LDAP users and groups can be created from every SLES9 host running the LDAP client. YaST can be used to create users via yast2, Security and Users, Edit and Create Users, but in the Create Users and Groups module the Set Filter has to be set to LDAP.
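For scripted user creation, an LDIF file can be fed to ldapadd instead. The entry below is a hypothetical example under the base DN used for the POC database; the attribute values are ours, not from the POC:

```shell
# Hypothetical posixAccount entry under the POC base DN
cat > testuser.ldif <<'EOF'
dn: uid=testuser,ou=people,ou=suse,o=cus,c=nl
objectClass: inetOrgPerson
objectClass: posixAccount
uid: testuser
cn: Test User
sn: User
uidNumber: 1001
gidNumber: 100
homeDirectory: /home/testuser
loginShell: /bin/bash
EOF
# Load it with simple authentication as the Root DN, e.g.:
#   ldapadd -x -D cn=admin,ou=suse,o=cus,c=nl -W -f testuser.ldif
```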
Test the LDAP connection
If LDAP is running and configured correctly, the following command shows all the LDAP objects (e.g. users):
ldapsearch -x -b ou=suse,o=cus,c=nl
The -x option enables simple authentication instead of SASL.
The -ZZ option requires the StartTLS (Transport Layer Security) operation to succeed; the command must show the same results:
ldapsearch -x -b ou=suse,o=cus,c=nl -ZZ
8 Connecting to the SAN
Documentation about SLES9 and multipathing can be found in:
http://portal.suse.com/sdb/en/2005/04/sles_multipathing.html
All the blades have two QLogic Fibre Channel HBAs. One HP blade (nr 2) is patched to the IBM SAN through both adapters. The SAN storage is configured with 4 LUNs of 10GB each: 2 LUNs on location A (PRIME) and 2 LUNs on location B (DUPE). The command scsidev lists a total of 8 LUNs, because each LUN is visible through both paths, but the goal is to define 2 volumes. Each volume should be a software mirrored RAID 1 set, with one LUN on location A and one on location B.
This setup requires multipathing functionality to be available. The SLES9 built-in driver qla2300 was removed and reloaded to make the new SCSI devices visible:
rmmod qla2300
modprobe qla2300
The SLES9 multipath-tools package was added to the blade using:
yast2 sw_single multipath-tools
The following command activates multipathing:
/etc/init.d/boot.multipath start
/etc/init.d/multipathd start
To survive a reboot:
chkconfig --level 35 boot.multipath on
Illustration 34: SAN and LUN architecture
chkconfig --level 35 multipathd on
The following command shows the 4 LUNs including their WWIDs, e.g.:
multipath -v2 -d
The WWIDs were easily matched with the list provided by the Customer SAN administrator.
ll /dev/disk/by-name/ also lists the 4 LUNs (as do by-path/ and by-id/).
modinfo qla2300 shows all the available module parameters.
EVMS
At this point EVMS can create the mirrors, the LVM containers, and the logical volumes. The evmsgui command is very helpful here; it also shows the WWIDs, which again were easily matched with the list provided by the Customer SAN administrator.
The following steps created the two required mirrored sets from the evmsgui Actions menu, although many other designs are possible:
• Create a Region, select the MD RAID 1 Region Manager (not RAID 0, 4/5, or Multipath), and select 2 disks with the correct WWIDs identified from the SAN administrator's list. Disk set md/md0 is created and operational. Repeat this step for the second pair of disks to create md/md1.
• Create a Container, Select LVM Region Manager, Select both md/md0 and md/md1, Name: EVMS_LVM_CONTAINER1 (20GB).
• Create another Region, Select LVM Region Manager, Select Storage object = 20 GB lvm/EVMS_LVM_CONTAINER1/Freespace, LVM Region Name: EVMS_LV1, size: 10 G, nr of stripes: 1.
• Create another Region, Select LVM Region Manager, Select Storage object = 20 GB lvm/EVMS_LVM_CONTAINER1/Freespace, LVM Region Name: EVMS_LV2, size: 10 G, nr of stripes: 1.
• Create EVMS Volume, EVMS_VOL1.
• Create EVMS Volume, EVMS_VOL2.
• FileSystem, Make, ReiserFS (XFS, Ext2/3, Swap, JFS), Select /dev/evms/EVMS_VOL1, label: VOLLABEL1, Make.
• FileSystem, Make, ReiserFS (XFS, Ext2/3, Swap, JFS), Select /dev/evms/EVMS_VOL2, label: VOLLABEL2, Make.
• Save.
• FileSystem, Mount, Select volume and mount point directory /data2.
• FileSystem, Mount, Select volume and mount point directory /data3.
The result of the evmsgui steps is shown in the next illustrations.
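If the two volumes should also be mounted at boot outside of evmsgui, fstab entries along these lines could be used (a sketch only; the POC itself mounted the volumes from evmsgui, and this assumes the EVMS boot scripts have activated the volumes first):

```
/dev/evms/EVMS_VOL1  /data2  reiserfs  defaults  0 2
/dev/evms/EVMS_VOL2  /data3  reiserfs  defaults  0 2
```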
Illustration 35: EVMS Volumes
Illustration 36: EVMS Regions
Illustration 37: EVMS Containers
Illustration 38: EVMS DOS Segments
Illustration 40: EVMS Plugins
Illustration 39: EVMS Disks
SAN storage status checking commands
Some commands to check the status of the SAN volumes are:
ls /dev/evms/
ls /dev/evms/.nodes/
ls /dev/evms/.nodes/md
ls /dev/evms/.nodes/lvm
ls /dev/evms/.nodes/EVMS_LVM_CONTAINER1/
cat /proc/mdstat
scsidev -l
lsscsi
multipath -l
multipath -v2 -d
The multipath -l command gave the following output during the 'failed path' test:
dm names N
dm table lvm|EVMS_LVM_CONTAINER1|EVMS_LV1 N
dm table rootvg-usrlv N
dm table cciss|c0d0p7 N
dm table rootvg-appllv N
dm table 360050768018180a4f800000000000009 N
dm table 360050768018180a4f800000000000009 N
dm status 360050768018180a4f800000000000009 N
dm info 360050768018180a4f800000000000009 O
dm table rootvg-srvlv N
dm table rootvg-tmplv N
dm table 360050768018180a4f800000000000008 N
dm table 360050768018180a4f800000000000008 N
dm status 360050768018180a4f800000000000008 N
dm info 360050768018180a4f800000000000008 O
dm table rootvg-homelv N
dm table rootvg-varlv N
dm table rootvg-datalv N
dm table EVMS_VOL2 N
dm table EVMS_VOL1 N
dm table rootvg-optlv N
dm table rootvg-rootlv N
dm table 36005076801858047b800000000000009 N
dm table 36005076801858047b800000000000009 N
dm status 36005076801858047b800000000000009 N
dm info 36005076801858047b800000000000009 O
dm table lvm|EVMS_LVM_CONTAINER1|EVMS_LV2 N
dm table 36005076801858047b800000000000008 N
dm table 36005076801858047b800000000000008 N
dm status 36005076801858047b800000000000008 N
dm info 36005076801858047b800000000000008 O
360050768018180a4f800000000000009
[size=10 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 3:0:1:1 sdd 8:48  [failed][faulty]
 \_ 4:0:1:1 sdh 8:112 [active][ready]
360050768018180a4f800000000000008
[size=10 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 3:0:1:0 sdc 8:32 [failed][faulty]
 \_ 4:0:1:0 sdg 8:96 [active][ready]
36005076801858047b800000000000009
[size=10 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 3:0:0:1 sdb 8:16 [failed][faulty]
 \_ 4:0:0:1 sdf 8:80 [active][ready]
36005076801858047b800000000000008
[size=10 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 3:0:0:0 sda 8:0  [failed][faulty]
 \_ 4:0:0:0 sde 8:64 [active][ready]
Illustration 41: Output of multipath -l during a failed path test
Disabling one of the two SAN switch connections was logged automatically in the syslog file /var/log/messages. We were still able to read and write from and to the SAN volumes.
SAN boot
We removed the 2 disks of HP blade 2 to test diskless boot from the SAN. The diskless boot was enabled not by PXE, but by changing the BIOS of the blade (<F9> or <F10>) and the BIOS of the QLogic Fibre Channel HBA (<Ctrl-Q>).
During the boot of the HP blade there are several configuration options:
• <F8> iLO
• <F8> HP Smart Array 6i Direct Attached Storage
• <Ctrl-Q> Qlogic
• <Ctrl-S> NIC
• <F9> ROM Setup
• <F10> System and Maintenance (ROM Setup, Inspection, Diagn.)
• <F12> PXE boot
Changing the blade BIOS
The first step of enabling the SAN boot is changing the boot order in the PC BIOS:
Illustration 42: <F9> BIOS
Illustration 43: Standard Boot Order (IPL)
When a single non-mirrored boot LUN was used in a multipath environment, the boot order of the 2 FC controllers had to be reversed to enable booting over the other FC controller. The use of a mirrored boot disk was not tested, but this could enable booting over the second adapter without changing the boot order in the BIOS. This requires more testing.
EVMS should not be used on the SAN boot disk, to prevent issues.
Changing the Fiber Channel Adapter BIOS
The QLogic 2312 Adapter required assignment of a boot LUN and enabling the BIOS of the FC Adapter:
Illustration 44: Boot Controller Order
Illustration 45: QLogic <Ctrl-Q> Configuration Settings
The Host Adapter BIOS must be enabled (it is still shown as 'Disabled' in the illustration).
Illustration 46: QLogic Host Adapter Settings
The LUNs with a bootable MBR should be in the list (for both FC adapters).
Illustration 47: QLogic Selectable Boot Options
9 Conclusion
Customer has requested Novell Consulting to work on the following tasks.
Deployment Infrastructure:
• Determining and creating a core build
• Setting up the AutoYaST installation server (YaST: Yet another Setup Tool)
• Creating AutoYaST templates for the applications
Updates and patches:
• Setting up a YOU server for updating the servers
Monitoring:
• Setting up Nagios for operating system and hardware monitoring
Extras:
• Centralized LDAP authentication and client deployment
• EVMS Mirrored and multipathed SAN disks configuration
• Diskless boot from the SAN without using PXE
All the aforementioned deliverables were completed successfully.
High availability design and integrating into the existing management infrastructure is not part of this stage of the Quick Hosting Proof of Concept.
Contact List
Name Phone/Region/District Email
Robert Zondervan rzondervan at REMOVE_THIS gmail dot com
Table: Contact list
Appendix
AutoYaST template 1
OS + Apache build
<?xml version="1.0"?>
<!DOCTYPE profile SYSTEM "/usr/share/autoinstall/dtd/profile.dtd">
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
<!-- See: http://www.suse.com/~ug/ for the latest autoyast documentation -->
<configure>
  <networking>
    <dns>
      <dhcp_hostname config:type="boolean">false</dhcp_hostname>
      <dhcp_resolv config:type="boolean">false</dhcp_resolv>
      <domain>is.cuscorp.net</domain>
      <hostname>cusfbh5</hostname>
      <nameservers config:type="list">
        <nameserver>171.21.99.200</nameserver>
        <nameserver>171.21.99.201</nameserver>
      </nameservers>
      <searchlist config:type="list">
        <search>is.cuscorp.net</search>
      </searchlist>
    </dns>
    <interfaces config:type="list">
      <interface>
        <bootproto>static</bootproto>
        <device>eth0</device>
        <broadcast>171.21.67.127</broadcast>
        <ipaddr>171.21.67.33</ipaddr>
        <netmask>255.255.255.128</netmask>
        <network>171.21.67.0</network>
        <startmode>onboot</startmode>
      </interface>
    </interfaces>
    <routing>
      <ip_forward config:type="boolean">false</ip_forward>
      <routes config:type="list">
        <route>
          <destination>default</destination>
          <gateway>171.21.67.126</gateway>
        </route>
      </routes>
    </routing>
  </networking>
  <runlevel>
    <default>3</default>
    <services config:type="list">
      <service>
        <service_name>portmap</service_name>
        <service_stop>3 5</service_stop>
      </service>
      <service>
        <service_name>xdm</service_name>
        <service_stop>5</service_stop>
      </service>
    </services>
  </runlevel>
<ldap>
  <base_config_dn>ou=ldapconfig,dc=suse,dc=cus,dc=nl</base_config_dn>
  <bind_dn>cn=Administrator,dc=suse,dc=cus,dc=nl</bind_dn>
  <create_ldap config:type="boolean">true</create_ldap>
  <file_server config:type="boolean">true</file_server>
  <ldap_domain>dc=suse,dc=cus,dc=nl</ldap_domain>
  <ldap_server>171.21.67.30</ldap_server>
  <ldap_tls config:type="boolean">true</ldap_tls>
  <ldap_v2 config:type="boolean">false</ldap_v2>
  <member_attribute>member</member_attribute>
  <nss_base_group>ou=people,dc=suse,dc=cus,dc=nl</nss_base_group>
  <nss_base_passwd>ou=people,dc=suse,dc=cus,dc=nl</nss_base_passwd>
  <nss_base_shadow>ou=people,dc=suse,dc=cus,dc=nl</nss_base_shadow>
  <pam_password>crypt</pam_password>
  <start_ldap config:type="boolean">true</start_ldap>
</ldap>
<ntp-client>
  <peers config:type="list">
    <peer>
      <address>ntp1.cus.nl</address>
      <initial_sync config:type="boolean">true</initial_sync>
      <options></options>
      <type>server</type>
    </peer>
    <peer>
      <address>ntp2.cus.nl</address>
      <initial_sync config:type="boolean">false</initial_sync>
      <options></options>
      <type>server</type>
    </peer>
  </peers>
  <start_at_boot config:type="boolean">true</start_at_boot>
  <start_in_chroot config:type="boolean">true</start_in_chroot>
</ntp-client>
<sysconfig config:type="list">
  <sysconfig_entry>
    <sysconfig_key>YAST2_LOADFTPSERVER</sysconfig_key>
    <sysconfig_path>/etc/sysconfig/onlineupdate</sysconfig_path>
    <sysconfig_value>no</sysconfig_value>
  </sysconfig_entry>
</sysconfig>
<scripts>
  <chroot-scripts config:type="list">
    <script>
      <chrooted config:type="boolean">false</chrooted>
      <filename>chroot.sh</filename>
      <interpreter>shell</interpreter>
      <source><![CDATA[#!/bin/bash
# Implement a few cus specific changes during the AutoYaST
# installation of SLES9

# Author: Yan Fitterer (YVF)
# Date: 26/01/2006
# Last Modified: 26/01/2006 (YVF)

# AutoYaST mounts the installed system in /mnt during the chroot scripts phase.
chrootbase="/mnt"
chrootedbase="/var/adm/autoinstall/cus"
base="$chrootbase$chrootedbase"
mkdir "$base"
log="$base/chroot.log"
env >> "$log"
set >> "$log"

# Get all needed files
cd $base
/usr/bin/wget -a "$log" -nH --cut-dirs=1 "http://$Server/cus/filelist.txt" 2>>"$log"
/usr/bin/wget -a "$log" -nH --cut-dirs=2 -x -B "http://$Server/cus/" -i "$base/filelist.txt" 2>>"$log"

# Create "function" system
cat .function > "$chrootbase/.function" 2>>"$log"
cp get_description.sh "$chrootbase/usr/local/bin/" 2>>"$log"
chmod 755 "$chrootbase/usr/local/bin/get_description.sh" 2>>"$log"
cat profile.local >> "$chrootbase/etc/profile.local" 2>>"$log"

# Add a key to root's authorized keys
[ ! -d "$chrootbase/root/.ssh" ] && (mkdir "$chrootbase/root/.ssh" ; chmod 700 "$chrootbase/root/.ssh")
cat authorized_keys >> "$chrootbase/root/.ssh/authorized_keys" 2>>"$log"

# Change standard issue / herald / motd files
cat motd > "$chrootbase/etc/motd" 2>>"$log"
cat issue > "$chrootbase/etc/issue" 2>>"$log"
cat issue > "$chrootbase/etc/issue.net" 2>>"$log"

# Configure sshd to display cus's standard banner
sed -i -e 's|^#\?Banner.*$|Banner /etc/issue.net|' "$chrootbase/etc/ssh/sshd_config" 2>>"$log"

# Configure PAM to create home dirs on LDAP logins
LINE=`egrep -m1 -n '^session' "$chrootbase/etc/pam.d/sshd" | cut -d: -f1` 2>>"$log"
sed -ie "${LINE}i\session required pam_mkhomedir.so skel=/etc/skel umask=0022" "$chrootbase/etc/pam.d/sshd" 2>>"$log"
LINE=`egrep -m1 -n '^session' "$chrootbase/etc/pam.d/login" | cut -d: -f1` 2>>"$log"
sed -ie "${LINE}i\session required pam_mkhomedir.so skel=/etc/skel umask=0022" "$chrootbase/etc/pam.d/login" 2>>"$log"

# Add the cus YOU Server
echo 'http://suse.cus.nl/YOU' >> "$chrootbase/etc/youservers" 2>>"$log"
]]></source> </script> </chroot-scripts> </scripts> <security> <console_shutdown>ignore</console_shutdown> <cracklib_dict_path>/usr/lib/cracklib_dict</cracklib_dict_path> <cwd_in_root_path>no</cwd_in_root_path> <cwd_in_user_path>no</cwd_in_user_path> <displaymanager_remote_access>no</displaymanager_remote_access>
64 Novell Consulting Example
Appendix
    <enable_sysrq>no</enable_sysrq>
    <fail_delay>3</fail_delay>
    <faillog_enab>yes</faillog_enab>
    <gid_max>60000</gid_max>
    <gid_min>1000</gid_min>
    <kdm_shutdown></kdm_shutdown>
    <lastlog_enab>yes</lastlog_enab>
    <obscure_checks_enab>yes</obscure_checks_enab>
    <pass_max_days>99999</pass_max_days>
    <pass_max_len>8</pass_max_len>
    <pass_min_days>0</pass_min_days>
    <pass_min_len>5</pass_min_len>
    <pass_warn_age>7</pass_warn_age>
    <passwd_encryption>md5</passwd_encryption>
    <passwd_use_cracklib>yes</passwd_use_cracklib>
    <permission_security>secure</permission_security>
    <run_updatedb_as>nobody</run_updatedb_as>
    <system_gid_max>499</system_gid_max>
    <system_gid_min>100</system_gid_min>
    <system_uid_max>499</system_uid_max>
    <system_uid_min>100</system_uid_min>
    <uid_max>60000</uid_max>
    <uid_min>1000</uid_min>
    <useradd_cmd>/usr/sbin/useradd.local</useradd_cmd>
    <userdel_postcmd>/usr/sbin/userdel-post.local</userdel_postcmd>
    <userdel_precmd>/usr/sbin/userdel-pre.local</userdel_precmd>
  </security>
  <users config:type="list">
    <user>
      <encrypted config:type="boolean">true</encrypted>
      <user_password>4NDKWsvl.Sox2</user_password>
      <username>root</username>
    </user>
  </users>
  </configure>
  <install>
    <bootloader>
      <activate config:type="boolean">false</activate>
      <global config:type="list">
        <global_entry>
          <key>color</key>
          <value>white/blue black/light-gray</value>
        </global_entry>
        <global_entry>
          <key>default</key>
          <value config:type="integer">0</value>
        </global_entry>
        <global_entry>
          <key>timeout</key>
          <value config:type="integer">8</value>
        </global_entry>
      </global>
      <loader_type>grub</loader_type>
      <location>mbr</location>
      <repl_mbr config:type="boolean">true</repl_mbr>
    </bootloader>
    <general>
      <clock>
        <hwclock>localtime</hwclock>
SLES9 AutoYaST example - Whitepaper Report 65Revision: September 30, 2007, deploying_suse_linux_using_autoyast v1-24.pdf
        <timezone>Europe/Amsterdam</timezone>
      </clock>
      <keyboard>
        <keymap>english-us</keymap>
      </keyboard>
      <language>en_US</language>
      <mode>
        <confirm config:type="boolean">false</confirm>
      </mode>
      <mouse>
        <id>23_exps2</id>
      </mouse>
    </general>
    <partitioning config:type="list">
      <drive>
        <device>/dev/rootvg</device>
        <is_lvm_vg config:type="boolean">true</is_lvm_vg>
        <lvm2 config:type="boolean">true</lvm2>
        <partitions config:type="list">
          <partition>
            <filesystem config:type="symbol">reiser</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>homelv</lv_name>
            <mount>/home</mount>
            <size>1GB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">reiser</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>optlv</lv_name>
            <mount>/opt</mount>
            <size>512MB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">reiser</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>rootlv</lv_name>
            <mount>/</mount>
            <size>1GB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">reiser</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>tmplv</lv_name>
            <mount>/tmp</mount>
            <size>1GB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">reiser</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>usrlv</lv_name>
            <mount>/usr</mount>
            <size>3GB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">reiser</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>varlv</lv_name>
            <mount>/var</mount>
            <size>1GB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">ext3</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>appllv</lv_name>
            <mount>/appl</mount>
            <size>10MB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">ext3</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>datalv</lv_name>
            <mount>/data</mount>
            <size>10MB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">ext3</filesystem>
            <format config:type="boolean">true</format>
            <lv_name>srvlv</lv_name>
            <mount>/srv</mount>
            <size>1GB</size>
          </partition>
        </partitions>
        <pesize>4M</pesize>
        <use>all</use>
      </drive>
      <drive>
        <partitions config:type="list">
          <partition>
            <filesystem config:type="symbol">swap</filesystem>
            <format config:type="boolean">true</format>
            <mount>swap</mount>
            <partition_id config:type="integer">130</partition_id>
            <size>1GB</size>
          </partition>
          <partition>
            <filesystem config:type="symbol">ext3</filesystem>
            <format config:type="boolean">true</format>
            <mount>/boot</mount>
            <partition_id config:type="integer">131</partition_id>
            <size>100MB</size>
          </partition>
          <partition>
            <lvm_group>rootvg</lvm_group>
            <partition_id config:type="integer">142</partition_id>
            <size>max</size>
          </partition>
        </partitions>
        <use>all</use>
      </drive>
    </partitioning>
    <report>
      <errors>
        <log config:type="boolean">true</log>
        <show config:type="boolean">true</show>
        <timeout config:type="integer">0</timeout>
      </errors>
      <messages>
        <log config:type="boolean">true</log>
        <show config:type="boolean">true</show>
        <timeout config:type="integer">0</timeout>
      </messages>
      <warnings>
        <log config:type="boolean">true</log>
        <show config:type="boolean">true</show>
        <timeout config:type="integer">0</timeout>
      </warnings>
      <yesno_messages>
        <log config:type="boolean">true</log>
        <show config:type="boolean">true</show>
        <timeout config:type="integer">0</timeout>
      </yesno_messages>
    </report>
    <software>
      <addons config:type="list">
        <addon>Base-System</addon>
        <addon>YaST2</addon>
      </addons>
      <base>Minimal</base>
      <packages config:type="list">
        <package>apache2</package>
        <package>apache2-doc</package>
        <package>apache2-prefork</package>
        <package>XFree86-libs</package>
        <package>emacs</package>
        <package>evms-gui</package>
        <package>mkisofs</package>
        <package>nagios-plugins</package>
        <package>net-snmp</package>
        <package>rsync</package>
        <package>sysstat</package>
        <package>xntp</package>
        <package>emacs-info</package>
        <package>emacs-x11</package>
        <package>findutils-locate</package>
        <package>nagios</package>
        <package>nagios-nsca</package>
        <package>nagios-plugins-extras</package>
        <package>xntp-doc</package>
        <package>libimmunix</package>
        <package>mod-change-hat</package>
        <package>subdomain-docs</package>
        <package>subdomain-profiles</package>
        <package>quota</package>
        <package>wget</package>
        <package>nss_ldap</package>
        <package>pam_ldap</package>
        <package>yast2-qt</package>
      </packages>
      <remove-packages config:type="list">
        <package>yast2-you-server</package>
      </remove-packages>
    </software>
  </install>
</profile>
AutoYaST template 2
OS + Apache + Tomcat
Similar to template three, but without the JBoss package.
AutoYaST template 3
OS + Apache + Tomcat + JBoss
Similar to template one, but with extra packages required for Tomcat and JBoss4. If JBoss is not packaged as an RPM, the following dependencies will have to be added to the package selection:
• alsa-32bit
• XFree86-libs-32bit
• java2
• java2-jre
• unixODBC
• expat-32bit
• fontconfig-32bit
• freetype2-32bit
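If JBoss is packaged as an RPM with proper dependency tags, these packages resolve automatically; otherwise they can be listed in the template's package selection, following the same schema as the AutoYaST profile shown earlier. A sketch, not a complete profile section:

```xml
<!-- Extra 32-bit and Java dependencies for Tomcat/JBoss (sketch) -->
<packages config:type="list">
  <package>alsa-32bit</package>
  <package>XFree86-libs-32bit</package>
  <package>java2</package>
  <package>java2-jre</package>
  <package>unixODBC</package>
  <package>expat-32bit</package>
  <package>fontconfig-32bit</package>
  <package>freetype2-32bit</package>
</packages>
```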
Customized files
/etc/init.d/boot.local on the AutoYaST server
# Here you should add things, that should happen directly after booting
# before we're going to the first run level.
ISODIR='/data1/instserver/isos/sles9/x86_64'
SLES9_64SP3='/data1/instserver/sles9/x86_64/sp3'
mount -o loop $ISODIR/SLES-9-x86-64-RC5-CD1.iso $SLES9_64SP3/SUSE-SLES-Version-9/CD1
mount -o loop $ISODIR/SLES-9-x86-64-RC5-CD2.iso $SLES9_64SP3/SUSE-CORE-Version-9/CD1
mount -o loop $ISODIR/SLES-9-x86-64-RC5-CD3.iso $SLES9_64SP3/SUSE-CORE-Version-9/CD2
mount -o loop $ISODIR/SLES-9-x86-64-RC5-CD4.iso $SLES9_64SP3/SUSE-CORE-Version-9/CD3
mount -o loop $ISODIR/SLES-9-x86-64-RC5-CD5.iso $SLES9_64SP3/SUSE-CORE-Version-9/CD4
mount -o loop $ISODIR/SLES-9-x86-64-RC5-CD6.iso $SLES9_64SP3/SUSE-CORE-Version-9/CD5
mount -o loop $ISODIR/SLES-9-SP-3-x86-64-GM-CD1.iso $SLES9_64SP3/SUSE-SLES-9-Service-Pack-Version-3/CD1
mount -o loop $ISODIR/SLES-9-SP-3-x86-64-GM-CD2.iso $SLES9_64SP3/SUSE-SLES-9-Service-Pack-Version-3/CD2
mount -o loop $ISODIR/SLES-9-SP-3-x86-64-GM-CD3.iso $SLES9_64SP3/SUSE-SLES-9-Service-Pack-Version-3/CD3
Illustration 48: /etc/init.d/boot.local on the AutoYaST server
/etc/apache2/conf.d/inst_server.conf
# httpd configuration for AutoYaST Installation Server included by httpd.conf
<IfDefine inst_server>
Alias /sles9_64/ /data1/instserver/sles9/x86_64/
<Directory /data1/instserver/sles9/x86_64/>
Options +Indexes +FollowSymLinks
IndexOptions +NameWidth=*
Order allow,deny
Allow from all
</Directory>
</IfDefine>
Illustration 49: /etc/apache2/conf.d/inst_server.conf
Nagios custom setup files
The following example files were minimized to a configuration that provides basic monitoring for six hosts:
• nagios.cfg
• checkcommands.cfg
• cgi.cfg
• timeperiods.cfg
• services.cfg
• misccommands.cfg
• resource.cfg
• hostgroups.cfg
• escalations.cfg
• contactgroups.cfg
• dependencies.cfg
• hosts.cfg
• contacts.cfg
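Because nagios.cfg references every one of these files by absolute path, a missing file prevents Nagios from starting. A small sanity check can list broken references before a restart; this helper function is our own sketch, not part of the original setup:

```shell
# Print every cfg_file/resource_file entry in a nagios.cfg that points
# at a file which does not exist on disk (sketch).
check_cfg_files() {
    grep -E '^(cfg_file|resource_file)=' "$1" | cut -d= -f2- |
    while read -r f; do
        [ -e "$f" ] || echo "missing: $f"
    done
}
```

Run it as `check_cfg_files /etc/nagios/nagios.cfg`; Nagios itself can then perform a full syntax check with `nagios -v /etc/nagios/nagios.cfg`.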
nagios.cfg

log_file=/var/log/nagios/nagios.log
cfg_file=/etc/nagios/checkcommands.cfg
cfg_file=/etc/nagios/misccommands.cfg
cfg_file=/etc/nagios/contactgroups.cfg
cfg_file=/etc/nagios/contacts.cfg
cfg_file=/etc/nagios/dependencies.cfg
cfg_file=/etc/nagios/escalations.cfg
cfg_file=/etc/nagios/hostgroups.cfg
cfg_file=/etc/nagios/hosts.cfg
cfg_file=/etc/nagios/services.cfg
cfg_file=/etc/nagios/timeperiods.cfg
resource_file=/etc/nagios/resource.cfg
status_file=/var/log/nagios/status.log
nagios_user=daemon
nagios_group=daemon
check_external_commands=1
command_check_interval=-1
command_file=/var/spool/nagios/nagios.cmd
comment_file=/var/log/nagios/comment.log
downtime_file=/var/log/nagios/downtime.log
lock_file=/var/run/nagios.pid
temp_file=/var/log/nagios/nagios.tmp
log_rotation_method=d
log_archive_path=/var/log/nagios/archives
use_syslog=1
log_notifications=1
log_service_retries=1
log_host_retries=1
log_event_handlers=1
log_initial_states=0
log_external_commands=1
log_passive_service_checks=1
inter_check_delay_method=s
service_interleave_factor=s
max_concurrent_checks=0
service_reaper_frequency=10
sleep_time=1
service_check_timeout=60
host_check_timeout=30
event_handler_timeout=30
notification_timeout=30
ocsp_timeout=5
perfdata_timeout=5
retain_state_information=1
state_retention_file=/var/log/nagios/status.sav
retention_update_interval=60
use_retained_program_state=0
interval_length=60
use_agressive_host_checking=0
execute_service_checks=1
accept_passive_service_checks=1
enable_notifications=1
enable_event_handlers=1
process_performance_data=0
obsess_over_services=0
check_for_orphaned_services=0
check_service_freshness=1
freshness_check_interval=60
aggregate_status_updates=1
status_update_interval=15
enable_flap_detection=0
low_service_flap_threshold=5.0
high_service_flap_threshold=20.0
low_host_flap_threshold=5.0
high_host_flap_threshold=20.0
date_format=us
illegal_object_name_chars=`~!$%^&*|'"<>?,()=
illegal_macro_output_chars=`~$&|'"<>
admin_email=daemon
admin_pager=pagedaemon
checkcommands.cfg

define command {
    command_name check-ntp
    command_line /usr/bin/ssh -l root -i $USER2$ $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_ntp -H localhost -w 3 -c 5'
}
define command {
    command_name check_local_load
    command_line $USER1$/check_load -w $ARG1$ -c $ARG2$
}
define command {
    command_name check_file
    command_line /usr/bin/touch /tmp/nagios_file
}
define command{
    command_name check-host-alive
    command_line $USER1$/check_tcp -H $HOSTADDRESS$ -p 22 -w 1 -c 3
}
cgi.cfg

main_config_file=/etc/nagios/nagios.cfg
physical_html_path=/usr/share/nagios
url_html_path=/nagios
show_context_help=0
/var/log/nagios/status.log 5 '/usr/sbin/nagios'
use_authentication=1
authorized_for_system_information=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_system_commands=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_host_commands=nagiosadmin
eds.html;novell40.gif;novell40.jpg;novell40.gd2;IntranetWare 4.11;100,50;3.5,0.0,-1.5;
serviceextinfo[<host_name>;<svc_description>]=<notes_url>;<icon_image>;<image_alt>
eds;PING]=http://www.somewhere.com?tracerouteto=$HOSTADDRESS$;;PING rate
default_statusmap_layout=5
default_statuswrl_layout=4
ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$
refresh_rate=90
timeperiods.cfg

define timeperiod{
    timeperiod_name 24x7
    alias           24 Hours A Day, 7 Days A Week
    sunday          00:00-24:00
    monday          00:00-24:00
    tuesday         00:00-24:00
    wednesday       00:00-24:00
    thursday        00:00-24:00
    friday          00:00-24:00
    saturday        00:00-24:00
}
define timeperiod{
    timeperiod_name workhours
    alias           "Normal" Working Hours
    monday          08:00-18:00
    tuesday         08:00-18:00
    wednesday       08:00-18:00
    thursday        08:00-18:00
    friday          08:00-18:00
}
define timeperiod{
    timeperiod_name nonworkhours
    alias           Non-Work Hours
    sunday          00:00-24:00
    monday          00:00-08:00,18:00-24:00
    tuesday         00:00-08:00,18:00-24:00
    wednesday       00:00-08:00,18:00-24:00
    thursday        00:00-08:00,18:00-24:00
    friday          00:00-08:00,18:00-24:00
    saturday        00:00-24:00
}
define timeperiod{
    timeperiod_name none
    alias           No Time Is A Good Time
}
services.cfg

define service{
    name                         generic-service ; The 'name' of this service template, referenced in other service definitions
    active_checks_enabled        1 ; Active service checks are enabled
    passive_checks_enabled       1 ; Passive service checks are enabled/accepted
    parallelize_check            1 ; Active service checks should be parallelized (disabling this can lead to major performance problems)
    obsess_over_service          1 ; We should obsess over this service (if necessary)
    check_freshness              0 ; Default is to NOT check service 'freshness'
    notifications_enabled        1 ; Service notifications are enabled
    event_handler_enabled        1 ; Service event handler is enabled
    flap_detection_enabled       1 ; Flap detection is enabled
    process_perf_data            1 ; Process performance data
    retain_status_information    1 ; Retain status information across program restarts
    retain_nonstatus_information 1 ; Retain non-status information across program restarts
    register                     0 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL SERVICE, JUST A TEMPLATE!
}
# NTP service definition template
define service{
    name                         ntp-service-template
    active_checks_enabled        1 ; Active service checks are enabled
    passive_checks_enabled       1 ; Passive service checks are enabled/accepted
    parallelize_check            1 ; Active service checks should be parallelized (disabling this can lead to major performance problems)
    obsess_over_service          1 ; We should obsess over this service (if necessary)
    check_freshness              0 ; Default is to NOT check service 'freshness'
    notifications_enabled        1 ; Service notifications are enabled
    event_handler_enabled        1 ; Service event handler is enabled
    flap_detection_enabled       1 ; Flap detection is enabled
    process_perf_data            1 ; Process performance data
    retain_status_information    1 ; Retain status information across program restarts
    retain_nonstatus_information 1 ; Retain non-status information across program restarts
    is_volatile                  0
    check_period                 24x7
    max_check_attempts           3
    normal_check_interval        5
    retry_check_interval         1
    contact_groups               nagios-admins
    notification_interval        120
    notification_period          24x7
    notification_options         w,u,c,r
    check_command                check-ntp
    register                     0 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL SERVICE, JUST A TEMPLATE!
}
# NTP service definition
define service{
    use                 ntp-service-template ; Name of service template to use
    host_name           cusfbgu
    service_description NTP Synchronisation
}
# NTP service definition
define service{
    use                 ntp-service-template ; Name of service template to use
    host_name           cusfbh5
    service_description NTP Synchronisation
}
# NTP service definition
define service{
    use                 ntp-service-template ; Name of service template to use
    host_name           cusfbes
    service_description NTP Synchronisation
}
# NTP service definition
define service{
    use                 ntp-service-template ; Name of service template to use
    host_name           cusfbfv
    service_description NTP Synchronisation
}
# NTP service definition
define service{
    use                 ntp-service-template ; Name of service template to use
    host_name           cusfbj7
    service_description NTP Synchronisation
}
# NTP service definition
define service{
    use                 ntp-service-template ; Name of service template to use
    host_name           cusfbk6
    service_description NTP Synchronisation
}
# Local Load on installation server
define service{
    use                   generic-service ; Name of service template to use
    host_name             cusfbes
    service_description   suse.cus.nl Local Load
    service_description   LOAD
    is_volatile           0
    check_period          workhours
    max_check_attempts    4
    normal_check_interval 5
    retry_check_interval  1
    contact_groups        nagios-admins
    notification_interval 960
    notification_period   workhours
    notification_options  c,r
#   check_command         check_local_load!5!10
    check_command         check_file
}
misccommands.cfg

# 'notify-by-email' command definition
define command{
    command_name notify-by-email
    command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $DATETIME$\n\nAdditional Info:\n\n$OUTPUT$" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ alert - $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
}
# 'notify-by-epager' command definition
define command{
    command_name notify-by-epager
    command_line /usr/bin/printf "%b" "Service: $SERVICEDESC$\nHost: $HOSTNAME$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nInfo: $OUTPUT$\nDate: $DATETIME$" | /usr/bin/mail -s "$NOTIFICATIONTYPE$: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$" $CONTACTPAGER$
}
# 'host-notify-by-email' command definition
define command{
    command_name host-notify-by-email
    command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $OUTPUT$\n\nDate/Time: $DATETIME$\n" | /usr/bin/mail -s "Host $HOSTSTATE$ alert for $HOSTNAME$!" $CONTACTEMAIL$
}
# 'host-notify-by-epager' command definition
define command{
    command_name host-notify-by-epager
    command_line /usr/bin/printf "%b" "Host '$HOSTALIAS$' is $HOSTSTATE$\nInfo: $OUTPUT$\nTime: $DATETIME$" | /usr/bin/mail -s "$NOTIFICATIONTYPE$ alert - Host $HOSTNAME$ is $HOSTSTATE$" $CONTACTPAGER$
}
# 'notify-tivoli' command definition
define command{
    command_name notify-tivoli
    command_line /usr/bin/printf "%b" "Host '$HOSTALIAS$' is $HOSTSTATE$\nInfo: $OUTPUT$\nTime: $DATETIME$" | /path/to/wpostemsg -s "$NOTIFICATIONTYPE$ alert - Host $HOSTNAME$ is $HOSTSTATE$" $CONTACTPAGER$
}
# 'process-host-perfdata' command definition
define command{
    command_name process-host-perfdata
    command_line /usr/bin/printf "%b" "$LASTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$STATETYPE$\t$EXECUTIONTIME$\t$OUTPUT$\t$PERFDATA$" >> /var/log/nagios/host-perfdata.out
}
# 'process-service-perfdata' command definition
define command{
    command_name process-service-perfdata
    command_line /usr/bin/printf "%b" "$LASTCHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$STATETYPE$\t$EXECUTIONTIME$\t$LATENCY$\t$OUTPUT$\t$PERFDATA$" >> /var/log/nagios/service-perfdata.out
}

resource.cfg

$USER1$=/usr/lib/nagios/plugins
$USER2$=/tmp/nagios.key

hostgroups.cfg

# 'ZF1-ae_seg' host group definition
define hostgroup{
    hostgroup_name ZF1-ae_seg
    alias          Primary DMZ
    contact_groups nagios-admins
    members        cusfbgu,cusfbh5,cusfbes,cusfbfv,cusfbj7,cusfbk6
}

escalations.cfg

# everything disabled

contactgroups.cfg

# 'tivoli-monitor' contact group definition
define contactgroup{
    contactgroup_name tivoli-monitor
    alias             Tivoli Monitoring Server
    members           tivoli
}
# 'nagios-admins' contact group definition
define contactgroup{
    contactgroup_name nagios-admins
    alias             Nagios Administrator
    members           daemon,nagiosadmin
}

dependencies.cfg

# everything disabled

hosts.cfg

define host{
    name                         generic-host ; The name of this host template - referenced in other host definitions, used for template recursion/resolution
    notifications_enabled        1 ; Host notifications are enabled
    event_handler_enabled        1 ; Host event handler is enabled
    flap_detection_enabled       1 ; Flap detection is enabled
    process_perf_data            1 ; Process performance data
    retain_status_information    1 ; Retain status information across program restarts
    retain_nonstatus_information 1 ; Retain non-status information across program restarts
    check_command                check-host-alive
    max_check_attempts           5
    notification_interval        60
    notification_options         d,u,r
    notification_period          24x7
    register                     0 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL HOST, JUST A TEMPLATE!
}
define host{
    use       generic-host ; Name of host template to use
    host_name cusfbes
    alias     cusfbes
    address   171.21.67.30
}
# '' host definition
define host{
    use       generic-host ; Name of host template to use
    host_name cusfbfv
    alias     cusfbfv
    address   171.21.67.31
}
# '' host definition
define host{
    use       generic-host ; Name of host template to use
    host_name cusfbgu
    alias     cusfbgu
    address   171.21.67.32
}
# '' host definition
define host{
    use       generic-host ; Name of host template to use
    host_name cusfbh5
    alias     cusfbh5
    address   171.21.67.33
}
# '' host definition
define host{
    use       generic-host ; Name of host template to use
    host_name cusfbj7
    alias     cusfbj7
    address   171.21.67.35
}
# '' host definition
define host{
    use       generic-host ; Name of host template to use
    host_name cusfbk6
    alias     cusfbk6
    address   171.21.67.36
}

contacts.cfg

# 'daemon' contact definition
define contact{
    contact_name                  daemon
    alias                         Nagios Admin
    service_notification_period   24x7
    host_notification_period      24x7
#   service_notification_options  w,u,c,r
    service_notification_options  n
#   host_notification_options     d,u,r
    host_notification_options     n
    service_notification_commands notify-by-email,notify-by-epager
    host_notification_commands    host-notify-by-email,host-notify-by-epager
    email                         [email protected]
    pager                         [email protected]
}
# 'tivoli' contact definition
define contact{
    contact_name                  tivoli
    alias                         Tivoli System
    service_notification_period   workhours
    host_notification_period      workhours
    service_notification_options  w,u,c,r
    host_notification_options     d,u,r
    service_notification_commands notify-by-email
    host_notification_commands    host-notify-by-email
    email                         tivoli
}
# 'nagiosadmin' contact definition
define contact{
    contact_name                  nagiosadmin
    alias                         Nagios Administrator
    service_notification_period   workhours
    host_notification_period      workhours
    service_notification_options  n
    host_notification_options     n
    service_notification_commands notify-by-email
    host_notification_commands    host-notify-by-email
    email                         tivoli
}
Installation sources
Building the installation source tree with integrated Service Pack
Building the installation source involves more than simply copying the installation CDs to a common directory, as is done with other Linux distributions. The advantage of the SUSE system is that patches, Service Packs or newer RPMs can later be added by copying them into the installation source and registering them higher in the installation order. RPM files with the same name located elsewhere on the original CDs, lower in the order, are then not installed.
The following text is taken from the RELEASE-NOTES.en.html file of Service Pack 3. It still contains some notes originally written for SP2; SP2 has been changed to SP3 in several places:
Illustration 50: Example installation sources
Setting up an installation server for network installations
If you have a system with SLES9 already installed, you can use the YaST installation-server module to create a network installation source. It can be found under Misc - Installation Server in the YaST main screen.
Note: the SLES9 General Availability (GA) version of the YaST installation-server module cannot integrate SLES9 Service Pack ISO images. This bug was fixed in SLES9 SP1. Either upgrade the server to SP1 or newer, or just update the yast2-instserver package:

rpm -Uvh /SP1-CD1/suse/noarch/yast2-instserver-2.9.22-0.2.noarch.rpm

Restart YaST after the yast2-instserver package update.
Every new Service Pack CD provides an updated rescue system in CD1/boot/rescue. To use this new rescue system on a YaST-generated installation source, remove the 'boot' symlink in the top-level directory and copy the files in place:

rm boot
mkdir boot
cp SUSE-SLES-Version-9/CD1/boot/root ./boot/root
cp SUSE-SLES-9-Service-Pack-Version-3/boot/rescue ./boot/rescue
To manually set up an installation server for installations via NFS/FTP/HTTP, the CDs have to be copied into a special directory structure.
Go to a directory of your choice and execute the following commands:
mkdir -p installroot/sles9/CD1   # now copy the contents of SLES CD1 into this directory
mkdir -p installroot/core9/CD1   # now copy the contents of SLES CD2 into this directory
mkdir -p installroot/core9/CD2   # now copy the contents of SLES CD3 into this directory
mkdir -p installroot/core9/CD3   # now copy the contents of SLES CD4 into this directory
mkdir -p installroot/core9/CD4   # now copy the contents of SLES CD5 into this directory
mkdir -p installroot/core9/CD5   # now copy the contents of SLES CD6 into this directory
mkdir -p installroot/sp3/CD1     # now copy the contents of SLES SP3 CD1 into this directory
mkdir -p installroot/sp3/CD2     # now copy the contents of SLES SP3 CD2 into this directory
mkdir -p installroot/sp3/CD3     # now copy the contents of SLES SP3 CD3 into this directory
cd installroot
mkdir boot
ln -s ../sles9/CD1/boot/root boot/root
ln -s ../sp3/CD1/boot/rescue boot/rescue
ln -s sp3/CD1/driverupdate driverupdate
ln -s sp3/CD1/linux linux
ln -s sles9/CD1/content content
ln -s sles9/CD1/control.xml control.xml
ln -s sles9/CD1/media.1 media.1
mkdir yast
echo "/sp3/CD1 /sp3/CD1" > yast/instorder
echo "/sles9/CD1 /sles9/CD1" >> yast/instorder
echo "/core9/CD1 /core9/CD1" >> yast/instorder
echo "/sp3/CD1 /sp3/CD1" > yast/order
echo "/sles9/CD1 /sles9/CD1" >> yast/order
echo "/core9/CD1 /core9/CD1" >> yast/order
If you are now asked for the installation directory just specify:
installroot
If you want to set up an MS Windows system as an installation server, go to the directory dosutils/install. The script install.bat there will create the structure and ask you for the CDs. The files instorder and order also have to be copied to the directory \suseinstall\yast. Before you copy the order file, replace the variables UserAccount, PASSword and IP-Number with the respective values for your MS Windows machine. During the installation process you only need to specify the share suseinstall.
If you use a Samba server as the installation server, simply use the commands mentioned above to create the appropriate structure. Use the instorder and order files from dosutils/install and replace the variables UserAccount, PASSword and IP-Number with the respective values. The following shares need to be exported:
installroot (the directory installroot)
sles9-CD1 (the directory installroot/sles9/CD1)
core9-CD1 (the directory installroot/core9/CD1)
core9-CD2 (the directory installroot/core9/CD2)
core9-CD3 (the directory installroot/core9/CD3)
core9-CD4 (the directory installroot/core9/CD4)
core9-CD5 (the directory installroot/core9/CD5)
sp3-CD1 (the directory installroot/sp3/CD1)
sp3-CD2 (the directory installroot/sp3/CD2)
sp3-CD3 (the directory installroot/sp3/CD3)
Installation source tip
A nice trick can be used in the above-mentioned procedure to create the installation source. It makes the installation source more trustworthy, because you can verify all the ISOs with md5sum at any time.

Instead of copying the contents of the CDs, you copy only the ISO files to one directory. The following mount.sh file is an example of the loop mount commands that open up the ISO files:
ISODIR='../../../../iso/SLES9/x86_32'
mount -o loop $ISODIR/SLES-9-i386-RC5-CD1.iso ./SUSE-SLES-Version-9/CD1
mount -o loop $ISODIR/SLES-9-i386-RC5-CD2.iso ./SUSE-CORE-Version-9/CD1
mount -o loop $ISODIR/SLES-9-i386-RC5-CD3.iso ./SUSE-CORE-Version-9/CD2
mount -o loop $ISODIR/SLES-9-i386-RC5-CD4.iso ./SUSE-CORE-Version-9/CD3
mount -o loop $ISODIR/SLES-9-i386-RC5-CD5.iso ./SUSE-CORE-Version-9/CD4
mount -o loop $ISODIR/SLES-9-i386-RC5-CD6.iso ./SUSE-CORE-Version-9/CD5
mount -o loop $ISODIR/SLES-9-SP-3-i386-RC4-CD1.iso ./SUSE-SLES-9-Service-Pack-Version-3/CD1
mount -o loop $ISODIR/SLES-9-SP-3-i386-RC4-CD2.iso ./SUSE-SLES-9-Service-Pack-Version-3/CD2
mount -o loop $ISODIR/SLES-9-SP-3-i386-RC4-CD3.iso ./SUSE-SLES-9-Service-Pack-Version-3/CD3
The kernel's loop driver supports a maximum of 8 loop mounts by default. To mount more, e.g. 24, the system needs to be booted with the following parameter (given at the boot prompt or set in /boot/grub/menu.lst):
max_loop=24
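In /boot/grub/menu.lst the parameter is appended to the kernel line. A sketch; the boot entry title, partition and root device below are placeholders, not taken from the example server:

```text
title Linux
    root (hd0,0)
    kernel /vmlinuz root=/dev/sda2 max_loop=24
    initrd /initrd
```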
Another way to stay within the default maximum of 8 loop-mounted ISO files is to not loop mount CD5, CD6 and SP3 CD3, because they contain only sources.
Sharing the installation source with Apache
HTTP is the preferred remote installation method in a production environment, because issues have been reported in the past with the other available protocols (NFS, FTP and SMB). Another advantage of using HTTP is Apache's good logging facilities.
The contents of the installation source tree need to be copied to a subdirectory of the Apache document root. The file system must support symbolic links (e.g. reiser, ext2, ext3 and xfs are fine), and the file system rights need to grant read-only access to others.

The following example file instsource.conf can be adapted and placed in the Apache virtual host directory /etc/apache2/vhosts.d/ to activate the installation source service:
Alias /sles9 "/media/ieee1394disk/yan/instsource/sles9/"
<Directory "/media/ieee1394disk/yan/instsource/sles9/">
Options +Indexes +FollowSymLinks
AllowOverride Limit
Order allow,deny
Allow from all
</Directory>
Installation source on client
The result of installing a host from the HTTP network installation source is a 3-line installation source configuration on the client, one line for each CD1 directory.
Extras
Supported modules
The modinfo command shows whether a packaged module is supported or not, e.g.:

modinfo tg3 (yes)
modinfo ndiswrapper (no)
Lists showing the packages and the support level of each RPM per SUSE product are available on the Internet:
http://www.novell.com/support/products/linuxenterpriseserver/supported_packages/
Installing an RPM plus dependencies from the command line
Apart from using the menu (type yast, then go to Software, Install and Remove Software), an RPM from the installation source can be installed from the command line; its dependencies are retrieved automatically:
yast2 sw_single <packagename>
Checking and setting the NIC speed
The following commands are examples of checking and setting the network adapter speed:

ethtool eth0
ethtool -s eth0 speed 100 duplex full autoneg off
Illustration 51: Example installation source lines on client
If the last command needs to be executed automatically after every reboot, a script containing the command can be placed in the auto-execute directory /etc/sysconfig/network/ifup.d/.
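Such a script might look like the following sketch; the file name 50-ethtool, the interface check and the guard around ethtool are our assumptions, to be adapted to the actual hardware:

```shell
#!/bin/sh
# Sketch for /etc/sysconfig/network/ifup.d/50-ethtool (file name is an example).
# ifup.d scripts are typically called with the interface name as an argument.
fix_nic_speed() {
    # Only act on the interface we care about, and only if ethtool is available.
    [ "$1" = "eth0" ] || return 0
    command -v ethtool >/dev/null 2>&1 || return 0
    ethtool -s "$1" speed 100 duplex full autoneg off
}
fix_nic_speed "$1"
```

The guards make the script a no-op for other interfaces, so it is safe to call on every ifup event.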
SLES9 in VMware
For a graphical installation in VMware, use one of the following options at the Linux install boot prompt:

x11i=fbdev
vnc=1

The vnc=1 option is used for remote graphical installation. Follow the on-screen remarks for the rest of the installation.
References

● http://www.suse.de/~ug (Uwe Gansert's SUSE AutoYaST homepage)
● http://forge.novell.com/modules/xfmod/docman/?group_id=1532 (YaST project)
● http://en.opensuse.org/YaST_Autoinstallation (openSUSE project)
● http://www.novell.com/connectionmagazine/2005/11/tech_talk_3.html (PXE & ZLM)
● http://www.novell.com/documentation (SLES9 Administration and Installation Guide)
● http://wiki.novell.com/index.php/Roberts_Quick_References (basic Linux knowledge plus AutoYaST, ZLM, ...)