
OGRE Manual v1.7 (’Cthugha’)

Steve Streeting


Copyright © Torus Knot Software Ltd

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.


Table of Contents

OGRE Manual ... 1

1 Introduction ... 2
  1.1 Object Orientation - more than just a buzzword ... 2
  1.2 Multi-everything ... 3

2 The Core Objects ... 4
  2.1 The Root object ... 5
  2.2 The RenderSystem object ... 6
  2.3 The SceneManager object ... 6
  2.4 The ResourceGroupManager Object ... 7
  2.5 The Mesh Object ... 8
  2.6 Entities ... 8
  2.7 Materials ... 9
  2.8 Overlays ... 10

3 Scripts ... 13
  3.1 Material Scripts ... 13
    3.1.1 Techniques ... 17
    3.1.2 Passes ... 20
    3.1.3 Texture Units ... 39
    3.1.4 Declaring Vertex/Geometry/Fragment Programs ... 54
    3.1.5 Cg programs ... 60
    3.1.6 DirectX9 HLSL ... 61
    3.1.7 OpenGL GLSL ... 62
    3.1.8 Unified High-level Programs ... 66
    3.1.9 Using Vertex/Geometry/Fragment Programs in a Pass ... 69
    3.1.10 Vertex Texture Fetch ... 83
    3.1.11 Script Inheritance ... 84
    3.1.12 Texture Aliases ... 87
    3.1.13 Script Variables ... 91
    3.1.14 Script Import Directive ... 92
  3.2 Compositor Scripts ... 92
    3.2.1 Techniques ... 95
    3.2.2 Target Passes ... 98
    3.2.3 Compositor Passes ... 100
    3.2.4 Applying a Compositor ... 105
  3.3 Particle Scripts ... 106
    3.3.1 Particle System Attributes ... 108
    3.3.2 Particle Emitters ... 114
    3.3.3 Particle Emitter Attributes ... 115
    3.3.4 Standard Particle Emitters ... 118
    3.3.5 Particle Affectors ... 121
    3.3.6 Standard Particle Affectors ... 121
  3.4 Overlay Scripts ... 127
    3.4.1 OverlayElement Attributes ... 131
    3.4.2 Standard OverlayElements ... 135
  3.5 Font Definition Scripts ... 137

4 Mesh Tools ... 140
  4.1 Exporters ... 140
  4.2 XmlConverter ... 140
  4.3 MeshUpgrader ... 141

5 Hardware Buffers ... 142
  5.1 The Hardware Buffer Manager ... 142
  5.2 Buffer Usage ... 142
  5.3 Shadow Buffers ... 143
  5.4 Locking buffers ... 143
  5.5 Practical Buffer Tips ... 145
  5.6 Hardware Vertex Buffers ... 145
    5.6.1 The VertexData class ... 145
    5.6.2 Vertex Declarations ... 146
    5.6.3 Vertex Buffer Bindings ... 147
    5.6.4 Updating Vertex Buffers ... 148
  5.7 Hardware Index Buffers ... 149
    5.7.1 The IndexData class ... 149
    5.7.2 Updating Index Buffers ... 150
  5.8 Hardware Pixel Buffers ... 150
    5.8.1 Textures ... 150
    5.8.2 Updating Pixel Buffers ... 151
    5.8.3 Texture Types ... 152
    5.8.4 Pixel Formats ... 153
    5.8.5 Pixel boxes ... 155

6 External Texture Sources ... 157

7 Shadows ... 160
  7.1 Stencil Shadows ... 161
  7.2 Texture-based Shadows ... 164
  7.3 Modulative Shadows ... 169
  7.4 Additive Light Masking ... 170

8 Animation ... 175
  8.1 Skeletal Animation ... 175
  8.2 Animation State ... 176
  8.3 Vertex Animation ... 176
    8.3.1 Morph Animation ... 178
    8.3.2 Pose Animation ... 178
    8.3.3 Combining Skeletal and Vertex Animation ... 179
  8.4 SceneNode Animation ... 180
  8.5 Numeric Value Animation ... 180


OGRE Manual

Copyright © The OGRE Team

This work is licensed under the Creative Commons Attribution-ShareAlike 2.5 License. To view a copy of this licence, visit http://creativecommons.org/licenses/by-sa/2.5/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.


1 Introduction

This chapter is intended to give you an overview of the main components of OGRE and why they have been put together that way.

1.1 Object Orientation - more than just a buzzword

The name is a dead giveaway. It says Object-Oriented Graphics Rendering Engine, and that's exactly what it is. Ok, but why? Why did I choose to make such a big deal about this?

Well, nowadays graphics engines are like any other large software system. They start small, but soon they balloon into monstrously complex beasts which just can't be all understood at once. It's pretty hard to manage systems of this size, and even harder to make changes to them reliably, and that's pretty important in a field where new techniques and approaches seem to appear every other week. Designing systems around huge files full of C function calls just doesn't cut it anymore - even if the whole thing is written by one person (not likely) they will find it hard to locate that elusive bit of code after a few months and even harder to work out how it all fits together.

Object orientation is a very popular approach to addressing the complexity problem. It's a step up from decomposing your code into separate functions: it groups functions and state data together in classes which are designed to represent real concepts. It allows you to hide complexity inside easily recognised packages with a conceptually simple interface, giving them a feel of 'building blocks' which you can plug together again later. You can also organise these blocks so that some of them look the same on the outside, but have very different ways of achieving their objectives on the inside, again reducing the complexity for the developers because they only have to learn one interface.

I'm not going to teach you OO here, that's a subject for many other books, but suffice to say I'd seen enough benefits of OO in business systems that I was surprised most graphics code seemed to be written in C function style. I was interested to see whether I could apply my design experience in other types of software to an area which has long held a place in my heart - 3D graphics engines. Some people I spoke to were of the opinion that using full C++ wouldn't be fast enough for a real-time graphics engine, but others (including me) were of the opinion that, with care, an object-oriented framework can be performant. We were right.

In summary, here are the benefits an object-oriented approach brings to OGRE:

Abstraction
Common interfaces hide the nuances between different implementations of 3D APIs and operating systems

Encapsulation
There is a lot of state management and context-specific actions to be done in a graphics engine - encapsulation allows me to put the code and data nearest to where it is used, which makes the code cleaner and easier to understand, and more reliable because duplication is avoided

Polymorphism
The behaviour of methods changes depending on the type of object you are using, even if you only learn one interface, e.g. a class specialised for managing indoor levels behaves completely differently from the standard scene manager, but looks identical to other classes in the system and has the same methods called on it

1.2 Multi-everything

I wanted to do more than create a 3D engine that ran on one 3D API, on one platform, with one type of scene (indoor levels are most popular). I wanted OGRE to be able to extend to any kind of scene (yet still implement scene-specific optimisations under the surface), any platform and any 3D API.

Therefore all the 'visible' parts of OGRE are completely independent of platform, 3D API and scene type. There are no dependencies on Windows types, no assumptions about the type of scene you are creating, and the principles of the 3D aspects are based on core maths texts rather than one particular API implementation.

Now of course somewhere OGRE has to get down to the nitty-gritty of the specifics of the platform, API and scene, but it does this in subclasses specially designed for the environment in question, but which still expose the same interface as the abstract versions.

For example, there is a 'Win32Window' class which handles all the details about rendering windows on a Win32 platform - however the application designer only has to manipulate it via the superclass interface 'RenderWindow', which will be the same across all platforms.

Similarly the 'SceneManager' class looks after the arrangement of objects in the scene and their rendering sequence. Applications only have to use this interface, but there is a 'BspSceneManager' class which optimises the scene management for indoor levels, meaning you get both performance and an easy-to-learn interface. All applications have to do is hint about the kind of scene they will be creating and let OGRE choose the most appropriate implementation - this is covered in a later tutorial.

OGRE's object-oriented nature makes all this possible. Currently OGRE runs on Windows, Linux and Mac OSX using plugins to drive the underlying rendering API (currently Direct3D or OpenGL). Applications use OGRE at the abstract level, thus ensuring that they automatically operate on all platforms and rendering subsystems that OGRE provides without any need for platform or API specific code.


2 The Core Objects

Introduction

This tutorial gives you a quick summary of the core objects that you will use in OGRE and what they are used for.

A Word About Namespaces

OGRE uses a C++ feature called namespaces. This lets you put classes, enums, structures, anything really, within a 'namespace' scope, which is an easy way to prevent name clashes, i.e. situations where you have 2 things called the same thing. Since OGRE is designed to be used inside other applications, I wanted to be sure that name clashes would not be a problem. Some people prefix their classes/types with a short code because some compilers don't support namespaces, but I chose to use them because they are the 'right' way to do it. Sorry if you have a non-compliant compiler, but hey, the C++ standard has been defined for years, so compiler writers really have no excuse anymore. If your compiler doesn't support namespaces then it's probably because it's sh*t - get a better one. ;)

This means every class, type etc should be prefixed with 'Ogre::', e.g. 'Ogre::Camera', 'Ogre::Vector3' etc, which means if elsewhere in your application you have used a Vector3 type you won't get name clashes. To avoid lots of extra typing you can add a 'using namespace Ogre;' statement to your code which means you don't have to type the 'Ogre::' prefix unless there is ambiguity (in the situation where you have another definition with the same name).

Overview from 10,000 feet

Shown below is a diagram of some of the core objects and where they 'sit' in the grand scheme of things. This is not all the classes by a long shot, just a few examples of the more significant ones to give you an idea of how it slots together.


At the very top of the diagram is the Root object. This is your 'way in' to the OGRE system, and it's where you tend to create the top-level objects that you need to deal with, like scene managers, rendering systems and render windows, loading plugins, all the fundamental stuff. If you don't know where to start, Root is it for almost everything, although often it will just give you another object which will actually do the detail work, since Root itself is more of an organiser and facilitator object.

The majority of the rest of OGRE's classes fall into one of 3 roles:

Scene Management
This is about the contents of your scene, how it's structured, how it's viewed from cameras, etc. Objects in this area are responsible for giving you a natural declarative interface to the world you're building; i.e. you don't tell OGRE "set these render states and then render 3 polygons", you tell it "I want an object here, here and here, with these materials on them, rendered from this view", and let it get on with it.

Resource Management
All rendering needs resources, whether it's geometry, textures, fonts, whatever. It's important to manage the loading, re-use and unloading of these things carefully, so that's what classes in this area do.

Rendering
Finally, there's getting the visuals on the screen - this is about the lower-level end of the rendering pipeline, the specific rendering system API objects like buffers and render states, and pushing it all down the pipeline. Classes in the Scene Management subsystem use this to get their higher-level scene information onto the screen.

You'll notice that scattered around the edge are a number of plugins. OGRE is designed to be extended, and plugins are the usual way to go about it. Many of the classes in OGRE can be subclassed and extended, whether it's changing the scene organisation through a custom SceneManager, adding a new render system implementation (e.g. Direct3D or OpenGL), or providing a way to load resources from another source (say from a web location or a database). Again this is just a small smattering of the kinds of things plugins can do, but as you can see they can plug in to almost any aspect of the system. This way, OGRE isn't just a solution for one narrowly defined problem, it can extend to pretty much anything you need it to do.

2.1 The Root object

The 'Root' object is the entry point to the OGRE system. This object MUST be the first one to be created, and the last one to be destroyed. In the example applications I chose to make an instance of Root a member of my application object which ensured that it was created as soon as my application object was, and deleted when the application object was deleted.

The Root object lets you configure the system, for example through the showConfigDialog() method, an extremely handy method which performs all render system option detection and shows a dialog for the user to customise resolution, colour depth, full screen options etc. It also sets the options the user selects so that you can initialise the system directly afterwards.

The Root object is also your method for obtaining pointers to other objects in the system, such as the SceneManager, RenderSystem and various other resource managers. See below for details.


Finally, if you run OGRE in continuous rendering mode, i.e. you want to always refresh all the rendering targets as fast as possible (the norm for games and demos, but not for windowed utilities), the Root object has a method called startRendering, which when called will enter a continuous rendering loop which will only end when all rendering windows are closed, or any FrameListener objects indicate that they want to stop the cycle (see below for details of FrameListener objects).
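To make this concrete, here is a minimal sketch of a typical startup sequence built around Root (the config file names and the window title are illustrative assumptions, not requirements):

#include <Ogre.h>

int main()
{
    // Root must be the first OGRE object created and the last destroyed
    Ogre::Root* root = new Ogre::Root("plugins.cfg", "ogre.cfg", "Ogre.log");

    // Detect render system options and let the user pick settings
    if (!root->showConfigDialog())
        return 0; // user cancelled the dialog

    // Initialise the system and auto-create a render window in one call
    Ogre::RenderWindow* window = root->initialise(true, "My OGRE Window");
    (void)window; // scene manager, camera and viewport setup would go here

    // Continuous rendering mode: loops until all windows close or a
    // FrameListener asks to stop
    root->startRendering();

    delete root;
    return 0;
}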

2.2 The RenderSystem object

The RenderSystem object is actually an abstract class which defines the interface to the underlying 3D API. It is responsible for sending rendering operations to the API and setting all the various rendering options. This class is abstract because all the implementation is rendering API specific - there are API-specific subclasses for each rendering API (e.g. D3DRenderSystem for Direct3D). After the system has been initialised through Root::initialise, the RenderSystem object for the selected rendering API is available via the Root::getRenderSystem() method.

However, a typical application should not normally need to manipulate the RenderSystem object directly - everything you need for rendering objects and customising settings should be available on the SceneManager, Material and other scene-oriented classes. It's only if you want to create multiple rendering windows (completely separate windows in this case, not multiple viewports like a split-screen effect, which is done via the RenderWindow class) or access other advanced features that you need access to the RenderSystem object.

For this reason I will not discuss the RenderSystem object further in these tutorials. You can assume the SceneManager handles the calls to the RenderSystem at the appropriate times.

2.3 The SceneManager object

Apart from the Root object, this is probably the most critical part of the system from the application's point of view. Certainly it will be the object which is most used by the application. The SceneManager is in charge of the contents of the scene which is to be rendered by the engine. It is responsible for organising the contents using whatever technique it deems best, for creating and managing all the cameras, movable objects (entities), lights and materials (surface properties of objects), and for managing the 'world geometry', which is the sprawling static geometry usually used to represent the immovable parts of a scene.

It is to the SceneManager that you go when you want to create a camera for the scene. It's also where you go to retrieve or to remove a light from the scene. There is no need for your application to keep lists of objects; the SceneManager keeps a named set of all of the scene objects for you to access, should you need them. Look in the main documentation under the getCamera, getLight, getEntity etc methods.


The SceneManager also sends the scene to the RenderSystem object when it is time to render the scene. You never have to call the SceneManager::renderScene method directly though - it is called automatically whenever a rendering target is asked to update.

So most of your interaction with the SceneManager is during scene setup. You're likely to call a great number of methods (perhaps driven by some input file containing the scene data) in order to set up your scene. You can also modify the contents of the scene dynamically during the rendering cycle if you create your own FrameListener object (see later).

Because different scene types require very different algorithmic approaches to deciding which objects get sent to the RenderSystem in order to attain good rendering performance, the SceneManager class is designed to be subclassed for different scene types. The default SceneManager object will render a scene, but it does little or no scene organisation and you should not expect the results to be high performance in the case of large scenes. The intention is that specialisations will be created for each type of scene such that under the surface the subclass will optimise the scene organisation for best performance given assumptions which can be made for that scene type. An example is the BspSceneManager which optimises rendering for large indoor levels based on a Binary Space Partition (BSP) tree.

The application using OGRE does not have to know which subclasses are available. The application simply calls Root::createSceneManager(..) passing as a parameter one of a number of scene types (e.g. ST_GENERIC, ST_INTERIOR etc). OGRE will automatically use the best SceneManager subclass available for that scene type, or default to the basic SceneManager if a specialist one is not available. This allows the developers of OGRE to add new scene specialisations later and thus optimise previously unoptimised scene types without the user applications having to change any code.
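For example, requesting a scene manager by type might look like this sketch (the instance name is an arbitrary choice):

// OGRE picks the best registered SceneManager subclass for this scene type
Ogre::SceneManager* sceneMgr =
    root->createSceneManager(Ogre::ST_GENERIC, "MySceneManager");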

2.4 The ResourceGroupManager Object

The ResourceGroupManager class is actually a 'hub' for loading of reusable resources like textures and meshes. It is the place that you define groups for your resources, so they may be unloaded and reloaded when you want. Servicing it are a number of ResourceManagers which manage the individual types of resource, like TextureManager or MeshManager. In this context, resources are sets of data which must be loaded from somewhere to provide OGRE with the data it needs.

ResourceManagers ensure that resources are only loaded once and shared throughout the OGRE engine. They also manage the memory requirements of the resources they look after. They can also search in a number of locations for the resources they need, including multiple search paths and compressed archives (ZIP files).

Most of the time you won't interact with resource managers directly. Resource managers will be called by other parts of the OGRE system as required; for example, when you request for a texture to be added to a Material, the TextureManager will be called for you. If you like, you can call the appropriate resource manager directly to preload resources (if for example you want to prevent disk access later on), but most of the time it's ok to let OGRE decide when to do it.

One thing you will want to do is to tell the resource managers where to look for resources. You do this via Root::getSingleton().addResourceLocation, which actually passes the information on to ResourceGroupManager.

Because there is only ever 1 instance of each resource manager in the engine, if you do want to get a reference to a resource manager, use the following syntax:

TextureManager::getSingleton().someMethod()
MeshManager::getSingleton().someMethod()
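As a sketch, registering a couple of resource locations and then initialising the groups might look like this (the paths and the 'General' group name are assumptions):

Ogre::ResourceGroupManager& rgm = Ogre::ResourceGroupManager::getSingleton();
rgm.addResourceLocation("media/textures", "FileSystem", "General");
rgm.addResourceLocation("media/models.zip", "Zip", "General");
rgm.initialiseAllResourceGroups(); // parses scripts, creates definitions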

2.5 The Mesh Object

A Mesh object represents a discrete model, a set of geometry which is self-contained and is typically fairly small on a world scale. Mesh objects are assumed to represent movable objects and are not used for the sprawling level geometry typically used to create backgrounds.

Mesh objects are a type of resource, and are managed by the MeshManager resource manager. They are typically loaded from OGRE's custom object format, the '.mesh' format. Mesh files are typically created by exporting from a modelling tool (see Section 4.1 [Exporters], page 140) and can be manipulated through various tools (see Chapter 4 [Mesh Tools], page 140).

You can also create Mesh objects manually by calling the MeshManager::createManual method. This way you can define the geometry yourself, but this is outside the scope of this manual.

Mesh objects are the basis for the individual movable objects in the world, which are called entities (see Section 2.6 [Entities], page 8).

Mesh objects can also be animated using skeletal animation (see Section 8.1 [Skeletal Animation], page 175).

2.6 Entities

An entity is an instance of a movable object in the scene. It could be a car, a person, a dog, a shuriken, whatever. The only assumption is that it does not necessarily have a fixed position in the world.

Entities are based on discrete meshes, i.e. collections of geometry which are self-contained and typically fairly small on a world scale, which are represented by the Mesh object. Multiple entities can be based on the same mesh, since often you want to create multiple copies of the same type of object in a scene.

You create an entity by calling the SceneManager::createEntity method, giving it a name and specifying the name of the mesh object which it will be based on (e.g. 'muscleboundhero.mesh'). The SceneManager will ensure that the mesh is loaded by calling the MeshManager resource manager for you. Only one copy of the Mesh will be loaded.

Entities are not deemed to be a part of the scene until you attach them to a SceneNode (see the section below). By attaching entities to SceneNodes, you can create complex hierarchical relationships between the positions and orientations of entities. You then modify the positions of the nodes to indirectly affect the entity positions.

When a Mesh is loaded, it automatically comes with a number of materials defined. It is possible to have more than one material attached to a mesh - different parts of the mesh may use different materials. Any entity created from the mesh will automatically use the default materials. However, you can change this on a per-entity basis if you like, so you can create a number of entities based on the same mesh but with different textures etc.

To understand how this works, you have to know that all Mesh objects are actually composed of SubMesh objects, each of which represents a part of the mesh using one Material. If a Mesh uses only one Material, it will only have one SubMesh.

When an Entity is created based on this Mesh, it is composed of (possibly) multiple SubEntity objects, each matching 1 for 1 with the SubMesh objects from the original Mesh. You can access the SubEntity objects using the Entity::getSubEntity method. Once you have a reference to a SubEntity, you can change the material it uses by calling its setMaterialName method. In this way you can make an Entity deviate from the default materials and thus create an individual-looking version of it.
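Putting this together, a sketch of creating an entity, attaching it to the scene and overriding one SubEntity's material (the mesh and material names are hypothetical, and sceneMgr is assumed from the earlier sketch):

Ogre::Entity* ent = sceneMgr->createEntity("hero", "muscleboundhero.mesh");
Ogre::SceneNode* node =
    sceneMgr->getRootSceneNode()->createChildSceneNode("heroNode");
node->attachObject(ent); // the entity is now part of the scene

// Deviate from the mesh's default material on the first sub-entity
ent->getSubEntity(0)->setMaterialName("Examples/AlternateSkin");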

2.7 Materials

The Material object controls how objects in the scene are rendered. It specifies what basic surface properties objects have such as reflectance of colours, shininess etc, how many texture layers are present, what images are on them and how they are blended together, what special effects are applied such as environment mapping, what culling mode is used, how the textures are filtered etc.

Materials can either be set up programmatically, by calling SceneManager::createMaterial and tweaking the settings, or by specifying it in a 'script' which is loaded at runtime. See Section 3.1 [Material Scripts], page 13 for more info.
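As an illustration, here is a minimal sketch of setting a material up in code via the MaterialManager resource manager (an alternative route to the call mentioned above; the material and texture names are hypothetical):

// Create a blank material; it starts with one default technique and pass
Ogre::MaterialPtr mat = Ogre::MaterialManager::getSingleton().create(
    "Examples/CodeMaterial",
    Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);

Ogre::Pass* pass = mat->getTechnique(0)->getPass(0);
pass->setDiffuse(1.0, 1.0, 1.0, 1.0);
pass->createTextureUnitState("wibbly.jpg"); // add a texture layer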


Basically everything about the appearance of an object apart from its shape is controlled by the Material class.

The SceneManager class manages the master list of materials available to the scene. The list can be added to by the application by calling SceneManager::createMaterial, or by loading a Mesh (which will in turn load material properties). Whenever materials are added to the SceneManager, they start off with a default set of properties; these are defined by OGRE as the following:

• ambient reflectance = ColourValue::White (full)
• diffuse reflectance = ColourValue::White (full)
• specular reflectance = ColourValue::Black (none)
• emissive = ColourValue::Black (none)
• shininess = 0 (not shiny)
• No texture layers (& hence no textures)
• SourceBlendFactor = SBF_ONE, DestBlendFactor = SBF_ZERO (opaque)
• Depth buffer checking on
• Depth buffer writing on
• Depth buffer comparison function = CMPF_LESS_EQUAL
• Culling mode = CULL_CLOCKWISE
• Ambient lighting in scene = ColourValue(0.5, 0.5, 0.5) (mid-grey)
• Dynamic lighting enabled
• Gouraud shading mode
• Solid polygon mode
• Bilinear texture filtering

You can alter these settings by calling SceneManager::getDefaultMaterialSettings() and making the required changes to the Material which is returned.

Entities automatically have Materials associated with them if they use a Mesh object, since the Mesh object typically sets up its required materials on loading. You can also customise the material used by an entity as described in Section 2.6 [Entities], page 8. Just create a new Material, set it up how you like (you can copy an existing material into it if you like, using a standard assignment statement) and point the SubEntity entries at it using SubEntity::setMaterialName().

2.8 Overlays

Overlays allow you to render 2D and 3D elements on top of the normal scene contents to create effects like heads-up displays (HUDs), menu systems, status panels etc. The frame rate statistics panel which comes as standard with OGRE is an example of an overlay. Overlays can contain 2D or 3D elements. 2D elements are used for HUDs, and 3D elements can be used to create cockpits or any other 3D object which you wish to be rendered on top of the rest of the scene.

You can create overlays either through the SceneManager::createOverlay method, or you can define them in an .overlay script. In reality the latter is likely to be the most practical because it is easier to tweak (without the need to recompile the code). Note that you can define as many overlays as you like: they all start off life hidden, and you display them by calling their 'show()' method. You can also show multiple overlays at once, and their Z order is determined by the Overlay::setZOrder() method.

Creating 2D Elements

The OverlayElement class abstracts the details of 2D elements which are added to overlays. All items which can be added to overlays are derived from this class. It is possible (and encouraged) for users of OGRE to define their own custom subclasses of OverlayElement in order to provide their own user controls. The key common features of all OverlayElements are things like size, position, basic material name etc. Subclasses extend this behaviour to include more complex properties and behaviour.

An important built-in subclass of OverlayElement is OverlayContainer. OverlayContainer is the same as an OverlayElement, except that it can contain other OverlayElements, grouping them together (allowing them to be moved together, for example) and providing them with a local coordinate origin for easier lineup.

The third important class is OverlayManager. Whenever an application wishes to create a 2D element to add to an overlay (or a container), it should call OverlayManager::createOverlayElement. The type of element you wish to create is identified by a string, the reason being that it allows plugins to register new types of OverlayElement for you to create without you having to link specifically to those libraries. For example, to create a panel (a plain rectangular area which can contain other OverlayElements) you would call OverlayManager::getSingleton().createOverlayElement("Panel", "myNewPanel");

Adding 2D Elements to the Overlay

Only OverlayContainers can be added directly to an overlay. The reason is that each level of container establishes the Z-order of the elements contained within it, so if you nest several containers, inner containers have a higher Z-order than outer ones to ensure they are displayed correctly. To add a container (such as a Panel) to the overlay, simply call Overlay::add2D.

If you wish to add child elements to that container, call OverlayContainer::addChild. Child elements can be OverlayElements or OverlayContainer instances themselves. Remember that the position of a child element is relative to the top-left corner of its parent.
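A sketch of building a simple overlay in code (OverlayManager::create is assumed as the creation route here; the names are hypothetical):

Ogre::OverlayManager& om = Ogre::OverlayManager::getSingleton();
Ogre::Overlay* overlay = om.create("MyHUD");

// Only containers can be added directly to an overlay
Ogre::OverlayContainer* panel = static_cast<Ogre::OverlayContainer*>(
    om.createOverlayElement("Panel", "myNewPanel"));
overlay->add2D(panel);

overlay->show(); // overlays start off hidden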

A word about 2D coordinates

OGRE allows you to place and size elements based on 2 coordinate systems: relative and pixel-based.


Pixel Mode
This mode is useful when you want to specify an exact size for your overlay items, and you don't mind if those items get smaller on the screen if you increase the screen resolution (in fact you might want this). In this mode the only way to put something in the middle or at the right or bottom of the screen reliably in any resolution is to use the aligning options, whilst in relative mode you can do it just by using the right relative coordinates. This mode is very simple: the top-left of the screen is (0,0) and the bottom-right of the screen depends on the resolution. As mentioned above, you can use the aligning options to make the horizontal and vertical coordinate origins the right, bottom or center of the screen if you want to place pixel items in these locations without knowing the resolution.

Relative Mode
This mode is useful when you want items in the overlay to be the same size on the screen no matter what the resolution. In relative mode, the top-left of the screen is (0,0) and the bottom-right is (1,1). So if you place an element at (0.5, 0.5), its top-left corner is placed exactly in the center of the screen, no matter what resolution the application is running in. The same principle applies to sizes; if you set the width of an element to 0.5, it covers half the width of the screen. Note that because the aspect ratio of the screen is typically 1.3333:1 (width:height), an element with dimensions (0.25, 0.25) will not be square, but it will take up exactly 1/16th of the screen in area terms. If you want square-looking areas you will have to compensate using the typical aspect ratio, e.g. use (0.1875, 0.25) instead.
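For illustration, switching an element between the two systems might look like this sketch (the GMM_PIXELS/GMM_RELATIVE metrics-mode constants are assumed, and 'panel' comes from the earlier sketch):

// Pixel mode: exact sizes, origin at the top-left of the screen
panel->setMetricsMode(Ogre::GMM_PIXELS);
panel->setPosition(10, 10);
panel->setDimensions(100, 50);

// Relative mode: (0,0) to (1,1) regardless of resolution
panel->setMetricsMode(Ogre::GMM_RELATIVE);
panel->setPosition(0.5, 0.5);     // top-left corner at the screen centre
panel->setDimensions(0.25, 0.25); // a quarter of the screen in each axis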

Transforming Overlays

Another nice feature of overlays is being able to rotate, scroll and scale them as a whole. You can use this for zooming in/out menu systems, dropping them in from off screen and other nice effects. See the Overlay::scroll, Overlay::rotate and Overlay::scale methods for more information.

Scripting overlays

Overlays can also be defined in scripts. See Section 3.4 [Overlay Scripts], page 127 for details.

GUI systems

Overlays are only really designed for non-interactive screen elements, although you can use them as a crude GUI. For a far more complete GUI solution, we recommend CEGui (http://www.cegui.org.uk), as demonstrated in the sample Demo_Gui.


3 Scripts

OGRE drives many of its features through scripts in order to make it easier to set up. The scripts are simply plain text files which can be edited in any standard text editor, and modifying them immediately takes effect on your OGRE-based applications, without any need to recompile. This makes prototyping a lot faster. Here are the items that OGRE lets you script:

• Section 3.1 [Material Scripts], page 13
• Section 3.2 [Compositor Scripts], page 92
• Section 3.3 [Particle Scripts], page 106
• Section 3.4 [Overlay Scripts], page 127
• Section 3.5 [Font Definition Scripts], page 137

3.1 Material Scripts

Material scripts offer you the ability to define complex materials in a script which can be reused easily. Whilst you could set up all materials for a scene in code using the methods of the Material and TextureLayer classes, in practice it's a bit unwieldy. Instead you can store material definitions in text files which can then be loaded whenever required.

Loading scripts

Material scripts are loaded when resource groups are initialised: OGRE looks in all resource locations associated with the group (see Root::addResourceLocation) for files with the '.material' extension and parses them. If you want to parse files manually, use MaterialSerializer::parseScript.

It's important to realise that materials are not loaded completely by this parsing process: only the definition is loaded, no textures or other resources are loaded. This is because it is common to have a large library of materials, but only use a relatively small subset of them in any one scene. To load every material completely in every script would therefore cause unnecessary memory overhead. You can access a 'deferred load' Material in the normal way (MaterialManager::getSingleton().getByName()), but you must call the 'load' method before trying to use it. Ogre does this for you when using the normal material assignment methods of entities etc.
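For example, a sketch of explicitly loading a deferred material (using the material name from the example later in this section):

Ogre::MaterialPtr mat =
    Ogre::MaterialManager::getSingleton().getByName("walls/funkywall1");
if (!mat.isNull())
    mat->load(); // loads textures etc.; normally done for you on assignment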

Another important factor is that material names must be unique throughout ALL scripts loaded by the system, since materials are always identified by name.

Format

Several materials may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ('{', '}'), and comments indicated by starting a line with '//' (note that nested comments are not allowed). The general format is shown in the example below (note that to start with, we only consider fixed-function materials which don't use vertex, geometry or fragment programs; these are covered later):


// This is a comment
material walls/funkywall1
{
    // first, preferred technique
    technique
    {
        // first pass
        pass
        {
            ambient 0.5 0.5 0.5
            diffuse 1.0 1.0 1.0

            // Texture unit 0
            texture_unit
            {
                texture wibbly.jpg
                scroll_anim 0.1 0.0
                wave_xform scale sine 0.0 0.7 0.0 1.0
            }
            // Texture unit 1 (this is a multitexture pass)
            texture_unit
            {
                texture wobbly.png
                rotate_anim 0.25
                colour_op add
            }
        }
    }

    // Second technique, can be used as a fallback or LOD level
    technique
    {
        // .. and so on
    }
}

Every material in the script must be given a name, which is the line 'material <blah>' before the first opening '{'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your materials, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string. If you include spaces in the name, it must be enclosed in double quotes.

NOTE: ':' is the delimiter for specifying material copy in the script, so it can't be used as part of the material name.

A material can inherit from a previously defined material by using a colon : after the material name, followed by the name of the reference material to inherit from. You can in fact even inherit just parts of a material from others; all this is covered in Section 3.1.11 [Script Inheritance], page 84. You can also use variables in your script which can be replaced in inheriting versions; see Section 3.1.13 [Script Variables], page 91.
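As a brief illustration, an inheriting material might look like this sketch (both material names are hypothetical; the child overrides part of what it inherits):

material Examples/BaseWall
{
    technique
    {
        pass
        {
            diffuse 1.0 1.0 1.0
        }
    }
}

// Inherits everything from Examples/BaseWall, then overrides the diffuse
material Examples/RedWall : Examples/BaseWall
{
    technique
    {
        pass
        {
            diffuse 1.0 0.0 0.0
        }
    }
}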

A material can be made up of many techniques (see Section 3.1.1 [Techniques], page 17) - a technique is one way of achieving the effect you are looking for. You can supply more than one technique in order to provide fallback approaches where a card does not have the ability to render the preferred technique, or where you wish to define lower level of detail versions of the material in order to conserve rendering power when objects are more distant.

Each technique can be made up of many passes (see Section 3.1.2 [Passes], page 20), that is, a complete render of the object can be performed multiple times with different settings in order to produce composite effects. Ogre may also split the passes you have defined into many passes at runtime, if you define a pass which uses too many texture units for the card you are currently running on (note that it can only do this if you are not using a fragment program). Each pass has a number of top-level attributes such as 'ambient' to set the amount & colour of the ambient light reflected by the material. Some of these options do not apply if you are using vertex programs; see Section 3.1.2 [Passes], page 20 for more details.

Within each pass, there can be zero or many texture units in use (see Section 3.1.3 [Texture Units], page 39). These define the texture to be used, and optionally some blending operations (which use multitexturing) and texture effects.

You can also reference vertex and fragment programs (or vertex and pixel shaders, if you want to use that terminology) in a pass with a given set of parameters. Programs themselves are declared in separate .program scripts (see Section 3.1.4 [Declaring Vertex/Geometry/Fragment Programs], page 54) and are used as described in Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 69.

Top-level material attributes

The outermost section of a material definition does not have a lot of attributes of its own (most of the configurable parameters are within the child sections). However, it does have some, and here they are:

lod_distances (deprecated)

This option is now deprecated in favour of [lod_values], page 16.

lod_strategy

Sets the name of the LOD strategy to use. Defaults to 'Distance', which means LOD changes based on distance from the camera. Also supported is 'PixelCount', which changes LOD based on an estimate of the screen-space pixels affected.

Format: lod_strategy <name>

Default: lod_strategy Distance

lod_values

This attribute defines the values used to control the LOD transition for this material. By setting this attribute, you indicate that you want this material to alter the Technique that it uses based on some metric, such as the distance from the camera, or the approximate screen space coverage. The exact meaning of these values is determined by the option you select for [lod_strategy], page 15 - it is a list of distances for the 'Distance' strategy, and a list of pixel counts for the 'PixelCount' strategy, for example. You must give it a list of values, in order from highest LOD value to lowest LOD value, each one indicating the point at which the material will switch to the next LOD. Implicitly, all materials activate LOD index 0 for values less than the first entry, so you do not have to specify '0' at the start of the list. You must ensure that there is at least one Technique with a [lod_index], page 18 value for each value in the list (so if you specify 3 values, you must have techniques for LOD indexes 0, 1, 2 and 3). Note you must always have at least one Technique at lod_index 0.

Format: lod_values <value0> <value1> <value2> ...
Default: none

Example:

lod_strategy Distance
lod_values 300.0 600.5 1200

The above example would cause the material to use the best Technique at lod_index 0 up to a distance of 300 world units, the best from lod_index 1 from 300 up to 600, lod_index 2 from 600 to 1200, and lod_index 3 from 1200 upwards.
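A sketch of a complete material using these values together with lod_index techniques (the material name is hypothetical and the pass contents are left as placeholders):

material Examples/DistanceLOD
{
    lod_strategy Distance
    lod_values 300.0 600.5 1200

    // lod_index 0: used below 300 world units
    technique
    {
        lod_index 0
        pass
        {
            // full-detail settings here
        }
    }
    // lod_index 1: 300 to 600.5
    technique
    {
        lod_index 1
        pass
        {
        }
    }
    // lod_index 2: 600.5 to 1200
    technique
    {
        lod_index 2
        pass
        {
        }
    }
    // lod_index 3: beyond 1200
    technique
    {
        lod_index 3
        pass
        {
        }
    }
}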

receive_shadows

This attribute controls whether objects using this material can have shadows cast upon them.

Format: receive_shadows <on|off>
Default: on

Whether or not an object receives a shadow is the combination of a number of factors; see Chapter 7 [Shadows], page 160 for full details. However, this allows you to make a material opt out of receiving shadows if required. Note that transparent materials never receive shadows, so this option only has an effect on solid materials.

transparency_casts_shadows

This attribute controls whether transparent materials can cast certain kinds of shadow.

Format: transparency_casts_shadows <on|off>
Default: off

Whether or not an object casts a shadow is the combination of a number of factors; see Chapter 7 [Shadows], page 160 for full details. However, this allows you to make a transparent material cast shadows when it would otherwise not. For example, when using texture shadows, transparent materials are normally not rendered into the shadow texture because they should not block light. This flag overrides that.

set_texture_alias

This attribute associates a texture alias with a texture name.

Format: set_texture_alias <alias_name> <texture_name>

This attribute can be used to set the textures used in texture unit states that were inherited from another material (see Section 3.1.12 [Texture Aliases], page 87).

3.1.1 Techniques

A "technique" section in your material script encapsulates a single method of rendering anobject. The simplest of material definitions only contains a single technique, however since PChardware varies quite greatly in it’s capabilities, you can only do this if you are sure that everycard for which you intend to target your application will support the capabilities which yourtechnique requires. In addition, it can be useful to define simpler ways to render a material ifyou wish to use material LOD, such that more distant objects use a simpler, less performance-hungry technique.

When a material is used for the first time, it is 'compiled'. That involves scanning the techniques which have been defined, and marking which of them are supportable using the current rendering API and graphics card. If no techniques are supportable, your material will render as blank white. The compilation examines a number of things, such as:

• The number of texture unit entries in each pass

Note that if the number of texture unit entries exceeds the number of texture units in the current graphics card, the technique may still be supportable so long as a fragment program is not being used. In this case, Ogre will split the pass which has too many entries into multiple passes for the less capable card, and the multitexture blend will be turned into a multipass blend (see [colour_op_multipass_fallback], page 51).

• Whether vertex, geometry or fragment programs are used, and if so which syntax they use (e.g. vs_1_1, ps_2_x, arbfp1 etc.)

• Other effects like cube mapping and dot3 blending

• Whether the vendor or device name of the current graphics card matches some user-specified rules

In a material script, techniques must be listed in order of preference, i.e. the earlier techniques are preferred over the later techniques. This normally means you will list your most advanced, most demanding techniques first in the script, and list fallbacks afterwards.


To help clearly identify what each technique is used for, the technique can be named, but this is optional. Techniques not named within the script will take on a name that is the technique index number. For example: the first technique in a material is index 0, so its name would be "0" if it was not given a name in the script. The technique name must be unique within the material, or else the final technique is the resulting merge of all techniques with the same name in the material. A warning message is posted in the Ogre.log if this occurs. Named techniques can help when inheriting a material and modifying an existing technique (see Section 3.1.11 [Script Inheritance], page 84).

Format: technique name

Techniques have only a small number of attributes of their own:

• [scheme], page 18

• [lod_index], page 18 (and also see [lod_distances], page 15 in the parent material)

• [shadow_caster_material], page 19

• [shadow_receiver_material], page 19

• [gpu_vendor_rule], page 19

• [gpu_device_rule], page 19

scheme

Sets the 'scheme' this Technique belongs to. Material schemes are used to control top-level switching from one set of techniques to another. For example, you might use this to define 'high', 'medium' and 'low' complexity levels on materials to allow a user to pick a performance/quality ratio. Another possibility is that you have a fully HDR-enabled pipeline for top machines, rendering all objects using unclamped shaders, and a simpler pipeline for others; this can be implemented using schemes. The active scheme is typically controlled at a viewport level, and the active one defaults to 'Default'.

Format: scheme <name>
Example: scheme hdr
Default: scheme Default

lod_index

Sets the level-of-detail (LOD) index this Technique belongs to.

Format: lod_index <number>

NB: Valid values are 0 (highest level of detail) to 65535, although it is unlikely you will need that many. You should not leave gaps in the LOD indexes between Techniques.

Example: lod_index 1

All techniques must belong to a LOD index; by default they all belong to index 0, i.e. the highest LOD. Increasing indexes denote lower levels of detail. You can (and often will) assign more than one technique to the same LOD index; this means that OGRE will pick the best technique of the ones listed at the same LOD index. For readability, it is advised that you list your techniques in order of LOD, then in order of preference, although the latter is the only prerequisite (OGRE determines which one is 'best' by which one is listed first). You must always have at least one Technique at lod_index 0.

The distance at which a LOD level is applied is determined by the lod_distances attribute of the containing material; see [lod_distances], page 15 for details.

Default: lod_index 0

Techniques also contain one or more passes (and there must be at least one); see Section 3.1.2 [Passes], page 20.

shadow_caster_material

When using texture-based shadows (see Section 7.2 [Texture-based Shadows], page 164) you can specify an alternate material to use when rendering the object using this material into the shadow texture. This is like a more advanced version of using shadow_caster_vertex_program; however, note that for the moment you are expected to render the shadow in one pass, i.e. only the first pass is respected.

shadow_receiver_material

When using texture-based shadows (see Section 7.2 [Texture-based Shadows], page 164) you can specify an alternate material to use when performing the receiver shadow pass. Note that this explicit 'receiver' pass is only done when you're not using [Integrated Texture Shadows], page 168 - i.e. the shadow rendering is done separately (either as a modulative pass, or a masked light pass). This is like a more advanced version of using shadow_receiver_vertex_program and shadow_receiver_fragment_program; however, note that for the moment you are expected to render the shadow in one pass, i.e. only the first pass is respected.

gpu vendor rule and gpu device rule

Although Ogre does a good job of detecting the capabilities of graphics cards and setting the supportability of techniques from that, occasionally card-specific behaviour exists which is not necessarily detectable and you may want to ensure that your materials go down a particular path to either use or avoid that behaviour. This is what these rules are for - you can specify matching rules so that a technique will be considered supportable only on cards from a particular vendor, or which match a device name pattern, or will be considered supported only if they don't fulfill such matches.

The format of the rules is as follows:

gpu vendor rule <include|exclude> <vendor name>
gpu device rule <include|exclude> <device pattern> [case sensitive]


An 'include' rule means that the technique will only be supported if one of the include rules is matched (if no include rules are provided, anything will pass). An 'exclude' rule means that the technique is considered unsupported if any of the exclude rules are matched. You can provide as many rules as you like, although <vendor name> and <device pattern> must obviously be unique. The valid list of <vendor name> values is currently 'nvidia', 'ati', 'intel', 's3', 'matrox' and '3dlabs'. <device pattern> can be any string, and you can use wildcards ('*') if you need to match variants. Here's an example:

gpu vendor rule include nvidia
gpu vendor rule include intel
gpu device rule exclude *950*

These rules, if all included in one technique, will mean that the technique will only be considered supported on graphics cards made by NVIDIA and Intel, and so long as the device name doesn't have '950' in it.

Note that these rules can only mark a technique 'unsupported' when it would otherwise be considered 'supported' judging by the hardware capabilities. Even if a technique passes these rules, it is still subject to the usual hardware support tests.

3.1.2 Passes

A pass is a single render of the geometry in question; a single call to the rendering API with a certain set of rendering properties. A technique can have between one and 16 passes, although clearly the more passes you use, the more expensive the technique will be to render.

To help clearly identify what each pass is used for, the pass can be named, but this is optional. Passes not named within the script take on a name that is the pass index number; for example, the first pass in a technique is index 0, so its name would be "0" if it was not given a name in the script. The pass name must be unique within the technique, otherwise the final pass is the resulting merge of all passes with the same name in the technique, and a warning message is posted in the Ogre.log if this occurs. Named passes can help when inheriting a material and modifying an existing pass (See Section 3.1.11 [Script Inheritence], page 84).

Passes have a set of global attributes (described below), zero or more nested texture unit entries (See Section 3.1.3 [Texture Units], page 39), and optionally a reference to a vertex and / or a fragment program (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 69).

Here are the attributes you can use in a ’pass’ section of a .material script:

• [ambient], page 21
• [diffuse], page 22
• [specular], page 22
• [emissive], page 23
• [scene blend], page 23
• [separate scene blend], page 25
• [scene blend op], page 25
• [separate scene blend op], page 26
• [depth check], page 26
• [depth write], page 26
• [depth func], page 26
• [depth bias], page 27
• [iteration depth bias], page 27
• [alpha rejection], page 28
• [alpha to coverage], page 28
• [light scissor], page 28
• [light clip planes], page 29
• [illumination stage], page 30
• [transparent sorting], page 30
• [normalise normals], page 30
• [cull hardware], page 31
• [cull software], page 31
• [lighting], page 32
• [shading], page 32
• [polygon mode], page 33
• [polygon mode overrideable], page 33
• [fog override], page 33
• [colour write], page 34
• [max lights], page 35
• [start light], page 34
• [iteration], page 35
• [point size], page 38
• [point sprites], page 38
• [point size attenuation], page 38
• [point size min], page 39
• [point size max], page 39

Attribute Descriptions

ambient

Sets the ambient colour reflectance properties of this pass. This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.

Format: ambient (<red> <green> <blue> [<alpha>] | vertexcolour)
NB valid colour values are between 0.0 and 1.0.


Example: ambient 0.0 0.8 0.0

The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much ambient light (directionless global light) is reflected. It is also possible to make the ambient reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. The default is full white, meaning objects are completely globally illuminated. Reduce this if you want to see diffuse or specular light effects, or change the blend of colours to make the object have a base colour other than white. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour op replace' attribute.

Default: ambient 1.0 1.0 1.0 1.0

diffuse

Sets the diffuse colour reflectance properties of this pass. This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.

Format: diffuse (<red> <green> <blue> [<alpha>] | vertexcolour)
NB valid colour values are between 0.0 and 1.0.

Example: diffuse 1.0 0.5 0.5

The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much diffuse light (light from instances of the Light class in the scene) is reflected. It is also possible to make the diffuse reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. The default is full white, meaning objects reflect the maximum white light they can from Light objects. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour op replace' attribute.

Default: diffuse 1.0 1.0 1.0 1.0

specular

Sets the specular colour reflectance properties of this pass. This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.


Format: specular (<red> <green> <blue> [<alpha>] | vertexcolour) <shininess>
NB valid colour values are between 0.0 and 1.0. Shininess can be any value greater than 0.

Example: specular 1.0 1.0 1.0 12.5

The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much specular light (highlights from instances of the Light class in the scene) is reflected. It is also possible to make the specular reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. The default is to reflect no specular light. The colour of the specular highlights is determined by the colour parameters, and the size of the highlights by the separate shininess parameter. The higher the value of the shininess parameter, the sharper the highlight, i.e. the radius is smaller. Beware of using shininess values in the range of 0 to 1 since this causes the specular colour to be applied to the whole surface that has the material applied to it. When the viewing angle to the surface changes, ugly flickering will also occur when shininess is in the range of 0 to 1. Shininess values between 1 and 128 work best in both DirectX and OpenGL renderers. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour op replace' attribute.

Default: specular 0.0 0.0 0.0 0.0 0.0

emissive

Sets the amount of self-illumination an object has. This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.

Format: emissive (<red> <green> <blue> [<alpha>] | vertexcolour)
NB valid colour values are between 0.0 and 1.0.

Example: emissive 1.0 0.0 0.0

If an object is self-illuminating, it does not need external sources to light it, ambient or otherwise. It's like the object has its own personal ambient light. Despite what the name suggests, this object doesn't act as a light source for other objects in the scene (if you want it to, you have to create a light which is centered on the object). It is also possible to make the emissive colour track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour op replace' attribute.

Default: emissive 0.0 0.0 0.0 0.0


scene blend

Sets the kind of blending this pass has with the existing contents of the scene. Whereas the texture blending operations seen in the texture unit entries are concerned with blending between texture layers, this blending is about combining the output of this pass as a whole with the existing contents of the rendering target. This blending therefore allows object transparency and other special effects. There are 2 formats, one using predefined blend types, the other allowing a roll-your-own approach using source and destination factors.

Format1: scene blend <add|modulate|alpha blend|colour blend>

Example: scene blend add

This is the simpler form, where the most commonly used blending modes are enumerated using a single parameter. Valid <blend type> parameters are:

add            The colour of the rendering output is added to the scene. Good for explosions, flares, lights, ghosts etc. Equivalent to 'scene blend one one'.

modulate       The colour of the rendering output is multiplied with the scene contents. Generally colours and darkens the scene, good for smoked glass, semi-transparent objects etc. Equivalent to 'scene blend dest colour zero'.

colour blend   Colour the scene based on the brightness of the input colours, but don't darken. Equivalent to 'scene blend src colour one minus src colour'.

alpha blend    The alpha value of the rendering output is used as a mask. Equivalent to 'scene blend src alpha one minus src alpha'.

Format2: scene blend <src factor> <dest factor>

Example: scene blend one one minus dest alpha

This version of the method allows complete control over the blending operation, by specifying the source and destination blending factors. The resulting colour which is written to the rendering target is (texture * sourceFactor) + (scene pixel * destFactor). Valid values for both parameters are:

one                     Constant value of 1.0

zero                    Constant value of 0.0

dest colour             The existing pixel colour

src colour              The texture pixel (texel) colour

one minus dest colour   1 - (dest colour)

one minus src colour    1 - (src colour)

dest alpha              The existing pixel alpha value

src alpha               The texel alpha value

one minus dest alpha    1 - (dest alpha)

one minus src alpha     1 - (src alpha)

Default: scene blend one zero (opaque)

Also see [separate scene blend], page 25.

separate scene blend

This option operates in exactly the same way as [scene blend], page 23, except that it allows you to specify the operations to perform between the rendered pixel and the frame buffer separately for colour and alpha components. By nature this option is only useful when rendering to targets which have an alpha channel which you'll use for later processing, such as a render texture.

Format1: separate scene blend <simple colour blend> <simple alpha blend>

Example: separate scene blend add modulate

This example would add colour components but multiply alpha components. The blend modes available are as in [scene blend], page 23. The more advanced form is also available:

Format2: separate scene blend <colour src factor> <colour dest factor> <alpha src factor> <alpha dest factor>

Example: separate scene blend one one minus dest alpha one one

Again the options available in the second format are the same as those in the second format of [scene blend], page 23.

scene blend op

This directive changes the operation which is applied between the two components of the scene blending equation, which by default is 'add' (sourceFactor * source + destFactor * dest). You may change this to 'add', 'subtract', 'reverse subtract', 'min' or 'max'.

Format: scene blend op <add|subtract|reverse subtract|min|max>


Default: scene blend op add

separate scene blend op

This directive is as scene blend op, except that you can set the operation for colour and alpha separately.

Format: separate scene blend op <colourOp> <alphaOp>

Default: separate scene blend op add add

depth check

Sets whether or not this pass renders with depth-buffer checking on or not.

Format: depth check <on|off>

If depth-buffer checking is on, whenever a pixel is about to be written to the frame buffer the depth buffer is checked to see if the pixel is in front of all other pixels written at that point. If not, the pixel is not written. If depth checking is off, pixels are written no matter what has been rendered before. Also see depth func for more advanced depth check configuration.

Default: depth check on

depth write

Sets whether or not this pass renders with depth-buffer writing on or not.

Format: depth write <on|off>

If depth-buffer writing is on, whenever a pixel is written to the frame buffer the depth buffer is updated with the depth value of that new pixel, thus affecting future rendering operations if future pixels are behind this one. If depth writing is off, pixels are written without updating the depth buffer. Depth writing should normally be on but can be turned off when rendering static backgrounds or when rendering a collection of transparent objects at the end of a scene so that they overlap each other correctly.

Default: depth write on


depth func

Sets the function used to compare depth values when depth checking is on.

Format: depth func <func>

If depth checking is enabled (see depth check) a comparison occurs between the depth value of the pixel to be written and the current contents of the buffer. This comparison is normally less equal, i.e. the pixel is written if it is closer (or at the same distance) than the current contents. The possible functions are:

always fail     Never writes a pixel to the render target

always pass     Always writes a pixel to the render target

less            Write if (new Z < existing Z)

less equal      Write if (new Z <= existing Z)

equal           Write if (new Z == existing Z)

not equal       Write if (new Z != existing Z)

greater equal   Write if (new Z >= existing Z)

greater         Write if (new Z > existing Z)

Default: depth func less equal

depth bias

Sets the bias applied to the depth value of this pass. Can be used to make coplanar polygons appear on top of others, e.g. for decals.

Format: depth bias <constant bias> [<slopescale bias>]

The final depth bias value is constant bias * minObservableDepth + maxSlope * slopescale bias. Slope scale biasing is relative to the angle of the polygon to the camera, which makes for a more appropriate bias value, but this is ignored on some older hardware. Constant biasing is expressed as a factor of the minimum depth value, so a value of 1 will nudge the depth by one 'notch' if you will. Also see [iteration depth bias], page 27.
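
For instance, a decal rendered over coplanar base geometry might be nudged forward with a small constant bias; a sketch (texture name is illustrative):

material Examples/Decal
{
    technique
    {
        pass
        {
            // push the decal one 'notch' closer so it wins the
            // depth test against the coplanar base polygon
            depth_bias 1
            scene_blend alpha_blend
            depth_write off

            texture_unit
            {
                texture decal.png
            }
        }
    }
}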

iteration depth bias

Sets an additional bias derived from the number of times a given pass has been iterated. Operates just like [depth bias], page 27 except that it applies an additional bias factor to the base depth bias value, multiplying the provided value by the number of times this pass has been iterated before, through one of the [iteration], page 35 variants. So the first time the pass will get the depth bias value, the second time it will get depth bias + iteration depth bias, the third time it will get depth bias + iteration depth bias * 2, and so on. The default is zero.

Format: iteration depth bias <bias per iteration>

alpha rejection

Sets the way the pass will use alpha to totally reject pixels from the pipeline.

Format: alpha rejection <function> <value>

Example: alpha rejection greater equal 128

The function parameter can be any of the options listed in the material depth function attribute. The value parameter can theoretically be any value between 0 and 255, but is best limited to 0 or 128 for hardware compatibility.

Default: alpha rejection always pass

alpha to coverage

Sets whether this pass will use 'alpha to coverage', a way to multisample alpha texture edges so they blend more seamlessly with the background. This facility is typically only available on cards from around 2006 onwards, but it is safe to enable it anyway - Ogre will just ignore it if the hardware does not support it. The common use for alpha to coverage is foliage rendering and chain-link fence style textures.

Format: alpha to coverage <on|off>

Default: alpha to coverage off
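
A typical foliage setup might combine it with alpha rejection and two-sided rendering; a sketch (texture name is illustrative):

material Examples/Foliage
{
    technique
    {
        pass
        {
            // discard fully transparent texels...
            alpha_rejection greater 128
            // ...and let multisampling smooth the surviving edges
            // (silently ignored where the hardware lacks support)
            alpha_to_coverage on
            cull_hardware none

            texture_unit
            {
                texture leaves.png
            }
        }
    }
}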

light scissor

Sets whether when rendering this pass, rendering will be limited to a screen-space scissor rectangle representing the coverage of the light(s) being used in this pass, derived from their attenuation ranges.


Format: light scissor <on|off>

Default: light scissor off

This option is usually only useful if this pass is an additive lighting pass, and is at least the second one in the technique, i.e. areas which are not affected by the current light(s) will never need to be rendered. If there is more than one light being passed to the pass, then the scissor is defined to be the rectangle which covers all lights in screen-space. Directional lights are ignored since they are infinite.

This option does not need to be specified if you are using a standard additive shadow mode, i.e. SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_TEXTURE_ADDITIVE, since it is the default behaviour to use a scissor for each additive shadow pass. However, if you're not using shadows, or you're using [Integrated Texture Shadows], page 168 where passes are specified in a custom manner, then this could be of use to you.

light clip planes

Sets whether when rendering this pass, triangle setup will be limited to the clipping volume covered by the light. Directional lights are ignored, point lights clip to a cube the size of the attenuation range of the light, and spotlights clip to a pyramid bounding the spotlight angle and attenuation range.

Format: light clip planes <on|off>

Default: light clip planes off

This option will only function if there is a single non-directional light being used in this pass. If there is more than one light, or only directional lights, then no clipping will occur. If there are no lights at all then the objects won't be rendered at all.

When using a standard additive shadow mode, i.e. SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_TEXTURE_ADDITIVE, you have the option of enabling clipping for all light passes by calling SceneManager::setShadowUseLightClipPlanes regardless of this pass setting, since rendering is done lightwise anyway. This is off by default since using clip planes is not always faster - it depends on how much of the scene the light volumes cover. Generally the smaller your lights are, the more chance you'll see a benefit rather than a penalty from clipping. If you're not using shadows, or you're using [Integrated Texture Shadows], page 168 where passes are specified in a custom manner, then specify the option per-pass using this attribute.

A specific note about OpenGL: user clip planes are completely ignored when you use an ARB vertex program. This means light clip planes won't help much if you use ARB vertex programs on GL, although OGRE will perform some optimisation of its own, in that if it sees that the clip volume is completely off-screen, it won't perform a render at all. When using GLSL, user clipping can be used but you have to use glClipVertex in your shader; see the GLSL documentation for more information. In Direct3D user clip planes are always respected.

illumination stage

When using an additive lighting mode (SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_TEXTURE_ADDITIVE), the scene is rendered in 3 discrete stages: ambient (or pre-lighting), per-light (once per light, with shadowing) and decal (or post-lighting). Usually OGRE figures out how to categorise your passes automatically, but there are some effects you cannot achieve without manually controlling the illumination. For example, specular effects are muted by the typical sequence because all textures are saved until the 'decal' stage, which mutes the specular effect. Instead, you could do texturing within the per-light stage if it's possible for your material, and thus add the specular on after the decal texturing, and have no post-light rendering.

If you assign an illumination stage to a pass you have to assign it to all passes in the technique, otherwise it will be ignored. Also note that whilst you can have more than one pass in each group, they cannot alternate, i.e. all ambient passes will be before all per-light passes, which will also be before all decal passes. Within their categories the passes will retain their ordering though.

Format: illumination stage <ambient|per light|decal>

Default: none (autodetect)
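
A sketch of a manually categorised technique (names are illustrative; note that every pass is assigned a stage):

material Examples/ManualIllumination
{
    technique
    {
        // stage 1: pre-lighting
        pass
        {
            illumination_stage ambient
            diffuse 0 0 0
            specular 0 0 0 0
        }
        // stage 2: run once per light, additively blended
        pass
        {
            illumination_stage per_light
            iteration once_per_light
            scene_blend add
            ambient 0 0 0
        }
        // stage 3: post-lighting decal texturing
        pass
        {
            illumination_stage decal
            lighting off
            scene_blend modulate
            texture_unit
            {
                texture base.png
            }
        }
    }
}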

normalise normals

Sets whether or not this pass renders with all vertex normals being automatically re-normalised.

Format: normalise normals <on|off>

Scaling objects causes normals to also change magnitude, which can throw off your lighting calculations. By default, the SceneManager detects this and will automatically re-normalise normals for any scaled object, but this has a cost. If you'd prefer to control this manually, call SceneManager::setNormaliseNormalsOnScale(false) and then use this option on materials which are sensitive to normals being resized.

Default: normalise normals off

transparent sorting

Sets if transparent textures should be sorted by depth or not.

Format: transparent sorting <on|off|force>


By default all transparent materials are sorted such that renderables furthest away from the camera are rendered first. This is usually the desired behaviour, but in certain cases this depth sorting may be unnecessary and undesirable - if, for example, it is necessary to ensure the rendering order does not change from one frame to the next. In this case you could set the value to 'off' to prevent sorting.

You can also use the keyword 'force' to force transparent sorting on, regardless of other circumstances. Usually sorting is only used when the pass is also transparent, and has a depth write or read which indicates it cannot reliably render without sorting. By using 'force', you tell OGRE to sort this pass no matter what other circumstances are present.

Default: transparent sorting on

cull hardware

Sets the hardware culling mode for this pass.

Format: cull hardware <clockwise|anticlockwise|none>

A typical way for the hardware rendering engine to cull triangles is based on the 'vertex winding' of triangles. Vertex winding refers to the direction in which the vertices are passed or indexed to in the rendering operation as viewed from the camera, and will either be clockwise or anticlockwise (that's 'counterclockwise' for you Americans out there ;). If the option 'cull hardware clockwise' is set, all triangles whose vertices are viewed in clockwise order from the camera will be culled by the hardware. 'anticlockwise' is the reverse (obviously), and 'none' turns off hardware culling so all triangles are rendered (useful for creating 2-sided passes).

Default: cull hardware clockwise
NB this is the same as OpenGL's default but the opposite of Direct3D's default (because Ogre uses a right-handed coordinate system like OpenGL).

cull software

Sets the software culling mode for this pass.

Format: cull software <back|front|none>

In some situations the engine will also cull geometry in software before sending it to the hardware renderer. This setting only takes effect on SceneManagers that use it (since it is best used on large groups of planar world geometry rather than on movable geometry, where it would be expensive), but if used can cull geometry before it is sent to the hardware. In this case the culling is based on whether the 'back' or 'front' of the triangle is facing the camera - this definition is based on the face normal (a vector which sticks out of the front side of the polygon perpendicular to the face). Since Ogre expects face normals to be on the anticlockwise side of the face, 'cull software back' is the software equivalent of the 'cull hardware clockwise' setting, which is why they are both the default. The naming is different to reflect the way the culling is done though, since most of the time face normals are pre-calculated and they don't have to be the way Ogre expects - you could set 'cull hardware none' and completely cull in software based on your own face normals, if you have the right SceneManager which uses them.

Default: cull software back
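
For a double-sided pass, both culling stages can be disabled; a short sketch (material name is illustrative):

material Examples/TwoSided
{
    technique
    {
        pass
        {
            // render both triangle windings in hardware...
            cull_hardware none
            // ...and skip any software pre-culling as well
            cull_software none
        }
    }
}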

lighting

Sets whether or not dynamic lighting is turned on for this pass or not. If lighting is turned off, all objects rendered using the pass will be fully lit. This attribute has no effect if a vertex program is used.

Format: lighting <on|off>

Turning dynamic lighting off makes any ambient, diffuse, specular, emissive and shading properties for this pass redundant. When lighting is turned on, objects are lit according to their vertex normals for diffuse and specular light, and globally for ambient and emissive.

Default: lighting on

shading

Sets the kind of shading which should be used for representing dynamic lighting for this pass.

Format: shading <flat|gouraud|phong>

When dynamic lighting is turned on, the effect is to generate colour values at each vertex. Whether these values are interpolated across the face (and how) depends on this setting.

flat       No interpolation takes place. Each face is shaded with a single colour determined from the first vertex in the face.

gouraud Colour at each vertex is linearly interpolated across the face.

phong      Vertex normals are interpolated across the face, and these are used to determine colour at each pixel. Gives a more natural lighting effect but is more expensive and works better at high levels of tessellation. Not supported on all hardware.


Default: shading gouraud

polygon mode

Sets how polygons should be rasterised, i.e. whether they should be filled in, or just drawn as lines or points.

Format: polygon mode <solid|wireframe|points>

solid The normal situation - polygons are filled in.

wireframe Polygons are drawn in outline only.

points Only the points of each polygon are rendered.

Default: polygon mode solid

polygon mode overrideable

Sets whether or not the [polygon mode], page 33 set on this pass can be downgraded by the camera, if the camera itself is set to a lower polygon mode. If set to false, this pass will always be rendered at its own chosen polygon mode no matter what the camera says. The default is true.

Format: polygon mode overrideable <true|false>

fog override

Tells the pass whether it should override the scene fog settings, and enforce its own. Very useful for things that you don't want to be affected by fog when the rest of the scene is fogged, or vice versa. Note that this only affects fixed-function fog - the original scene fog parameters are still sent to shaders which use the fog params parameter binding (this allows you to turn off fixed-function fog and calculate it in the shader instead; if you want to disable shader fog you can do that through shader parameters anyway).

Format: fog override <override?> [<type> <colour> <density> <start> <end>]

Default: fog override false

If you specify 'true' for the first parameter and you supply the rest of the parameters, you are telling the pass to use these fog settings in preference to the scene settings, whatever they might be. If you specify 'true' but provide no further parameters, you are telling this pass to never use fogging no matter what the scene says. Here is an explanation of the parameters:

type       none = No fog, equivalent of just using 'fog override true'
           linear = Linear fog from the <start> and <end> distances
           exp = Fog increases exponentially from the camera (fog = 1/e^(distance * density)), use the <density> param to control it
           exp2 = Fog increases at the square of FOG_EXP, i.e. even quicker (fog = 1/e^(distance * density)^2), use the <density> param to control it

colour     Sequence of 3 floating point values from 0 to 1 indicating the red, green and blue intensities

density    The density parameter used in the 'exp' or 'exp2' fog types. Not used in linear mode but the param must still be there as a placeholder

start      The start distance from the camera of linear fog. Must still be present in other modes, even though it is not used.

end        The end distance from the camera of linear fog. Must still be present in other modes, even though it is not used.

Example: fog override true exp 1 1 1 0.002 100 10000

colour write

Sets whether or not this pass renders with colour writing on or not.

Format: colour write <on|off>

If colour writing is off no visible pixels are written to the screen during this pass. You might think this is useless, but if you render with colour writing off, and with very minimal other settings, you can use this pass to initialise the depth buffer before subsequently rendering other passes which fill in the colour data. This can give you significant performance boosts on some newer cards, especially when using complex fragment programs, because if the depth check fails then the fragment program is never run.

Default: colour write on
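
A sketch of that depth-priming idea (material name is illustrative):

material Examples/DepthPrimed
{
    technique
    {
        // pass 1: depth only, no colour output, minimal state
        pass
        {
            colour_write off
        }
        // pass 2: the expensive shading; the 'equal' depth test means
        // fragments hidden by the primed depth buffer are skipped
        pass
        {
            depth_func equal
            depth_write off
            // fragment-program-heavy rendering would go here
        }
    }
}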

start light

Sets the first light which will be considered for use with this pass.

Format: start light <number>

You can use this attribute to offset the starting point of the lights for this pass. In other words, if you set start light to 2 then the first light to be processed in that pass will be the third actual light in the applicable list. You could use this option to use different passes to process the first couple of lights versus the second couple of lights for example, or use it in conjunction with the [iteration], page 35 option to start the iteration from a given point in the list (e.g. doing the first 2 lights in the first pass, and then iterating every 2 lights from then on perhaps).


Default: start light 0
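
The two-pass arrangement just described might look like this (a sketch; the material name and values are illustrative):

material Examples/SplitLighting
{
    technique
    {
        // pass 1: handles lights 0 and 1
        pass
        {
            max_lights 2
        }
        // pass 2: repeats for every further 2 lights, starting at light 2
        pass
        {
            start_light 2
            iteration 1 per_n_lights 2
            scene_blend add
        }
    }
}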

max lights

Sets the maximum number of lights which will be considered for use with this pass.

Format: max lights <number>

The maximum number of lights which can be used when rendering fixed-function materials is set by the rendering system, and is typically set at 8. When you are using the programmable pipeline (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 69) this limit is dependent on the program you are running, or, if you use 'iteration once per light' or a variant (See [iteration], page 35), it is effectively only bounded by the number of passes you are willing to use. If you are not using pass iteration, the light limit applies once for this pass. If you are using pass iteration, the light limit applies across all iterations of this pass - for example if you have 12 lights in range with an 'iteration once per light' setup but your max lights is set to 4 for that pass, the pass will only iterate 4 times.

Default: max lights 8

iteration

Sets whether or not this pass is iterated, i.e. issued more than once.

Format 1: iteration <once | once per light> [lightType]

Format 2: iteration <number> [<per light> [lightType]]

Format 3: iteration <number> [<per n lights> <num lights> [lightType]]

Examples:

iteration once
    The pass is only executed once, which is the default behaviour.

iteration once per light point
    The pass is executed once for each point light.

iteration 5
    The render state for the pass will be setup and then the draw call will execute 5 times.

iteration 5 per light point
    The render state for the pass will be setup and then the draw call will execute 5 times. This will be done for each point light.

iteration 1 per n lights 2 point
    The render state for the pass will be setup and the draw call executed once for every 2 lights.


By default, passes are only issued once. However, if you use the programmable pipeline, or you wish to exceed the normal limits on the number of lights which are supported, you might want to use the once per light option. In this case, only light index 0 is ever used, and the pass is issued multiple times, each time with a different light in light index 0. Clearly this will make the pass more expensive, but it may be the only way to achieve certain effects such as per-pixel lighting effects which take into account 1..n lights.

Using a number instead of "once" instructs the pass to iterate more than once after the render state is setup. The render state is not changed after the initial setup, so repeated draw calls are very fast and ideal for passes using programmable shaders that must iterate more than once with the same render state, e.g. shaders that do fur, motion blur, or special filtering.

If you use once per light, you should also add an ambient pass to the technique before this pass, otherwise when no lights are in range of this object it will not get rendered at all; this is important even when you have no ambient light in the scene, because you would still want the object's silhouette to appear.

The lightType parameter to the attribute only applies if you use once per light, per light, or per n lights, and restricts the pass to being run for lights of a single type (either 'point', 'directional' or 'spot'). In the example, the pass will be run once per point light. This can be useful because when you're writing a vertex / fragment program it is a lot easier if you can assume the kind of lights you'll be dealing with. However at least point and directional lights can be dealt with in one way.

Default: iteration once

Example: Simple fur shader material script that uses a second pass with 10 iterations to grow the fur:

// GLSL simple fur
vertex_program GLSLDemo/FurVS glsl
{
    source fur.vert
    default_params
    {
        param_named_auto lightPosition light_position_object_space 0
        param_named_auto eyePosition camera_position_object_space
        param_named_auto passNumber pass_number
        param_named_auto multiPassNumber pass_iteration_number
        param_named furLength float 0.15
    }
}

fragment_program GLSLDemo/FurFS glsl
{
    source fur.frag
    default_params
    {
        param_named Ka float 0.2
        param_named Kd float 0.5
        param_named Ks float 0.0
        param_named furTU int 0
    }
}

material Fur
{
    technique GLSL
    {
        pass base_coat
        {
            ambient 0.7 0.7 0.7
            diffuse 0.5 0.8 0.5
            specular 1.0 1.0 1.0 1.5

            vertex_program_ref GLSLDemo/FurVS
            {
            }

            fragment_program_ref GLSLDemo/FurFS
            {
            }

            texture_unit
            {
                texture Fur.tga
                tex_coord_set 0
                filtering trilinear
            }
        }

        pass grow_fur
        {
            ambient 0.7 0.7 0.7
            diffuse 0.8 1.0 0.8
            specular 1.0 1.0 1.0 64
            depth_write off

            scene_blend src_alpha one
            iteration 10

            vertex_program_ref GLSLDemo/FurVS
            {
            }

            fragment_program_ref GLSLDemo/FurFS
            {
            }

            texture_unit
            {
                texture Fur.tga
                tex_coord_set 0
                filtering trilinear
            }
        }
    }
}

Note: use the gpu program auto parameters [pass number], page 80 and [pass iteration number], page 80 to tell the vertex, geometry or fragment program the pass number and iteration number.

point size

This setting allows you to change the size of points when rendering a point list, or a list of point sprites. The interpretation of this command depends on the [point size attenuation], page 38 option - if it is off (the default), the point size is in screen pixels; if it is on, it is expressed as normalised screen coordinates (1.0 is the height of the screen) when the point is at the origin.

NOTE: Some drivers have an upper limit on the size of points they support - this can even vary between APIs on the same card! Don't rely on point sizes that cause the points to get very large on screen, since they may get clamped on some cards. Upper sizes can range from 64 to 256 pixels.

Format: point size <size>

Default: point size 1.0

point sprites

This setting specifies whether or not hardware point sprite rendering is enabled for this pass. Enabling it means that a point list is rendered as a list of quads rather than a list of dots. It is very useful to use this option if you're using a BillboardSet and only need to use point oriented billboards which are all of the same size. You can also use it for any other point list render.

Format: point sprites <on|off>

Default: point sprites off


point size attenuation

Defines whether point size is attenuated with view space distance, and in what fashion. This option is especially useful when you're using point sprites (See [point sprites], page 38) since it defines how they reduce in size as they get further away from the camera. You can also disable this option to make point sprites a constant screen size (like points), or enable it for points so they change size with distance.

You only have to provide the final 3 parameters if you turn attenuation on. The formula for attenuation is that the size of the point is multiplied by 1 / (constant + linear * dist + quadratic * dist^2); therefore turning it off is equivalent to (constant = 1, linear = 0, quadratic = 0) and standard perspective attenuation is (constant = 0, linear = 1, quadratic = 0). The latter is assumed if you leave out the final 3 parameters when you specify 'on'.

Note that the resulting attenuated size is clamped to the minimum and maximum point size, see the next section.

Format: point size attenuation <on|off> [constant linear quadratic]
Default: point size attenuation off

point size min

Sets the minimum point size after attenuation ([point size attenuation], page 38). For details on the size metrics, See [point size], page 38.

Format: point size min <size>
Default: point size min 0

point size max

Sets the maximum point size after attenuation ([point size attenuation], page 38). For details on the size metrics, See [point size], page 38. A value of 0 means the maximum is set to the same as the max size reported by the current card.

Format: point size max <size>
Default: point size max 0
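
Pulling the point settings together, a particle-style pass might look like this (a sketch; the material, texture and values are illustrative):

material Examples/Sparks
{
    technique
    {
        pass
        {
            lighting off
            depth_write off
            scene_blend add

            point_sprites on
            // with attenuation on, size is in normalised screen coords
            point_size 0.05
            // standard perspective attenuation: 1 / (0 + 1*dist + 0*dist^2)
            point_size_attenuation on 0 1 0
            point_size_min 0.01
            point_size_max 0.2

            texture_unit
            {
                texture spark.png
            }
        }
    }
}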

3.1.3 Texture Units

Here are the attributes you can use in a ’texture unit’ section of a .material script:

Available Texture Layer Attributes

• [texture alias], page 40
• [texture], page 40
• [anim texture], page 43
• [cubic texture], page 44
• [tex coord set], page 45
• [tex address mode], page 46
• [tex border colour], page 46
• [filtering], page 46
• [max anisotropy], page 47
• [mipmap bias], page 48
• [colour op], page 48
• [colour op ex], page 49
• [colour op multipass fallback], page 51
• [alpha op ex], page 51
• [env map], page 51
• [scroll], page 52
• [scroll anim], page 52
• [rotate], page 52
• [rotate anim], page 53
• [scale], page 53
• [wave xform], page 53
• [transform], page 54
• [binding type], page 44
• [content type], page 45

You can also use a nested 'texture source' section in order to use a special add-in as a source of texture data, See Chapter 6 [External Texture Sources], page 157 for details.

Attribute Descriptions

texture alias

Sets the alias name for this texture unit.

Format: texture alias <name>

Example: texture alias NormalMap

Setting the texture alias name is useful if this material is to be inherited by other materials and only the textures will be changed in the new material. (See Section 3.1.12 [Texture Aliases], page 87)

Default: If a texture unit has a name then the texture alias defaults to the texture unit name.

texture

Sets the name of the static texture image this layer will use.

Format: texture <texturename> [<type>] [unlimited | numMipMaps] [alpha] [<PixelFormat>] [gamma]


Example: texture funkywall.jpg

This setting is mutually exclusive with the anim texture attribute. Note that the texture file cannot include spaces. Those of you Windows users who like spaces in filenames, please get over it and use underscores instead.

The 'type' parameter allows you to specify the type of texture to create - the default is '2d', but you can override this; here's the full list:

1d         A 1-dimensional texture; that is, a texture which is only 1 pixel high. These kinds of textures can be useful when you need to encode a function in a texture and use it as a simple lookup, perhaps in a fragment program. It is important that you use this setting when you use a fragment program which uses 1-dimensional texture coordinates, since GL requires you to use a texture type that matches (D3D will let you get away with it, but you ought to plan for cross-compatibility). Your texture widths should still be a power of 2 for best compatibility and performance.

2d         The default type which is assumed if you omit it. Your texture has a width and a height, both of which should preferably be powers of 2, and if you can, make them square because this will look best on most hardware. These can be addressed with 2D texture coordinates.

3d         A 3-dimensional texture, i.e. a volume texture. Your texture has a width and a height, both of which should be powers of 2, and also a depth. These can be addressed with 3D texture coordinates, i.e. through a pixel shader.

cubic      This texture is made up of 6 2D textures which are pasted around the inside of a cube. Can be addressed with 3D texture coordinates and is useful for cubic reflection maps and normal maps.

The 'numMipMaps' option allows you to specify the number of mipmaps to generate for this texture. The default is 'unlimited', which means mips down to 1x1 size are generated. You can specify a fixed number (even 0) if you like instead. Note that if you use the same texture in many material scripts, the number of mipmaps generated will conform to the number specified in the first texture unit used to load the texture - so be consistent with your usage.

The 'alpha' option allows you to specify that a single channel (luminance) texture should be loaded as alpha, rather than the default which is to load it into the red channel. This can be helpful if you want to use alpha-only textures in the fixed function pipeline.

Default: none

The <PixelFormat> option allows you to specify the desired pixel format of the texture to create, which may be different to the pixel format of the texture file being loaded. Bear in mind that the final pixel format will be constrained by hardware capabilities, so you may not get exactly what you ask for. The available options are:

PF_L8            8-bit pixel format, all bits luminance.

PF_L16           16-bit pixel format, all bits luminance.

PF_A8            8-bit pixel format, all bits alpha.

PF_A4L4          8-bit pixel format, 4 bits alpha, 4 bits luminance.

PF_BYTE_LA       2 byte pixel format, 1 byte luminance, 1 byte alpha.

PF_R5G6B5        16-bit pixel format, 5 bits red, 6 bits green, 5 bits blue.

PF_B5G6R5        16-bit pixel format, 5 bits blue, 6 bits green, 5 bits red.

PF_R3G3B2        8-bit pixel format, 3 bits red, 3 bits green, 2 bits blue.

PF_A4R4G4B4      16-bit pixel format, 4 bits for alpha, red, green and blue.

PF_A1R5G5B5      16-bit pixel format, 1 bit for alpha, 5 bits for red, green and blue.

PF_R8G8B8        24-bit pixel format, 8 bits for red, green and blue.

PF_B8G8R8        24-bit pixel format, 8 bits for blue, green and red.

PF_A8R8G8B8      32-bit pixel format, 8 bits for alpha, red, green and blue.

PF_A8B8G8R8      32-bit pixel format, 8 bits for alpha, blue, green and red.

PF_B8G8R8A8      32-bit pixel format, 8 bits for blue, green, red and alpha.

PF_R8G8B8A8      32-bit pixel format, 8 bits for red, green, blue and alpha.

PF_X8R8G8B8      32-bit pixel format, 8 bits for red, 8 bits for green, 8 bits for blue, like PF_A8R8G8B8 but alpha will get discarded.

PF_X8B8G8R8      32-bit pixel format, 8 bits for blue, 8 bits for green, 8 bits for red, like PF_A8B8G8R8 but alpha will get discarded.

PF_A2R10G10B10   32-bit pixel format, 2 bits for alpha, 10 bits for red, green and blue.

PF_A2B10G10R10   32-bit pixel format, 2 bits for alpha, 10 bits for blue, green and red.

PF_FLOAT16_R     16-bit pixel format, 16 bits (float) for red.

PF_FLOAT16_RGB   48-bit pixel format, 16 bits (float) for red, 16 bits (float) for green, 16 bits (float) for blue.

PF_FLOAT16_RGBA  64-bit pixel format, 16 bits (float) for red, 16 bits (float) for green, 16 bits (float) for blue, 16 bits (float) for alpha.

PF_FLOAT32_R     32-bit pixel format, 32 bits (float) for red.

PF_FLOAT32_RGB   96-bit pixel format, 32 bits (float) for red, 32 bits (float) for green, 32 bits (float) for blue.

PF_FLOAT32_RGBA  128-bit pixel format, 32 bits (float) for red, 32 bits (float) for green, 32 bits (float) for blue, 32 bits (float) for alpha.

PF_SHORT_RGBA    64-bit pixel format, 16 bits for red, green, blue and alpha.

The 'gamma' option informs the renderer that you want the graphics hardware to perform gamma correction on the texture values as they are sampled for rendering. This is only applicable for textures which have 8-bit colour channels (e.g. PF_R8G8B8). Often, 8-bit per channel textures will be stored in gamma space in order to increase the precision of the darker colours (http://en.wikipedia.org/wiki/Gamma_correction) but this can throw out blending and filtering calculations since they assume linear space colour values. For the best quality shading, you may want to enable gamma correction so that the hardware converts the texture values to linear space for you automatically when sampling the texture, then the calculations in the pipeline can be done in a reliable linear colour space. When rendering to a final 8-bit per channel display, you'll also want to convert back to gamma space, which can be done in your shader (by raising to the power 1/2.2) or you can enable gamma correction on the texture being rendered to or the render window. Note that the 'gamma' option on textures is applied on loading the texture, so must be specified consistently if you use this texture in multiple places.
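
For example, a texture unit might request gamma correction on an sRGB-authored diffuse map (texture name is illustrative):

texture_unit
{
    // diffuse.png is assumed to be stored in gamma (sRGB) space;
    // the hardware converts it to linear space as it is sampled
    texture diffuse.png gamma
}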

anim texture

Sets the images to be used in an animated texture layer. In this case an animated texture layer means one which has multiple frames, each of which is a separate image file. There are 2 formats, one for implicitly determined image names, one for explicitly named images.

Format1 (short): anim texture <base name> <num frames> <duration>

Example: anim texture flame.jpg 5 2.5

This sets up an animated texture layer made up of 5 frames named flame_0.jpg, flame_1.jpg, flame_2.jpg etc, with an animation length of 2.5 seconds (2fps). If duration is set to 0, then no automatic transition takes place and frames must be changed manually in code.

Format2 (long): anim texture <frame1> <frame2> ... <duration>

Example: anim texture flamestart.jpg flamemore.png flameagain.jpg moreflame.jpg lastflame.tga 2.5

This sets up the same duration animation but from 5 separately named image files. The first format is more concise, but the second is provided if you cannot make your images conform to the naming standard required for it.


Default: none

cubic texture

Sets the images used in a cubic texture, i.e. one made up of 6 individual images making up the faces of a cube. These kinds of textures are used for reflection maps (if hardware supports cubic reflection maps) or skyboxes. There are 2 formats, a brief format expecting image names of a particular format and a more flexible but longer format for arbitrarily named textures.

Format1 (short): cubic texture <base name> <combinedUVW|separateUV>

The base name in this format is something like 'skybox.jpg', and the system will expect you to provide skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg, skybox_dn.jpg, skybox_lf.jpg, and skybox_rt.jpg for the individual faces.

Format2 (long): cubic texture <front> <back> <left> <right> <up> <down> separateUV

In this case each face is specified explicitly, in case you don't want to conform to the image naming standards above. You can only use this for the separateUV version, since the combinedUVW version requires a single texture name to be assigned to the combined 3D texture (see below).

In both cases the final parameter means the following:

combinedUVW
    The 6 textures are combined into a single 'cubic' texture map which is then addressed using 3D texture coordinates with U, V and W components. Necessary for reflection maps since you never know which face of the box you are going to need. Note that not all cards support cubic environment mapping.

separateUV
    The 6 textures are kept separate but are all referenced by this single texture layer. One texture at a time is active (they are actually stored as 6 frames), and they are addressed using standard 2D UV coordinates. This type is good for skyboxes since only one face is rendered at one time and this has more guaranteed hardware support on older cards.

Default: none
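
For illustration, both forms as texture unit snippets (file names are illustrative):

// Short form: expects skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg,
// skybox_dn.jpg, skybox_lf.jpg and skybox_rt.jpg alongside
texture_unit
{
    cubic_texture skybox.jpg separateUV
    tex_address_mode clamp
}

// Long form: each face named explicitly
texture_unit
{
    cubic_texture front.jpg back.jpg left.jpg right.jpg up.jpg down.jpg separateUV
}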

binding type

Tells this texture unit to bind to either the fragment processing unit or the vertex processing unit (for Section 3.1.10 [Vertex Texture Fetch], page 83).


Format: binding type <vertex|fragment>
Default: binding type fragment

content type

Tells this texture unit where it should get its content from. The default is to get texture content from a named texture, as defined with the [texture], page 40, [cubic texture], page 44, [anim texture], page 43 attributes. However you can also pull texture information from other automated sources. The options are:

named      The default option, this derives texture content from a texture name, loaded by ordinary means from a file or having been manually created with a given name.

shadow     This option allows you to pull in a shadow texture, and is only valid when you use texture shadows and one of the 'custom sequence' shadowing types (See Chapter 7 [Shadows], page 160). The shadow texture in question will be from the 'n'th closest light that casts shadows, unless you use light-based pass iteration or the light start option which may start the light index higher. When you use this option in multiple texture units within the same pass, each one references the next shadow texture. The shadow texture index is reset in the next pass, in case you want to take into account the same shadow textures again in another pass (e.g. a separate specular / gloss pass). By using this option, the correct light frustum projection is set up for you for use in fixed-function; if you use shaders just reference the texture viewproj matrix auto parameter in your shader.

compositor
    This option allows you to reference a texture from a compositor, and is only valid when the pass is rendered within a compositor sequence. This can be either in a render scene directive inside a compositor script, or in a general pass in a viewport that has a compositor attached. Note that this is a reference only, meaning that it does not change the render order. You must make sure that the order is reasonable for what you are trying to achieve (for example, texture pooling might cause the referenced texture to be overwritten by something else by the time it is referenced).

The extra parameters for the content type are only required for this type:

The first is the name of the compositor being referenced. (Required)

The second is the name of the texture to reference in the compositor. (Required)

The third is the index of the texture to take, in case of an MRT. (Optional)

Format: content type <named|shadow|compositor> [<Referenced Compositor Name>] [<Referenced Texture Name>] [<Referenced MRT Index>]

Default: content type named

Example: content type compositor DepthCompositor OutputTexture

tex coord set

Sets which texture coordinate set is to be used for this texture layer. A mesh can define multiple sets of texture coordinates; this sets which one this material uses.


Format: tex coord set <set num>

Example: tex coord set 2

Default: tex coord set 0

tex address mode

Defines what happens when texture coordinates exceed 1.0 for this texture layer. You can use the simple format to specify the addressing mode for all 3 potential texture coordinates at once, or you can use the 2/3 parameter extended format to specify a different mode per texture coordinate.

Simple Format: tex address mode <uvw mode>
Extended Format: tex address mode <u mode> <v mode> [<w mode>]

wrap Any value beyond 1.0 wraps back to 0.0. Texture is repeated.

clamp      Values beyond 1.0 are clamped to 1.0. Texture 'streaks' beyond 1.0 since the last line of pixels is used across the rest of the address space. Useful for textures which need exact coverage from 0.0 to 1.0 without the 'fuzzy edge' wrap gives when combined with filtering.

mirror Texture flips every boundary, meaning texture is mirrored every 1.0 u or v

border     Values outside the range [0.0, 1.0] are set to the border colour; you might also want to set the [tex border colour], page 46 attribute too.

Default: tex address mode wrap

tex border colour

Sets the border colour of border texture address mode (see [tex address mode], page 46).

Format: tex border colour <red> <green> <blue> [<alpha>]
NB valid colour values are between 0.0 and 1.0.

Example: tex border colour 0.0 1.0 0.3

Default: tex border colour 0.0 0.0 0.0 1.0


filtering

Sets the type of texture filtering used when magnifying or minifying a texture. There are 2 formats to this attribute: the simple format, where you simply specify the name of a predefined set of filtering options, and the complex format, where you individually set the minification, magnification, and mip filters yourself.

Simple Format

Format: filtering <none|bilinear|trilinear|anisotropic>
Default: filtering bilinear

With this format, you only need to provide a single parameter which is one of the following:

none       No filtering or mipmapping is used. This is equivalent to the complex format 'filtering point point none'.

bilinear   2x2 box filtering is performed when magnifying or reducing a texture, and a mipmap is picked from the list but no filtering is done between the levels of the mipmaps. This is equivalent to the complex format 'filtering linear linear point'.

trilinear  2x2 box filtering is performed when magnifying and reducing a texture, and the closest 2 mipmaps are filtered together. This is equivalent to the complex format 'filtering linear linear linear'.

anisotropic
    This is the same as 'trilinear', except the filtering algorithm takes account of the slope of the triangle in relation to the camera rather than simply doing a 2x2 pixel filter in all cases. This makes triangles at acute angles look less fuzzy. Equivalent to the complex format 'filtering anisotropic anisotropic linear'. Note that in order for this to make any difference, you must also set the [max anisotropy], page 47 attribute too.

Complex Format

Format: filtering <minification> <magnification> <mip>
Default: filtering linear linear point

This format gives you complete control over the minification, magnification, and mip filters. Each parameter can be one of the following:

none       Nothing - only a valid option for the 'mip' filter, since this turns mipmapping off completely. The lowest setting for min and mag is 'point'.

point      Pick the closest pixel in min or mag modes. In mip mode, this picks the closest matching mipmap.

linear     Filter a 2x2 box of pixels around the closest one. In the 'mip' filter this enables filtering between mipmap levels.

anisotropic
    Only valid for min and mag modes; makes the filter compensate for camera-space slope of the triangles. Note that in order for this to make any difference, you must also set the [max anisotropy], page 47 attribute too.

max anisotropy

Sets the maximum degree of anisotropy that the renderer will try to compensate for when filtering textures. The degree of anisotropy is the ratio between the height of the texture segment visible in a screen space region versus the width - so for example a floor plane, which stretches on into the distance and thus the vertical texture coordinates change much faster than the horizontal ones, has a higher anisotropy than a wall which is facing you head on (which has an anisotropy of 1 if your line of sight is perfectly perpendicular to it). You should set the max anisotropy value to something greater than 1 to begin compensating; higher values can compensate for more acute angles. The maximum value is determined by the hardware, but it is usually 8 or 16.

In order for this to be used, you have to set the minification and/or the magnification [filtering], page 46 option on this texture to anisotropic.

Format: max_anisotropy <value>
Default: max_anisotropy 1
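For instance, a texture unit for a ground plane might combine anisotropic filtering with this attribute (the values shown are illustrative):

texture_unit
{
    texture floor.png
    filtering anisotropic
    max_anisotropy 8
}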

mipmap_bias

Sets the bias value applied to the mipmapping calculation, thus allowing you to alter the decision of which level of detail of the texture to use at any distance. The bias value is applied after the regular distance calculation, and adjusts the mipmap level by 1 level for each unit of bias. Negative bias values force larger mip levels to be used, positive bias values force smaller mip levels to be used. The bias is a floating point value so you can use values in between whole numbers for fine tuning.

In order for this option to be used, your hardware has to support mipmap biasing (exposed through the render system capabilities), and your minification [filtering], page 46 has to be set to point or linear.

Format: mipmap_bias <value>
Default: mipmap_bias 0
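A small sketch (the texture name is hypothetical), biasing the mip selection by one level towards larger mip levels:

texture_unit
{
    texture grass.png
    filtering linear linear linear
    mipmap_bias -1
}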

colour_op

Determines how the colour of this texture layer is combined with the one below it (or the lighting effect on the geometry if this is the first layer).

Format: colour_op <replace|add|modulate|alpha_blend>

This method is the simplest way to blend texture layers, because it requires only one parameter, gives you the most common blending types, and automatically sets up 2 blending methods: one for if single-pass multitexturing hardware is available, and another for if it is not and the blending must be achieved through multiple rendering passes. It is, however, quite limited and does not expose the more flexible multitexturing operations, simply because these can't be automatically supported in multipass fallback mode. If you want to use the fancier options, use [colour_op_ex], page 49, but you'll either have to be sure that enough multitexturing units will be available, or you should explicitly set a fallback using [colour_op_multipass_fallback], page 51.

replace Replace all colour with texture with no adjustment.

add Add colour components together.

modulate Multiply colour components together.

alpha_blend Blend based on texture alpha.


Default: colour_op modulate
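As a sketch (texture names hypothetical), a two-layer pass where the second layer is added to the first:

pass
{
    texture_unit
    {
        texture base.jpg
    }
    texture_unit
    {
        texture glow.jpg
        colour_op add
    }
}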

colour_op_ex

This is an extended version of the [colour_op], page 48 attribute which allows extremely detailed control over the blending applied between this and earlier layers. Multitexturing hardware can apply more complex blending operations than multipass blending, but you are limited to the number of texture units which are available in hardware.

Format: colour_op_ex <operation> <source1> <source2> [<manual_factor>] [<manual_colour1>] [<manual_colour2>]

Example: colour_op_ex add_signed src_manual src_current 0.5

See the IMPORTANT note below about the issues between multipass and multitexturing that using this method can create. Texture colour operations determine how the final colour of the surface appears when rendered. Texture units are used to combine colour values from various sources (e.g. the diffuse colour of the surface from lighting calculations, combined with the colour of the texture). This method allows you to specify the 'operation' to be used, i.e. the calculation such as adds or multiplies, and which values to use as arguments, such as a fixed value or a value from a previous calculation.

Operation options

source1 Use source1 without modification

source2 Use source2 without modification

modulate Multiply source1 and source2 together.

modulate_x2 Multiply source1 and source2 together, then by 2 (brightening).

modulate_x4 Multiply source1 and source2 together, then by 4 (brightening).

add Add source1 and source2 together.

add_signed Add source1 and source2, then subtract 0.5.

add_smooth Add source1 and source2, then subtract the product.

subtract Subtract source2 from source1

blend_diffuse_alpha Use interpolated alpha value from vertices to scale source1, then add source2 scaled by (1-alpha).

blend_texture_alpha As blend_diffuse_alpha, but use alpha from texture.


blend_current_alpha As blend_diffuse_alpha, but use current alpha from previous stages (same as blend_diffuse_alpha for the first layer).

blend_manual As blend_diffuse_alpha, but use a constant manual alpha value specified in <manual>.

dotproduct The dot product of source1 and source2.

blend_diffuse_colour Use interpolated colour value from vertices to scale source1, then add source2 scaled by (1-colour).

Source1 and source2 options

src_current The colour as built up from previous stages.

src_texture The colour derived from the texture assigned to this layer.

src_diffuse The interpolated diffuse colour from the vertices (same as 'src_current' for the first layer).

src_specular The interpolated specular colour from the vertices.

src_manual The manual colour specified at the end of the command.

For example 'modulate' takes the colour results of the previous layer, and multiplies them with the new texture being applied. Bear in mind that colours are RGB values from 0.0-1.0 so multiplying them together will result in values in the same range, 'tinted' by the multiply. Note however that a straight multiply normally has the effect of darkening the textures - for this reason there are brightening operations like modulate_x2. Note that because of the limitations on some underlying APIs (Direct3D included) the 'texture' argument can only be used as the first argument, not the second.

Note that the last parameter is only required if you decide to pass a value manually into the operation. Hence you only need to fill these in if you use the 'blend_manual' operation.
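As a sketch (the texture name is hypothetical), blending a fixed 30% of this layer over the result of the previous stages:

texture_unit
{
    texture overlay.png
    colour_op_ex blend_manual src_texture src_current 0.3
}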

IMPORTANT: Ogre tries to use multitexturing hardware to blend texture layers together. However, if it runs out of texturing units (e.g. 2 on a GeForce2, 4 on a GeForce3) it has to fall back on multipass rendering, i.e. rendering the same object multiple times with different textures. This is both less efficient and there is a smaller range of blending operations which can be performed. For this reason, if you use this method you really should set the colour_op_multipass_fallback attribute to specify which effect you want to fall back on if sufficient hardware is not available (the default is just 'modulate' which is unlikely to be what you want if you're doing swanky blending here). If you wish to avoid having to do this, use the simpler colour_op attribute which allows less flexible blending options but sets up the multipass fallback automatically, since it only allows operations which have direct multipass equivalents.


Default: none (colour_op modulate)

colour_op_multipass_fallback

Sets the multipass fallback operation for this layer, if you used colour_op_ex and not enough multitexturing hardware is available.

Format: colour_op_multipass_fallback <src_factor> <dest_factor>

Example: colour_op_multipass_fallback one one_minus_dest_alpha

Because some of the effects you can create using colour_op_ex are only supported under multitexturing hardware, if the hardware is lacking the system must fall back on multipass rendering, which unfortunately doesn't support as many effects. This attribute is for you to specify the fallback operation which most suits you.

The parameters are the same as in the scene_blend attribute; this is because multipass rendering IS effectively scene blending, since each layer is rendered on top of the last using the same mechanism as making an object transparent, it's just being rendered in the same place repeatedly to get the multitexture effect. If you use the simpler (and less flexible) colour_op attribute you don't need to call this as the system sets up the fallback for you.
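For example, a sketch of an additive layer paired with an explicit additive multipass fallback (the texture name is hypothetical):

texture_unit
{
    texture detail.png
    colour_op_ex add src_texture src_current
    colour_op_multipass_fallback one one
}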

alpha_op_ex

Behaves in exactly the same way as [colour_op_ex], page 49 except that it determines how alpha values are combined between texture layers rather than colour values. The only difference is that the 2 manual colours at the end of colour_op_ex are just single floating-point values in alpha_op_ex.

env_map

Turns on/off the texture coordinate effect that makes this layer an environment map.

Format: env_map <off|spherical|planar|cubic_reflection|cubic_normal>

Environment maps make an object look reflective by using automatic texture coordinate generation depending on the relationship between the object's vertices or normals and the eye.

spherical A spherical environment map. Requires a single texture which is either a fish-eye lens view of the reflected scene, or some other texture which looks good as a spherical map (a texture of glossy highlights is popular especially in car sims). This effect is based on the relationship between the eye direction and the vertex normals of the object, so works best when there are a lot of gradually changing normals, i.e. curved objects.

planar Similar to the spherical environment map, but the effect is based on the position of the vertices in the viewport rather than vertex normals. This effect is therefore useful for planar geometry (where a spherical env_map would not look good because the normals are all the same) or objects without normals.

cubic_reflection A more advanced form of reflection mapping which uses a group of 6 textures making up the inside of a cube, each of which is a view of the scene down each axis. Works extremely well in all cases but has a higher technical requirement from the card than spherical mapping. Requires that you bind a [cubic_texture], page 44 to this texture unit and use the 'combinedUVW' option.

cubic_normal Generates 3D texture coordinates containing the camera space normal vector from the normal information held in the vertex data. Again, full use of this feature requires a [cubic_texture], page 44 with the 'combinedUVW' option.

Default: env_map off
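A minimal sketch of a glossy layer (the texture name is hypothetical):

texture_unit
{
    texture chrome.jpg
    env_map spherical
}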

scroll

Sets a fixed scroll offset for the texture.

Format: scroll <x> <y>

This method offsets the texture in this layer by a fixed amount. Useful for small adjustments without altering texture coordinates in models. However if you wish to have an animated scroll effect, see the [scroll_anim], page 52 attribute.

scroll_anim

Sets up an animated scroll for the texture layer. Useful for creating fixed-speed scrolling effects on a texture layer (for varying scroll speeds, see [wave_xform], page 53).

Format: scroll_anim <xspeed> <yspeed>

rotate

Rotates a texture to a fixed angle. This attribute changes the rotational orientation of a texture to a fixed angle, useful for fixed adjustments. If you wish to animate the rotation, see [rotate_anim], page 53.

Format: rotate <angle>


The parameter is an anti-clockwise angle in degrees.

rotate_anim

Sets up an animated rotation effect of this layer. Useful for creating fixed-speed rotation animations (for varying speeds, see [wave_xform], page 53).

Format: rotate_anim <revs_per_second>

The parameter is a number of anti-clockwise revolutions per second.

scale

Adjusts the scaling factor applied to this texture layer. Useful for adjusting the size of textures without making changes to geometry. This is a fixed scaling factor; if you wish to animate this, see [wave_xform], page 53.

Format: scale <x_scale> <y_scale>

Valid scale values are greater than 0, with a scale factor of 2 making the texture twice as big in that dimension etc.
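Putting the fixed adjustments together, a sketch of a texture unit using scroll, rotate and scale (values and texture name illustrative):

texture_unit
{
    texture decal.png
    scroll 0.25 0.0
    rotate 45
    scale 2 2
}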

wave_xform

Sets up a transformation animation based on a wave function. Useful for more advanced texture layer transform effects. You can add multiple instances of this attribute to a single texture layer if you wish.

Format: wave_xform <xform_type> <wave_type> <base> <frequency> <phase> <amplitude>

Example: wave_xform scale_x sine 1.0 0.2 0.0 5.0

xform_type

scroll_x Animate the x scroll value

scroll_y Animate the y scroll value

rotate Animate the rotate value

scale_x Animate the x scale value

scale_y Animate the y scale value


wave_type

sine A typical sine wave which smoothly loops between min and max values

triangle An angled wave which increases & decreases at constant speed, changing instantly at the extremes

square Max for half the wavelength, min for the rest, with an instant transition between

sawtooth Gradual steady increase from min to max over the period, with an instant return to min at the end.

inverse_sawtooth Gradual steady decrease from max to min over the period, with an instant return to max at the end.

base The base value, the minimum if amplitude > 0, the maximum if amplitude < 0

frequency The number of wave iterations per second, i.e. speed

phase Offset of the wave start

amplitude The size of the wave

The range of the output of the wave will be base to base+amplitude. So the example above scales the texture in the x direction between 1 (normal size) and 6 (base + amplitude) along a sine wave at one cycle every 5 seconds (0.2 waves per second).

transform

This attribute allows you to specify a static 4x4 transformation matrix for the texture unit, thus replacing the individual scroll, rotate and scale attributes mentioned above.

Format: transform m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31 m32 m33

The indexes of the 4x4 matrix value above are expressed as m<row><col>.
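A small sketch (values illustrative): the identity matrix with a translation of 0.5 in u (m03) and v (m13), similar in effect to a fixed scroll offset:

texture_unit
{
    texture decal.png
    transform 1 0 0 0.5  0 1 0 0.5  0 0 1 0  0 0 0 1
}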

3.1.4 Declaring Vertex/Geometry/Fragment Programs

In order to use a vertex, geometry or fragment program in your materials (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 69), you first have to define them. A single program definition can be used by any number of materials, the only prerequisite is that a program must be defined before being referenced in the pass section of a material.

The definition of a program can either be embedded in the .material script itself (in which case it must precede any references to it in the script), or if you wish to use the same program across multiple .material files, you can define it in an external .program script. You define the program in exactly the same way whether you use a .program script or a .material script, the only difference is that all .program scripts are guaranteed to have been parsed before all .material scripts, so you can guarantee that your program has been defined before any .material script that might use it. Just like .material scripts, .program scripts will be read from any location which is on your resource path, and you can define many programs in a single script.

Vertex, geometry and fragment programs can be low-level (i.e. assembler code written to the specification of a given low-level syntax such as vs_1_1 or arbfp1) or high-level such as DirectX9 HLSL, OpenGL Shader Language, or nVidia's Cg language (See [High-level Programs], page 58). High-level languages give you a number of advantages, such as being able to write more intuitive code, and possibly being able to target multiple architectures in a single program (for example, the same Cg program might be able to be used in both D3D and GL, whilst the equivalent low-level programs would require separate techniques, each targeting a different API). High-level programs also allow you to use named parameters instead of simply indexed ones, although parameters are not defined here, they are used in the Pass.

Here is an example of a definition of a low-level vertex program:

vertex_program myVertexProgram asm
{
    source myVertexProgram.asm
    syntax vs_1_1
}

As you can see, that's very simple, and defining a fragment or geometry program is exactly the same, just with vertex_program replaced with fragment_program or geometry_program, respectively. You give the program a name in the header, followed by the word 'asm' to indicate that this is a low-level program. Inside the braces, you specify where the source is going to come from (and this is loaded from any of the resource locations as with other media), and also indicate the syntax being used. You might wonder why the syntax specification is required when many of the assembler syntaxes have a header identifying them anyway - well the reason is that the engine needs to know what syntax the program is in before reading it, because during compilation of the material, we want to skip programs which use an unsupportable syntax quickly, without loading the program first.

The currently supported syntaxes are:

vs_1_1 This is one of the DirectX vertex shader assembler syntaxes. Supported on cards from: ATI Radeon 8500, nVidia GeForce 3

vs_2_0 Another one of the DirectX vertex shader assembler syntaxes. Supported on cards from: ATI Radeon 9600, nVidia GeForce FX 5 series

vs_2_x Another one of the DirectX vertex shader assembler syntaxes. Supported on cards from: ATI Radeon X series, nVidia GeForce FX 6 series

vs_3_0 Another one of the DirectX vertex shader assembler syntaxes. Supported on cards from: ATI Radeon HD 2000+, nVidia GeForce FX 6 series

arbvp1 This is the OpenGL standard assembler format for vertex programs. It's roughly equivalent to DirectX vs_1_1.

vp20 This is an nVidia-specific OpenGL vertex shader syntax which is a superset of vs_1_1. ATI Radeon HD 2000+ also supports it.


vp30 Another nVidia-specific OpenGL vertex shader syntax. It is a superset of vs_2_0, which is supported on nVidia GeForce FX 5 series and higher. ATI Radeon HD 2000+ also supports it.

vp40 Another nVidia-specific OpenGL vertex shader syntax. It is a superset of vs_3_0, which is supported on nVidia GeForce FX 6 series and higher.

ps_1_1, ps_1_2, ps_1_3 DirectX pixel shader (i.e. fragment program) assembler syntax. Supported on cards from: ATI Radeon 8500, nVidia GeForce 3. NOTE: for ATI 8500, 9000, 9100, 9200 hardware, this profile can also be used in OpenGL. The ATI 8500 to 9200 do not support arbfp1 but do support the atifs extension in OpenGL, which is very similar in function to ps_1_4 in DirectX. Ogre has a built-in ps_1_x to atifs compiler that is automatically invoked when ps_1_x is used in OpenGL on ATI hardware.

ps_1_4 DirectX pixel shader (i.e. fragment program) assembler syntax. Supported on cards from: ATI Radeon 8500, nVidia GeForce FX 5 series. NOTE: for ATI 8500, 9000, 9100, 9200 hardware, this profile can also be used in OpenGL. The ATI 8500 to 9200 do not support arbfp1 but do support the atifs extension in OpenGL, which is very similar in function to ps_1_4 in DirectX. Ogre has a built-in ps_1_x to atifs compiler that is automatically invoked when ps_1_x is used in OpenGL on ATI hardware.

ps_2_0 DirectX pixel shader (i.e. fragment program) assembler syntax. Supported cards: ATI Radeon 9600, nVidia GeForce FX 5 series

ps_2_x DirectX pixel shader (i.e. fragment program) assembler syntax. This is basically ps_2_0 with a higher number of instructions. Supported cards: ATI Radeon X series, nVidia GeForce FX 6 series

ps_3_0 DirectX pixel shader (i.e. fragment program) assembler syntax. Supported cards: ATI Radeon HD 2000+, nVidia GeForce FX 6 series

ps_3_x DirectX pixel shader (i.e. fragment program) assembler syntax. Supported cards: nVidia GeForce FX 7 series

arbfp1 This is the OpenGL standard assembler format for fragment programs. It's roughly equivalent to ps_2_0, which means that not all cards that support basic pixel shaders under DirectX support arbfp1 (for example neither the GeForce3 nor the GeForce4 support arbfp1, but they do support ps_1_1).

fp20 This is an nVidia-specific OpenGL fragment syntax which is a superset of ps_1_3. It allows you to use the 'nvparse' format for basic fragment programs. It actually uses NV_texture_shader and NV_register_combiners to provide functionality equivalent to DirectX's ps_1_1 under GL, but only for nVidia cards. However, since ATI cards adopted arbfp1 a little earlier than nVidia, it is mainly nVidia cards like the GeForce3 and GeForce4 that this will be useful for. You can find more information about nvparse at http://developer.nvidia.com/object/nvparse.html.

fp30 Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps_2_0, which is supported on nVidia GeForce FX 5 series and higher. ATI Radeon HD 2000+ also supports it.


fp40 Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps_3_0, which is supported on nVidia GeForce FX 6 series and higher.

gpu_gp, gp4_gp An nVidia-specific OpenGL geometry shader syntax. Supported cards: nVidia GeForce FX 8 series

You can get a definitive list of the syntaxes supported by the current card by calling GpuProgramManager::getSingleton().getSupportedSyntax().
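As a minimal C++ sketch (the specific check shown is illustrative), you can also test a single syntax code directly:

// Query whether a particular assembler syntax is available on this card
if (Ogre::GpuProgramManager::getSingleton().isSyntaxSupported("vs_1_1"))
{
    // safe to reference programs declared with 'syntax vs_1_1'
}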

Specifying Named Constants for Assembler Shaders

Assembler shaders don't have named constants (also called uniform parameters) because the language does not support them - however if you for example decided to precompile your shaders from a high-level language down to assembler for performance or obscurity, you might still want to use the named parameters. Well, you actually can - GpuNamedConstants, which contains the named parameter mappings, has a 'save' method which you can use to write this data to disk, where you can reference it later using the manual_named_constants directive inside your assembler program declaration, e.g.

vertex_program myVertexProgram asm
{
    source myVertexProgram.asm
    syntax vs_1_1
    manual_named_constants myVertexProgram.constants
}

In this case myVertexProgram.constants has been created by calling highLevelGpuProgram->getNamedConstants().save("myVertexProgram.constants"); sometime earlier as preparation, from the original high-level program. Once you've used this directive, you can use named parameters here even though the assembler program itself has no knowledge of them.
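A brief C++ sketch of that preparation step (the program name is hypothetical):

// Load the original high-level program and write its name->index mappings to disk
Ogre::HighLevelGpuProgramPtr prog =
    Ogre::HighLevelGpuProgramManager::getSingleton().getByName("myHighLevelVP");
prog->load();
prog->getNamedConstants().save("myVertexProgram.constants");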

Default Program Parameters

While defining a vertex, geometry or fragment program, you can also specify the default parameters to be used for materials which use it, unless they specifically override them. You do this by including a nested 'default_params' section, like so:

vertex_program Ogre/CelShadingVP cg
{
    source Example_CelShading.cg
    entry_point main_vp
    profiles vs_1_1 arbvp1

    default_params
    {
        param_named_auto lightPosition light_position_object_space 0
        param_named_auto eyePosition camera_position_object_space
        param_named_auto worldViewProj worldviewproj_matrix
        param_named shininess float 10
    }
}

The syntax of the parameter definition is exactly the same as when you define parameters when using programs, See [Program Parameter Specification], page 70. Defining default parameters allows you to avoid rebinding common parameters repeatedly (clearly in the above example, all but 'shininess' are unlikely to change between uses of the program) which makes your material declarations shorter.

Declaring Shared Parameters

Often, not every parameter you want to pass to a shader is unique to that program, and perhaps you want to give the same value to a number of different programs, and a number of different materials using that program. Shared parameter sets allow you to define a 'holding area' for shared parameters that can then be referenced when you need them in particular shaders, while keeping the definition of that value in one place. To define a set of shared parameters, you do this:

shared_params YourSharedParamsName
{
    shared_param_named mySharedParam1 float4 0.1 0.2 0.3 0.4
    ...
}

As you can see, you need to use the keyword 'shared_params' and follow it with the name that you will use to identify these shared parameters. Inside the curly braces, you can define one parameter per line, in a way which is very similar to the [param_named], page 81 syntax. The definition of these lines is:

Format: shared_param_named <param_name> <param_type> [<[array_size]>] [<initial_values>]

The param_name must be unique within the set, and the param_type can be any one of float, float2, float3, float4, int, int2, int3, int4, matrix2x2, matrix2x3, matrix2x4, matrix3x2, matrix3x3, matrix3x4, matrix4x2, matrix4x3 and matrix4x4. The array_size option allows you to define arrays of param_type should you wish, and if present must be a number enclosed in square brackets (and note, must be separated from the param_type with whitespace). If you wish, you can also initialise the parameters by providing a list of values.

Once you have defined the shared parameters, you can reference them inside default_params and params blocks using [shared_params_ref], page 81. You can also obtain a reference to them in your code via GpuProgramManager::getSharedParameters, and update the values for all instances using them.
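A sketch of referencing the set defined above from a program's default parameters (the program details are hypothetical):

vertex_program myVertexProgram hlsl
{
    source prog.hlsl
    entry_point main_vp
    target vs_2_0

    default_params
    {
        shared_params_ref YourSharedParamsName
    }
}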

High-level Programs

Support for high-level vertex and fragment programs is provided through plugins; this is to make sure that an application using OGRE can use as little or as much of the high-level program functionality as they like. OGRE currently supports 3 high-level program types, Cg (Section 3.1.5 [Cg], page 60) (an API- and card-independent, high-level language which lets you write programs for both OpenGL and DirectX for lots of cards), DirectX 9 High-Level Shader Language (Section 3.1.6 [HLSL], page 61), and OpenGL Shader Language (Section 3.1.7 [GLSL], page 62). HLSL can only be used with the DirectX rendersystem, and GLSL can only be used with the GL rendersystem. Cg can be used with both, although experience has shown that more advanced programs, particularly fragment programs which perform a lot of texture fetches, can produce better code in the rendersystem-specific shader language.


One way to support both HLSL and GLSL is to include separate techniques in the material script, each one referencing separate programs. However, if the programs are basically the same, with the same parameters, and the techniques are complex, this can bloat your material scripts with duplication fairly quickly. Instead, if the only difference is the language of the vertex & fragment program, you can use OGRE's Section 3.1.8 [Unified High-level Programs], page 66 to automatically pick a program suitable for your rendersystem whilst using a single technique.

Skeletal Animation in Vertex Programs

You can implement skeletal animation in hardware by writing a vertex program which uses the per-vertex blending indices and blending weights, together with an array of world matrices (which will be provided for you by Ogre if you bind the automatic parameter 'world_matrix_array_3x4'). However, you need to communicate this support to Ogre so it does not perform skeletal animation in software for you. You do this by adding the following attribute to your vertex program definition:

includes_skeletal_animation true

When you do this, any skeletally animated entity which uses this material will forgo the usual animation blend and will expect the vertex program to do it, for both vertex positions and normals. Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (See Chapter 8 [Animation], page 175) then all techniques must be hardware accelerated for any to be.
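A sketch of such a definition (the source file and parameter names are hypothetical):

vertex_program mySkinningVP hlsl
{
    source skinning.hlsl
    entry_point main_vp
    target vs_1_1
    includes_skeletal_animation true

    default_params
    {
        // Ogre fills this with one 3x4 world matrix per bone
        param_named_auto worldMatrix3x4Array world_matrix_array_3x4
    }
}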

Morph Animation in Vertex Programs

You can implement morph animation in hardware by writing a vertex program which linearly blends between the first and second position keyframes, passed as positions and the first free texture coordinate set, and by binding the animation parametric value to a parameter (which tells you how far to interpolate between the two). However, you need to communicate this support to Ogre so it does not perform morph animation in software for you. You do this by adding the following attribute to your vertex program definition:

includes_morph_animation true

When you do this, any morph-animated entity which uses this material will forgo the usual software morph and will expect the vertex program to do it. Note that if your model includes both skeletal animation and morph animation, they must both be implemented in the vertex program if either is to be hardware accelerated. Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (See Chapter 8 [Animation], page 175) then all techniques must be hardware accelerated for any to be.

Pose Animation in Vertex Programs

You can implement pose animation (blending between multiple poses based on weight) in a vertex program by pulling in the original vertex data (bound to position), and as many pose offset buffers as you've defined in your 'includes_pose_animation' declaration, which will be in the first free texture unit upwards. You must also use the animation parametric parameter to define the starting point of the constants which will contain the pose weights; they will start at the parameter you define and fill 'n' constants, where 'n' is the max number of poses this shader can blend, i.e. the parameter to includes_pose_animation.

includes_pose_animation 4

Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (See Chapter 8 [Animation], page 175) then all techniques must be hardware accelerated for any to be.


Vertex texture fetching in vertex programs

If your vertex program makes use of Section 3.1.10 [Vertex Texture Fetch], page 83, you should declare that with the 'uses_vertex_texture_fetch' directive. This is enough to tell Ogre that your program uses this feature and that hardware support for it should be checked.

uses_vertex_texture_fetch true

Adjacency information in Geometry Programs

Some geometry programs require adjacency information from the geometry. This means that a geometry shader doesn't only get the information of the primitive it operates on, it also has access to its neighbours (in the case of lines or triangles). This directive will tell Ogre to send the information to the geometry shader.

uses_adjacency_information true

Vertex Programs With Shadows

When using shadows (See Chapter 7 [Shadows], page 160), the use of vertex programs can add some additional complexities, because Ogre can only automatically deal with everything when using the fixed-function pipeline. If you use vertex programs, and you are also using shadows, you may need to make some adjustments.

If you use stencil shadows, then any vertex programs which do vertex deformation can be a problem, because stencil shadows are calculated on the CPU, which does not have access to the modified vertices. If the vertex program is doing standard skeletal animation, this is ok (see section above) because Ogre knows how to replicate the effect in software, but any other vertex deformation cannot be replicated, and you will either have to accept that the shadow will not reflect this deformation, or you should turn off shadows for that object.

If you use texture shadows, then vertex deformation is acceptable; however, when rendering the object into a shadow texture (the shadow caster pass), the shadow has to be rendered in a solid colour (linked to the ambient colour for modulative shadows, black for additive shadows). You must therefore provide an alternative vertex program, so Ogre provides you with a way of specifying one to use when rendering the caster, See [Shadows and Vertex Programs], page 81.

3.1.5 Cg programs

In order to define Cg programs, you have to load Plugin_CgProgramManager.so/.dll at startup, either through plugins.cfg or through your own plugin loading code. They are very easy to define:

fragment_program myCgFragmentProgram cg
{
    source myCgFragmentProgram.cg
    entry_point main
    profiles ps_2_0 arbfp1
}

There are a few differences between this and the assembler program - to begin with, we declare that the fragment program is of type 'cg' rather than 'asm', which indicates that it's a high-level program using Cg. The 'source' parameter is the same, except this time it's referencing a Cg source file instead of a file of assembler.

Here is where things start to change. Firstly, we need to define an 'entry_point', which is the name of a function in the Cg program which will be the first one called as part of the fragment program. Unlike assembler programs, which just run top-to-bottom, Cg programs can include multiple functions and as such you must specify the one which starts the ball rolling.

Next, instead of a fixed 'syntax' parameter, you specify one or more 'profiles'; profiles are how Cg compiles a program down to the low-level assembler. The profiles have the same names as the assembler syntax codes mentioned above; the main difference is that you can list more than one, thus allowing the program to be compiled down to more low-level syntaxes so you can write a single high-level program which runs on both D3D and GL. You are advised to just enter the simplest profiles under which your programs can be compiled in order to give it the maximum compatibility. The ordering also matters; if a card supports more than one syntax then the one listed first will be used.

Lastly, there is a final option called 'compile_arguments', where you can specify arguments exactly as you would to the cgc command-line compiler, should you wish to.
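For instance, a sketch passing a preprocessor define through to cgc (the flag shown is illustrative):

fragment_program myCgFragmentProgram cg
{
    source myCgFragmentProgram.cg
    entry_point main
    profiles ps_2_0 arbfp1
    compile_arguments -DLIGHT_COUNT=2
}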

3.1.6 DirectX9 HLSL

DirectX9 HLSL has a very similar language syntax to Cg but is tied to the DirectX API. The only benefit over Cg is that it only requires the DirectX 9 render system plugin, not any additional plugins. Declaring a DirectX9 HLSL program is very similar to Cg. Here's an example:

vertex_program myHLSLVertexProgram hlsl
{
    source myHLSLVertexProgram.txt
    entry_point main
    target vs_2_0
}

As you can see, the main syntax is almost identical, except that instead of 'profiles' with a list of assembler formats, you have a 'target' parameter which allows a single assembler target to be specified - obviously this has to be a DirectX assembler format syntax code.

Important Matrix Ordering Note: One thing to bear in mind is that HLSL allows you to use 2 different ways to multiply a vector by a matrix - mul(v,m) or mul(m,v). The only difference between them is that the matrix is effectively transposed. You should use mul(m,v) with the matrices passed in from Ogre - this agrees with the shaders produced from tools like RenderMonkey, and is consistent with Cg too, but disagrees with the Dx9 SDK and FX Composer which use mul(v,m) - you will have to switch the parameters to mul() in those shaders.

Note that if you use the float3x4 / matrix3x4 type in your shader, bound to an OGRE auto-definition (such as bone matrices), you should use the column_major_matrices = false option (discussed below) in your program definition. This is because OGRE passes float3x4 as row-major to save constant space (3 float4's rather than 4 float4's with only the top 3 values used) and this tells OGRE to pass all matrices like this, so that you can use mul(m,v) consistently for all calculations. OGRE will also tell the shader to compile in row-major form (you don't have to set the /Zpr compile option or the #pragma pack_matrix(row_major) option, OGRE does this for you). Note that passing bones in float4x3 form is not supported by OGRE, but you don't need it given the above.


Advanced options

preprocessor_defines <defines>
This allows you to define symbols which can be used inside the HLSL shader code to alter the behaviour (through #ifdef or #if clauses). Definitions are separated by ';' or ',' and may optionally have a '=' operator within them to specify a definition value. Those without an '=' will implicitly have a definition of 1.

column_major_matrices <true|false>
The default for this option is 'true' so that OGRE passes auto-bound matrices in a form where mul(m,v) works. Setting this option to false does 2 things - it transposes auto-bound 4x4 matrices and also sets the /Zpr (row-major) option on the shader compilation. This means you can still use mul(m,v), but the matrix layout is row-major instead. This is only useful if you need to use bone matrices (float3x4) in a shader since it saves a float4 constant for every bone involved.

optimisation_level <opt>
Set the optimisation level, which can be one of 'default', 'none', '0', '1', '2', or '3'. This corresponds to the /O parameter of fxc.exe, except that in 'default' mode, optimisation is disabled in debug mode and set to 1 in release mode (fxc.exe uses 1 all the time). Unsurprisingly the default value is 'default'. You may want to change this if you want to tweak the optimisation, for example if your shader gets so complex that it will no longer compile without some minimum level of optimisation.
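Taken together, a sketch of an HLSL declaration using these options (file and symbol names hypothetical):

vertex_program mySkinnedHLSLVP hlsl
{
    source skinning.hlsl
    entry_point main_vp
    target vs_2_0
    preprocessor_defines MAX_BONES=24,USE_FOG
    // needed when binding float3x4 bone matrices, as discussed above
    column_major_matrices false
    optimisation_level 2
}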

3.1.7 OpenGL GLSL

OpenGL GLSL has a similar language syntax to HLSL but is tied to the OpenGL API. There are a few benefits over Cg in that it only requires the OpenGL render system plugin, not any additional plugins. Declaring an OpenGL GLSL program is similar to Cg but simpler. Here's an example:

vertex_program myGLSLVertexProgram glsl
{
    source myGLSLVertexProgram.txt
}

In GLSL, no entry point needs to be defined since it is always 'main()' and there is no target definition since GLSL source is compiled into native GPU code and not intermediate assembly.

GLSL supports the use of modular shaders. This means you can write GLSL external functions that can be used in multiple shaders.

vertex_program myExternalGLSLFunction1 glsl
{
    source myExternalGLSLfunction1.txt
}

vertex_program myExternalGLSLFunction2 glsl
{
    source myExternalGLSLfunction2.txt
}

vertex_program myGLSLVertexProgram1 glsl
{
    source myGLSLfunction.txt
    attach myExternalGLSLFunction1 myExternalGLSLFunction2
}

vertex_program myGLSLVertexProgram2 glsl
{
    source myGLSLfunction.txt
    attach myExternalGLSLFunction1
}

External GLSL functions are attached to the program that needs them by using 'attach' and including the names of all external programs required on the same line, separated by spaces. This can be done for both vertex and fragment programs.

GLSL Texture Samplers

To pass texture unit index values from the material script to texture samplers in GLSL, use 'int' type named parameters. See the example below:

An excerpt from the GLSL example.frag source:

varying vec2 UV;
uniform sampler2D diffuseMap;

void main(void)
{
    gl_FragColor = texture2D(diffuseMap, UV);
}

In the material script:

fragment_program myFragmentShader glsl
{
    source example.frag
}

material exampleGLSLTexturing
{
    technique
    {
        pass
        {
            fragment_program_ref myFragmentShader
            {
                param_named diffuseMap int 0
            }
            texture_unit
            {
                texture myTexture.jpg 2d
            }
        }
    }
}

An index value of 0 refers to the first texture unit in the pass, an index value of 1 refers to the second unit in the pass, and so on.


Matrix parameters

Here are some examples of passing matrices to GLSL mat2, mat3, mat4 uniforms:

material exampleGLSLmatrixUniforms
{
    technique matrix_passing
    {
        pass examples
        {
            vertex_program_ref myVertexShader
            {
                // mat4 uniform
                param_named OcclusionMatrix matrix4x4 1 0 0 0  0 1 0 0  0 0 1 0  0 0 0 0
                // or
                param_named ViewMatrix float16 0 1 0 0  0 0 1 0  0 0 0 1  0 0 0 0

                // mat3
                param_named TextRotMatrix float9 1 0 0  0 1 0  0 0 1
            }

            fragment_program_ref myFragmentShader
            {
                // mat2 uniform
                param_named skewMatrix float4 0.5 0 -0.5 1.0
            }
        }
    }
}

Accessing OpenGL states in GLSL

GLSL can access most of the GL states directly, so you do not need to pass these states through [param_named_auto], page 81 in the material script. This includes lights, material state, and all the matrices used in the OpenGL state, i.e. model view matrix, worldview projection matrix etc.

Binding vertex attributes

GLSL natively supports automatic binding of the most common incoming per-vertex attributes (e.g. gl_Vertex, gl_Normal, gl_MultiTexCoord0 etc). However, there are some which are not automatically bound, which must be declared in the shader using the 'attribute <type> <name>' syntax, and the vertex data bound to it by Ogre.

In addition to the built-in attributes described in section 7.3 of the GLSL manual, Ogre supports a number of automatically bound custom vertex attributes. There are some drivers that do not behave correctly when mixing built-in vertex attributes like gl_Normal and custom vertex attributes, so for maximum compatibility you may well wish to use all custom attributes in shaders where you need at least one (e.g. for skeletal animation).

vertex Binds VES_POSITION, declare as 'attribute vec4 vertex;'.

normal Binds VES_NORMAL, declare as 'attribute vec3 normal;'.

colour Binds VES_DIFFUSE, declare as 'attribute vec4 colour;'.

secondary_colour Binds VES_SPECULAR, declare as 'attribute vec4 secondary_colour;'.

uv0 - uv7 Binds VES_TEXTURE_COORDINATES, declare as 'attribute vec4 uv0;'. Note that uv6 and uv7 share attributes with tangent and binormal respectively so cannot both be present.

tangent Binds VES_TANGENT, declare as 'attribute vec3 tangent;'.

binormal Binds VES_BINORMAL, declare as 'attribute vec3 binormal;'.

blendIndices Binds VES_BLEND_INDICES, declare as 'attribute vec4 blendIndices;'.

blendWeights Binds VES_BLEND_WEIGHTS, declare as 'attribute vec4 blendWeights;'.

Preprocessor definitions

GLSL supports using preprocessor definitions in your code - some are defined by the implementation, but you can also define your own, say in order to use the same source code for a few different variants of the same technique. In order to use this feature, include preprocessor conditions in your GLSL code, of the kind #ifdef SYMBOL, #if SYMBOL==2 etc. Then in your program definition, use the 'preprocessor_defines' option, following it with a string of definitions. Definitions are separated by ';' or ',' and may optionally have a '=' operator within them to specify a definition value. Those without an '=' will implicitly have a definition of 1. For example:

// in your GLSL
#ifdef CLEVERTECHNIQUE
    // some clever stuff here
#else
    // normal technique
#endif

#if NUM_THINGS==2
    // Some specific code
#else
    // something else
#endif

// in your program definition
preprocessor_defines CLEVERTECHNIQUE,NUM_THINGS=2

This way you can use the same source code but still include small variations, each one defined as a different Ogre program name but based on the same source code.
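A sketch of two such variants built from one source file (the names are hypothetical):

vertex_program myProgramStandard glsl
{
    source common.vert
}

vertex_program myProgramClever glsl
{
    source common.vert
    preprocessor_defines CLEVERTECHNIQUE,NUM_THINGS=2
}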


GLSL Geometry shader specification

GLSL allows the same shader to run on different types of geometry primitives. In order to properly link the shaders together, you have to specify which primitives it will receive as input, which primitives it will emit and how many vertices a single run of the shader can generate. The GLSL geometry program definition requires three additional parameters:

input_operation_type
The operation type of the geometry that the shader will receive. Can be 'point_list', 'line_list', 'line_strip', 'triangle_list', 'triangle_strip' or 'triangle_fan'.

output_operation_type
The operation type of the geometry that the shader will emit. Can be 'point_list', 'line_strip' or 'triangle_strip'.

max_output_vertices
The maximum number of vertices that the shader can emit. There is an upper limit for this value, which is exposed in the render system capabilities.

For example:

geometry_program Ogre/GPTest/Swizzle_GP_GLSL glsl
{
    source SwizzleGP.glsl
    input_operation_type triangle_list
    output_operation_type line_strip
    max_output_vertices 6
}

3.1.8 Unified High-level Programs

As mentioned above, it can often be useful to write both HLSL and GLSL programs to specifically target each platform, but if you do this via multiple material techniques this can cause a bloated material definition when the only difference is the program language. Well, there is another option. You can 'wrap' multiple programs in a 'unified' program definition, which will automatically choose one of a series of 'delegate' programs depending on the rendersystem and hardware support.

vertex_program myVertexProgram unified
{
    delegate realProgram1
    delegate realProgram2
    ... etc
}

This works for both vertex and fragment programs, and you can list as many delegates as you like - the first one to be supported by the current rendersystem & hardware will be used as the real program. This is almost like a mini-technique system, but for a single program and with a much tighter purpose. You can only use this where the programs take all the same inputs, particularly textures and other pass / sampler state. Where the only difference between the programs is the language (or possibly the target in HLSL - you can include multiple HLSL programs with different targets in a single unified program too if you want, or indeed any number of other high-level programs), this can become a very powerful feature. For example, without this feature here's how you'd have to define a programmable material which supported HLSL and GLSL:

vertex_program myVertexProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_vp
    target vs_2_0
}
fragment_program myFragmentProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_fp
    target ps_2_0
}
vertex_program myVertexProgramGLSL glsl
{
    source prog.vert
}
fragment_program myFragmentProgramGLSL glsl
{
    source prog.frag
    default_params
    {
        param_named tex int 0
    }
}
material SupportHLSLandGLSLwithoutUnified
{
    // HLSL technique
    technique
    {
        pass
        {
            vertex_program_ref myVertexProgramHLSL
            {
                param_named_auto worldViewProj world_view_proj_matrix
                param_named_auto lightColour light_diffuse_colour 0
                param_named_auto lightSpecular light_specular_colour 0
                param_named_auto lightAtten light_attenuation 0
            }
            fragment_program_ref myFragmentProgramHLSL
            {
            }
        }
    }
    // GLSL technique
    technique
    {
        pass
        {
            vertex_program_ref myVertexProgramGLSL
            {
                param_named_auto worldViewProj world_view_proj_matrix
                param_named_auto lightColour light_diffuse_colour 0
                param_named_auto lightSpecular light_specular_colour 0
                param_named_auto lightAtten light_attenuation 0
            }
            fragment_program_ref myFragmentProgramGLSL
            {
            }
        }
    }
}

And that's a really small example. Everything you added to the HLSL technique, you'd have to duplicate in the GLSL technique too. So instead, here's how you'd do it with unified program definitions:

vertex_program myVertexProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_vp
    target vs_2_0
}
fragment_program myFragmentProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_fp
    target ps_2_0
}
vertex_program myVertexProgramGLSL glsl
{
    source prog.vert
}
fragment_program myFragmentProgramGLSL glsl
{
    source prog.frag
    default_params
    {
        param_named tex int 0
    }
}
// Unified definition
vertex_program myVertexProgram unified
{
    delegate myVertexProgramGLSL
    delegate myVertexProgramHLSL
}
fragment_program myFragmentProgram unified
{
    delegate myFragmentProgramGLSL
    delegate myFragmentProgramHLSL
}
material SupportHLSLandGLSLwithUnified
{
    // single technique, delegating to whichever language is supported
    technique
    {
        pass
        {
            vertex_program_ref myVertexProgram
            {
                param_named_auto worldViewProj world_view_proj_matrix
                param_named_auto lightColour light_diffuse_colour 0
                param_named_auto lightSpecular light_specular_colour 0
                param_named_auto lightAtten light_attenuation 0
            }
            fragment_program_ref myFragmentProgram
            {
            }
        }
    }
}

At runtime, when myVertexProgram or myFragmentProgram are used, OGRE automatically picks a real program to delegate to based on what's supported on the current hardware / rendersystem. If none of the delegates are supported, the entire technique referencing the unified program is marked as unsupported and the next technique in the material is checked for fallback, just like normal. As your materials get larger, and you find you need to support HLSL and GLSL specifically (or need to write multiple interface-compatible versions of a program for whatever other reason), unified programs can really help reduce duplication.

3.1.9 Using Vertex/Geometry/Fragment Programs in a Pass

Within a pass section of a material script, you can reference a vertex, geometry and / or a fragment program which has been defined in a .program script (See Section 3.1.4 [Declaring Vertex/Geometry/Fragment Programs], page 54). The programs are defined separately from the usage of them in the pass, since the programs are very likely to be reused between many separate materials, probably across many different .material scripts, so this approach lets you define the program only once and use it many times.

As well as naming the program in question, you can also provide parameters to it. Here's a simple example:

vertex_program_ref myVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    param_indexed 4 float4 10.0 0 0 0
}

In this example, we bind a vertex program called 'myVertexProgram' (which will be defined elsewhere) to the pass, and give it 2 parameters; one is an 'auto' parameter, meaning we do not have to supply a value as such, just a recognised code (in this case it's the world/view/projection matrix which is kept up to date automatically by Ogre). The second parameter is a manually specified parameter, a 4-element float. The indexes are described later.

The syntax of the link to a vertex program and a fragment or geometry program are identical; the only difference is that 'fragment_program_ref' and 'geometry_program_ref' are used respectively instead of 'vertex_program_ref'.

For many situations vertex, geometry and fragment programs are associated with each other in a pass but this is not cast in stone. You could have a vertex program that can be used by several different fragment programs. Another situation that arises is that you can mix fixed pipeline and programmable pipeline (shaders) together. You could use the non-programmable vertex fixed function pipeline and then provide a fragment_program_ref in a pass, i.e. there would be no vertex_program_ref section in the pass. The fragment program referenced in the pass must meet the requirements as defined in the related API in order to read from the outputs of the vertex fixed pipeline. You could also just have a vertex program that outputs to the fragment fixed function pipeline.

The requirements to read from or write to the fixed function pipeline are similar between rendering APIs (DirectX and OpenGL) but how it's actually done in each type of shader (vertex, geometry or fragment) depends on the shader language. For HLSL (DirectX API) and associated asm consult MSDN at http://msdn.microsoft.com/library/. For GLSL (OpenGL), consult section 7.6 of the GLSL spec 1.1 available at http://developer.3dlabs.com/documents/index.htm. The built-in varying variables provided in GLSL allow your program to read/write to the fixed function pipeline varyings. For Cg consult the Language Profiles section in CgUsersManual.pdf that comes with the Cg Toolkit available at http://developer.nvidia.com/object/cg_toolkit.html. For HLSL and Cg it's the varying bindings that allow your shader programs to read/write to the fixed function pipeline varyings.

Parameter specification

Parameters can be specified using one of the following commands. The same syntax is used whether you are defining a parameter just for this particular use of the program, or when specifying the [Default Program Parameters], page 57. Parameters set in the specific use of the program override the defaults.
• [param_indexed], page 70
• [param_indexed_auto], page 71
• [param_named], page 81
• [param_named_auto], page 81
• [shared_params_ref], page 81

param_indexed

This command sets the value of an indexed parameter.

Format: param_indexed <index> <type> <value>

Example: param_indexed 0 float4 10.0 0 0 0

The 'index' is simply a number representing the position in the parameter list to which the value should be written, and you should derive this from your program definition. The index is relative to the way constants are stored on the card, which is in 4-element blocks. For example if you defined a float4 parameter at index 0, the next index would be 1. If you defined a matrix4x4 at index 0, the next usable index would be 4, since a 4x4 matrix takes up 4 indexes.


The value of 'type' can be float4, matrix4x4, float<n>, int4, int<n>. Note that 'int' parameters are only available on some more advanced program syntaxes; check the D3D or GL vertex / fragment program documentation for full details. Typically the most useful ones will be float4 and matrix4x4. Note that if you use a type which is not a multiple of 4, then the remaining values up to the multiple of 4 will be filled with zeroes for you (since GPUs always use banks of 4 floats per constant even if only one is used).

'value' is simply a space or tab-delimited list of values which can be converted into the type you have specified.

param_indexed_auto

This command tells Ogre to automatically update a given parameter with a derived value. This frees you from writing code to update program parameters every frame when they are always changing.

Format: param_indexed_auto <index> <value_code> <extra_params>

Example: param_indexed_auto 0 worldviewproj_matrix

'index' has the same meaning as [param_indexed], page 70; note this time you do not have to specify the size of the parameter because the engine knows this already. In the example, the world/view/projection matrix is being used so this is implicitly a matrix4x4.

'value_code' is one of a list of recognised values:

world_matrix The current world matrix.

inverse_world_matrix The inverse of the current world matrix.

transpose_world_matrix The transpose of the world matrix.

inverse_transpose_world_matrix The inverse transpose of the world matrix.

world_matrix_array_3x4 An array of world matrices, each represented as only a 3x4 matrix (3 rows of 4 columns), usually for doing hardware skinning. You should make enough entries available in your vertex program for the number of bones in use, i.e. an array of numBones*3 float4's.

view_matrix The current view matrix.

inverse_view_matrix The inverse of the current view matrix.

transpose_view_matrix The transpose of the view matrix.


inverse_transpose_view_matrix The inverse transpose of the view matrix.

projection_matrix The current projection matrix.

inverse_projection_matrix The inverse of the projection matrix.

transpose_projection_matrix The transpose of the projection matrix.

inverse_transpose_projection_matrix The inverse transpose of the projection matrix.

worldview_matrix The current world and view matrices concatenated.

inverse_worldview_matrix The inverse of the current concatenated world and view matrices.

transpose_worldview_matrix The transpose of the world and view matrices.

inverse_transpose_worldview_matrix The inverse transpose of the current concatenated world and view matrices.

viewproj_matrix The current view and projection matrices concatenated.

inverse_viewproj_matrix The inverse of the view & projection matrices.

transpose_viewproj_matrix The transpose of the view & projection matrices.

inverse_transpose_viewproj_matrix The inverse transpose of the view & projection matrices.

worldviewproj_matrix The current world, view and projection matrices concatenated.

inverse_worldviewproj_matrix The inverse of the world, view and projection matrices.

transpose_worldviewproj_matrix The transpose of the world, view and projection matrices.

inverse_transpose_worldviewproj_matrix The inverse transpose of the world, view and projection matrices.

texture_matrix The transform matrix of a given texture unit, as it would usually be seen in the fixed-function pipeline. This requires an index in the 'extra_params' field, and relates to the 'nth' texture unit of the pass in question. NB if the given index exceeds the number of texture units available for this pass, then the parameter will be set to Matrix4::IDENTITY.

render_target_flipping The value used to adjust the transformed y position if you bypass the projection matrix transform. It's -1 if the render target requires texture flipping, +1 otherwise.


vertex_winding
    Indicates what vertex winding mode the render state is in at this point; +1 for standard, -1 for inverted (e.g. when processing reflections).

light_diffuse_colour
    The diffuse colour of a given light; this requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light - note that directional lights are always first in the list and always present). NB if there are no lights this close, then the parameter will be set to black.

light_specular_colour
    The specular colour of a given light; this requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to black.

light_attenuation
    A float4 containing the 4 light attenuation variables for a given light. This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. The order of the parameters is range, constant attenuation, linear attenuation, quadric attenuation.

spotlight_params
    A float4 containing the 3 spotlight parameters and a control value. The order of the parameters is cos(inner_angle/2), cos(outer_angle/2), falloff, and the final w value is 1.0f. For non-spotlights the value is float4(1,0,0,1). This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). If there are fewer lights than this, the details are like a non-spotlight.

light_position
    The position of a given light in world space. This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both.

light_direction
    The direction of a given light in world space. This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED - this property only works on directional lights, and we recommend that you use light_position instead since that returns a generic 4D vector.

light_position_object_space
    The position of a given light in object space (i.e. when the object is at (0,0,0)). This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both.

light_direction_object_space
    The direction of a given light in object space (i.e. when the object is at (0,0,0)). This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED, except for spotlights - for directional lights we recommend that you use light_position_object_space instead since that returns a generic 4D vector.

light_distance_object_space
    The distance of a given light from the centre of the object - this is a useful approximation to per-vertex distance calculations for relatively small objects. This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes.

light_position_view_space
    The position of a given light in view space (i.e. when the camera is at (0,0,0)). This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both.

light_direction_view_space
    The direction of a given light in view space (i.e. when the camera is at (0,0,0)). This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED, except for spotlights - for directional lights we recommend that you use light_position_view_space instead since that returns a generic 4D vector.

light_power
    The ’power’ scaling for a given light, useful in HDR rendering. This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light).

light_diffuse_colour_power_scaled
    As light_diffuse_colour, except the RGB channels of the passed colour have been pre-scaled by the light’s power scaling as given by light_power.

light_specular_colour_power_scaled
    As light_specular_colour, except the RGB channels of the passed colour have been pre-scaled by the light’s power scaling as given by light_power.

light_number
    When rendering, there is generally a list of lights available for use by all of the passes for a given object, and those lights may or may not be referenced in one or more passes. Sometimes it can be useful to know where in that overall list a given light (as seen from a pass) is. For example if you use iteration once_per_light, the pass always sees the light as index 0, but in each iteration the actual light referenced is different. This binding lets you pass through the actual index of the light in that overall list. You just need to give it a parameter of the pass-relative light number and it will map it to the overall list index.

light_diffuse_colour_array
    As light_diffuse_colour, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_specular_colour_array
    As light_specular_colour, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_diffuse_colour_power_scaled_array
    As light_diffuse_colour_power_scaled, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_specular_colour_power_scaled_array
    As light_specular_colour_power_scaled, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_attenuation_array
    As light_attenuation, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

spotlight_params_array
    As spotlight_params, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_position_array
    As light_position, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_direction_array
    As light_direction, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_position_object_space_array
    As light_position_object_space, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.


light_direction_object_space_array
    As light_direction_object_space, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_distance_object_space_array
    As light_distance_object_space, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_position_view_space_array
    As light_position_view_space, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_direction_view_space_array
    As light_direction_view_space, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_power_array
    As light_power, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

light_count
    The total number of lights active in this pass.

light_casts_shadows
    Sets an integer parameter to 1 if the given light casts shadows, 0 otherwise. Requires a light index parameter.

ambient_light_colour
    The colour of the ambient light currently set in the scene.

surface_ambient_colour
    The ambient colour reflectance properties of the pass (See [ambient], page 21). This allows you to access this fixed-function pipeline property handily.

surface_diffuse_colour
    The diffuse colour reflectance properties of the pass (See [diffuse], page 22). This allows you to access this fixed-function pipeline property handily.

surface_specular_colour
    The specular colour reflectance properties of the pass (See [specular], page 22). This allows you to access this fixed-function pipeline property handily.

surface_emissive_colour
    The amount of self-illumination of the pass (See [emissive], page 23). This allows you to access this fixed-function pipeline property handily.

surface_shininess
    The shininess of the pass, affecting the size of specular highlights (See [specular], page 22). This allows you to bind to this fixed-function pipeline property handily.


derived_ambient_light_colour
    The derived ambient light colour, with ’r’, ’g’, ’b’ components filled with the product of surface_ambient_colour and ambient_light_colour, respectively, and the ’a’ component filled with the surface ambient alpha component.

derived_scene_colour
    The derived scene colour, with ’r’, ’g’ and ’b’ components filled with the sum of derived_ambient_light_colour and surface_emissive_colour, respectively, and the ’a’ component filled with the surface diffuse alpha component.

derived_light_diffuse_colour
    The derived light diffuse colour, with ’r’, ’g’ and ’b’ components filled with the product of surface_diffuse_colour, light_diffuse_colour and light_power, respectively, and the ’a’ component filled with the surface diffuse alpha component. This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light).

derived_light_specular_colour
    The derived light specular colour, with ’r’, ’g’ and ’b’ components filled with the product of surface_specular_colour and light_specular_colour, respectively, and the ’a’ component filled with the surface specular alpha component. This requires an index in the ’extra_params’ field, and relates to the ’nth’ closest light which could affect this object (i.e. 0 refers to the closest light).

derived_light_diffuse_colour_array
    As derived_light_diffuse_colour, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

derived_light_specular_colour_array
    As derived_light_specular_colour, except that this populates an array of parameters with a number of lights, and the ’extra_params’ field refers to the number of ’nth closest’ lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.

fog_colour
    The colour of the fog currently set in the scene.

fog_params
    The parameters of the fog currently set in the scene. Packed as (exp_density, linear_start, linear_end, 1.0 / (linear_end - linear_start)).

camera_position
    The current camera’s position in world space.

camera_position_object_space
    The current camera’s position in object space (i.e. when the object is at (0,0,0)).

lod_camera_position
    The current LOD camera position in world space. A LOD camera is a separate camera associated with the rendering camera which allows LOD calculations to be calculated separately. The classic example is basing the LOD of the shadow texture render on the position of the main camera, not the shadow camera.

lod_camera_position_object_space
    The current LOD camera position in object space (i.e. when the object is at (0,0,0)).

time
    The current time, factored by the optional parameter (or 1.0f if not supplied).


time_0_x
    Single float time value, which repeats itself based on the "cycle time" given as an ’extra_params’ field.

costime_0_x
    Cosine of time_0_x.

sintime_0_x
    Sine of time_0_x.

tantime_0_x
    Tangent of time_0_x.

time_0_x_packed
    4-element vector of time_0_x, sintime_0_x, costime_0_x, tantime_0_x.

time_0_1
    As time_0_x but scaled to [0..1].

costime_0_1
    As costime_0_x but scaled to [0..1].

sintime_0_1
    As sintime_0_x but scaled to [0..1].

tantime_0_1
    As tantime_0_x but scaled to [0..1].

time_0_1_packed
    As time_0_x_packed but all values scaled to [0..1].

time_0_2pi
    As time_0_x but scaled to [0..2*Pi].

costime_0_2pi
    As costime_0_x but scaled to [0..2*Pi].

sintime_0_2pi
    As sintime_0_x but scaled to [0..2*Pi].

tantime_0_2pi
    As tantime_0_x but scaled to [0..2*Pi].

time_0_2pi_packed
    As time_0_x_packed but scaled to [0..2*Pi].

frame_time
    The current frame time, factored by the optional parameter (or 1.0f if not supplied).

fps
    The current frames per second.

viewport_width
    The current viewport width in pixels.

viewport_height
    The current viewport height in pixels.

inverse_viewport_width
    1.0/the current viewport width in pixels.

inverse_viewport_height
    1.0/the current viewport height in pixels.

viewport_size
    4-element vector of viewport_width, viewport_height, inverse_viewport_width, inverse_viewport_height.


texel_offsets
    Provides details of the render-system-specific texture coordinate offsets required to map texels onto pixels. float4(horizontalOffset, verticalOffset, horizontalOffset / viewport_width, verticalOffset / viewport_height).

view_direction
    View direction vector in object space.

view_side_vector
    View local X axis.

view_up_vector
    View local Y axis.

fov
    Vertical field of view, in radians.

near_clip_distance
    Near clip distance, in world units.

far_clip_distance
    Far clip distance, in world units (may be 0 for an infinite view projection).

texture_viewproj_matrix
    Applicable to vertex programs which have been specified as the ’shadow receiver’ vertex program alternative, or where a texture unit is marked as content_type shadow; this provides details of the view/projection matrix for the current shadow projector. The optional ’extra_params’ entry specifies which light the projector refers to (for the case of content_type shadow where more than one shadow texture may be present in a single pass), where 0 is the default and refers to the first light referenced in this pass.

texture_viewproj_matrix_array
    As texture_viewproj_matrix, except an array of matrices is passed, up to the number that you specify as the ’extra_params’ value.

texture_worldviewproj_matrix
    As texture_viewproj_matrix except it also includes the world matrix.

texture_worldviewproj_matrix_array
    As texture_worldviewproj_matrix, except an array of matrices is passed, up to the number that you specify as the ’extra_params’ value.

spotlight_viewproj_matrix
    Provides a view/projection matrix which matches the set up of a given spotlight (requires an ’extra_params’ entry to indicate the light index, which must be a spotlight). Can be used to project a texture from a given spotlight.

spotlight_worldviewproj_matrix
    As spotlight_viewproj_matrix except it also includes the world matrix.

scene_depth_range
    Provides information about the depth range as viewed from the current camera being used to render. Provided as float4(minDepth, maxDepth, depthRange, 1 / depthRange).

shadow_scene_depth_range
    Provides information about the depth range as viewed from the shadow camera relating to a selected light. Requires a light index parameter. Provided as float4(minDepth, maxDepth, depthRange, 1 / depthRange).


shadow_colour
    The shadow colour (for modulative shadows) as set via SceneManager::setShadowColour.

shadow_extrusion_distance
    The shadow extrusion distance as determined by the range of a non-directional light, or as set via SceneManager::setShadowDirectionalLightExtrusionDistance for directional lights.

texture_size
    Provides the texture size of the selected texture unit. Requires a texture unit index parameter. Provided as float4(width, height, depth, 1). For a 2D texture, depth is set to 1; for a 1D texture, height and depth are set to 1.

inverse_texture_size
    Provides the inverse texture size of the selected texture unit. Requires a texture unit index parameter. Provided as float4(1 / width, 1 / height, 1 / depth, 1). For a 2D texture, depth is set to 1; for a 1D texture, height and depth are set to 1.

packed_texture_size
    Provides the packed texture size of the selected texture unit. Requires a texture unit index parameter. Provided as float4(width, height, 1 / width, 1 / height). For a 3D texture, depth is ignored; for a 1D texture, height is set to 1.

pass_number
    Sets the active pass index number in a gpu parameter. The first pass in a technique has an index of 0, the second an index of 1, and so on. This is useful for multipass shaders (i.e. fur or blur shaders) that need to know which pass they are in. By setting up the auto parameter in a [Default Program Parameters], page 57 list in a program definition, there is no requirement to set the pass number parameter in each pass and lose track. (See [fur example], page 36)

pass_iteration_number
    Useful for GPU programs that need to know what the current pass iteration number is. The first iteration of a pass is numbered 0. The last iteration number is one less than what is set for the pass iteration number. If a pass has its iteration attribute set to 5 then the last iteration number (5th execution of the pass) is 4. (See [iteration], page 35)

animation_parametric
    Useful for hardware vertex animation. For morph animation, sets the parametric value (0..1) representing the distance between the first position keyframe (bound to positions) and the second position keyframe (bound to the first free texture coordinate) so that the vertex program can interpolate between them. For pose animation, indicates a group of up to 4 parametric weight values applying to a sequence of up to 4 poses (each one bound to x, y, z and w of the constant), one for each pose. The original positions are held in the usual position buffer, and the offsets to take those positions to the pose where weight == 1.0 are in the first ’n’ free texture coordinates; ’n’ being determined by the value passed to includes_pose_animation. If more than 4 simultaneous poses are required, then you’ll need more than 1 shader constant to hold the parametric values, in which case you should use this binding more than once, referencing a different constant entry; the second one will contain the parametrics for poses 5-8, the third for poses 9-12, and so on.

custom
    This allows you to map a custom parameter on an individual Renderable (see Renderable::setCustomParameter) to a parameter on a GPU program. It requires that you complete the ’extra_params’ field with the index that was used in the Renderable::setCustomParameter call, and this will ensure that whenever this Renderable is used, it will have its custom parameter mapped in. It’s very important that this parameter has been defined on all Renderables that are assigned the material that contains this automatic mapping, otherwise the process will fail.
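To make the binding syntax concrete, here is a sketch of a pass fragment that binds a few of these value codes (the program name and constant indices here are illustrative only):

vertex_program_ref Example/MyVertexProgram
{
    // constants 0-3: a matrix4x4 occupies 4 consecutive constant registers
    param_indexed_auto 0 worldviewproj_matrix
    // constant 4: closest light's position in object space ('0' is the extra_params light index)
    param_indexed_auto 4 light_position_object_space 0
    // constant 5: the current scene ambient colour
    param_indexed_auto 5 ambient_light_colour
}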

param_named

This is the same as param_indexed, but uses a named parameter instead of an index. This can only be used with high-level programs which include parameter names; if you’re using an assembler program then you have no choice but to use indexes. Note that you can use indexed parameters for high-level programs too, but it is less portable since if you reorder your parameters in the high-level program the indexes will change.

Format: param_named <name> <type> <value>

Example: param_named shininess float4 10.0 0 0 0

The type is required because the program is not compiled and loaded when the material script is parsed, so at this stage we have no idea what types the parameters are. Programs are only loaded and compiled when they are used, to save memory.

param_named_auto

This is the named equivalent of param_indexed_auto, for use with high-level programs.

Format: param_named_auto <name> <value_code> <extra_params>

Example: param_named_auto worldViewProj worldviewproj_matrix

The allowed value codes and the meaning of extra_params are detailed in [param_indexed_auto], page 71.

shared_params_ref

This option allows you to reference shared parameter sets as defined in [Declaring Shared Parameters], page 58.

Format: shared_params_ref <shared_set_name>

Example: shared_params_ref mySharedParams

The only required parameter is a name, which must be the name of an already defined shared parameter set. All named parameters which are present in the program that are also present in the shared parameter set will be linked, and the shared parameters used as if you had defined them locally. This is dependent on the definitions (type and array size) matching between the shared set and the program.
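As a sketch of how the pieces fit together (the set and parameter names here are illustrative), a shared set is declared once and then referenced wherever parameters are set:

shared_params MySharedParams
{
    shared_param_named globalAmbient float4 0.1 0.1 0.1 1.0
}

fragment_program_ref Example/MyFragmentProgram
{
    // links the program's 'globalAmbient' parameter to the shared set
    shared_params_ref MySharedParams
}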


Shadows and Vertex Programs

When using shadows (See Chapter 7 [Shadows], page 160), the use of vertex programs can add some additional complexities, because Ogre can only automatically deal with everything when using the fixed-function pipeline. If you use vertex programs, and you are also using shadows, you may need to make some adjustments.

If you use stencil shadows, then any vertex programs which do vertex deformation can be a problem, because stencil shadows are calculated on the CPU, which does not have access to the modified vertices. If the vertex program is doing standard skeletal animation, this is OK (see section above) because Ogre knows how to replicate the effect in software, but any other vertex deformation cannot be replicated, and you will either have to accept that the shadow will not reflect this deformation, or you should turn off shadows for that object.

If you use texture shadows, then vertex deformation is acceptable; however, when rendering the object into the shadow texture (the shadow caster pass), the shadow has to be rendered in a solid colour (linked to the ambient colour). You must therefore provide an alternative vertex program, so Ogre provides you with a way of specifying one to use when rendering the caster. Basically you link an alternative vertex program, using exactly the same syntax as the original vertex program link:

shadow_caster_vertex_program_ref myShadowCasterVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    param_indexed_auto 4 ambient_light_colour
}

When rendering a shadow caster, Ogre will automatically use the alternate program. You can bind the same or different parameters to the program - the most important thing is that you bind ambient_light_colour, since this determines the colour of the shadow in modulative texture shadows. If you don’t supply an alternate program, Ogre will fall back on a fixed-function material which will not reflect any vertex deformation you do in your vertex program.

In addition, when rendering the shadow receivers with shadow textures, Ogre needs to project the shadow texture. It does this automatically in fixed function mode, but if the receivers use vertex programs, they need to have a shadow receiver program which does the usual vertex deformation, but also generates projective texture coordinates. The additional program is linked into the pass like this:

shadow_receiver_vertex_program_ref myShadowReceiverVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    param_indexed_auto 4 texture_viewproj_matrix
}

For the purposes of writing this alternate program, there is an automatic parameter binding of ’texture_viewproj_matrix’ which provides the program with texture projection parameters. The vertex program should do its normal vertex processing, and generate texture coordinates using this matrix and place them in texture coord sets 0 and 1, since some shadow techniques use 2 texture units. The colour of the vertices output by this vertex program must always be white, so as not to affect the final colour of the rendered shadow.


When using additive texture shadows, the shadow pass render is actually the lighting render, so if you perform any fragment program lighting you also need to pull in a custom fragment program. You use the shadow_receiver_fragment_program_ref for this:

shadow_receiver_fragment_program_ref myShadowReceiverFragmentProgram
{
    param_named_auto lightDiffuse light_diffuse_colour 0
}

You should pass the projected shadow coordinates from the custom vertex program. As for textures, texture unit 0 will always be the shadow texture. Any other textures which you bind in your pass will be carried across too, but will be moved up by 1 unit to make room for the shadow texture. Therefore your shadow receiver fragment program is likely to be the same as the bare lighting pass of your normal material, except that you insert an extra texture sampler at index 0, which you will use to adjust the result by (modulating diffuse and specular components).

3.1.10 Vertex Texture Fetch

Introduction

More recent generations of video card allow you to perform a read from a texture in the vertex program rather than just the fragment program, as is traditional. This allows you to, for example, read the contents of a texture and displace vertices based on the intensity of the colour contained within.

Declaring the use of vertex texture fetching

Since hardware support for vertex texture fetching is not ubiquitous, you should use the uses_vertex_texture_fetch directive (see the section on vertex texture fetching in vertex programs) when declaring your vertex programs which use vertex textures, so that if it is not supported, technique fallback can be enabled. This is not strictly necessary for DirectX-targeted shaders, since vertex texture fetching is only supported in vs_3_0, which can be stated as a required syntax in your shader definition, but for OpenGL (GLSL), there are cards which support GLSL but not vertex textures, so you should be explicit about your need for them.
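For example, a GLSL vertex program declaration might state its requirement like this (a minimal sketch; the program and source names are illustrative):

vertex_program Example/DisplacementVS glsl
{
    source displacement.vert
    // declare the dependency so techniques using this program can be
    // marked unsupported (and fallbacks used) on hardware without it
    uses_vertex_texture_fetch true
}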

Render system texture binding differences

Unfortunately the method for binding textures so that they are available to a vertex program is not well standardised. As at the time of writing, Shader Model 3.0 (SM3.0) hardware under DirectX9 includes 4 separate sampler bindings for the purposes of vertex textures. OpenGL, on the other hand, is able to access vertex textures in GLSL (and in assembler through NV_vertex_program3, although this is less popular), but the textures are shared with the fragment pipeline. I expect DirectX to move to the GL model with the advent of DirectX10, since a unified shader architecture implies sharing of texture resources between the two stages. As it is right now though, we’re stuck with an inconsistent situation.

To reflect this, you should use the [binding_type], page 44 attribute in a texture unit to indicate which unit you are targeting with your texture - ’fragment’ (the default) or ’vertex’. For render systems that don’t have separate bindings, this actually does nothing. But for those that do, it will ensure your texture gets bound to the right processing unit.
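For example (the texture name is illustrative):

texture_unit
{
    texture heightmap.dds
    // bind to the vertex samplers; the default is 'fragment'
    binding_type vertex
}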


Note that whilst DirectX9 has separate bindings for the vertex and fragment pipelines, binding a texture to the vertex processing unit still uses up a ’slot’ which is then not available for use in the fragment pipeline. I didn’t manage to find this documented anywhere, but the nVidia samples certainly avoid binding a texture to the same index on both vertex and fragment units, and when I tried to do it, the texture did not appear correctly in the fragment unit, whilst it did as soon as I moved it into the next unit.

Texture format limitations

Again as at the time of writing, the types of texture you can use in a vertex program are limited to 1- or 4-component, full precision floating point formats. In code that equates to PF_FLOAT32_R or PF_FLOAT32_RGBA. No other formats are supported. In addition, the textures must be regular 2D textures (no cube or volume maps), and mipmapping and filtering are not supported, although you can perform filtering in your vertex program if you wish by sampling multiple times.
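If you are generating such a texture in code, it might be created along these lines (a minimal sketch assuming the standard TextureManager::createManual overload; the texture name is illustrative):

Ogre::TexturePtr tex = Ogre::TextureManager::getSingleton().createManual(
    "VertexDisplacementMap",                                 // illustrative name
    Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
    Ogre::TEX_TYPE_2D,      // must be a regular 2D texture
    128, 128,               // dimensions
    0,                      // no mipmaps; mipmapping is unsupported for vertex textures
    Ogre::PF_FLOAT32_RGBA,  // one of the two supported formats
    Ogre::TU_DYNAMIC_WRITE_ONLY);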

Hardware limitations

As at the time of writing (early Q3 2006), ATI do not support texture fetch in their current crop of cards (Radeon X1n00). nVidia do support it in both their 6n00 and 7n00 range. ATI support an alternative called ’Render to Vertex Buffer’, but this is not standardised at this time and is very much different in its implementation, so cannot be considered to be a drop-in replacement. This is the case even though the Radeon X1n00 cards claim to support vs_3_0 (which requires vertex texture fetch).

3.1.11 Script Inheritance

When creating new script objects that are only slight variations of another object, it’s good to avoid copying and pasting between scripts. Script inheritance lets you do this; in this section we’ll use material scripts as an example, but this applies to all scripts parsed with the script compilers in Ogre 1.6 onwards.

For example, to make a new material that is based on one previously defined, add a colon ’:’ after the new material name, followed by the name of the material that is to be copied.

Format: material <NewUniqueChildName> : <ReferenceParentMaterial>

The only caveat is that a parent material must have been defined/parsed prior to the child material script being parsed. The easiest way to achieve this is to either place parents at the beginning of the material script file, or to use the ’import’ directive (See Section 3.1.14 [Script Import Directive], page 92). Note that inheritance is actually a copy - after scripts are loaded into Ogre, objects no longer maintain their copy inheritance structure. If a parent material is modified through code at runtime, the changes have no effect on child materials that were copied from it in the script.

Material copying within the script alleviates some of the drudgery of copy/paste, and having the ability to identify specific techniques, passes, and texture units to modify makes material copying easier still. Techniques, passes and texture units can be identified directly in the child material, without having to lay out the previous techniques, passes and texture units, by associating a name with them: techniques and passes can take a name, and texture units can be numbered within the material script. You can also use variables, see Section 3.1.13 [Script Variables], page 91.

Names become very useful in materials that copy from other materials. In order to override values they must be in the correct technique, pass, texture unit etc. The script could be laid out using the sequence of techniques, passes and texture units in the child material, but if only one parameter needs to change in, say, the 5th pass, then the first four passes prior to the fifth would have to be placed in the script:

Here is an example:

material test2 : test1
{
    technique
    {
        pass
        {
        }

        pass
        {
        }

        pass
        {
        }

        pass
        {
        }

        pass
        {
            ambient 0.5 0.7 0.3 1.0
        }
    }
}

This method is tedious for materials that only have slight variations to their parent. An easier way is to name the pass directly without listing the previous passes:

material test2 : test1
{
    technique 0
    {
        pass 4
        {
            ambient 0.5 0.7 0.3 1.0
        }
    }
}


The parent pass name must be known and the pass must be in the correct technique in order for this to work correctly. Specifying the technique name and the pass name is the best method. If the parent technique/pass are not named, then use their index values for their name, as done in the example.

Adding new Techniques and Passes to copied materials:

If a new technique or pass needs to be added to a copied material, then use a unique name for the technique or pass that does not exist in the parent material. Using an index for the name that is one greater than the last index in the parent will do the same thing. The new technique/pass will be added to the end of the techniques/passes copied from the parent material.

Note: if passes or techniques aren’t given a name, they will take on a default name based on their index. For example the first pass has index 0, so its name will be 0.

Identifying Texture Units to override values

A specific texture unit state (TUS) can be given a unique name within a pass of a material so that it can be identified later in cloned materials that need to override specified texture unit states in the pass without declaring previous texture units. Using a unique name for a texture unit in a pass of a cloned material adds a new texture unit at the end of the texture unit list for the pass.

material BumpMap2 : BumpMap1
{
    technique ati8500
    {
        pass 0
        {
            texture_unit NormalMap
            {
                texture BumpyMetalNM.png
            }
        }
    }
}

Advanced Script Inheritance

Starting with Ogre 1.6, script objects can now inherit from each other more generally. The previous concept of inheritance, material copying, was restricted only to the top-level material objects. Now, any level of object can take advantage of inheritance (for instance, techniques, passes, and compositor targets).

material Test
{
    technique
    {
        pass : ParentPass
        {
        }
    }
}

Notice that the pass inherits from ParentPass. This allows for the creation of more fine-grained inheritance hierarchies.

Along with the more generalized inheritance system comes an important new keyword: "abstract". This keyword is used at a top-level object declaration (not inside any other object) to denote that it is not something that the compiler should actually attempt to compile, but rather that it is only for the purpose of inheritance. For example, a material declared with the abstract keyword will never be turned into an actual usable material in the material framework. Objects which cannot be at a top level in the document (like a pass) but that you would like to declare as such for inheritance purposes must be declared with the abstract keyword.

abstract pass ParentPass
{
    diffuse 1 0 0 1
}

That declares the ParentPass object which was inherited from in the above example. Notice the abstract keyword which informs the compiler that it should not attempt to actually turn this object into any sort of Ogre resource. If it did attempt to do so, then it would obviously fail, since a pass all on its own like that is not valid.

The final matching option is based on wildcards. Using the ’*’ character, you can make a powerful matching scheme and override multiple objects at once, even if you don’t know the exact names or positions of those objects in the inherited object.

abstract technique Overrider
{
    pass *color*
    {
        diffuse 0 0 0 0
    }
}

This technique, when included in a material, will override all passes matching the wildcard "*color*" (color has to appear in the name somewhere) and turn their diffuse properties black. Their position or exact name in the inherited technique does not matter; this will match them.

3.1.12 Texture Aliases

Texture aliases are useful for when only the textures used in texture units need to be specified for a cloned material. In the source material, i.e. the original material to be cloned, each texture unit can be given a texture alias name. The cloned material in the script can then specify what textures should be used for each texture alias. Note that texture aliases are a more specific version of Section 3.1.13 [Script Variables], page 91, which can be used to easily set other values.

Using texture aliases within texture units:

Format: texture_alias <name>

Default: <name> will default to the texture_unit name if one is set

texture_unit DiffuseTex
{
    texture diffuse.jpg
}

Here the texture alias defaults to DiffuseTex.

Example: The base material to be cloned:

material TSNormalSpecMapping
{
    technique GLSL
    {
        pass
        {
            ambient 0.1 0.1 0.1
            diffuse 0.7 0.7 0.7
            specular 0.7 0.7 0.7 128

            vertex_program_ref GLSLDemo/OffsetMappingVS
            {
                param_named_auto lightPosition light_position_object_space 0
                param_named_auto eyePosition camera_position_object_space
                param_named textureScale float 1.0
            }

            fragment_program_ref GLSLDemo/TSNormalSpecMappingFS
            {
                param_named normalMap int 0
                param_named diffuseMap int 1
                param_named fxMap int 2
            }

            // Normal map
            texture_unit NormalMap
            {
                texture defaultNM.png
                tex_coord_set 0
                filtering trilinear
            }

            // Base diffuse texture map
            texture_unit DiffuseMap
            {
                texture defaultDiff.png
                filtering trilinear
                tex_coord_set 1
            }

            // spec map for shininess
            texture_unit SpecMap
            {
                texture defaultSpec.png
                filtering trilinear
                tex_coord_set 2
            }
        }
    }

    technique HLSL_DX9
    {
        pass
        {
            vertex_program_ref FxMap_HLSL_VS
            {
                param_named_auto worldViewProj_matrix worldviewproj_matrix
                param_named_auto lightPosition light_position_object_space 0
                param_named_auto eyePosition camera_position_object_space
            }

            fragment_program_ref FxMap_HLSL_PS
            {
                param_named ambientColor float4 0.2 0.2 0.2 0.2
            }

            // Normal map
            texture_unit
            {
                texture_alias NormalMap
                texture defaultNM.png
                tex_coord_set 0
                filtering trilinear
            }

            // Base diffuse texture map
            texture_unit
            {
                texture_alias DiffuseMap
                texture defaultDiff.png
                filtering trilinear
                tex_coord_set 1
            }

            // spec map for shininess
            texture_unit
            {
                texture_alias SpecMap
                texture defaultSpec.png
                filtering trilinear
                tex_coord_set 2
            }
        }
    }
}

Note that the GLSL and HLSL techniques use the same textures. For each texture usage type a texture alias is given that describes what the texture is used for. So the first texture unit in the GLSL technique has the same alias as the TUS in the HLSL technique, since it’s the same texture used. The same goes for the second and third texture units.

For demonstration purposes, the GLSL technique makes use of texture unit naming, and therefore the texture alias name does not have to be set since it defaults to the texture unit name. So why not use the default all the time, since it’s less typing? For most situations you can. It’s when you clone a material and then want to change the alias that you must use the texture_alias command in the script. You cannot change the name of a texture unit in a cloned material, so texture_alias provides a facility to assign an alias name.

Now we want to clone the material but only want to change the textures used. We could copy and paste the whole material, but if we decide to change the base material later then we also have to update the copied material in the script. With set_texture_alias, copying a material is very easy now. set_texture_alias is specified at the top of the material definition. All techniques using the specified texture alias will be affected by set_texture_alias.

Format: set_texture_alias <alias_name> <texture_name>

material fxTest : TSNormalSpecMapping
{
    set_texture_alias NormalMap fxTestNMap.png
    set_texture_alias DiffuseMap fxTestDiff.png
    set_texture_alias SpecMap fxTestMap.png
}

The textures in both techniques in the child material will automatically get replaced with the new ones we want to use.

The same process can be done in code, as long as you set up the texture alias names, so there is no need to traverse technique/pass/TUS to change a texture. You just call myMaterialPtr->applyTextureAliases(myAliasTextureNameList), which will update all textures in all texture units that match the alias names in the map container reference you passed as a parameter.
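In code that might look like this (a minimal sketch; the alias and file names are illustrative):

// AliasTextureNamePairList is a map of alias name -> texture name
Ogre::AliasTextureNamePairList aliasList;
aliasList["DiffuseMap"] = "fxTest2Diff.png";
aliasList["SpecMap"] = "fxTest2Map.png";
// myMaterialPtr is a MaterialPtr to the cloned material
myMaterialPtr->applyTextureAliases(aliasList);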

You don’t have to supply all the textures in the copied material.

material fxTest2 : fxTest
{
    set_texture_alias DiffuseMap fxTest2Diff.png
    set_texture_alias SpecMap fxTest2Map.png
}

Material fxTest2 only changes the diffuse and spec maps of material fxTest and uses the same normal map.

Another example:

material fxTest3 : TSNormalSpecMapping
{
    set_texture_alias DiffuseMap fxTest2Diff.png
}

fxTest3 will end up with the default textures for the normal map and spec map set up in the TSNormalSpecMapping material, but will have a different diffuse map. So your base material can define the default textures to use, and then the child materials can override specific textures.

3.1.13 Script Variables

A very powerful new feature in Ogre 1.6 is variables. Variables allow you to parameterize data in materials so that they can become more generalized. This enables greater reuse of scripts by targeting specific customization points. Using variables along with inheritance allows for huge amounts of overrides and easy object reuse.

abstract pass ParentPass
{
    diffuse $diffuse_colour
}

material Test
{
    technique
    {
        pass : ParentPass
        {
            set $diffuse_colour "1 0 0 1"
        }
    }
}

The ParentPass object declares a variable called "diffuse_colour" which is then overridden in the Test material’s pass. The "set" keyword is used to set the value of that variable. The variable assignment follows lexical scoping rules, which means that the value of "1 0 0 1" is only valid inside that pass definition. Variable assignments in outer scopes carry over into inner scopes.

material Test
{
    set $diffuse_colour "1 0 0 1"

    technique
    {
        pass : ParentPass
        {
        }
    }
}

The $diffuse_colour assignment carries down through the technique and into the pass.

3.1.14 Script Import Directive

Imports are a feature introduced to remove ambiguity from script dependencies. When using scripts that inherit from each other but which are defined in separate files, sometimes errors occur because the scripts are loaded in the incorrect order. Using imports removes this issue. The script which is inheriting another can explicitly import its parent’s definition, which will ensure that no errors occur because the parent’s definition was not found.

import * from "parent.material"

material Child : Parent
{
}

The material "Parent" is defined in parent.material and the import ensures that those defi-nitions are found properly. You can also import specific targets from within a file.import Parent from "parent.material"

If there were other definitions in the parent.material file, they would not be imported.

Note, however, that importing does not actually cause objects in the imported script to be fully parsed and created; it just makes the definitions available for inheritance. This has a specific ramification for vertex / fragment program definitions, which must be loaded before any parameters can be specified. You should continue to put common program definitions in .program files to ensure they are fully parsed before being referenced in multiple .material files. The ’import’ command just makes sure you can resolve dependencies between equivalent script definitions (e.g. material to material).

3.2 Compositor Scripts

The compositor framework is a subsection of the OGRE API that allows you to easily define full screen post-processing effects. Compositor scripts offer you the ability to define compositor effects in a script which can be reused and modified easily, rather than having to use the API to define them. You still need to use code to instantiate a compositor against one of your visible viewports, but this is a much simpler process than actually defining the compositor itself.
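For reference, attaching the ’B&W’ compositor defined in the example below to a viewport looks something like this (a minimal sketch):

// vp is the Ogre::Viewport* the effect should be applied to
Ogre::CompositorManager::getSingleton().addCompositor(vp, "B&W");
Ogre::CompositorManager::getSingleton().setCompositorEnabled(vp, "B&W", true);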

Compositor Fundamentals

Performing post-processing effects generally involves first rendering the scene to a texture, either in addition to or instead of the main window. Once the scene is in a texture, you can then pull the scene image into a fragment program and perform operations on it by rendering it through a full screen quad. The target of this post-processing render can be the main result (e.g. a window), or it can be another render texture so that you can perform multi-stage convolutions on the image. You can even ’ping-pong’ the render back and forth between a couple of render textures to perform convolutions which require many iterations, without using a separate texture for each stage. Eventually you’ll want to render the result to the final output, which you do with a full screen quad. This might replace the whole window (thus the main window doesn’t need to render the scene itself), or it might be a combinational effect.

So that we can discuss how to implement these techniques efficiently, a number of definitions are required:

Compositor
    Definition of a fullscreen effect that can be applied to a user viewport. This is what you’re defining when writing compositor scripts as detailed in this section.

Compositor Instance
    An instance of a compositor as applied to a single viewport. You create these based on compositor definitions, See Section 3.2.4 [Applying a Compositor], page 105.

Compositor Chain
    It is possible to enable more than one compositor instance on a viewport at the same time, with one compositor taking the results of the previous one as input. This is known as a compositor chain. Every viewport which has at least one compositor attached to it has a compositor chain. See Section 3.2.4 [Applying a Compositor], page 105.

Target
    This is a RenderTarget, i.e. the place where the result of a series of render operations is sent. A target may be the final output (and this is implicit, you don’t have to declare it), or it may be an intermediate render texture, which you declare in your script with the texture directive (see [compositor texture], page 95). A target which is not the output target has a defined size and pixel format which you can control.

Output Target
    As Target, but this is the single final result of all operations. The size and pixel format of this target cannot be controlled by the compositor since it is defined by the application using it, thus you don’t declare it in your script. However, you do declare a Target Pass for it, see below.

Target Pass
    A Target may be rendered to many times in the course of a composition effect. In particular if you ’ping pong’ a convolution between a couple of textures, you will have more than one Target Pass per Target. Target passes are declared in the script using a ’target’ or ’target_output’ section (see Section 3.2.2 [Compositor Target Passes], page 98), the latter being the final output target pass, of which there can be only one.


Pass
    Within a Target Pass, there are one or more individual passes (see Section 3.2.3 [Compositor Passes], page 100), which perform a very specific action, such as rendering the original scene (or pulling the result from the previous compositor in the chain), rendering a fullscreen quad, or clearing one or more buffers. Typically within a single target pass you will use either a ’render_scene’ pass or a ’render_quad’ pass, not both. Clear can be used with either type.

Loading scripts

Compositor scripts are loaded when resource groups are initialised: OGRE looks in all resource locations associated with the group (see Root::addResourceLocation) for files with the ’.compositor’ extension and parses them. If you want to parse files manually, use CompositorSerializer::parseScript.

Format

Several compositors may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces (’{’, ’}’), and comments indicated by starting a line with ’//’ (note, no nested form comments allowed). The general format is shown in the example below:

// This is a comment
// Black and white effect
compositor B&W
{
    technique
    {
        // Temporary textures
        texture rt0 target_width target_height PF_A8R8G8B8

        target rt0
        {
            // Render output from previous compositor (or original scene)
            input previous
        }

        target_output
        {
            // Start with clear output
            input none

            // Draw a fullscreen quad with the black and white image
            pass render_quad
            {
                // Renders a fullscreen quad with a material
                material Ogre/Compositor/BlackAndWhite
                input 0 rt0
            }
        }
    }
}


Every compositor in the script must be given a name, which is on the line ’compositor <name>’ before the first opening ’{’. This name must be globally unique. It can include path characters (as in the example) to logically divide up your compositors, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string. Names can include spaces but must be surrounded by double quotes, i.e. compositor "My Name".

The major components of a compositor are the Section 3.2.1 [Compositor Techniques], page 95, the Section 3.2.2 [Compositor Target Passes], page 98 and the Section 3.2.3 [Compositor Passes], page 100, which are covered in detail in the following sections.

3.2.1 Techniques

A compositor technique is much like a material technique (see Section 3.1.1 [Techniques], page 17) in that it describes one approach to achieving the effect you’re looking for. A compositor definition can have more than one technique if you wish to provide some fallback should the hardware not support the technique you’d prefer to use. Techniques are evaluated for hardware support based on 2 things:

Material support
    All passes (see Section 3.2.3 [Compositor Passes], page 100) that render a fullscreen quad use a material; for the technique to be supported, all of the materials referenced must have at least one supported material technique. If they don’t, the compositor technique is marked as unsupported and won’t be used.

Texture format support
    This one is slightly more complicated. When you request a [compositor texture], page 95 in your technique, you request a pixel format. Not all formats are natively supported by hardware, especially the floating point formats. However, in this case the hardware will typically downgrade the texture format requested to one that the hardware does support - with compositor effects though, you might want to use a different approach if this is the case. So, when evaluating techniques, the compositor will first look for native support for the exact pixel format you’ve asked for, and will skip onto the next technique if it is not supported, thus allowing you to define other techniques with simpler pixel formats which use a different approach. If it doesn’t find any techniques which are natively supported, it tries again, this time allowing the hardware to downgrade the texture format, and thus should find at least some support for what you’ve asked for.

As with material techniques, compositor techniques are evaluated in the order you define them in the script, so techniques declared first are preferred over those declared later.

Format: technique

Techniques can have the following nested elements:

• [compositor texture], page 95
• [compositor texture_ref], page 97
• [compositor scheme], page 97
• [compositor logic], page 97
• Section 3.2.2 [Compositor Target Passes], page 98


texture

This declares a render texture for use in subsequent target passes (see Section 3.2.2 [Compositor Target Passes], page 98).

Format: texture <Name> <Width> <Height> <Pixel Format> [<MRT Pixel Format2>] [<MRT Pixel FormatN>] [pooled] [gamma] [no_fsaa] [<scope>]

Here is a description of the parameters:

Name
    A name to give the render texture, which must be unique within this compositor. This name is used to reference the texture in Section 3.2.2 [Compositor Target Passes], page 98, when the texture is rendered to, and in Section 3.2.3 [Compositor Passes], page 100, when the texture is used as input to a material rendering a fullscreen quad.

Width, Height
    The dimensions of the render texture. You can either specify a fixed width and height, or you can request that the texture is based on the physical dimensions of the viewport to which the compositor is attached. The options for the latter are ’target_width’, ’target_height’, ’target_width_scaled <factor>’ and ’target_height_scaled <factor>’ - where ’factor’ is the amount by which you wish to multiply the size of the main target to derive the dimensions.

Pixel Format
    The pixel format of the render texture. This affects how much memory it will take, what colour channels will be available, and what precision you will have within those channels. The available options are PF_A8R8G8B8, PF_R8G8B8A8, PF_R8G8B8, PF_FLOAT16_RGBA, PF_FLOAT16_RGB, PF_FLOAT16_R, PF_FLOAT32_RGBA, PF_FLOAT32_RGB, and PF_FLOAT32_R.

pooled
    If present, this directive makes this texture ’pooled’ among compositor instances, which can save some memory.

gamma
    If present, this directive means that sRGB gamma correction will be enabled on writes to this texture. You should remember to include the opposite sRGB conversion when you read this texture back in another material, such as a quad. This option will be automatically enabled if you use a render_scene pass on this texture and the viewport on which the compositor is based has sRGB write support enabled.

no_fsaa
    If present, this directive disables the use of anti-aliasing on this texture. FSAA is only used if this texture is subject to a render_scene pass and FSAA was enabled on the original viewport on which this compositor is based; this option allows you to override it and disable the FSAA if you wish.

scope
    If present, this directive sets the scope for the texture for being accessed by other compositors using the [compositor texture_ref], page 97 directive. There are three options: ’local_scope’ (which is also the default) means that only the compositor defining the texture can access it. ’chain_scope’ means that the compositors after this compositor in the chain can reference its textures, and ’global_scope’ means that the entire application can access the texture. This directive also affects the creation of the textures (global textures are created once and thus can’t be used with the pooled directive, and can’t rely on viewport size).


Example: texture rt0 512 512 PF_R8G8B8A8
Example: texture rt1 target_width target_height PF_FLOAT32_RGB

You can in fact repeat this element if you wish. If you do so, the render texture becomes a Multiple Render Target (MRT), where the GPU writes to multiple textures at once. It is imperative that if you use MRT, the shaders that render to it render to ALL the targets. Not doing so can cause undefined results. It is also important to note that although you can use different pixel formats for each target in an MRT, each one should have the same total bit depth, since most cards do not support independent bit depths. If you try to use this feature on cards that do not support the number of MRTs you've asked for, the technique will be skipped (so you ought to write a fallback technique).

Example: texture mrt_output target_width target_height PF_FLOAT16_RGBA PF_FLOAT16_RGBA chain_scope

texture_ref

This declares a reference to a texture from another compositor to be used in this compositor.

Format: texture_ref <Local Name> <Reference Compositor> <Reference Texture Name>

Here is a description of the parameters:

Local Name
A name to give the referenced texture, which must be unique within this compositor. This name is used to reference the texture in Section 3.2.2 [Compositor Target Passes], page 98, when the texture is rendered to, and in Section 3.2.3 [Compositor Passes], page 100, when the texture is used as input to a material rendering a fullscreen quad.

Reference Compositor
The name of the compositor that we are referencing a texture from.

Reference Texture Name
The name of the texture in the compositor that we are referencing.

Make sure that the texture being referenced is scoped accordingly (either chain or global scope) and placed accordingly during chain creation (if referencing a chain-scoped texture, the compositor must be present in the chain and placed before the compositor referencing it).

Example: texture_ref GBuffer GBufferCompositor mrt_output

scheme

This gives a compositor technique a scheme name, allowing you to manually switch between different techniques for this compositor when instantiated on a viewport by calling CompositorInstance::setScheme.

Format: scheme <Name>


compositor_logic

This connects a compositor to code that it requires in order to function correctly. When an instance of this compositor is created, the compositor logic will be notified and will have the chance to prepare the compositor's operation (for example, adding a listener).

Format: compositor_logic <Name>

Registration of compositor logics is done by name through CompositorManager::registerCompositorLogic.

3.2.2 Target Passes

A target pass is the action of rendering to a given target, either a render texture or the final output. You can update the same render texture multiple times by adding more than one target pass to your compositor script - this is very useful for 'ping pong' renders between a couple of render textures to perform complex convolutions that cannot be done in a single render, such as blurring.

There are two types of target pass, the sort that updates a render texture:

Format: target <Name>

... and the sort that defines the final output render:

Format: target_output

The contents of both are identical; the only real difference is that you can only have a single target_output entry, whilst you can have many target entries. Here are the attributes you can use in a 'target' or 'target_output' section of a .compositor script:
• [compositor target input], page 98
• [only initial], page 99
• [visibility mask], page 99
• [compositor lod bias], page 99
• [material scheme], page 99
• [compositor shadows], page 99
• Section 3.2.3 [Compositor Passes], page 100

Attribute Descriptions

input

Sets the input mode of the target, which tells the target pass what is pulled in before any of its own passes are rendered.

Format: input (none | previous)

Default: input none


none The target will have nothing as input; all the contents of the target must be generated using its own passes. Note this does not mean the target will be empty, just that no data will be pulled in. For it to truly be blank you'd need a 'clear' pass within this target.

previous The target will pull in the previous contents of the viewport. This will be either the original scene if this is the first compositor in the chain, or it will be the output from the previous compositor in the chain if the viewport has multiple compositors enabled.

only_initial

If set to on, this target pass will only execute once, initially, after the effect has been enabled. This can be useful for performing one-off renders, after which the static contents are used by the rest of the compositor.

Format: only_initial (on | off)

Default: only_initial off

visibility_mask

Sets the visibility mask for any render_scene passes performed in this target pass. This is a bitmask (although it must be specified as decimal, not hex) and maps to SceneManager::setVisibilityMask.

Format: visibility_mask <mask>

Default: visibility_mask 4294967295

lod_bias

Set the scene LOD bias for any render_scene passes performed in this target pass. The default is 1.0; everything below that means lower quality, higher means higher quality.

Format: lod_bias <lodbias>

Default: lod_bias 1.0

shadows

Sets whether shadows should be rendered during any render_scene pass performed in this target pass. The default is 'on'.

Format: shadows (on | off)

Default: shadows on


material_scheme

If set, indicates the material scheme to use for any render_scene pass. Useful for performing special-case rendering effects.

Format: material_scheme <scheme name>

Default: None
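
As a brief illustration of how these attributes sit together, here is a sketch of a target section; the texture name 'rt0' and the scheme name 'MyScheme' are hypothetical:

// Illustrative sketch only - 'rt0' and 'MyScheme' are hypothetical names
target rt0
{
    input previous
    only_initial off
    visibility_mask 4294967295
    lod_bias 1.0
    material_scheme MyScheme

    pass render_scene
    {
    }
}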

3.2.3 Compositor Passes

A pass is a single rendering action to be performed in a target pass.

Format: 'pass' (render_quad | clear | stencil | render_scene | render_custom) [custom name]

There are five types of pass:

clear This kind of pass sets the contents of one or more buffers in the target to a fixed value. So this could clear the colour buffer to a fixed colour, set the depth buffer to a certain set of contents, fill the stencil buffer with a value, or any combination of the above.

stencil This kind of pass configures stencil operations for the subsequent passes. It can set the stencil compare function, operations and reference values for you to perform your own stencil effects.

render_scene
This kind of pass performs a regular rendering of the scene. It will use the [visibility mask], page 99, [compositor lod bias], page 99, and [material scheme], page 99 from the parent target pass.

render_quad
This kind of pass renders a quad over the entire render target, using a given material. You will undoubtedly want to pull in the results of other target passes into this operation to perform fullscreen effects.

render_custom
This kind of pass is just a callback to user code for the composition pass specified in the custom name (and registered via CompositorManager::registerCustomCompositionPass) and allows the user to create custom render operations for more advanced effects. This is the only pass type that requires the custom name parameter.

Here are the attributes you can use in a ’pass’ section of a .compositor script:

Available Pass Attributes

• [material], page 101
• [compositor pass input], page 101
• [compositor pass identifier], page 101
• [first render queue], page 101
• [last render queue], page 102
• [compositor pass material scheme], page 102
• [compositor clear], page 102
• [compositor stencil], page 103

material

For passes of type 'render_quad', sets the material used to render the quad. You will want to use shaders in this material to perform fullscreen effects, and use the [compositor pass input], page 101 attribute to map other texture targets into the texture bindings needed by this material.

Format: material <Name>

input

For passes of type 'render_quad', this is how you map one or more local render textures (see [compositor texture], page 95) into the material you're using to render the fullscreen quad. To bind more than one texture, repeat this attribute with different sampler indexes.

Format: input <sampler> <Name> [<MRTIndex>]

sampler The texture sampler to set; must be a number in the range [0, OGRE_MAX_TEXTURE_LAYERS-1].

Name The name of the local render texture to bind, as declared in [compositor texture], page 95 and rendered to in one or more Section 3.2.2 [Compositor Target Passes], page 98.

MRTIndex
If the local texture that you're referencing is a Multiple Render Target (MRT), this identifies the surface from the MRT that you wish to reference (0 is the first surface, 1 the second, etc.).

Example: input 0 rt0
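
Putting material and input together, here is a sketch of a complete 'render_quad' pass binding two local render textures to samplers 0 and 1; the material and texture names are hypothetical:

// Illustrative sketch only - names are hypothetical
pass render_quad
{
    material MyMaterials/Combine
    input 0 rt0
    input 1 rt1
}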

identifier

Associates a numeric identifier with the pass. This is useful for registering a listener with the compositor (CompositorInstance::addListener), and being able to identify which pass it is that's being processed when you get events regarding it. Numbers between 0 and 2^32 are allowed.

Format: identifier <number>

Example: identifier 99945

Default: identifier 0

first_render_queue

For passes of type 'render_scene', this sets the first render queue id that is included in the render. Defaults to the value of RENDER_QUEUE_SKIES_EARLY.

Format: first_render_queue <id>

Default: first_render_queue 0

last_render_queue

For passes of type 'render_scene', this sets the last render queue id that is included in the render. Defaults to the value of RENDER_QUEUE_SKIES_LATE.

Format: last_render_queue <id>

Default: last_render_queue 95

material_scheme

If set, indicates the material scheme to use for this pass only. Useful for performing special-case rendering effects.

This will override the scheme even if one is also set at the target scope.

Format: material_scheme <scheme name>

Default: None

Clear Section

For passes of type ’clear’, this section defines the buffer clearing parameters.

Format: clear

Here are the attributes you can use in a 'clear' section of a .compositor script:
• [compositor clear buffers], page 102
• [compositor clear colour value], page 102
• [compositor clear depth value], page 103
• [compositor clear stencil value], page 103

buffers

Sets the buffers cleared by this pass.

Format: buffers [colour] [depth] [stencil]

Default: buffers colour depth

colour_value

Set the colour used to fill the colour buffer by this pass, if the colour buffer is being cleared ([compositor clear buffers], page 102).

Format: colour_value <red> <green> <blue> <alpha>

Default: colour_value 0 0 0 0

depth_value

Set the depth value used to fill the depth buffer by this pass, if the depth buffer is being cleared ([compositor clear buffers], page 102).

Format: depth_value <depth>

Default: depth_value 1.0

stencil_value

Set the stencil value used to fill the stencil buffer by this pass, if the stencil buffer is being cleared ([compositor clear buffers], page 102).

Format: stencil_value <value>

Default: stencil_value 0.0

Stencil Section

For passes of type ’stencil’, this section defines the stencil operation parameters.

Format: stencil

Here are the attributes you can use in a 'stencil' section of a .compositor script:
• [compositor stencil check], page 103
• [compositor stencil comp func], page 103
• [compositor stencil ref value], page 104
• [compositor stencil mask], page 104
• [compositor stencil fail op], page 104
• [compositor stencil depth fail op], page 105
• [compositor stencil pass op], page 105
• [compositor stencil two sided], page 105

check

Enables or disables the stencil check, thus enabling the use of the rest of the features in this section. The rest of the options in this section do nothing if the stencil check is off.

Format: check (on | off)


comp_func

Sets the function used to perform the following comparison:

(ref_value & mask) comp_func (Stencil Buffer Value & mask)

What happens as a result of this comparison will be one of 3 actions on the stencil buffer, depending on whether the test fails, succeeds but with the depth buffer check still failing, or succeeds with the depth buffer check passing too. You set the actions with [compositor stencil fail op], page 104, [compositor stencil depth fail op], page 105 and [compositor stencil pass op], page 105 respectively. If the stencil check fails, no colour or depth is written to the frame buffer.

Format: comp_func (always_fail | always_pass | less | less_equal | not_equal | greater_equal | greater)

Default: comp_func always_pass

ref_value

Sets the reference value used to compare with the stencil buffer, as described in [compositor stencil comp func], page 103.

Format: ref_value <value>

Default: ref_value 0.0

mask

Sets the mask used to compare with the stencil buffer, as described in [compositor stencil comp func], page 103.

Format: mask <value>

Default: mask 4294967295

fail_op

Sets what to do with the stencil buffer value if the result of the stencil comparison ([compositor stencil comp func], page 103) and the depth comparison is that both fail.

Format: fail_op (keep | zero | replace | increment | decrement | increment_wrap | decrement_wrap | invert)

Default: fail_op keep

These actions mean:

keep Leave the stencil buffer unchanged.


zero Set the stencil value to zero.

replace Set the stencil value to the reference value.

increment Add one to the stencil value, clamping at the maximum value.

decrement Subtract one from the stencil value, clamping at 0.

increment_wrap
Add one to the stencil value, wrapping back to 0 at the maximum.

decrement_wrap
Subtract one from the stencil value, wrapping to the maximum below 0.

invert Invert the stencil value.

depth_fail_op

Sets what to do with the stencil buffer value if the result of the stencil comparison ([compositor stencil comp func], page 103) passes but the depth comparison fails.

Format: depth_fail_op (keep | zero | replace | increment | decrement | increment_wrap | decrement_wrap | invert)

Default: depth_fail_op keep

pass_op

Sets what to do with the stencil buffer value if the result of the stencil comparison ([compositor stencil comp func], page 103) and the depth comparison both pass.

Format: pass_op (keep | zero | replace | increment | decrement | increment_wrap | decrement_wrap | invert)

Default: pass_op keep

two_sided

Enables or disables two-sided stencil operations, which means the inverse of the operations applies to back-facing polygons.

Format: two_sided (on | off)

Default: two_sided off

3.2.4 Applying a Compositor

Adding a compositor instance to a viewport is very simple. All you need to do is:

CompositorManager::getSingleton().addCompositor(viewport, compositorName);


Where viewport is a pointer to your viewport, and compositorName is the name of the compositor to create an instance of. By doing this, a new instance of a compositor will be added to a new compositor chain on that viewport. You can call the method multiple times to add further compositors to the chain on this viewport. By default, each compositor which is added is disabled, but you can change this state by calling:

CompositorManager::getSingleton().setCompositorEnabled(viewport, compositorName, enabledOrDisabled);

For more information on defining and using compositors, see Demo Compositor in the Samples area, together with the Examples.compositor script in the media area.

3.3 Particle Scripts

Particle scripts allow you to define particle systems to be instantiated in your code without having to hard-code the settings themselves in your source code, allowing a very quick turnaround on any changes you make. Particle systems which are defined in scripts are used as templates, and multiple actual systems can be created from them at runtime.

Loading scripts

Particle system scripts are loaded at initialisation time by the system: by default it looks in all common resource locations (see Root::addResourceLocation) for files with the '.particle' extension and parses them. If you want to parse files with a different extension, use the ParticleSystemManager::getSingleton().parseAllSources method with your own extension, or if you want to parse an individual file, use ParticleSystemManager::getSingleton().parseScript.

Once scripts have been parsed, your code is free to instantiate systems based on them using the SceneManager::createParticleSystem() method, which can take both a name for the new system, and the name of the template to base it on (this template name is in the script).

Format

Several particle systems may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ({}), and comments indicated by starting a line with '//' (note: no nested form comments allowed). The general format is shown below in a typical example:

// A sparkly purple fountain
particle_system Examples/PurpleFountain
{
    material Examples/Flare2
    particle_width 20
    particle_height 20
    cull_each false
    quota 10000
    billboard_type oriented_self

    // Area emitter
    emitter Point
    {
        angle 15
        emission_rate 75
        time_to_live 3
        direction 0 1 0
        velocity_min 250
        velocity_max 300
        colour_range_start 1 0 0
        colour_range_end 0 0 1
    }

    // Gravity
    affector LinearForce
    {
        force_vector 0 -100 0
        force_application add
    }

    // Fader
    affector ColourFader
    {
        red -0.25
        green -0.25
        blue -0.25
    }
}

Every particle system in the script must be given a name, which is the line before the first opening '{'; in the example this is 'Examples/PurpleFountain'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your particle systems, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string.

A system can have top-level attributes set using the scripting commands available, such as 'quota' to set the maximum number of particles allowed in the system. Emitters (which create particles) and affectors (which modify particles) are added as nested definitions within the script. The parameters available in the emitter and affector sections are entirely dependent on the type of emitter / affector.

For a detailed description of the core particle system attributes, see the list below:

Available Particle System Attributes

• [quota], page 108
• [particle material], page 108
• [particle width], page 108
• [particle height], page 109
• [cull each], page 109
• [billboard type], page 110
• [billboard origin], page 111
• [billboard rotation type], page 111
• [common direction], page 112
• [common up vector], page 112
• [particle renderer], page 109
• [particle sorted], page 110
• [particle localspace], page 110
• [particle point rendering], page 112
• [particle accurate facing], page 113
• [iteration interval], page 113
• [nonvisible update timeout], page 114

See also: Section 3.3.2 [Particle Emitters], page 114, Section 3.3.5 [Particle Affectors], page 121

3.3.1 Particle System Attributes

This section describes the attributes which you can set on every particle system using scripts. All attributes have default values so all settings are optional in your script.

quota

Sets the maximum number of particles this system is allowed to contain at one time. When this limit is exhausted, the emitters will not be allowed to emit any more particles until some are destroyed (e.g. through their time_to_live running out). Note that you will almost always want to change this, since it defaults to a very low value (particle pools are only ever increased in size, never decreased).

format: quota <max particles>
example: quota 10000
default: 10

material

Sets the name of the material which all particles in this system will use. All particles in a system use the same material, although each particle can tint this material through the use of its colour property.

format: material <material name>
example: material Examples/Flare
default: none (blank material)


particle_width

Sets the width of particles in world coordinates. Note that this property is absolute when billboard_type (see below) is set to 'point' or 'perpendicular_self', but is scaled by the length of the direction vector when billboard_type is 'oriented_common', 'oriented_self' or 'perpendicular_common'.

format: particle_width <width>
example: particle_width 20
default: 100

particle_height

Sets the height of particles in world coordinates. Note that this property is absolute when billboard_type (see below) is set to 'point' or 'perpendicular_self', but is scaled by the length of the direction vector when billboard_type is 'oriented_common', 'oriented_self' or 'perpendicular_common'.

format: particle_height <height>
example: particle_height 20
default: 100

cull_each

All particle systems are culled by the bounding box which contains all the particles in the system. This is normally sufficient for fairly locally constrained particle systems where most particles are either visible or not visible together. However, for those that spread particles over a wider area (e.g. a rain system), you may want to actually cull each particle individually to save on time, since it is far more likely that only a subset of the particles will be visible. You do this by setting the cull_each parameter to true.

format: cull_each <true|false>
example: cull_each true
default: false

renderer

Particle systems do not render themselves; they do it through ParticleRenderer classes. Those classes are registered with a manager in order to provide particle systems with a particular 'look'. OGRE comes configured with a default billboard-based renderer, but more can be added through plugins. Particle renderers are registered with a unique name, and you can use that name in this attribute to determine the renderer to use. The default is 'billboard'.

Particle renderers can have attributes, which can be passed by setting them on the root particle system.

format: renderer <renderer name>
default: billboard

sorted

By default, particles are not sorted. By setting this attribute to 'true', the particles will be sorted with respect to the camera, furthest first. This can make certain rendering effects look better at a small sorting expense.

format: sorted <true|false>
default: false

local_space

By default, particles are emitted into world space, such that if you transform the node to which the system is attached, it will not affect the particles (only the emitters). This tends to give the normal expected behaviour, which is to model how real world particles travel independently from the objects they are emitted from. However, to create some effects you may want the particles to remain attached to the local space the emitter is in and to follow it directly. This option allows you to do that.

format: local_space <true|false>
default: false

billboard_type

This is actually an attribute of the 'billboard' particle renderer (the default), and is an example of passing attributes to a particle renderer by declaring them directly within the system declaration. Particles using the default renderer are rendered using billboards, which are rectangles formed by 2 triangles which rotate to face the given direction. However, there is more than one way to orient a billboard. The classic approach is for the billboard to directly face the camera: this is the default behaviour. However, this arrangement only looks good for particles which are representing something vaguely spherical like a light flare. For more linear effects like laser fire, you actually want the particle to have an orientation of its own.

format: billboard_type <point|oriented_common|oriented_self|perpendicular_common|perpendicular_self>
example: billboard_type oriented_self
default: point

The options for this parameter are:

point The default arrangement; this approximates spherical particles and the billboards always fully face the camera.

oriented_common
Particles are oriented around a common, typically fixed direction vector (see [common direction], page 112), which acts as their local Y axis. The billboard rotates only around this axis, giving the particle some sense of direction. Good for rainstorms, starfields etc. where the particles will be traveling in one direction - this is slightly faster than oriented_self (see below).

oriented_self
Particles are oriented around their own direction vector, which acts as their local Y axis. As the particle changes direction, the billboard reorients itself to face this way. Good for laser fire, fireworks and other 'streaky' particles that should look like they are traveling in their own direction.

perpendicular_common
Particles are perpendicular to a common, typically fixed direction vector (see [common direction], page 112), which acts as their local Z axis, with their local Y axis coplanar with the common direction and the common up vector (see [common up vector], page 112). The billboard never rotates to face the camera; you might want to use a double-sided material to ensure particles are never culled by back-facing. Good for aureolas, rings etc. where the particles will be perpendicular to the ground - this is slightly faster than perpendicular_self (see below).

perpendicular_self
Particles are perpendicular to their own direction vector, which acts as their local Z axis, with their local Y axis coplanar with their own direction vector and the common up vector (see [common up vector], page 112). The billboard never rotates to face the camera; you might want to use a double-sided material to ensure particles are never culled by back-facing. Good for ring stacks etc. where the particles will be perpendicular to their traveling direction.

billboard_origin

Specifies the point which acts as the origin point for all billboard particles, controlling the fine tuning of where a billboard particle appears in relation to its position.

format: billboard_origin <top_left|top_center|top_right|center_left|center|center_right|bottom_left|bottom_center|bottom_right>

example: billboard_origin top_right
default: center

The options for this parameter are:

top_left The billboard origin is the top-left corner.

top_center The billboard origin is the center of the top edge.

top_right The billboard origin is the top-right corner.

center_left The billboard origin is the center of the left edge.

center The billboard origin is the center.

center_right The billboard origin is the center of the right edge.

bottom_left The billboard origin is the bottom-left corner.

bottom_center The billboard origin is the center of the bottom edge.

bottom_right The billboard origin is the bottom-right corner.


billboard_rotation_type

By default, billboard particles rotate their texture coordinates in accordance with the particle rotation. But rotating texture coordinates has some disadvantages: for example, the corners of the texture are lost after rotation, and the corners of the billboard are filled with unwanted texture area when using the wrap address mode or sub-texture sampling. This setting allows you to specify another rotation type.

format: billboard_rotation_type <vertex|texcoord>
example: billboard_rotation_type vertex
default: texcoord

The options for this parameter are:

vertex Billboard particles rotate their vertices around their facing direction in accordance with the particle rotation. Rotating vertices guarantees that texture corners exactly match billboard corners, and thus avoids the disadvantages mentioned above, but takes more time to generate the vertices.

texcoord Billboard particles rotate their texture coordinates in accordance with the particle rotation. Rotating texture coordinates is faster than rotating vertices, but has the disadvantages mentioned above.

common_direction

Only required if [billboard type], page 110 is set to oriented_common or perpendicular_common; this vector is the common direction vector used to orient all particles in the system.

format: common_direction <x> <y> <z>
example: common_direction 0 -1 0
default: 0 0 1

See also: Section 3.3.2 [Particle Emitters], page 114, Section 3.3.5 [Particle Affectors], page 121

common_up_vector

Only required if [billboard type], page 110 is set to perpendicular_self or perpendicular_common; this vector is the common up vector used to orient all particles in the system.

format: common_up_vector <x> <y> <z>
example: common_up_vector 0 1 0
default: 0 1 0

See also: Section 3.3.2 [Particle Emitters], page 114, Section 3.3.5 [Particle Affectors], page 121

point_rendering

This is actually an attribute of the 'billboard' particle renderer (the default), and sets whether or not the BillboardSet will use point rendering rather than manually generated quads.


By default a BillboardSet is rendered by generating geometry for a textured quad in memory, taking into account the size and orientation settings, and uploading it to the video card. The alternative is to use hardware point rendering, which means that only one position needs to be sent per billboard rather than 4, and the hardware sorts out how this is rendered based on the render state.

Using point rendering is faster than generating quads manually, but is more restrictive. The following restrictions apply:
• Only the 'point' orientation type is supported
• Size and appearance of each particle is controlled by the material pass ([point size], page 38, [point size attenuation], page 38, [point sprites], page 38)
• Per-particle size is not supported (stems from the above)
• Per-particle rotation is not supported, and this can only be controlled through texture unit rotation in the material definition
• Only 'center' origin is supported
• Some drivers have an upper limit on the size of points they support - this can even vary between APIs on the same card! Don't rely on point sizes that cause the point sprites to get very large on screen, since they may get clamped on some cards. Upper sizes can range from 64 to 256 pixels.

You will almost certainly want to enable both point attenuation and point sprites in your material pass if you use this option.

accurate_facing

This is actually an attribute of the 'billboard' particle renderer (the default), and sets whether or not the BillboardSet will use a slower but more accurate calculation for facing the billboard to the camera. By default it uses the camera direction, which is faster but means the billboards don't stay in the same orientation as you rotate the camera. The 'accurate_facing on' option makes the calculation based on a vector from each billboard to the camera, which means the orientation is constant even whilst the camera rotates.

format: accurate_facing on|off
default: accurate_facing off

iteration_interval

Usually particle systems are updated based on the frame rate; however, this can give variable results with more extreme frame rate ranges, particularly at lower frame rates. You can use this option to make the update frequency a fixed interval, whereby at lower frame rates, the particle update will be repeated at the fixed interval until the frame time is used up. A value of 0 means the default frame-time iteration.

format: iteration_interval <secs>
example: iteration_interval 0.01
default: iteration_interval 0

nonvisible_update_timeout

Sets when the particle system should stop updating after it hasn't been visible for a while. By default, visible particle systems update all the time, even when not in view. This means that they are guaranteed to be consistent when they do enter view. However, this comes at a cost; updating particle systems can be expensive, especially if they are perpetual.

This option lets you set a 'timeout' on the particle system, so that if it isn't visible for this amount of time, it will stop updating until it is next visible. A value of 0 disables the timeout and always updates.

format: nonvisible_update_timeout <secs>
example: nonvisible_update_timeout 10
default: nonvisible_update_timeout 0
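
To illustrate how these system-level attributes (including renderer attributes such as billboard_type) sit together, here is a sketch of a system declaration; the system and material names are hypothetical and the values purely illustrative:

// Illustrative sketch only - names and values are hypothetical
particle_system Examples/Rain
{
    quota 5000
    material Examples/Droplet
    particle_width 1
    particle_height 10
    cull_each true
    sorted false
    local_space false
    billboard_type oriented_self
    nonvisible_update_timeout 10
}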

3.3.2 Particle Emitters

Particle emitters are classified by 'type', e.g. 'Point' emitters emit from a single point whilst 'Box' emitters emit randomly from an area. New emitters can be added to Ogre by creating plugins. You add an emitter to a system by nesting another section within it, headed with the keyword 'emitter' followed by the name of the type of emitter (case sensitive). Ogre currently supports 'Point', 'Box', 'Cylinder', 'Ellipsoid', 'HollowEllipsoid' and 'Ring' emitters.

It is also possible to 'emit emitters' - that is, have new emitters spawned based on the position of particles. See [Emitting Emitters], page 120.

Particle Emitter Universal Attributes

• [angle], page 115
• [colour], page 115
• [colour range start], page 115
• [colour range end], page 115
• [direction], page 116
• [emission rate], page 116
• [position], page 116
• [velocity], page 116
• [velocity min], page 116
• [velocity max], page 116
• [time to live], page 117
• [time to live min], page 117
• [time to live max], page 117
• [duration], page 117
• [duration min], page 117
• [duration max], page 117
• [repeat delay], page 118
• [repeat delay min], page 118
• [repeat delay max], page 118

See also: Section 3.3 [Particle Scripts], page 106, Section 3.3.5 [Particle Affectors], page 121

3.3.3 Particle Emitter Attributes

This section describes the common attributes of all particle emitters. Specific emitter types may also support their own extra attributes.

angle

Sets the maximum angle (in degrees) by which emitted particles may deviate from the direction of the emitter (see direction). Setting this to 10 allows particles to deviate up to 10 degrees in any direction away from the emitter's direction. A value of 180 means emit in any direction, whilst 0 means emit always exactly in the direction of the emitter.

format: angle <degrees>
example: angle 30
default: 0

colour

Sets a static colour for all particles emitted. Also see the colour_range_start and colour_range_end attributes for setting a range of colours. The format of the colour parameter is "r g b a", where each component is a value from 0 to 1, and the alpha value is optional (assumed to be 1 if not specified).

format: colour <r> <g> <b> [<a>]
example: colour 1 0 0 1
default: 1 1 1 1

colour_range_start & colour_range_end

As the 'colour' attribute, except these 2 attributes must be specified together, and indicate the range of colours available to emitted particles. The actual colour will be randomly chosen between these 2 values.

format: as colour
example (generates random colours between red and blue):
    colour_range_start 1 0 0
    colour_range_end 0 0 1


default: both 1 1 1 1

direction

Sets the direction of the emitter. This is relative to the SceneNode which the particle system is attached to, meaning that, as with other movable objects, changing the orientation of the node will also move the emitter.

format: direction <x> <y> <z>
example: direction 0 1 0
default: 1 0 0

emission_rate

Sets how many particles per second should be emitted. The specific emitter does not have to emit these in a continuous burst - this is a relative parameter, and the emitter may choose to emit all of the second's worth of particles every half-second, for example; the behaviour depends on the emitter. The emission rate will also be limited by the particle system's 'quota' setting.

format: emission_rate <particles per second>
example: emission_rate 50
default: 10

position

Sets the position of the emitter relative to the SceneNode the particle system is attached to.

format: position <x> <y> <z>
example: position 10 0 40
default: 0 0 0

velocity

Sets a constant velocity for all particles at emission time. See also the velocity_min and velocity_max attributes which allow you to set a range of velocities instead of a fixed one.

format: velocity <world units per second>
example: velocity 100
default: 1

velocity_min & velocity_max

As 'velocity', except these attributes set a velocity range and each particle is emitted with a random velocity within this range.


format: as velocity
example:
    velocity_min 50
    velocity_max 100

default: both 1

time_to_live

Sets the number of seconds each particle will 'live' for before being destroyed. NB it is possible for particle affectors to alter this in flight, but this is the value given to particles on emission. See also the time_to_live_min and time_to_live_max attributes which let you set a lifetime range instead of a fixed one.

format: time_to_live <seconds>
example: time_to_live 10
default: 5

time_to_live_min & time_to_live_max

As time_to_live, except this sets a range of lifetimes and each particle gets a random value in between on emission.

format: as time_to_live
example:
    time_to_live_min 2
    time_to_live_max 5

default: both 5

duration

Sets the number of seconds the emitter is active. The emitter can be started again; see [repeat delay], page 118. A value of 0 means infinite duration. See also the duration_min and duration_max attributes which let you set a duration range instead of a fixed one.

format: duration <seconds>
example: duration 2.5
default: 0

duration_min & duration_max

As duration, except these attributes set a variable time range between the min and max values each time the emitter is started.


format: as duration
example:
    duration_min 2
    duration_max 5

default: both 0

repeat_delay

Sets the number of seconds to wait before the emission is repeated when stopped by a limited [duration], page 117. See also the repeat_delay_min and repeat_delay_max attributes which allow you to set a range of repeat delays instead of a fixed one.

format: repeat_delay <seconds>
example: repeat_delay 2.5
default: 0

repeat_delay_min & repeat_delay_max

As repeat_delay, except this sets a range of repeat delays and each time the emitter is started it gets a random value in between.

format: as repeat_delay
example:
    repeat_delay_min 2
    repeat_delay_max 5

default: both 0
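
For example, the following emitter sketch (values purely illustrative) emits for 2 seconds, then waits between 3 and 5 seconds before starting again:

// Illustrative sketch only - values are arbitrary
emitter Point
{
    emission_rate 100
    duration 2
    repeat_delay_min 3
    repeat_delay_max 5
}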

See also: Section 3.3.4 [Standard Particle Emitters], page 118, Section 3.3 [Particle Scripts], page 106, Section 3.3.5 [Particle Affectors], page 121

3.3.4 Standard Particle Emitters

Ogre comes preconfigured with a few particle emitters. New ones can be added by creating plugins: see the Plugin_ParticleFX project as an example of how you would do this (this is where these emitters are implemented).
• [Point Emitter], page 118
• [Box Emitter], page 119
• [Cylinder Emitter], page 119
• [Ellipsoid Emitter], page 120
• [Hollow Ellipsoid Emitter], page 120
• [Ring Emitter], page 120


Point Emitter

This emitter emits particles from a single point, which is its position. This emitter has no additional attributes over and above the standard emitter attributes.

To create a point emitter, include a section like this within your particle system script:

emitter Point
{
    // Settings go here
}

Please note that the name of the emitter (’Point’) is case-sensitive.

Box Emitter

This emitter emits particles from a random location within a 3-dimensional box. Its extra attributes are:

width Sets the width of the box (this is the size of the box along its local X axis, which is dependent on the 'direction' attribute which forms the box's local Z).
format: width <units>
example: width 250
default: 100

height Sets the height of the box (this is the size of the box along its local Y axis, which is dependent on the 'direction' attribute which forms the box's local Z).
format: height <units>
example: height 250
default: 100

depth Sets the depth of the box (this is the size of the box along its local Z axis, which is the same as the 'direction' attribute).
format: depth <units>
example: depth 250
default: 100

To create a box emitter, include a section like this within your particle system script:

emitter Box
{
    // Settings go here
}

Cylinder Emitter

This emitter emits particles in a random direction from within a cylinder area, where the cylinder is oriented along the Z-axis. This emitter has exactly the same parameters as the [Box Emitter], page 119 so there are no additional parameters to consider here - the width and height determine the shape of the cylinder along its axis (if they are different it is an ellipsoid cylinder), and the depth determines the length of the cylinder.

Ellipsoid Emitter

This emitter emits particles from within an ellipsoid shaped area, i.e. a sphere or squashed-sphere area. The parameters are again identical to the [Box Emitter], page 119, except that the dimensions describe the widest points along each of the axes.

Hollow Ellipsoid Emitter

This emitter is just like the [Ellipsoid Emitter], page 120 except that there is a hollow area in the centre of the ellipsoid from which no particles are emitted. Therefore it has 3 extra parameters in order to define this area:

inner_width The width of the inner area which does not emit any particles.

inner_height The height of the inner area which does not emit any particles.

inner_depth The depth of the inner area which does not emit any particles.
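
As a sketch (values purely illustrative), a hollow ellipsoid emitter section might look like this:

// Illustrative sketch only - values are arbitrary
emitter HollowEllipsoid
{
    width 100
    height 100
    depth 100
    inner_width 0.5
    inner_height 0.5
    inner_depth 0.5
}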

Ring Emitter

This emitter emits particles from a ring-shaped area, i.e. a little like the [Hollow Ellipsoid Emitter], page 120 except only in 2 dimensions.

inner_width The width of the inner area which does not emit any particles.

inner_height The height of the inner area which does not emit any particles.

See also: Section 3.3 [Particle Scripts], page 106, Section 3.3.2 [Particle Emitters], page 114

Emitting Emitters

It is possible to spawn new emitters on the expiry of particles, for example to produce 'firework'-style effects. This is controlled via the following directives:

emit_emitter_quota
This parameter is a system-level parameter telling the system how many emitted emitters may be in use at any one time. This is just to allow for the space allocation process.

name This parameter is an emitter-level parameter, giving a name to an emitter. This can then be referred to in another emitter as the new emitter type to spawn when an emitted particle dies.

emit_emitter
This is an emitter-level parameter, and if specified, it means that when particles emitted by this emitter die, they spawn a new emitter of the named type.
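
A sketch of how these directives fit together (the system, material and emitter names are hypothetical): the first emitter names itself, and the second spawns one of those named emitters whenever one of its own particles dies:

// Illustrative sketch only - names and values are hypothetical
particle_system Examples/Fireworks
{
    quota 1000
    emit_emitter_quota 10
    material Examples/Flare

    // Template for the spawned emitters, referenced by name below
    emitter Point
    {
        name explosion
        duration 0.1
        emission_rate 100
        time_to_live 2
    }

    // Launcher: when its particles die, 'explosion' emitters are spawned
    emitter Point
    {
        emission_rate 1
        time_to_live 3
        emit_emitter explosion
    }
}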


3.3.5 Particle Affectors

Particle affectors modify particles over their lifetime. They are classified by 'type', e.g. 'LinearForce' affectors apply a force to all particles, whilst 'ColourFader' affectors alter the colour of particles in flight. New affectors can be added to Ogre by creating plugins. You add an affector to a system by nesting another section within it, headed with the keyword 'affector' followed by the name of the type of affector (case sensitive). Ogre currently supports 'LinearForce' and 'ColourFader' affectors.

Particle affectors actually have no universal attributes; they are all specific to the type of affector.

See also: Section 3.3.6 [Standard Particle Affectors], page 121, Section 3.3 [Particle Scripts], page 106, Section 3.3.2 [Particle Emitters], page 114

3.3.6 Standard Particle Affectors

Ogre comes preconfigured with a few particle affectors. New ones can be added by creating plugins: see the Plugin_ParticleFX project as an example of how you would do this (this is where these affectors are implemented).
• [Linear Force Affector], page 121
• [ColourFader Affector], page 122
• [ColourFader2 Affector], page 122
• [Scaler Affector], page 124
• [Rotator Affector], page 124
• [ColourInterpolator Affector], page 125
• [ColourImage Affector], page 126
• [DeflectorPlane Affector], page 126
• [DirectionRandomiser Affector], page 126

Linear Force Affector

This affector applies a force vector to all particles to modify their trajectory. It can be used for gravity, wind, or any other linear force. Its extra attributes are:

force_vector
Sets the vector for the force to be applied to every particle. The magnitude of this vector determines how strong the force is.

format: force_vector <x> <y> <z>
example: force_vector 50 0 -50
default: 0 -100 0 (a fair gravity effect)

force_application
Sets the way in which the force vector is applied to particle momentum.

format: force_application <add|average>
example: force_application average
default: add

The options are:


average The resulting momentum is the average of the force vector and the particle's current motion. This is self-stabilising, but the speed at which the particle changes direction is non-linear.

add The resulting momentum is the particle's current motion plus the force vector. This is traditional force acceleration, but can potentially result in unlimited velocity.

To create a linear force affector, include a section like this within your particle system script:

affector LinearForce
{
    // Settings go here
}

Please note that the name of the affector type (’LinearForce’) is case-sensitive.

ColourFader Affector

This affector modifies the colour of particles in flight. Its extra attributes are:

red Sets the adjustment to be made to the red component of the particle colour per second.
format: red <delta value>
example: red -0.1
default: 0

green Sets the adjustment to be made to the green component of the particle colour per second.
format: green <delta value>
example: green -0.1
default: 0

blue Sets the adjustment to be made to the blue component of the particle colour per second.
format: blue <delta value>
example: blue -0.1
default: 0

alpha Sets the adjustment to be made to the alpha component of the particle colour per second.
format: alpha <delta value>
example: alpha -0.1
default: 0

To create a colour fader affector, include a section like this within your particle system script:

affector ColourFader
{
    // Settings go here
}


ColourFader2 Affector

This affector is similar to the [ColourFader Affector], page 122, except it introduces two states of colour changes as opposed to just one. The second colour change state is activated once a specified amount of time remains in the particle's life.

red1 Sets the adjustment to be made to the red component of the particle colour per second for the first state.
format: red1 <delta value>
example: red1 -0.1
default: 0

green1 Sets the adjustment to be made to the green component of the particle colour per second for the first state.
format: green1 <delta value>
example: green1 -0.1
default: 0

blue1 Sets the adjustment to be made to the blue component of the particle colour per second for the first state.
format: blue1 <delta value>
example: blue1 -0.1
default: 0

alpha1 Sets the adjustment to be made to the alpha component of the particle colour per second for the first state.
format: alpha1 <delta value>
example: alpha1 -0.1
default: 0

red2 Sets the adjustment to be made to the red component of the particle colour per second for the second state.
format: red2 <delta value>
example: red2 -0.1
default: 0

green2 Sets the adjustment to be made to the green component of the particle colour per second for the second state.
format: green2 <delta value>
example: green2 -0.1
default: 0

blue2 Sets the adjustment to be made to the blue component of the particle colour per second for the second state.
format: blue2 <delta value>
example: blue2 -0.1
default: 0

alpha2 Sets the adjustment to be made to the alpha component of the particle colour per second for the second state.
format: alpha2 <delta value>
example: alpha2 -0.1
default: 0

state_change
When a particle has this much time left to live, it will switch to state 2.

format: state_change <seconds>
example: state_change 2
default: 1

To create a ColourFader2 affector, include a section like this within your particle system script:

affector ColourFader2
{
    // Settings go here
}

Scaler Affector

This affector scales particles in flight. Its extra attributes are:

rate The amount by which to scale the particles in both the x and y direction per second.

To create a scale affector, include a section like this within your particle system script:

affector Scaler
{
    // Settings go here
}

Rotator Affector

This affector rotates particles in flight. This is done by rotating the texture. Its extra attributes are:

rotation_speed_range_start
The start of a range of rotation speeds to be assigned to emitted particles.

format: rotation_speed_range_start <degrees per second>
example: rotation_speed_range_start 90
default: 0

rotation_speed_range_end
The end of a range of rotation speeds to be assigned to emitted particles.

format: rotation_speed_range_end <degrees per second>
example: rotation_speed_range_end 180
default: 0

rotation_range_start
The start of a range of rotation angles to be assigned to emitted particles.

format: rotation_range_start <degrees>
example: rotation_range_start 0
default: 0


rotation_range_end
The end of a range of rotation angles to be assigned to emitted particles.

format: rotation_range_end <degrees>
example: rotation_range_end 360
default: 0

To create a rotator affector, include a section like this within your particle system script:

affector Rotator
{
    // Settings go here
}

ColourInterpolator Affector

Similar to the ColourFader and ColourFader2 affectors, this affector modifies the colour of particles in flight, except it has a variable number of defined stages. It swaps the particle colour over several stages in the life of a particle and interpolates between them. Its extra attributes are:

time0 The point in time of stage 0.
format: time0 <0-1 based on lifetime>
example: time0 0
default: 1

colour0 The colour at stage 0.
format: colour0 <r> <g> <b> [<a>]
example: colour0 1 0 0 1
default: 0.5 0.5 0.5 0.0

time1 The point in time of stage 1.
format: time1 <0-1 based on lifetime>
example: time1 0.5
default: 1

colour1 The colour at stage 1.
format: colour1 <r> <g> <b> [<a>]
example: colour1 0 1 0 1
default: 0.5 0.5 0.5 0.0

time2 The point in time of stage 2.
format: time2 <0-1 based on lifetime>
example: time2 1
default: 1

colour2 The colour at stage 2.
format: colour2 <r> <g> <b> [<a>]
example: colour2 0 0 1 1
default: 0.5 0.5 0.5 0.0

[...]


The number of stages is variable. The maximum number of stages is 6, where time5 and colour5 are the last possible parameters. To create a colour interpolation affector, include a section like this within your particle system script:

affector ColourInterpolator
{
    // Settings go here
}
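
For instance, a three-stage interpolation fading from red through green to blue over a particle's lifetime (using the example values given above) could be declared like this:

affector ColourInterpolator
{
    time0 0
    colour0 1 0 0 1
    time1 0.5
    colour1 0 1 0 1
    time2 1
    colour2 0 0 1 1
}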

ColourImage Affector

This is another affector that modifies the colour of particles in flight, but instead of programmatically defining colours, the colours are taken from a specified image file. The range of colour values begins at the left side of the image and moves to the right over the lifetime of the particle; therefore only the horizontal dimension of the image is used. Its extra attributes are:

image The name of the image file whose horizontal colour range is applied to the particles over their lifetime.
format: image <image name>
example: image rainbow.png
default: none

To create a ColourImage affector, include a section like this within your particle system script:

affector ColourImage
{
    // Settings go here
}

DeflectorPlane Affector

This affector defines a plane which deflects particles which collide with it. The attributes are:

plane_point
A point on the deflector plane. Together with the normal vector it defines the plane.
default: plane_point 0 0 0

plane_normal
The normal vector of the deflector plane. Together with the point it defines the plane.
default: plane_normal 0 1 0

bounce The amount of bouncing when a particle is deflected. 0 means no deflection and 1 stands for 100 percent reflection.
default: bounce 1.0
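
For example, a ground plane at the origin that reflects half of a particle's momentum on impact could be set up like this:

affector DeflectorPlane
{
    plane_point 0 0 0
    plane_normal 0 1 0
    bounce 0.5
}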

DirectionRandomiser Affector

This affector applies randomness to the movement of the particles. Its extra attributes are:

randomness
The amount of randomness to introduce in each axial direction.
example: randomness 5
default: randomness 1

scope The percentage of particles affected in each run of the affector.
example: scope 0.5
default: scope 1.0

keep_velocity
Determines whether the velocity of particles is left unchanged.
example: keep_velocity true
default: keep_velocity false
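
Combining these, here is a sketch of an affector that gently scatters half of the particles on each run while preserving their speed (values purely illustrative):

// Illustrative sketch only - values are arbitrary
affector DirectionRandomiser
{
    randomness 5
    scope 0.5
    keep_velocity true
}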

3.4 Overlay Scripts

Overlay scripts offer you the ability to define overlays in a script which can be reused easily. Whilst you could set up all overlays for a scene in code using the methods of the SceneManager, Overlay and OverlayElement classes, in practice it's a bit unwieldy. Instead you can store overlay definitions in text files which can then be loaded whenever required.

Loading scripts

Overlay scripts are loaded at initialisation time by the system: by default it looks in all common resource locations (see Root::addResourceLocation) for files with the '.overlay' extension and parses them. If you want to parse files with a different extension, use the OverlayManager::getSingleton().parseAllSources method with your own extension, or if you want to parse an individual file, use OverlayManager::getSingleton().parseScript.

Format

Several overlays may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ({}), comments indicated by starting a line with '//' (note: no nested form comments allowed), and inheritance through the use of templates. The general format is shown below in a typical example:

// The name of the overlay comes first
MyOverlays/ANewOverlay
{
    zorder 200

    container Panel(MyOverlayElements/TestPanel)
    {
        // Center it horizontally, put it at the top
        left 0.25
        top 0
        width 0.5
        height 0.1
        material MyMaterials/APanelMaterial

        // Another panel nested in this one
        container Panel(MyOverlayElements/AnotherPanel)
        {
            left 0
            top 0
            width 0.1
            height 0.1
            material MyMaterials/NestedPanel
        }
    }
}

The above example defines a single overlay called 'MyOverlays/ANewOverlay', with 2 panels in it, one nested under the other. It uses relative metrics (the default if no metrics_mode option is found).

Every overlay in the script must be given a name, which is the line before the first opening '{'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your overlays, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string. Within the braces are the properties of the overlay, and any nested elements. The overlay itself only has a single property, 'zorder', which determines how 'high' it is in the stack of overlays if more than one is displayed at the same time. Overlays with higher zorder values are displayed on top.

Adding elements to the overlay

Within an overlay, you can include any number of 2D or 3D elements. You do this by defining a nested block headed by:

'element' if you want to define a 2D element which cannot have children of its own

'container' if you want to define a 2D container object (which may itself have nested containers or elements)

The element and container blocks are pretty much identical apart from their ability to store nested blocks.

’container’ / ’element’ blocks

These are delimited by curly braces. The format for the header preceding the first brace is:

[container | element] <type name> ( <instance name>) [: <template name>]...

type name
Must resolve to the name of an OverlayElement type which has been registered with the OverlayManager. Plugins register with the OverlayManager to advertise their ability to create elements, and at this time advertise the name of the type. OGRE comes preconfigured with types 'Panel', 'BorderPanel' and 'TextArea'.

instance name
Must be a name unique among all other elements / containers by which to identify the element. Note that you can obtain a pointer to any named element by calling OverlayManager::getSingleton().getOverlayElement(name).

template name
Optional template on which to base this item. See templates.

The properties which can be included within the braces depend on the custom type. However, the following are always valid:
• [metrics mode], page 131
• [horz align], page 131
• [vert align], page 132
• [left], page 132
• [top], page 133
• [width], page 133
• [height], page 133
• [overlay material], page 134
• [caption], page 134

Templates

You can use templates to create numerous elements with the same properties. A template is an abstract element and it is not added to an overlay. It acts as a base class that elements can inherit and get its default properties. To create a template, the keyword ’template’ must be the first word in the element definition (before container or element). The template element is created in the topmost scope - it is NOT specified in an Overlay. It is recommended that you define templates in a separate overlay, though this is not essential. Having templates defined in a separate file will allow different look & feels to be easily substituted.

Elements can inherit a template in a similar way to C++ inheritance - by using the : operator on the element definition. The : operator is placed after the closing bracket of the name (separated by a space). The name of the template to inherit is then placed after the : operator (also separated by a space).

A template can contain template children which are created when the template is subclassed and instantiated. Using the template keyword for the children of a template is optional but recommended for clarity, as the children of a template are always going to be templates themselves.

template container BorderPanel(MyTemplates/BasicBorderPanel)
{
    left 0
    top 0
    width 1
    height 1

    // setup the texture UVs for a borderpanel
    // do this in a template so it doesn't need to be redone everywhere
    material Core/StatsBlockCenter
    border_size 0.05 0.05 0.06665 0.06665
    border_material Core/StatsBlockBorder
    border_topleft_uv     0.0000 1.0000 0.1914 0.7969
    border_top_uv         0.1914 1.0000 0.8086 0.7969
    border_topright_uv    0.8086 1.0000 1.0000 0.7969
    border_left_uv        0.0000 0.7969 0.1914 0.2148
    border_right_uv       0.8086 0.7969 1.0000 0.2148
    border_bottomleft_uv  0.0000 0.2148 0.1914 0.0000
    border_bottom_uv      0.1914 0.2148 0.8086 0.0000
    border_bottomright_uv 0.8086 0.2148 1.0000 0.0000
}
template container Button(MyTemplates/BasicButton) : MyTemplates/BasicBorderPanel
{
    left 0.82
    top 0.45
    width 0.16
    height 0.13
    material Core/StatsBlockCenter
    border_up_material Core/StatsBlockBorder/Up
    border_down_material Core/StatsBlockBorder/Down
}
template element TextArea(MyTemplates/BasicText)
{
    font_name Ogre
    char_height 0.08
    colour_top 1 1 0
    colour_bottom 1 0.2 0.2
    left 0.03
    top 0.02
    width 0.12
    height 0.09
}

MyOverlays/AnotherOverlay
{
    zorder 490
    container BorderPanel(MyElements/BackPanel) : MyTemplates/BasicBorderPanel
    {
        left 0
        top 0
        width 1
        height 1

        container Button(MyElements/HostButton) : MyTemplates/BasicButton
        {
            left 0.82
            top 0.45
            caption MyTemplates/BasicText HOST
        }

        container Button(MyElements/JoinButton) : MyTemplates/BasicButton
        {
            left 0.82
            top 0.60
            caption MyTemplates/BasicText JOIN
        }
    }
}

The above example uses templates to define a button. Note that the Button template inherits from the BorderPanel template. This reduces the number of attributes needed to instantiate a button.

Also note that instantiating a Button requires a template name for the caption attribute. So templates can also be used by elements that need dynamic creation of child elements (the Button creates a TextArea element in this case for its caption).

See Section 3.4.1 [OverlayElement Attributes], page 131, and Section 3.4.2 [Standard OverlayElements], page 135.

3.4.1 OverlayElement Attributes

These attributes are valid within the braces of a ’container’ or ’element’ block in an overlay script. They must each be on their own line. Ordering is unimportant.

metrics_mode

Sets the units which will be used to size and position this element.

Format: metrics_mode <pixels|relative>
Example: metrics_mode pixels

This can be used to change the way that all measurement attributes in the rest of this element are interpreted. In relative mode, they are interpreted as a parametric value from 0 to 1, as a proportion of the width / height of the screen. In pixels mode, they are simply pixel offsets.

Default: metrics_mode relative

horz_align

Sets the horizontal alignment of this element, in terms of where the horizontal origin is.

Format: horz_align <left|center|right>
Example: horz_align center


This can be used to change where the origin is deemed to be for the purposes of any horizontal positioning attributes of this element. By default the origin is deemed to be the left edge of the screen, but if you change this you can center or right-align your elements. Note that setting the alignment to center or right does not automatically force your elements to appear in the center or at the right edge; you just have to treat that point as the origin and adjust your coordinates appropriately. This is more flexible because you can choose to position your element anywhere relative to that origin. For example, if your element was 10 pixels wide, you would use a ’left’ property of -10 to align it exactly to the right edge, or -20 to leave a gap but still make it stick to the right edge.

Note that you can use this property in both relative and pixel modes, but it is most useful in pixel mode.

Default: horz_align left
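For instance, sticking a 10 pixel wide panel to the right edge as described above might look like this (names and values are illustrative):

container Panel(MyElements/RightStuck)
{
    metrics_mode pixels
    horz_align right
    left -10
    top 0
    width 10
    height 10
    material MyMaterials/APanelMaterial
}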

vert_align

Sets the vertical alignment of this element, in terms of where the vertical origin is.

Format: vert_align <top|center|bottom>
Example: vert_align center

This can be used to change where the origin is deemed to be for the purposes of any vertical positioning attributes of this element. By default the origin is deemed to be the top edge of the screen, but if you change this you can center or bottom-align your elements. Note that setting the alignment to center or bottom does not automatically force your elements to appear in the center or at the bottom edge; you just have to treat that point as the origin and adjust your coordinates appropriately. This is more flexible because you can choose to position your element anywhere relative to that origin. For example, if your element was 50 pixels high, you would use a ’top’ property of -50 to align it exactly to the bottom edge, or -70 to leave a gap but still make it stick to the bottom edge.

Note that you can use this property in both relative and pixel modes, but it is most useful in pixel mode.

Default: vert_align top

left

Sets the horizontal position of the element relative to its parent.

Format: left <value>
Example: left 0.5


Positions are relative to the parent (the top-left of the screen if the parent is an overlay, the top-left of the parent otherwise) and are expressed in terms of a proportion of screen size. Therefore 0.5 is half-way across the screen.

Default: left 0

top

Sets the vertical position of the element relative to its parent.

Format: top <value>
Example: top 0.5

Positions are relative to the parent (the top-left of the screen if the parent is an overlay, the top-left of the parent otherwise) and are expressed in terms of a proportion of screen size. Therefore 0.5 is half-way down the screen.

Default: top 0

width

Sets the width of the element as a proportion of the size of the screen.

Format: width <value>
Example: width 0.25

Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are not relative to the parent; this is common in windowing systems, where the top and left are relative but the size is absolute.

Default: width 1

height

Sets the height of the element as a proportion of the size of the screen.

Format: height <value>
Example: height 0.25


Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are not relative to the parent; this is common in windowing systems, where the top and left are relative but the size is absolute.

Default: height 1

material

Sets the name of the material to use for this element.

Format: material <name>
Example: material Examples/TestMaterial

This sets the base material which this element will use. Each type of element may interpret this differently; for example the OGRE element ’Panel’ treats this as the background of the panel, whilst ’BorderPanel’ interprets this as the material for the center area only. Materials should be defined in .material scripts.

Note that using a material in an overlay element automatically disables lighting and depth checking on this material. Therefore you should not use the same material as is used for real 3D objects for an overlay.

Default: none

caption

Sets a text caption for the element.

Format: caption <string>
Example: caption This is a caption

Not all elements support captions, so each element is free to disregard this if it wants. However, a general text caption is so common to many elements that it is included in the generic interface to make it simpler to use. This is a common feature in GUI systems.

Default: blank

rotation

Sets the rotation of the element.

Format: rotation <angle in degrees> <axis x> <axis y> <axis z>
Example: rotation 30 0 0 1


Default: none

3.4.2 Standard OverlayElements

Although OGRE’s OverlayElement and OverlayContainer classes are designed to be extended by application developers, there are a few elements which come as standard with Ogre. These include:

• [Panel], page 135

• [BorderPanel], page 135

• [TextArea], page 136

This section describes how you define their custom attributes in an .overlay script, but you can also change these custom properties in code if you wish. You do this by calling setParameter(paramname, value). You may wish to use the StringConverter class to convert your types to and from strings.
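For example, a minimal sketch of doing this in code (the element name is illustrative, and the parameters shown assume the element is a Panel):

Ogre::OverlayElement* elem =
    Ogre::OverlayManager::getSingleton().getOverlayElement("MyElements/BackPanel");

// setParameter always takes strings; simple values can be written directly
elem->setParameter("transparent", "false");

// StringConverter converts typed values into the string form setParameter expects
elem->setParameter("uv_coords",
    Ogre::StringConverter::toString(Ogre::Vector4(0.0, 0.0, 1.0, 1.0)));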

Panel (container)

This is the most bog-standard container you can use. It is a rectangular area which can contain other elements (or containers) and may or may not have a background, which can be tiled however you like. The background material is determined by the material attribute, but is only displayed if transparency is off.

Attributes:

transparent <true | false>
    If set to ’true’ the panel is transparent and is not rendered itself; it is just used as a grouping level for its children.

tiling <layer> <x_tile> <y_tile>
    Sets the number of times the texture(s) of the material are tiled across the panel in the x and y direction. <layer> is the texture layer, from 0 to the number of texture layers in the material minus one. By setting tiling per layer you can create some nice multitextured backdrops for your panels; this works especially well when you animate one of the layers.

uv_coords <topleft_u> <topleft_v> <bottomright_u> <bottomright_v>
    Sets the texture coordinates to use for this panel.
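Putting these together, a Panel using its custom attributes might be declared like this (names and values are illustrative):

container Panel(MyElements/TiledBackdrop)
{
    left 0
    top 0
    width 1
    height 1
    material MyMaterials/Backdrop
    transparent false
    tiling 0 4 4
    uv_coords 0.0 0.0 1.0 1.0
}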

BorderPanel (container)

This is a slightly more advanced version of Panel, where instead of just a single flat panel, the panel has a separate border which resizes with the panel. It does this by taking an approach very similar to the use of HTML tables for bordered content: the panel is rendered as 9 square areas, with the center area being rendered with the main material (as with Panel) and the outer 8 areas (the 4 corners and the 4 edges) rendered with a separate border material. The advantage of rendering the corners separately from the edges is that the edge textures can be designed so that they can be stretched without distorting them, meaning the single texture can serve any size panel.

Attributes:


border_size <left> <right> <top> <bottom>
    The size of the border at each edge, as a proportion of the size of the screen. This lets you have different size borders at each edge if you like, or you can use the same value 4 times to create a constant size border.

border_material <name>
    The name of the material to use for the border. This is normally a different material to the one used for the center area, because the center area is often tiled, which means you can’t put border areas in there. You must put all the images you need for all the corners and the sides into a single texture.

border_topleft_uv <u1> <v1> <u2> <v2>
    [also border_topright_uv, border_bottomleft_uv, border_bottomright_uv]; The texture coordinates to be used for the corner areas of the border. 4 coordinates are required, 2 for the top-left corner of the square, 2 for the bottom-right of the square.

border_left_uv <u1> <v1> <u2> <v2>
    [also border_right_uv, border_top_uv, border_bottom_uv]; The texture coordinates to be used for the edge areas of the border. 4 coordinates are required, 2 for the top-left corner, 2 for the bottom-right. Note that you should design the texture so that the left & right edges can be stretched / squashed vertically and the top and bottom edges can be stretched / squashed horizontally without detrimental effects.

TextArea (element)

This is a generic element that you can use to render text. It uses fonts which can be defined in code using the FontManager and Font classes, or which have been predefined in .fontdef files. See the font definitions section for more information.

Attributes:

font_name <name>
    The name of the font to use. This font must be defined in a .fontdef file to ensure it is available at scripting time.

char_height <height>
    The height of the letters as a proportion of the screen height. Character widths may vary because OGRE supports proportional fonts, but will be based on this constant height.

colour <red> <green> <blue>
    A solid colour to render the text in. Often fonts are defined in monochrome, so this allows you to colour them in nicely and use the same texture for multiple different coloured text areas. The colour elements should all be expressed as values between 0 and 1. If you use predrawn fonts which are already full colour then you don’t need this.

colour_bottom <red> <green> <blue> / colour_top <red> <green> <blue>
    As an alternative to a solid colour, you can colour the text differently at the top and bottom to create a gradient colour effect which can be very effective.

alignment <left | center | right>
    Sets the horizontal alignment of the text. This is different from the horz_align parameter.

space_width <width>
    Sets the width of a space in relation to the screen.
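For instance, a TextArea using these attributes might be declared like this (names and values are illustrative):

element TextArea(MyElements/StatusText)
{
    metrics_mode relative
    left 0.5
    top 0.02
    width 0.3
    height 0.06
    font_name Ogre
    char_height 0.05
    colour_top 1 1 1
    colour_bottom 0.7 0.7 0.7
    alignment center
}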


3.5 Font Definition Scripts

Ogre uses texture-based fonts to render the TextAreaOverlayElement. You can also use the Font object for your own purpose if you wish. The final form of a font is a Material object generated by the font, and a set of ’glyph’ (character) texture coordinate information.

There are 2 ways you can get a font into OGRE:

1. Design a font texture yourself using an art package or font generator tool

2. Ask OGRE to generate a font texture based on a truetype font

The former gives you the most flexibility and the best performance (in terms of startup times), but the latter is convenient if you want to quickly use a font without having to generate the texture yourself. I suggest prototyping using the latter and changing to the former for your final solution.

All font definitions are held in .fontdef files, which are parsed by the system at startup time. Each .fontdef file can contain multiple font definitions. The basic format of an entry in the .fontdef file is:

<font_name>
{
    type <image | truetype>
    source <image file | truetype font file>
    ...
    ... custom attributes depending on type
}

Using an existing font texture

If you have one or more artists working with you, no doubt they can produce you a very nice font texture. OGRE supports full colour font textures, or alternatively you can keep them monochrome / greyscale and use TextArea’s colouring feature. Font textures should always have an alpha channel, preferably an 8-bit alpha channel such as that supported by TGA and PNG files, because it can result in much nicer edges. To use an existing texture, here are the settings you need:

type image
    This just tells OGRE you want a pre-drawn font.

source <filename>
    This is the name of the image file you want to load. This will be loaded from the standard TextureManager resource locations and can be of any type OGRE supports, although JPEG is not recommended because of the lack of alpha and the lossy compression. I recommend PNG format, which has both good lossless compression and an 8-bit alpha channel.

glyph <character> <u1> <v1> <u2> <v2>
    This provides the texture coordinates for the specified character. You must repeat this for every character you have in the texture. The first 2 numbers are the x and y of the top-left corner, the second two are the x and y of the bottom-right corner. Note that you really should use a common height for all characters, but widths can vary because of proportional fonts.


    ’character’ is either an ASCII character for non-extended 7-bit ASCII, or for extended glyphs, a unicode decimal value, which is identified by preceding the number with a ’u’ - e.g. ’u0546’ denotes unicode value 546.
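Putting this together, a pre-drawn font definition might look like this (the file name and texture coordinates are illustrative):

MyFonts/HandDrawn
{
    type image
    source handdrawn_font.png

    glyph A 0.0 0.0 0.1 0.125
    glyph B 0.1 0.0 0.2 0.125
    glyph u0546 0.2 0.0 0.3 0.125
}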

A note for Windows users: I recommend using BitmapFontBuilder (http://www.lmnopc.com/bitmapfontbuilder/), a free tool which will generate a texture and export character widths for you; you can find a tool for converting the binary output from this into ’glyph’ lines in the Tools folder.

Generating a font texture

You can also generate font textures on the fly using truetype fonts. I don’t recommend heavy use of this in production work because rendering the texture can take several seconds per font, which adds to the loading times. However it is a very nice way of quickly getting text output in a font of your choice.

Here are the attributes you need to supply:

type truetype
    Tells OGRE to generate the texture from a font.

source <ttf file>
    The name of the ttf file to load. This will be searched for in the common resource locations and in any resource locations added to FontManager.

size <size in points>
    The size at which to generate the font, in standard points. Note this only affects how big the characters are in the font texture, not how big they are on the screen. You should tailor this depending on how large you expect to render the fonts, because generating a large texture will result in blurry characters when they are scaled very small (because of the mipmapping), and conversely generating a small font will result in blocky characters if large text is rendered.

resolution <dpi>
    The resolution in dots per inch; this is used in conjunction with the point size to determine the final size. 72 / 96 dpi is normal.

antialias_colour <true|false>
    This is an optional flag, which defaults to ’false’. The generator will antialias the font by default using the alpha component of the texture, which will look fine if you use alpha blending to render your text (this is the default assumed by TextAreaOverlayElement, for example). If, however, you wish to use a colour based blend like add or modulate in your own code, you should set this to ’true’ so the colour values are anti-aliased too. If you set this to true and use alpha blending, you’ll find the edges of your font are antialiased too quickly, resulting in a ’thin’ look to your fonts, because not only is the alpha blending the edges, the colour is fading too. Leave this option at the default if in doubt.

code_points nn-nn [nn-nn] ..
    This directive allows you to specify which unicode code points should be generated as glyphs into the font texture. If you don’t specify this, code points 33-166 will be generated by default, which covers the basic Latin 1 glyphs. If you use this flag, you should specify a space-separated list of inclusive code point ranges of the form ’start-end’. Numbers must be decimal.
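A complete truetype entry might therefore look like this (the font file name is illustrative):

MyFonts/Scalable
{
    type truetype
    source myfont.ttf
    size 16
    resolution 96
    antialias_colour false
    code_points 33-166
}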

You can also create new fonts at runtime by using the FontManager if you wish.


4 Mesh Tools

There are a number of mesh tools available with OGRE to help you manipulate your meshes.

Section 4.1 [Exporters], page 140
    For getting data out of modellers and into OGRE.

Section 4.2 [XmlConverter], page 140
    For converting meshes and skeletons to/from XML.

Section 4.3 [MeshUpgrader], page 141
    For upgrading binary meshes from one version of OGRE to another.

4.1 Exporters

Exporters are plugins to 3D modelling tools which write meshes and skeletal animation to file formats which OGRE can use for realtime rendering. The files the exporters write end in .mesh and .skeleton respectively.

Each exporter has to be written specifically for the modeller in question, although they all use a common set of facilities provided by the classes MeshSerializer and SkeletonSerializer. They also normally require you to own the modelling tool.

All the exporters here can be built from the source code, or you can download precompiled versions from the OGRE web site.

A Note About Modelling / Animation For OGRE

There are a few rules when creating an animated model for OGRE:
• You must have no more than 4 weighted bone assignments per vertex. If you have more, OGRE will eliminate the lowest weighted assignments and re-normalise the other weights. This limit is imposed by hardware blending limitations.
• All vertices must be assigned to at least one bone - assign static vertices to the root bone.
• At the very least each bone must have a keyframe at the beginning and end of the animation.

If you’re creating unanimated meshes, then you do not need to be concerned with the above.

Full documentation for each exporter is provided along with the exporter itself, and there is a list of the currently supported modelling tools in the OGRE Wiki at http://www.ogre3d.org/wiki/index.php/Exporters.

4.2 XmlConverter

The OgreXmlConverter tool can convert binary .mesh and .skeleton files to XML and back again - this is a very useful tool for debugging the contents of meshes, or for exchanging mesh data easily - many of the modeller mesh exporters export to XML because it is simpler to do, and OgreXmlConverter can then produce a binary from it. Other than simplicity, the other advantage is that OgreXmlConverter can generate additional information for the mesh, like bounding regions and level-of-detail reduction.

Syntax:


Usage: OgreXMLConverter sourcefile [destfile]

sourcefile = name of file to convert
destfile   = optional name of file to write to. If you don't
             specify this OGRE works it out through the extension
             and the XML contents if the source is XML. For example
             test.mesh becomes test.xml, test.xml becomes test.mesh
             if the XML document root is <mesh> etc.
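For example, to round-trip a mesh through XML (file names are illustrative):

OgreXMLConverter robot.mesh robot.mesh.xml
OgreXMLConverter robot.mesh.xml robot.mesh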

When converting XML to .mesh, you will be prompted to (re)generate level-of-detail (LOD) information for the mesh - you can choose to skip this part if you wish, but doing it will allow you to make your mesh reduce in detail automatically when it is loaded into the engine. The engine uses a complex algorithm to determine the best parts of the mesh to reduce in detail depending on many factors such as the curvature of the surface, the edges of the mesh and seams at the edges of textures and smoothing groups - taking advantage of it is advised to make your meshes more scalable in real scenes.

4.3 MeshUpgrader

This tool is provided to allow you to upgrade your meshes when the binary format changes - sometimes we alter it to add new features and as such you need to keep your own assets up to date. This tool has a very simple syntax:

OgreMeshUpgrade <oldmesh> <newmesh>
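For example (file names illustrative):

OgreMeshUpgrade robot_old.mesh robot.mesh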

The OGRE release notes will notify you when this is necessary with a new release.


5 Hardware Buffers

Vertex buffers, index buffers and pixel buffers inherit most of their features from the HardwareBuffer class. The general premise with a hardware buffer is that it is an area of memory with which you can do whatever you like; there is no format (vertex or otherwise) associated with the buffer itself - that is entirely up to interpretation by the methods that use it - in that way, a HardwareBuffer is just like an area of memory you might allocate using ’malloc’ - the difference being that this memory is likely to be located in GPU or AGP memory.

5.1 The Hardware Buffer Manager

The HardwareBufferManager class is the factory hub of all the objects in the new geometry system. You create and destroy the majority of the objects you use to define geometry through this class. It’s a Singleton, so you access it by doing HardwareBufferManager::getSingleton() - however be aware that it is only guaranteed to exist after the RenderSystem has been initialised (after you call Root::initialise); this is because the objects created are invariably API-specific, although you will deal with them through one common interface.

For example:

VertexDeclaration* decl =
    HardwareBufferManager::getSingleton().createVertexDeclaration();

HardwareVertexBufferSharedPtr vbuf =
    HardwareBufferManager::getSingleton().createVertexBuffer(
        3*sizeof(Real), // size of one whole vertex
        numVertices, // number of vertices
        HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
        false); // no shadow buffer

Don’t worry about the details of the above; we’ll cover that in the later sections. The important thing to remember is to always create objects through the HardwareBufferManager - don’t use ’new’ (it won’t work anyway in most cases).

5.2 Buffer Usage

Because the memory in a hardware buffer is likely to be under significant contention during the rendering of a scene, the kind of access you need to the buffer over the time it is used is extremely important; whether you need to update the contents of the buffer regularly, and whether you need to be able to read information back from it, are all important factors in how the graphics card manages the buffer. The method and exact parameters used to create a buffer depend on whether you are creating an index or vertex buffer (See Section 5.6 [Hardware Vertex Buffers], page 145 and Section 5.7 [Hardware Index Buffers], page 149); however one creation parameter is common to them both - the ’usage’.

The most optimal type of hardware buffer is one which is not updated often, and is never read from. The usage parameter of createVertexBuffer or createIndexBuffer can be one of the following:

HBU_STATIC
    This means you do not need to update the buffer very often, but you might occasionally want to read from it.


HBU_STATIC_WRITE_ONLY
    This means you do not need to update the buffer very often, and you do not need to read from it. However, you may read from its shadow buffer if you set one up (See Section 5.3 [Shadow Buffers], page 143). This is the optimal buffer usage setting.

HBU_DYNAMIC
    This means you expect to update the buffer often, and that you may wish to read from it. This is the least optimal buffer setting.

HBU_DYNAMIC_WRITE_ONLY
    This means you expect to update the buffer often, but that you never want to read from it. However, you may read from its shadow buffer if you set one up (See Section 5.3 [Shadow Buffers], page 143). If you use this option, and replace the entire contents of the buffer every frame, then you should use HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE instead, since that has better performance characteristics on some platforms.

HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE
    This means that you expect to replace the entire contents of the buffer on an extremely regular basis, most likely every frame. By selecting this option, you free the system up from having to be concerned about losing the existing contents of the buffer at any time, because if it does lose them, you will be replacing them next frame anyway. On some platforms this can make a significant performance difference, so you should try to use this whenever you have a buffer you need to update regularly. Note that if you create a buffer this way, you should use the HBL_DISCARD flag when locking the contents of it for writing.

Choosing the usage of your buffers carefully is important to getting optimal performance out of your geometry. If you have a situation where you need to update a vertex buffer often, consider whether you actually need to update all the parts of it, or just some. If it’s the latter, consider using more than one buffer, with only the data you need to modify in the HBU_DYNAMIC buffer.

Always try to use the WRITE_ONLY forms. This just means that you cannot read directly from the hardware buffer, which is good practice because reading from hardware buffers is very slow. If you really need to read data back, use a shadow buffer, described in the next section.

5.3 Shadow Buffers

As discussed in the previous section, reading data back from a hardware buffer performs very badly. However, if you have a cast-iron need to read the contents of the vertex buffer, you should set the ’shadowBuffer’ parameter of createVertexBuffer or createIndexBuffer to ’true’. This causes the hardware buffer to be backed with a system memory copy, which you can read from with no more penalty than reading ordinary memory. The catch is that when you write data into this buffer, it will first update the system memory copy, then it will update the hardware buffer, as a separate copying process - therefore this technique has an additional overhead when writing data. Don’t use it unless you really need it.
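A rough sketch of this in code (’numVertices’ is assumed from context; the buffer layout matches the creation examples later in this chapter):

// Create the buffer backed by a system-memory shadow copy
HardwareVertexBufferSharedPtr vbuf =
    HardwareBufferManager::getSingleton().createVertexBuffer(
        3*sizeof(Real), numVertices,
        HardwareBuffer::HBU_STATIC_WRITE_ONLY,
        true); // 'true' requests the shadow buffer

// Reading now only touches the system-memory copy, not the hardware buffer
const Real* pSrc = static_cast<const Real*>(
    vbuf->lock(HardwareBuffer::HBL_READ_ONLY));
// ... inspect the vertex data ...
vbuf->unlock();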

5.4 Locking buffers

In order to read or update a hardware buffer, you have to ’lock’ it. This performs 2 functions - it tells the card that you want access to the buffer (which can have an effect on its rendering queue), and it returns a pointer which you can manipulate. Note that if you’ve asked to read the buffer (and remember, you really shouldn’t unless you’ve set the buffer up with a shadow buffer),


the contents of the hardware buffer will have been copied into system memory somewhere in order for you to get access to it. For the same reason, when you’re finished with the buffer you must unlock it; if you locked the buffer for writing, this will trigger the process of uploading the modified information to the graphics hardware.

Lock parameters

When you lock a buffer, you call one of the following methods:

// Lock the entire buffer
pBuffer->lock(lockType);
// Lock only part of the buffer
pBuffer->lock(start, length, lockType);

The first call locks the entire buffer, the second locks only the section from ’start’ (as a byte offset), for ’length’ bytes. This could be faster than locking the entire buffer since less is transferred, but not if you later update the rest of the buffer too, because doing it in small chunks like this means you cannot use HBL_DISCARD (see below).

The lockType parameter can have a large effect on the performance of your application, especially if you are not using a shadow buffer.

HBL_NORMAL
    This kind of lock allows reading and writing from the buffer - it’s also the least optimal because basically you’re telling the card you could be doing anything at all. If you’re not using a shadow buffer, it requires the buffer to be transferred from the card and back again. If you’re using a shadow buffer the effect is minimal.

HBL_READ_ONLY
    This means you only want to read the contents of the buffer. Best used when you created the buffer with a shadow buffer, because in that case the data does not have to be downloaded from the card.

HBL_DISCARD
    This means you are happy for the card to discard the entire current contents of the buffer. Implicitly this means you are not going to read the data - it also means that the card can avoid any stalls if the buffer is currently being rendered from, because it will actually give you an entirely different one. Use this wherever possible when you are locking a buffer which was not created with a shadow buffer. If you are using a shadow buffer it matters less, although with a shadow buffer it’s preferable to lock the entire buffer at once, because that allows the shadow buffer to use HBL_DISCARD when it uploads the updated contents to the real buffer.

HBL_NO_OVERWRITE
    This is useful if you are locking just part of the buffer and thus cannot use HBL_DISCARD. It tells the card that you promise not to modify any section of the buffer which has already been used in a rendering operation this frame. Again this is only useful on buffers with no shadow buffer.

Once you have locked a buffer, you can use the pointer returned however you wish (just don’t bother trying to read the data that’s there if you’ve used HBL_DISCARD, or write the data if you’ve used HBL_READ_ONLY). Modifying the contents depends on the type of buffer; see Section 5.6 [Hardware Vertex Buffers], page 145 and Section 5.7 [Hardware Index Buffers], page 149.
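As an illustration, a partial mid-frame update might look like this sketch (’vbuf’, ’vertexSize’ and ’numVertices’ are assumed from context):

// Update only the second half of the buffer without stalling the GPU
size_t firstVertex = numVertices / 2;
size_t start  = firstVertex * vertexSize;                 // byte offset
size_t length = (numVertices - firstVertex) * vertexSize;
void* pDest = vbuf->lock(start, length, HardwareBuffer::HBL_NO_OVERWRITE);
// ... write the new vertex data through pDest ...
vbuf->unlock();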


5.5 Practical Buffer Tips

The interplay of usage mode on creation, and locking options when reading / updating, is important for performance. Here are some tips:
1. Aim for the ’perfect’ buffer by creating it with HBU_STATIC_WRITE_ONLY, with no shadow buffer, and locking all of it once only with HBL_DISCARD to populate it. Never touch it again.
2. If you need to update a buffer regularly, you will have to compromise. Use HBU_DYNAMIC_WRITE_ONLY when creating (still no shadow buffer), and use HBL_DISCARD to lock the entire buffer, or if you can’t then use HBL_NO_OVERWRITE to lock parts of it.
3. If you really need to read data from the buffer, create it with a shadow buffer. Make sure you use HBL_READ_ONLY when locking for reading, because it will avoid the upload normally associated with unlocking the buffer. You can also combine this with either of the 2 previous points - obviously try for static if you can. Remember that the ’WRITE_ONLY’ part refers to the hardware buffer, so it can safely be used with a shadow buffer you read from.
4. Split your vertex buffers up if you find that your usage patterns for different elements of the vertex are different. There is no point having one huge updateable buffer with all the vertex data in it if all you need to update is the texture coordinates. Split that part out into its own buffer and make the rest HBU_STATIC_WRITE_ONLY.

5.6 Hardware Vertex Buffers

This section covers specialised hardware buffers which contain vertex data. For a general discussion of hardware buffers, along with the rules for creating and locking them, see the Chapter 5 [Hardware Buffers], page 142 section.

5.6.1 The VertexData class

The VertexData class collects together all the vertex-related information used to render geometry. The new RenderOperation requires a pointer to a VertexData object, and it is also used in Mesh and SubMesh to store the vertex positions, normals, texture coordinates etc. VertexData can either be used alone (in order to render unindexed geometry, where the stream of vertices defines the triangles), or in combination with IndexData, where the triangles are defined by indexes which refer to the entries in VertexData.

It’s worth noting that you don’t necessarily have to use VertexData to store your application’s geometry; all that is required is that you can build a VertexData structure when it comes to rendering. This is pretty easy since all of VertexData’s members are pointers, so you could maintain your vertex buffers and declarations in alternative structures if you like, so long as you can convert them for rendering.

The VertexData class has a number of important members:

vertexStart
    The position in the bound buffers to start reading vertex data from. This allows you to use a single buffer for many different renderables.

vertexCount
    The number of vertices to process in this particular rendering group.

vertexDeclaration
    A pointer to a VertexDeclaration object which defines the format of the vertex input; note this is created for you by VertexData. See Section 5.6.2 [Vertex Declarations], page 146.


vertexBufferBinding
    A pointer to a VertexBufferBinding object which defines which vertex buffers are bound to which sources - again, this is created for you by VertexData. See Section 5.6.3 [Vertex Buffer Bindings], page 147.
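A minimal sketch of setting these members up (’numVertices’ is assumed; wiring the result into a RenderOperation or Mesh is up to you):

// VertexData creates its own VertexDeclaration and VertexBufferBinding
VertexData* vertexData = new VertexData();
vertexData->vertexStart = 0;           // read from the start of the bound buffers
vertexData->vertexCount = numVertices; // number of vertices to process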

5.6.2 Vertex Declarations

Vertex declarations define the vertex inputs used to render the geometry you want to appear on the screen. Basically this means that for each vertex, you want to feed a certain set of data into the graphics pipeline, which (you hope) will affect how it all looks when the triangles are drawn. Vertex declarations let you pull items of data (which we call vertex elements, represented by the VertexElement class) from any number of buffers, both shared and dedicated to that particular element. It’s your job to ensure that the contents of the buffers make sense when interpreted in the way that your VertexDeclaration indicates that they should.

To add an element to a VertexDeclaration, you call its addElement method. The parameters to this method are:

source
    This tells the declaration which buffer the element is to be pulled from. Note that this is just an index, which may range from 0 to one less than the number of buffers which are being bound as sources of vertex data. See Section 5.6.3 [Vertex Buffer Bindings], page 147 for information on how a real buffer is bound to a source index. Storing the source of the vertex element this way (rather than using a buffer pointer) allows you to rebind the source of a vertex very easily, without changing the declaration of the vertex format itself.

offset
    Tells the declaration how far in bytes the element is offset from the start of each whole vertex in this buffer. This will be 0 if this is the only element being sourced from this buffer, but if other elements are there then it may be higher. A good way of thinking of this is the size of all vertex elements which precede this element in the buffer.

type
    This defines the data type of the vertex input, including its size. This is an important element because as GPUs become more advanced, we can no longer assume that position input will always require 3 floating point numbers, because programmable vertex pipelines allow full control over the inputs and outputs. This part of the element definition covers the basic type and size, e.g. VET_FLOAT3 is 3 floating point numbers - the meaning of the data is dealt with in the next parameter.

semantic
    This defines the meaning of the element - the GPU will use this to determine what to use this input for, and programmable vertex pipelines will use this to identify which semantic to map the input to. This can identify the element as positional data, normal data, texture coordinate data, etc. See the API reference for full details of all the options.

index
    This parameter is only required when you supply more than one element of the same semantic in one vertex declaration. For example, if you supply more than one set of texture coordinates, you would set the first set’s index to 0, and the second set’s to 1.

You can repeat the call to addElement for as many elements as you have in your vertex input structures. There are also useful methods on VertexDeclaration for locating elements within a declaration - see the API reference for full details.
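For example, a sketch of a declaration with interleaved positions and normals in buffer 0, plus a set of 2D texture coordinates in buffer 1:

VertexDeclaration* decl = vertexData->vertexDeclaration;
size_t offset = 0;
decl->addElement(0, offset, VET_FLOAT3, VES_POSITION);
offset += VertexElement::getTypeSize(VET_FLOAT3);
decl->addElement(0, offset, VET_FLOAT3, VES_NORMAL);
// Texture coordinates come from a second buffer; index 0 = first uv set
decl->addElement(1, 0, VET_FLOAT2, VES_TEXTURE_COORDINATES, 0);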

Important Considerations

Whilst in theory you have completely free rein over the format of your vertices, in reality there are some restrictions. Older DirectX hardware imposes a fixed ordering on the elements which are pulled from each buffer; specifically any hardware prior to DirectX 9 may impose the following restrictions:
• VertexElements should be added in the following order, and the order of the elements within any shared buffer should be as follows:
  1. Positions
  2. Blending weights
  3. Normals
  4. Diffuse colours
  5. Specular colours
  6. Texture coordinates (starting at 0, listed in order, with no gaps)
• You must not have unused gaps in your buffers which are not referenced by any VertexElement
• You must not cause the buffer & offset settings of 2 VertexElements to overlap

OpenGL and DirectX 9 compatible hardware are not required to follow these strict limitations, so you might find, for example, that if you broke these rules your application would run under OpenGL and under DirectX on recent cards, but it is not guaranteed to run on older hardware under DirectX unless you stick to the above rules. For this reason you’re advised to abide by them!

5.6.3 Vertex Buffer Bindings

Vertex buffer bindings are about associating a vertex buffer with a source index used in Section 5.6.2 [Vertex Declarations], page 146.

Creating the Vertex Buffer

Firstly, let’s look at how you create a vertex buffer:

HardwareVertexBufferSharedPtr vbuf =
    HardwareBufferManager::getSingleton().createVertexBuffer(
        3*sizeof(Real), // size of one whole vertex
        numVertices, // number of vertices
        HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
        false); // no shadow buffer

Notice that we use the Hardware Buffer Manager (Section 5.1, page 142) to create our vertex buffer, and that a class called HardwareVertexBufferSharedPtr is returned from the method, rather than a raw pointer. This is because vertex buffers are reference counted - you are able to use a single vertex buffer as a source for multiple pieces of geometry, therefore a standard pointer would not be good enough, because you would not know when all the different users of it had finished with it. The HardwareVertexBufferSharedPtr class manages this by keeping a reference count of the number of times it is being used - when the last HardwareVertexBufferSharedPtr is destroyed, the buffer itself is automatically destroyed.

The parameters to the creation of a vertex buffer are as follows:

vertexSize
    The size in bytes of a whole vertex in this buffer. A vertex may include multiple elements, and in fact the contents of the vertex data may be reinterpreted by different vertex declarations if you wish. Therefore you must tell the buffer manager how large a whole vertex is, but not the internal format of the vertex, since that is down to the declaration to interpret. In the above example, the size is set to the size of 3 floating point values - this would be enough to hold a standard 3D position or normal, or a 3D texture coordinate, per vertex.


numVertices
    The number of vertices in this buffer. Remember, not all the vertices have to be used at once - it can be beneficial to create large buffers which are shared between many chunks of geometry, because changing vertex buffer bindings is a render state switch, and those are best minimised.

usage
    This tells the system how you intend to use the buffer. See Section 5.2 [Buffer Usage], page 142.

useShadowBuffer
    Tells the system whether you want this buffer backed by a system-memory copy. See Section 5.3 [Shadow Buffers], page 143.

Binding the Vertex Buffer

The second part of the process is to bind this buffer which you have created to a source index. To do this, you call:

vertexBufferBinding->setBinding(0, vbuf);

This results in the vertex buffer you created earlier being bound to source index 0, so any vertex element which is pulling its data from source index 0 will retrieve data from this buffer.

There are also methods for retrieving buffers from the binding data - see the API reference for full details.

5.6.4 Updating Vertex Buffers

The complexity of updating a vertex buffer entirely depends on how its contents are laid out. You can lock a buffer (See Section 5.4 [Locking buffers], page 143), but how you write data into it very much depends on what it contains.

Let’s start with a very simple example. Let’s say you have a buffer which only contains vertex positions, so it only contains sets of 3 floating point numbers per vertex. In this case, all you need to do to write data into it is:

Real* pReal = static_cast<Real*>(vbuf->lock(HardwareBuffer::HBL_DISCARD));

... then you just write positions in chunks of 3 reals. If you have other floating point data in there, it’s a little more complex but the principle is largely the same; you just need to write alternate elements. But what if you have elements of different types, or you need to derive how to write the vertex data from the elements themselves? Well, there are some useful methods on the VertexElement class to help you out.

Firstly, you lock the buffer but assign the result to an unsigned char* rather than a specific type. Then, for each element which is sourcing from this buffer (which you can find out by calling VertexDeclaration::findElementsBySource) you call VertexElement::baseVertexPointerToElement. This offsets a pointer which points at the base of a vertex in a buffer to the beginning of the element in question, and allows you to use a pointer of the right type to boot. Here’s a full example:

// Get base pointer
unsigned char* pVert = static_cast<unsigned char*>(
    vbuf->lock(HardwareBuffer::HBL_READ_ONLY));
Real* pReal;
for (size_t v = 0; v < vertexCount; ++v)
{
    // Get elements
    VertexDeclaration::VertexElementList elems =
        decl->findElementsBySource(bufferIdx);
    VertexDeclaration::VertexElementList::iterator i, iend;
    for (i = elems.begin(); i != elems.end(); ++i)
    {
        VertexElement& elem = *i;
        if (elem.getSemantic() == VES_POSITION)
        {
            elem.baseVertexPointerToElement(pVert, &pReal);
            // write position using pReal
        }
        ...
    }
    pVert += vbuf->getVertexSize();
}
vbuf->unlock();

See the API docs for full details of all the helper methods on VertexDeclaration and VertexElement to assist you in manipulating vertex buffer data pointers.

5.7 Hardware Index Buffers

Index buffers are used to render geometry by building triangles out of vertices indirectly, by reference to their position in the buffer, rather than just building triangles by sequentially reading vertices. Index buffers are simpler than vertex buffers, since they are just a list of indexes at the end of the day; however they can be held on the hardware and shared between multiple pieces of geometry in the same way vertex buffers can, so the rules on creation and locking are the same. See Chapter 5 [Hardware Buffers], page 142 for information.

5.7.1 The IndexData class

This class summarises the information required to use a set of indexes to render geometry. Its members are as follows:

indexStart
    The first index used by this piece of geometry; this can be useful for sharing a single index buffer among several geometry pieces.

indexCount
    The number of indexes used by this particular renderable.

indexBuffer
    The index buffer which is used to source the indexes.

Creating an Index Buffer

Index buffers are created using the Hardware Buffer Manager (Section 5.1, page 142), just like vertex buffers; here’s how:

HardwareIndexBufferSharedPtr ibuf =
    HardwareBufferManager::getSingleton().createIndexBuffer(
        HardwareIndexBuffer::IT_16BIT, // type of index
        numIndexes, // number of indexes
        HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
        false); // no shadow buffer

Once again, notice that the return type is a class rather than a pointer; this is reference counted so that the buffer is automatically destroyed when no more references are made to it. The parameters to the index buffer creation are:


indexType
    There are 2 types of index: 16-bit and 32-bit. They both perform the same way, except that the latter can address larger vertex buffers. If your buffer includes more than 65536 vertices, then you will need to use 32-bit indexes. Note that you should only use 32-bit indexes when you need to, since they incur more overhead than 16-bit indexes, and are not supported on some older hardware.

numIndexes
    The number of indexes in the buffer. As with vertex buffers, you should consider whether you can use a shared index buffer which is used by multiple pieces of geometry, since there can be performance advantages to switching index buffers less often.

usage
    This tells the system how you intend to use the buffer. See Section 5.2 [Buffer Usage], page 142.

useShadowBuffer
    Tells the system whether you want this buffer backed by a system-memory copy. See Section 5.3 [Shadow Buffers], page 143.

5.7.2 Updating Index Buffers

Updating index buffers can only be done when you lock the buffer for writing; see Section 5.4 [Locking buffers], page 143 for details. Locking returns a void pointer, which must be cast to the appropriate type; with index buffers this is either an unsigned short (for 16-bit indexes) or an unsigned int (for 32-bit indexes). For example:

unsigned short* pIdx = static_cast<unsigned short*>(
    ibuf->lock(HardwareBuffer::HBL_DISCARD));

You can then write to the buffer using the usual pointer semantics; just remember to unlock the buffer when you’re finished!
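Continuing the sketch above, writing two triangles that share an edge might look like this:

// Two triangles sharing the edge between vertices 1 and 2
*pIdx++ = 0; *pIdx++ = 1; *pIdx++ = 2;
*pIdx++ = 2; *pIdx++ = 1; *pIdx++ = 3;
ibuf->unlock(); // uploads the new indexes to the card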

5.8 Hardware Pixel Buffers

Hardware pixel buffers are a special kind of buffer that stores graphical data in graphics card memory, generally for use as textures. Pixel buffers can represent a one dimensional, two dimensional or three dimensional image. A texture can consist of several of these buffers.

In contrast to vertex and index buffers, pixel buffers are not constructed directly. When creating a texture, the necessary pixel buffers to hold its data are constructed automatically.

5.8.1 Textures

A texture is an image that can be applied onto the surface of a three dimensional model. In Ogre, textures are represented by the Texture resource class.

Creating a texture

Textures are created through the TextureManager. In most cases they are created from image files directly by the Ogre resource system. If you are reading this, you most probably want to create a texture manually so that you can provide it with image data yourself. This is done through TextureManager::createManual:

ptex = TextureManager::getSingleton().createManual(
    "MyManualTexture", // Name of texture
    "General",     // Name of resource group in which the texture should be created
    TEX_TYPE_2D,   // Texture type
    256,           // Width
    256,           // Height
    1,             // Depth (Must be 1 for two dimensional textures)
    0,             // Number of mipmaps
    PF_A8R8G8B8,   // Pixel format
    TU_DYNAMIC_WRITE_ONLY // usage
);

This example creates a texture named MyManualTexture in resource group General. It is a square two dimensional texture, with width 256 and height 256. It has no mipmaps, internal format PF_A8R8G8B8 and usage TU_DYNAMIC_WRITE_ONLY.

The different texture types will be discussed in Section 5.8.3 [Texture Types], page 152. Pixel formats are summarised in Section 5.8.4 [Pixel Formats], page 153.

Texture usages

In addition to the hardware buffer usages described in Section 5.2 [Buffer Usage], page 142, there are some usage flags specific to textures:

TU_AUTOMIPMAP
    Mipmaps for this texture will be automatically generated by the graphics hardware. The exact algorithm used is not defined, but you can assume it to be a 2x2 box filter.

TU_RENDERTARGET
    This texture will be a render target, i.e. used as a target for render to texture. Setting this flag will ignore all other texture usages except TU_AUTOMIPMAP.

TU_DEFAULT
    This is actually a combination of usage flags, and is equivalent to TU_AUTOMIPMAP | TU_STATIC_WRITE_ONLY. The resource system uses these flags for textures that are loaded from images.

Getting a PixelBuffer

A Texture can consist of multiple PixelBuffers, one for each combination of mipmap level and face number. To get a PixelBuffer from a Texture object, the method Texture::getBuffer(face, mipmap) is used:

face should be zero for non-cubemap textures. For cubemap textures it identifies the face to use, which is one of the cube faces described in Section 5.8.3 [Texture Types], page 152.

mipmap is zero for the zeroth mipmap level, one for the first mipmap level, and so on. On textures that have automatic mipmap generation (TU_AUTOMIPMAP) only level 0 should be accessed; the rest will be taken care of by the rendering API.

A simple example of using getBuffer is:

// Get the PixelBuffer for face 0, mipmap 0.
HardwarePixelBufferSharedPtr ptr = tex->getBuffer(0,0);

5.8.2 Updating Pixel Buffers

Pixel buffers can be updated in two different ways: a simple, convenient way and a more difficult (but in some cases faster) method. Both methods make use of PixelBox objects (See Section 5.8.5 [Pixel boxes], page 155) to represent image data in memory.

blitFromMemory

The easy method to get an image into a PixelBuffer is by using HardwarePixelBuffer::blitFromMemory. This takes a PixelBox object and does all necessary pixel format conversion and scaling for you. For example, to create a manual texture and load an image into it, all you have to do is:

// Manually loads an image and puts the contents in a manually created texture
Image img;
img.load("elephant.png", "General");
// Create RGB texture with 5 mipmaps
TexturePtr tex = TextureManager::getSingleton().createManual(
    "elephant", "General", TEX_TYPE_2D,
    img.getWidth(), img.getHeight(),
    5, PF_X8R8G8B8);
// Copy face 0 mipmap 0 of the image to face 0 mipmap 0 of the texture.
tex->getBuffer(0,0)->blitFromMemory(img.getPixelBox(0,0));

Direct memory locking

A more advanced method to transfer image data from and to a PixelBuffer is to use locking. By locking a PixelBuffer you can directly access its contents in whatever the internal format of the buffer inside the GPU is.

/// Lock the buffer so we can write to it
buffer->lock(HardwareBuffer::HBL_DISCARD);
const PixelBox &pb = buffer->getCurrentLock();

/// Update the contents of pb here
/// Image data starts at pb.data and has format pb.format
/// Here we assume pb.format is PF_X8R8G8B8 so we can address pixels as uint32.
uint32 *data = static_cast<uint32*>(pb.data);
size_t height = pb.getHeight();
size_t width = pb.getWidth();
size_t pitch = pb.rowPitch; // Skip between rows of image
for (size_t y = 0; y < height; ++y)
{
    for (size_t x = 0; x < width; ++x)
    {
        // 0xRRGGBB -> fill the buffer with yellow pixels
        data[pitch*y + x] = 0x00FFFF00;
    }
}

/// Unlock the buffer again (frees it for use by the GPU)
buffer->unlock();

5.8.3 Texture Types

There are four types of textures supported by current hardware; three of them differ only in the number of dimensions they have (one, two or three), and the fourth one is special. The different texture types are:

TEX_TYPE_1D
    One dimensional texture, used in combination with 1D texture coordinates.

TEX_TYPE_2D
    Two dimensional texture, used in combination with 2D texture coordinates.

TEX_TYPE_3D
    Three dimensional volume texture, used in combination with 3D texture coordinates.


TEX_TYPE_CUBE_MAP
    Cube map (six two dimensional textures, one for each cube face), used in combination with 3D texture coordinates.

Cube map textures

The cube map texture type (TEX_TYPE_CUBE_MAP) is a different beast from the others; a cube map texture represents a series of six two dimensional images addressed by 3D texture coordinates.

+X (face 0)
    Represents the positive x plane (right).
-X (face 1)
    Represents the negative x plane (left).
+Y (face 2)
    Represents the positive y plane (top).
-Y (face 3)
    Represents the negative y plane (bottom).
+Z (face 4)
    Represents the positive z plane (front).
-Z (face 5)
    Represents the negative z plane (back).

5.8.4 Pixel Formats

A pixel format describes the storage format of pixel data. It defines the way pixels are encoded in memory. The following classes of pixel formats (PF_*) are defined:

Native endian formats (PF_A8R8G8B8 and other formats with bit counts)
    These are native endian (16, 24 and 32 bit) integers in memory. This means that an image with format PF_A8R8G8B8 can be seen as an array of 32 bit integers, defined as 0xAARRGGBB in hexadecimal. The meaning of the letters is described below.

Byte formats (PF_BYTE_*)
    These formats have one byte per channel, and their channels in memory are organized in the order they are specified in the format name. For example, PF_BYTE_RGBA consists of blocks of four bytes, one for red, one for green, one for blue, one for alpha.

Short formats (PF_SHORT_*)
    These formats have one unsigned short (16 bit integer) per channel, and their channels in memory are organized in the order they are specified in the format name. For example, PF_SHORT_RGBA consists of blocks of four 16 bit integers, one for red, one for green, one for blue, one for alpha.

Float16 formats (PF_FLOAT16_*)
    These formats have one 16 bit floating point number per channel, and their channels in memory are organized in the order they are specified in the format name. For example, PF_FLOAT16_RGBA consists of blocks of four 16 bit floats, one for red, one for green, one for blue, one for alpha. The 16 bit floats (also called half floats) are very similar to the IEEE single-precision floating-point standard of the 32 bit floats, except that they have only 5 exponent bits and 10 mantissa bits. Note that there is no standard C++ data type or CPU support to work with these efficiently, but GPUs can calculate with these much more efficiently than with 32 bit floats.


Float32 formats (PF_FLOAT32_*)
These formats have one 32 bit floating point number per channel, and their channels in memory are organized in the order they are specified in the format name. For example, PF_FLOAT32_RGBA consists of blocks of four 32 bit floats, one for red, one for green, one for blue, one for alpha. The C++ data type for these 32 bit floats is just "float".

Compressed formats (PF_DXT[1-5])
S3TC compressed texture formats; a good description can be found at Wikipedia (http://en.wikipedia.org/wiki/S3TC).

Colour channels

The meaning of the channels R, G, B, A, L and X is defined as:

R Red colour component, usually ranging from 0.0 (no red) to 1.0 (full red).

G Green colour component, usually ranging from 0.0 (no green) to 1.0 (full green).

B Blue colour component, usually ranging from 0.0 (no blue) to 1.0 (full blue).

A Alpha component, usually ranging from 0.0 (entirely transparent) to 1.0 (opaque).

L Luminance component, usually ranging from 0.0 (black) to 1.0 (white). The luminance component is duplicated in the R, G, and B channels to achieve a greyscale image.

X This component is completely ignored.

If none of the red, green and blue components, or luminance, is defined in a format, these default to 0. For the alpha channel this is different; if no alpha is defined, it defaults to 1.
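
These properties can be queried at run time through the PixelUtil helper class; a brief sketch (the formats chosen here are arbitrary examples):

// Query format properties at run time via PixelUtil
size_t bytes = PixelUtil::getNumElemBytes(PF_A8R8G8B8); // 4 bytes per pixel
bool alpha = PixelUtil::hasAlpha(PF_X8R8G8B8);          // false - X is ignored
String name = PixelUtil::getFormatName(PF_BYTE_RGBA);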

Complete list of pixel formats

The pixel formats supported by the current version of Ogre are:

Byte formats
PF_BYTE_RGB, PF_BYTE_BGR, PF_BYTE_BGRA, PF_BYTE_RGBA, PF_BYTE_L, PF_BYTE_LA, PF_BYTE_A

Short formats
PF_SHORT_RGBA

Float16 formats
PF_FLOAT16_R, PF_FLOAT16_RGB, PF_FLOAT16_RGBA

Float32 formats
PF_FLOAT32_R, PF_FLOAT32_RGB, PF_FLOAT32_RGBA

8 bit native endian formats
PF_L8, PF_A8, PF_A4L4, PF_R3G3B2

16 bit native endian formats
PF_L16, PF_R5G6B5, PF_B5G6R5, PF_A4R4G4B4, PF_A1R5G5B5

24 bit native endian formats
PF_R8G8B8, PF_B8G8R8

32 bit native endian formats
PF_A8R8G8B8, PF_A8B8G8R8, PF_B8G8R8A8, PF_R8G8B8A8, PF_X8R8G8B8, PF_X8B8G8R8, PF_A2R10G10B10, PF_A2B10G10R10

Compressed formats
PF_DXT1, PF_DXT2, PF_DXT3, PF_DXT4, PF_DXT5


5.8.5 Pixel boxes

All methods in Ogre that take or return raw image data return a PixelBox object.

A PixelBox is a primitive describing a volume (3D), image (2D) or line (1D) of pixels in CPU memory. It describes the location and data format of a region of memory used for image data, but does not do any memory management in itself.

Inside the memory pointed to by the data member of a pixel box, pixels are stored as a succession of "depth" slices (in Z), each containing "height" rows (Y) of "width" pixels (X).

Dimensions that are not used must be 1. For example, a one dimensional image will have extents (width,1,1). A two dimensional image has extents (width,height,1).

A PixelBox has the following members:

data The pointer to the first component of the image data in memory.

format The pixel format (See Section 5.8.4 [Pixel Formats], page 153) of the image data.

rowPitch The number of elements between the leftmost pixel of one row and the leftmost pixel of the next. This value must always be equal to getWidth() (consecutive) for compressed formats.

slicePitch The number of elements between the top left pixel of one (depth) slice and the top left pixel of the next. Must be a multiple of rowPitch. This value must always be equal to getWidth()*getHeight() (consecutive) for compressed formats.

left, top, right, bottom, front, back
Extents of the box in three dimensional integer space. Note that the left, top, and front edges are included but the right, bottom and back ones are not. left must always be less than or equal to right, top must always be less than or equal to bottom, and front must always be less than or equal to back.

It also has some useful methods:

getWidth()
Get the width of this box.

getHeight()
Get the height of this box. This is 1 for one dimensional images.

getDepth()
Get the depth of this box. This is 1 for one and two dimensional images.

setConsecutive()
Set the rowPitch and slicePitch so that the buffer is laid out consecutively in memory.

getRowSkip()
Get the number of elements between one past the rightmost pixel of one row and the leftmost pixel of the next row. This is zero if rows are consecutive.

getSliceSkip()
Get the number of elements between one past the right bottom pixel of one slice and the left top pixel of the next slice. This is zero if slices are consecutive.

isConsecutive()
Return whether this buffer is laid out consecutively in memory (i.e. the pitches are equal to the dimensions).

getConsecutiveSize()
Return the size (in bytes) this image would take if it was laid out consecutively in memory.


getSubVolume(const Box &def)
Return a subvolume of this PixelBox, as a PixelBox.

For more information about these methods consult the API documentation.
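
As a small illustration of the layout described above, here is a sketch that wraps a CPU-side buffer in a PixelBox and addresses a single pixel; the 256x256 size and PF_A8R8G8B8 format are arbitrary example choices:

// A sketch only: wrap a CPU-side buffer in a PixelBox
std::vector<uint32> imageData(256 * 256);
PixelBox box(256, 256, 1, PF_A8R8G8B8, &imageData[0]);

// The dimension-based constructor lays the box out consecutively, so
// box.isConsecutive() holds and getConsecutiveSize() is 256*256*4 bytes.

// Address pixel (x, y, z) using the slice/row layout described above
size_t x = 10, y = 20, z = 0;
uint32 *pixel = static_cast<uint32*>(box.data)
    + z * box.slicePitch + y * box.rowPitch + x;
*pixel = 0xFF00FF00; // opaque green in 0xAARRGGBB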


6 External Texture Sources

Introduction

This tutorial will provide a brief introduction to the ExternalTextureSource and ExternalTextureSourceManager classes, their relationship, and how the PlugIns work. For those interested in developing a Texture Source Plugin, or maybe just wanting to know more about this system, take a look at the ffmpegVideoSystem plugin, which you can find more about on the OGRE forums.

What Is An External Texture Source?

What is a texture source? Well, a texture source could be anything - png, bmp, jpeg, etc. However, loading textures from traditional bitmap files is already handled by another part of OGRE. There are, however, other types of sources to get texture data from - i.e. mpeg/avi/etc movie files, flash, run-time generated sources, user defined sources, etc.

How do external texture source plugins benefit OGRE? Well, the main answer is: adding support for any type of texture source does not require changing OGRE to support it... all that is involved is writing a new plugin. Additionally, because the manager uses the StringInterface class to issue commands/params, no change to the material script reader needs to be made. As a result, if a plugin needs a special parameter set, it just creates a new command in its Parameter Dictionary - see the ffmpegVideoSystem plugin for an example. To make this work, two classes have been added to OGRE: ExternalTextureSource & ExternalTextureSourceManager.

ExternalTextureSource Class

The ExternalTextureSource class is the base class that Texture Source PlugIns must be derived from. It provides a generic framework (via the StringInterface class) with a very limited amount of functionality. The most common parameters can be set through the TexturePlugInSource class interface or via the StringInterface commands contained within this class. While this may seem like duplication of code, it is not. By using the string command interface, it becomes extremely easy for derived plugins to add any new types of parameters they may need.

Default Command Parameters defined in the ExternalTextureSource base class are:
• Parameter Name: "filename" Argument Type: Ogre::String. Sets a filename the plugin will read from.
• Parameter Name: "play_mode" Argument Type: Ogre::String. Sets the initial play mode to be used by the plugin - "play", "loop", "pause".
• Parameter Name: "set_T_P_S" Argument Type: Ogre::String. Used to set the technique, pass, and texture unit level to apply this texture to. As an example: To set a technique level of 1, a pass level of 2, and a texture unit level of 3, send this string "1 2 3".
• Parameter Name: "frames_per_second" Argument Type: Ogre::String. Sets a frames per second update speed. (Integer values only)

ExternalTextureSourceManager Class

ExternalTextureSourceManager is responsible for keeping track of loaded Texture Source PlugIns. It also aids in the creation of texture source textures from scripts, and it is the interface you should use when dealing with texture source plugins.


Note: The function prototypes shown below are mockups - param names are simplified to better illustrate their purpose here... Steps needed to create a new texture via ExternalTextureSourceManager:

• Obviously, the first step is to have the desired plugin included in plugin.cfg for it to be loaded.
• Set the desired PlugIn as active via AdvancedTextureManager::getSingleton().SetCurrentPlugIn( String Type ); - type is whatever the plugin registers as handling (e.g. "video", "flash", "whatever", etc).
• Note: Consult the desired PlugIn to see what params it needs/expects. Set param/value pairs via AdvancedTextureManager::getSingleton().getCurrentPlugIn()->setParameter( String Param, String Value );
• After the required params are set, a simple call to AdvancedTextureManager::getSingleton().getCurrentPlugIn()->createDefinedTexture( sMaterialName ); will create a texture for the material name given.

The manager also provides a method for deleting a texture source material: AdvancedTextureManager::DestroyAdvancedTexture( String sTextureName ); The destroy method works by broadcasting the material name to all loaded TextureSourcePlugIns, and the PlugIn which actually created the material is responsible for the deletion, while other PlugIns will just ignore the request. What this means is that you do not need to worry about which PlugIn created the material, or activating the PlugIn yourself. Just call the manager method to remove the material. Also, all texture plugins should handle cleanup when they are shut down.
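
Since the prototypes above are mockups, here is a minimal sketch using the actual ExternalTextureSourceManager class names; the "video" plugin key and the parameter values are assumptions modelled on the ffmpegVideoSystem example:

// A sketch only; "video" and the parameter values are assumptions
ExternalTextureSourceManager& mgr = ExternalTextureSourceManager::getSingleton();
mgr.setCurrentPlugIn("video"); // the name the plugin registered with
ExternalTextureSource* plugin = mgr.getCurrentPlugIn();
if (plugin)
{
    plugin->setParameter("filename", "mymovie.mpeg");
    plugin->setParameter("play_mode", "play");
    plugin->createDefinedTexture("Example/MyVideoExample");
}
// Later, to remove the texture source material again:
mgr.destroyAdvancedTexture("Example/MyVideoExample");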

Texture Source Material Script

As mentioned earlier, the process of defining/creating texture sources can be done within a material script file. Here is an example of a material script definition - Note: This example is based on the ffmpegVideoSystem plugin parameters.

material Example/MyVideoExample
{
    technique
    {
        pass
        {
            texture_unit
            {
                texture_source video
                {
                    filename mymovie.mpeg
                    play_mode play
                    sound_mode on
                }
            }
        }
    }
}


Notice that the first two param/value pairs are defined in the ExternalTextureSource base class, and that the third parameter/value pair is not defined in the base class... That parameter is added to the param dictionary by the ffmpegVideoPlugin... This shows that extending the functionality with the plugins is extremely easy. Also, pay particular attention to the line: texture_source video. This line identifies that this texture unit will come from a texture source plugin. It requires one parameter that determines which texture plugin will be used. In the example shown, the plugin requested is the one that registered with the "video" name.

Simplified Diagram of Process

This diagram uses the ffmpegVideoPlugin as an example, but all plugins work the same way in how they are registered/used here. Also note that TextureSource Plugins are loaded/registered before scripts are parsed. This does not mean that they are initialized... Plugins are not initialized until they are set active! This is to ensure that a rendersystem is set up before the plugins might make a call to the rendersystem.


7 Shadows

Shadows are clearly an important part of rendering a believable scene - they provide a more tangible feel to the objects in the scene, and aid the viewer in understanding the spatial relationship between objects. Unfortunately, shadows are also one of the most challenging aspects of 3D rendering, and they are still very much an active area of research. Whilst there are many techniques to render shadows, none is perfect and they all come with advantages and disadvantages. For this reason, Ogre provides multiple shadow implementations, with plenty of configuration settings, so you can choose which technique is most appropriate for your scene.

Shadow implementations fall into basically 2 broad categories: Section 7.1 [Stencil Shadows], page 161 and Section 7.2 [Texture-based Shadows], page 164. This describes the method by which the shape of the shadow is generated. In addition, there is more than one way to render the shadow into the scene: Section 7.3 [Modulative Shadows], page 169, which darkens the scene in areas of shadow, and Section 7.4 [Additive Light Masking], page 170, which by contrast builds up light contribution in areas which are not in shadow. You also have the option of [Integrated Texture Shadows], page 168, which gives you complete control over texture shadow application, allowing for complex single-pass shadowing shaders. Ogre supports all these combinations.

Enabling shadows

Shadows are disabled by default; here’s how you turn them on and configure them in the general sense:

1. Enable a shadow technique on the SceneManager as the first thing you do in your scene setup. It is important that this is done first because the shadow technique can alter the way meshes are loaded. Here’s an example:

mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_ADDITIVE);

2. Create one or more lights. Note that not all light types are necessarily supported by all shadow techniques; you should check the section about each technique. Note that if certain lights should not cast shadows, you can turn that off by calling setCastShadows(false) on the light; the default is true.

3. Disable shadow casting on objects which should not cast shadows. Call setCastShadows(false) on objects you don’t want to cast shadows; the default for all objects is to cast shadows.

4. Configure the shadow far distance. You can limit the distance at which shadows are considered, for performance reasons, by calling SceneManager::setShadowFarDistance.

5. Turn off the receipt of shadows on materials that should not receive them. You can turn off the receipt of shadows (note, not the casting of shadows - that is done per-object) by calling Material::setReceiveShadows or using the receive_shadows material attribute. This is useful for materials which should be considered self-illuminated, for example. Note that transparent materials are typically excluded from receiving and casting shadows, although see the [transparency casts shadows], page 16 option for exceptions.
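
Taken together, a minimal setup might look like the following sketch; the light position, mesh name and material name are illustrative assumptions, not part of the manual’s own example:

// A sketch of the steps above; names and values are assumptions
mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_ADDITIVE); // step 1

Light* light = mSceneMgr->createLight("MainLight");         // step 2
light->setPosition(200, 400, 200);

Entity* floor = mSceneMgr->createEntity("Floor", "floor.mesh");
floor->setCastShadows(false);                               // step 3

mSceneMgr->setShadowFarDistance(500);                       // step 4

MaterialPtr mat = MaterialManager::getSingleton().getByName("Examples/SelfLit");
if (!mat.isNull())
    mat->setReceiveShadows(false);                          // step 5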

Opting out of shadows

By default Ogre treats all non-transparent objects as shadow casters and receivers (depending on the shadow technique they may not be able to be both at once; check the docs for your chosen technique first). You can disable shadows in various ways:


Turning off shadow casting on the light
Calling Light::setCastShadows(false) will mean this light casts no shadows at all.

Turn off shadow receipt on a material
Calling Material::setReceiveShadows(false) will prevent any objects using this material from receiving shadows.

Turn off shadow casting on individual objects
Calling MovableObject::setCastShadows(false) will disable shadow casting for this object.

Turn off shadows on an entire rendering queue group
Calling RenderQueueGroup::setShadowsEnabled(false) will turn off both shadow casting and receiving on an entire rendering queue group. This is useful because Ogre has to do light setup tasks per group in order to preserve the inter-group ordering. Ogre automatically disables shadows on a number of groups, such as RENDER_QUEUE_BACKGROUND, RENDER_QUEUE_OVERLAY, RENDER_QUEUE_SKIES_EARLY and RENDER_QUEUE_SKIES_LATE. If you choose to use more rendering queues (and by default, you won’t be using any more than this plus the ’standard’ queue, so ignore this if you don’t know what it means!), be aware that each one can incur a light setup cost, and you should disable shadows on the additional ones you use if you can.

7.1 Stencil Shadows

Stencil shadows are a method by which a ’mask’ is created for the screen using a feature called the stencil buffer. This mask can be used to exclude areas of the screen from subsequent renders, and thus it can be used to either include or exclude areas in shadow. They are enabled by calling SceneManager::setShadowTechnique with a parameter of either SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_STENCIL_MODULATIVE. Because the stencil can only mask areas to be either ’enabled’ or ’disabled’, stencil shadows have ’hard’ edges, that is to say clear dividing lines between light and shadow - it is not possible to soften these edges.

In order to generate the stencil, ’shadow volumes’ are rendered by extruding the silhouette of the shadow caster away from the light. Where these shadow volumes intersect other objects (or the caster, since self-shadowing is supported using this technique), the stencil is updated, allowing subsequent operations to differentiate between light and shadow. How exactly this is used to render the shadows depends on whether Section 7.3 [Modulative Shadows], page 169 or Section 7.4 [Additive Light Masking], page 170 is being used. Objects can both cast and receive stencil shadows, so self-shadowing is inbuilt.

The advantage of stencil shadows is that they can do self-shadowing simply on low-end hardware, provided you keep your poly count under control. In contrast, doing self-shadowing with texture shadows requires a fairly modern machine (See Section 7.2 [Texture-based Shadows], page 164). For this reason, you’re likely to pick stencil shadows if you need an accurate shadowing solution for an application aimed at older or lower-spec machines.

The disadvantages of stencil shadows are numerous though, especially on more modern hardware. Because stencil shadows are a geometric technique, they are inherently more costly the higher the number of polygons you use, meaning you are penalized the more detailed you make your meshes. The fillrate cost, which comes from having to render shadow volumes, also escalates the same way. Since more modern applications are likely to use higher polygon counts, stencil shadows can start to become a bottleneck. In addition, the visual aspects of stencil shadows are pretty primitive - your shadows will always be hard-edged, and you have no possibility of doing clever things with shaders since the stencil is not available for manipulation there. Therefore, if your application is aimed at higher-end machines you should definitely consider switching to texture shadows (See Section 7.2 [Texture-based Shadows], page 164).

There are a number of issues to consider which are specific to stencil shadows:
• [CPU Overhead], page 162
• [Extrusion distance], page 162
• [Camera far plane positioning], page 162
• [Mesh edge lists], page 163
• [The Silhouette Edge], page 163
• [Be realistic], page 163
• [Stencil Optimisations Performed By Ogre], page 163

CPU Overhead

Calculating the shadow volume for a mesh can be expensive, and it has to be done on the CPU; it is not a hardware accelerated feature. Therefore, you can find that if you overuse this feature, you can create a CPU bottleneck for your application. Ogre quite aggressively eliminates objects which cannot be casting shadows on the frustum, but there are limits to how much it can do, and large, elongated shadows (e.g. representing a very low sun position) are very difficult to cull efficiently. Try to avoid having too many shadow casters around at once, and avoid long shadows if you can. Also, make use of the ’shadow far distance’ parameter on the SceneManager; this can eliminate distant shadow casters from the shadow volume construction and save you some time, at the expense of only having shadows for closer objects. Lastly, make use of Ogre’s Level-Of-Detail (LOD) features; you can generate automatically calculated LODs for your meshes in code (see the Mesh API docs) or when using the mesh tools such as OgreXmlConverter and OgreMeshUpgrader. Alternatively, you can assign your own manual LODs by providing alternative mesh files at lower detail levels. Both methods will cause the shadow volume complexity to decrease as the object gets further away, which saves you valuable volume calculation time.

Extrusion distance

When vertex programs are not available, Ogre can only extrude shadow volumes a finite distance from the object. If an object gets too close to a light, any finite extrusion distance will be inadequate to guarantee all objects will be shadowed properly by this object. Therefore, you are advised not to let shadow casters pass too close to light sources if you can avoid it, unless you can guarantee that your target audience will have vertex program capable hardware (in this case, Ogre extrudes the volume to infinity using a vertex program so the problem does not occur).

When infinite extrusion is not possible, Ogre uses finite extrusion, either derived from the attenuation range of a light (in the case of a point light or spotlight), or a fixed extrusion distance set in the application in the case of directional lights. To change the directional light extrusion distance, use SceneManager::setShadowDirectionalLightExtrusionDistance.

Camera far plane positioning

Stencil shadow volumes rely very much on not being clipped by the far plane. When you enable stencil shadows, Ogre internally changes the far plane settings of your cameras such that there is no far plane - i.e. it is placed at infinity (Camera::setFarClipDistance(0)). This avoids artifacts caused by clipping the dark caps on shadow volumes, at the expense of a (very) small amount of depth precision.

Mesh edge lists

Stencil shadows can only be calculated when an ’edge list’ has been built for all the geometry in a mesh. The official exporters and tools automatically build this for you (or have an option to do so), but if you create your own meshes, you must remember to build edge lists for them before using them with stencil shadows - you can do that by using OgreMeshUpgrader or OgreXmlConverter, or by calling Mesh::buildEdgeList before you export or use the mesh. If a mesh doesn’t have edge lists, OGRE assumes that it is not supposed to cast stencil shadows.

The Silhouette Edge

Stencil shadowing is about finding a silhouette of the mesh, and projecting it away to form a volume. What this means is that there is a definite boundary on the shadow caster between light and shadow; a set of edges where the triangle on one side is facing toward the light, and the one on the other side is facing away. This produces a sharp edge around the mesh as the transition occurs. Provided there is little or no other light in the scene, and the mesh has smooth normals to produce a gradual light change in its underlying shading, the silhouette edge can be hidden - this works better the higher the tessellation of the mesh. However, if the scene includes ambient light, then the difference is far more marked. This is especially true when using Section 7.3 [Modulative Shadows], page 169, because the light contribution of each shadowed area is not taken into account by this simplified approach, and so using 2 or more lights in a scene using modulative stencil shadows is not advisable; the silhouette edges will be very marked. Additive lights do not suffer from this as badly because each light is masked individually, meaning that it is only ambient light which can show up the silhouette edges.

Be realistic

Don’t expect to be able to throw any scene using any hardware at the stencil shadow algorithm and expect to get perfect, optimum speed results. Shadows are a complex and expensive technique, so you should impose some reasonable limitations on your placing of lights and objects; they’re not really that restricting, but you should be aware that this is not a complete free-for-all.

• Try to avoid letting objects pass very close (or even through) lights - it might look nice but it’s one of the cases where artifacts can occur on machines not capable of running vertex programs.

• Be aware that shadow volumes do not respect the ’solidity’ of the objects they pass through, and if those objects do not themselves cast shadows (which would hide the effect) then the result will be that you can see shadows on the other side of what should be an occluding object.

• Make use of SceneManager::setShadowFarDistance to limit the number of shadow volumes constructed.

• Make use of LOD to reduce shadow volume complexity at distance

• Avoid very long (dusk and dawn) shadows - they exacerbate other issues such as volume clipping and fillrate, and cause many more objects at a greater distance to require volume construction.


Stencil Optimisations Performed By Ogre

Despite all that, stencil shadows can look very nice (especially with Section 7.4 [Additive Light Masking], page 170) and can be fast if you respect the rules above. In addition, Ogre comes pre-packed with a lot of optimisations which help to make this as quick as possible. This section is more for developers or people interested in knowing something about the ’under the hood’ behaviour of Ogre.

Vertex program extrusion
As previously mentioned, Ogre performs the extrusion of shadow volumes in hardware on vertex program-capable hardware (e.g. GeForce3, Radeon 8500 or better). This has 2 major benefits; the obvious one being speed, but secondly that vertex programs can extrude points to infinity, which the fixed-function pipeline cannot, at least not without performing all calculations in software. This leads to more robust volumes, and also eliminates more than half the volume triangles on directional lights since all points are projected to a single point at infinity.

Scissor test optimisation
Ogre uses a scissor rectangle to limit the effect of point / spot lights when their range does not cover the entire viewport; that means we save fillrate when rendering stencil volumes, especially with distant lights.

Z-Pass and Z-Fail algorithms
The Z-Fail algorithm, often attributed to John Carmack, is used in Ogre to make sure shadows are robust when the camera passes through the shadow volume. However, the Z-Fail algorithm is more expensive than the traditional Z-Pass, so Ogre detects when Z-Fail is required and only uses it then; Z-Pass is used at all other times.

2-Sided stencilling and stencil wrapping
Ogre supports the 2-sided stencilling / stencil wrapping extensions, which when supported allow volumes to be rendered in a single pass instead of having to do one pass for back facing tris and another for front-facing tris. This doesn’t save fillrate, since the same number of stencil updates are done, but it does save primitive setup and the overhead incurred in the driver every time a render call is made.

Aggressive shadow volume culling
Ogre is pretty good at detecting which lights could be affecting the frustum, and from that, which objects could be casting a shadow on the frustum. This means we don’t waste time constructing shadow geometry we don’t need. Setting the shadow far distance is another important way you can reduce stencil shadow overhead since it culls far away shadow volumes even if they are visible, which is beneficial in practice since you’re most interested in shadows for close-up objects.

7.2 Texture-based Shadows

Texture shadows involve rendering shadow casters from the point of view of the light into a texture, which is then projected onto shadow receivers. The main advantage of texture shadows as opposed to Section 7.1 [Stencil Shadows], page 161 is that the overhead of increasing the geometric detail is far lower, since there is no need to perform per-triangle calculations. Most of the work in rendering texture shadows is done by the graphics card, meaning the technique scales well when taking advantage of the latest cards, which are at present outpacing CPUs in terms of their speed of development. In addition, texture shadows are much more customisable - you can pull them into shaders to apply as you like (particularly with [Integrated Texture Shadows], page 168), you can perform filtering to create softer shadows, or perform other special effects on them. Basically, most modern engines use texture shadows as their primary shadow technique simply because they are more powerful, and the increasing speed of GPUs is rapidly amortizing the fillrate / texture access costs of using them.

The main disadvantage to texture shadows is that, because they are simply a texture, they have a fixed resolution, which means that if stretched, the pixellation of the texture can become obvious. There are ways to combat this though:

Choosing a projection basis
The simplest projection is just to render the shadow casters from the light’s perspective using a regular camera setup. This can look bad though, so there are many other projections which can help to improve the quality from the main camera’s perspective. OGRE supports pluggable projection bases via its ShadowCameraSetup class, and comes with several existing options - Uniform (which is the simplest), Uniform Focussed (which is still a normal camera projection, except that the camera is focussed into the area that the main viewing camera is looking at), LiSPSM (Light Space Perspective Shadow Mapping - which both focusses and distorts the shadow frustum based on the main view camera) and Plane Optimal (which seeks to optimise the shadow fidelity for a single receiver plane).

Filtering
You can also sample the shadow texture multiple times rather than once to soften the shadow edges and improve the appearance. Percentage Closest Filtering (PCF) is the most popular approach, although there are multiple variants depending on the number and pattern of the samples you take. Our shadows demo includes a 5-tap PCF example combined with depth shadow mapping.

Using a larger texture
Again, as GPUs get faster and gain more memory, you can scale up to take advantage of this.

If you combine all 3 of these techniques you can get a very high quality shadow solution.

The other issue is with point lights. Because texture shadows require a render to texture in the direction of the light, omnidirectional lights (point lights) would require 6 renders to totally cover all the directions shadows might be cast. For this reason, Ogre primarily supports directional lights and spotlights for generating texture shadows; you can use point lights but they will only work if off-camera, since they are essentially turned into a spotlight shining into your camera frustum for the purposes of texture shadows.

Directional Lights

Directional lights in theory shadow the entire scene from an infinitely distant light. Now, since we only have a finite texture, which will look very poor quality if stretched over the entire scene, clearly a simplification is required. Ogre places a shadow texture over the area immediately in front of the camera, and moves it as the camera moves (although it rounds this movement to multiples of texels so that the slight ’swimming shadow’ effect caused by moving the texture is minimised). The range to which this shadow extends, and the offset used to move it in front of the camera, are configurable (See [Configuring Texture Shadows], page 166). At the far edge of the shadow, Ogre fades out the shadow based on other configurable parameters so that the termination of the shadow is softened.

Spotlights

Spotlights are much easier to represent as renderable shadow textures than directional lights, since they are naturally a frustum. Ogre represents spotlights directly by rendering the shadow from the light position, in the direction of the light cone; the field-of-view of the texture camera is adjusted based on the spotlight falloff angles. In addition, to hide the fact that the shadow texture is square and has definite edges which could show up outside the spotlight, Ogre uses a second texture unit when projecting the shadow onto the scene which fades out the shadow gradually in a projected circle around the spotlight.

Point Lights

As mentioned above, to support point lights properly would require multiple renders (either 6 for a cubic render or perhaps 2 for a less precise parabolic mapping), so rather than do that we approximate point lights as spotlights, where the configuration is changed on the fly to make the light shine from its position over the whole of the viewing frustum. This is not an ideal setup since it means it can only really work if the point light’s position is out of view, and in addition the changing parameterisation can cause some ’swimming’ of the texture. Generally we recommend avoiding making point lights cast texture shadows.

Shadow Casters and Shadow Receivers

To enable texture shadows, use the shadow technique SHADOWTYPE_TEXTURE_MODULATIVE or SHADOWTYPE_TEXTURE_ADDITIVE; as the name suggests this produces Section 7.3 [Modulative Shadows], page 169 or Section 7.4 [Additive Light Masking], page 170 respectively. The cheapest and simplest texture shadow techniques do not use depth information; they merely render casters to a texture and render this onto receivers as plain colour - this means self-shadowing is not possible using these methods. This is the default behaviour if you use the automatic, fixed-function compatible (and thus usable on lower end hardware) texture shadow techniques. You can however use shader-based techniques through custom shadow materials for casters and receivers to perform more complex shadow algorithms, such as depth shadow mapping, which does allow self-shadowing. OGRE comes with an example of this in its shadows demo, although it’s only usable on Shader Model 2 cards or better. Whilst fixed-function depth shadow mapping is available in OpenGL, it was never standardised in Direct3D, so using shaders in custom caster & receiver materials is the only portable way to do it. If you use this approach, call SceneManager::setShadowTextureSelfShadow with a parameter of ’true’ to allow texture shadow casters to also be receivers.
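
A depth-shadow-mapping style setup might therefore look like the following sketch; the material names here are assumptions standing in for your own custom caster/receiver shader materials:

// A sketch; "MyDepthCaster" and "MyDepthReceiver" are assumed names
// for your own depth shadow mapping materials.
mSceneMgr->setShadowTechnique(SHADOWTYPE_TEXTURE_ADDITIVE);
mSceneMgr->setShadowTextureCasterMaterial("MyDepthCaster");
mSceneMgr->setShadowTextureReceiverMaterial("MyDepthReceiver");
mSceneMgr->setShadowTextureSelfShadow(true); // casters can also receive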

If you’re not using depth shadow mapping, OGRE divides shadow casters and receivers into 2 disjoint groups. Simply by turning off shadow casting on an object, you automatically make it a shadow receiver (although this can be disabled by setting the ’receive_shadows’ option to ’false’ in a material script). Similarly, if an object is set as a shadow caster, it cannot receive shadows.

Configuring Texture Shadows

There are a number of settings which will help you configure your texture-based shadows so that they match your requirements.

• [Maximum number of shadow textures], page 167
• [Shadow texture size], page 167
• [Shadow far distance], page 167
• [Shadow texture offset (Directional Lights)], page 167
• [Shadow fade settings], page 168
• [Custom shadow camera setups], page 168
• [Integrated Texture Shadows], page 168

Maximum number of shadow textures

Shadow textures take up texture memory, and to avoid stalling the rendering pipeline Ogre does not reuse the same shadow texture for multiple lights within the same frame. This means that each light which is to cast shadows must have its own shadow texture. In practice, if you have a lot of lights in your scene you would not wish to incur that sort of texture overhead.

You can adjust this manually by simply turning off shadow casting for lights you do not wish to cast shadows. In addition, you can set a maximum limit on the number of shadow textures Ogre is allowed to use by calling SceneManager::setShadowTextureCount. Each frame, Ogre determines the lights which could be affecting the frustum, and then allocates the number of shadow textures it is allowed to use to the lights on a first-come-first-served basis. Any additional lights will not cast shadows that frame.

Note that you can set the number of shadow textures and their size at the same time by using the SceneManager::setShadowTextureSettings method; this is useful because both of the individual calls require the potential creation / destruction of texture resources.
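
A short sketch of both approaches (the size and count values are arbitrary examples):

// Arbitrary example values: three 1024x1024 shadow textures
mSceneMgr->setShadowTextureSettings(1024, 3);
// or individually (each call may recreate texture resources):
mSceneMgr->setShadowTextureSize(1024);
mSceneMgr->setShadowTextureCount(3);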

Shadow texture size

The size of the textures used for rendering the shadow casters into can be altered; clearly using larger textures will give you better quality shadows, but at the expense of greater memory usage. Changing the texture size is done by calling SceneManager::setShadowTextureSize - textures are assumed to be square and you must specify a texture size that is a power of 2. Be aware that each modulative shadow texture will take size*size*3 bytes of texture memory.

Important: if you use the GL render system your shadow texture size can only be larger (in either dimension) than the size of your primary window surface if the hardware supports the Frame Buffer Object (FBO) or Pixel Buffer Object (PBO) extensions. Most modern cards support this now, but be careful of older cards - you can check the ability of the hardware to manage this through ogreRoot->getRenderSystem()->getCapabilities()->hasCapability(RSC_HWRENDER_TO_TEXTURE). If this returns false, and you create a shadow texture larger in any dimension than the primary surface, the rest of the shadow texture will be blank.

Shadow far distance

This determines the distance at which shadows are terminated; it also determines how far into the distance the texture shadows for directional lights are stretched - by reducing this value, or increasing the texture size, you can improve the quality of shadows from directional lights at the expense of closer shadow termination or increased memory usage, respectively.

Shadow texture offset (Directional Lights)

As mentioned above in the directional lights section, the rendering of shadows for directional lights is an approximation that allows us to use a single render to cover a largish area with shadows. This offset parameter affects how far from the camera position the center of the shadow texture is offset, as a proportion of the shadow far distance. The greater this value, the more of the shadow texture is ’useful’ to you since it’s ahead of the camera, but also the further you offset it, the more chance there is of accidentally seeing the edge of the shadow texture at more extreme angles. You change this value by calling SceneManager::setShadowDirLightTextureOffset; the default is 0.6.

Shadow fade settings

Shadows fade out before the shadow far distance so that the termination of shadow is not abrupt. You can configure the start and end points of this fade by calling the SceneManager::setShadowTextureFadeStart and SceneManager::setShadowTextureFadeEnd methods; both take distances as a proportion of the shadow far distance. Because of the inaccuracies caused by using a square texture and a radial fade distance, you cannot use 1.0 as the fade end; if you do you’ll see artifacts at the extreme edges. The default values are 0.7 and 0.9, which serve most purposes, but you can change them if you like.
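
For instance, the offset and fade settings from the last two subsections can be set as follows (the values shown simply restate the documented defaults):

// These calls restate the documented defaults
mSceneMgr->setShadowDirLightTextureOffset(0.6); // proportion of far distance
mSceneMgr->setShadowTextureFadeStart(0.7);      // fade begins at 70%
mSceneMgr->setShadowTextureFadeEnd(0.9);        // fully faded by 90%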

Texture shadows and vertex / fragment programs

When rendering shadow casters into a modulative shadow texture, Ogre turns off all textures, and all lighting contributions except for ambient light, which it sets to the colour of the shadow ([Shadow Colour], page 169). For additive shadows, it renders the casters into a black & white texture instead. This is enough to render shadow casters for fixed-function material techniques; however, where a vertex program is used Ogre doesn’t have so much control. If you use a vertex program in the first pass of your technique, then you must also tell Ogre which vertex program you want it to use when rendering the shadow caster; see [Shadows and Vertex Programs], page 81 for full details.

Custom shadow camera setups

As previously mentioned, one of the downsides of texture shadows is that the texture resolution is finite, and it’s possible to get aliasing when the size of the shadow texel is larger than a screen pixel, due to the projection of the texture. In order to address this, you can specify alternative projection bases by using or creating subclasses of the ShadowCameraSetup class. The default version is called DefaultShadowCameraSetup and this sets up a simple regular frustum for point and spotlights, and an orthographic frustum for directional lights. There is also a PlaneOptimalShadowCameraSetup class which specialises the projection to a plane, thus giving you much better definition provided your shadow receivers exist mostly in a single plane. Other setup classes (e.g. you might create a perspective or trapezoid shadow mapping version) can be created and plugged in at runtime, either on individual lights or on the SceneManager as a whole.
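
A sketch of plugging in one of the stock setups (LiSPSM in this case; ’light’ is an assumed existing Light pointer):

// Plug the stock LiSPSM projection in for all shadow textures...
ShadowCameraSetupPtr lispsm(new LiSPSMShadowCameraSetup());
mSceneMgr->setShadowCameraSetup(lispsm);
// ...or override it for a single light only ('light' is assumed)
light->setCustomShadowCameraSetup(lispsm);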

Integrated Texture Shadows

Texture shadows have one major advantage over stencil shadows - the data used to represent them can be referenced in regular shaders. Whilst the default texture shadow modes (SHADOWTYPE_TEXTURE_MODULATIVE and SHADOWTYPE_TEXTURE_ADDITIVE) automatically render shadows for you, their disadvantage is that because they are generalised add-ons to your own materials, they tend to take more passes of the scene to use. In addition, you don’t have a lot of control over the composition of the shadows.

Here is where ’integrated’ texture shadows step in. Both of the texture shadow types above have alternative versions called SHADOWTYPE_TEXTURE_MODULATIVE_INTEGRATED and SHADOWTYPE_TEXTURE_ADDITIVE_INTEGRATED, where instead of rendering the shadows for you, Ogre just creates the texture shadow and then expects you to use that shadow texture as you see fit when rendering receiver objects in the scene. The downside is that you have to take into account shadow receipt in every one of your materials if you use this option - the upside is that you have total control over how the shadow textures are used. The big advantage here is that you can perform more complex shading, taking into account shadowing, than is possible using the generalised bolt-on approaches, AND you can probably write them in a smaller number of passes, since you know precisely what you need and can combine passes where possible. When you use one of these shadowing approaches, the only difference between additive and modulative is the colour of the casters in the shadow texture (the shadow colour for modulative, black for additive) - the actual calculation of how the texture affects the receivers is of course up to you. No separate modulative pass will be performed, and no splitting of your materials into ambient / per-light / decal etc will occur - absolutely everything is determined by your original material (which may have modulative passes or per-light iteration if you want of course, but it’s not required).

You reference a shadow texture in a material which implements this approach by using the ’content_type shadow’ directive (see [content type], page 45) in your texture_unit. It implicitly references a shadow texture based on the number of times you’ve used this directive in the same pass, and on the light_start option or light-based pass iteration, which might start the light index higher than 0.

7.3 Modulative Shadows

Modulative shadows work by darkening an already rendered scene with a fixed colour. First, the scene is rendered normally containing all the objects which will be shadowed, then a modulative pass is done per light, which darkens areas in shadow. Finally, objects which do not receive shadows are rendered.

There are 2 modulative shadow techniques: stencil-based (See Section 7.1 [Stencil Shadows], page 161: SHADOWTYPE_STENCIL_MODULATIVE) and texture-based (See Section 7.2 [Texture-based Shadows], page 164: SHADOWTYPE_TEXTURE_MODULATIVE). Modulative shadows are an inaccurate lighting model, since they darken the areas of shadow uniformly, irrespective of the amount of light which would have fallen on the shadow area anyway. However, they can give fairly attractive results for a much lower overhead than more ’correct’ methods like Section 7.4 [Additive Light Masking], page 170, and they also combine well with pre-baked static lighting (such as pre-calculated lightmaps), which additive lighting does not. The main thing to consider is that using multiple light sources can result in overly dark shadows (where shadows overlap, which intuitively looks right in fact, but it’s not physically correct) and artifacts when using stencil shadows (See [The Silhouette Edge], page 163).

Shadow Colour

The colour which is used to darken the areas in shadow is set by SceneManager::setShadowColour; it defaults to a dark grey (so that the underlying colour still shows through a bit).

Note that if you’re using texture shadows you have the additional option of using [Integrated Texture Shadows], page 168, rather than being forced to have a separate pass of the scene to render shadows. In this case the ’modulative’ aspect of the shadow technique just affects the colour of the shadow texture.


7.4 Additive Light Masking

Additive light masking is about rendering the scene many times, each time representing a single light contribution whose influence is masked out in areas of shadow. Each pass is combined with (added to) the previous one such that when all the passes are complete, all the light contribution has correctly accumulated in the scene, and each light has been prevented from affecting areas which it should not be able to because of shadow casters. This is an effective technique which results in very realistic looking lighting, but it comes at a price: more rendering passes.

As many technical papers (and game marketing) will tell you, rendering realistic lighting like this requires multiple passes. Being a friendly sort of engine, Ogre frees you from most of the hard work though, and will let you use the exact same material definitions whether you use this lighting technique or not (for the most part, see [Pass Classification and Vertex Programs], page 171). In order to do this technique, Ogre automatically categorises the Section 3.1.2 [Passes], page 20 you define in your materials into 3 types:

1. ambient: Passes categorised as ’ambient’ include any base pass which is not lit by any particular light, i.e. it occurs even if there is no ambient light in the scene. The ambient pass always happens first, and sets up the initial depth value of the fragments, and the ambient colour if applicable. It also includes any emissive / self illumination contribution. Only textures which affect ambient light (e.g. ambient occlusion maps) should be rendered in this pass.

2. diffuse/specular: Passes categorised as ’diffuse/specular’ (or ’per-light’) are rendered once per light, and each pass contributes the diffuse and specular colour from that single light as reflected by the diffuse / specular terms in the pass. Areas in shadow from that light are masked and are thus not updated. The resulting masked colour is added to the existing colour in the scene. Again, no textures are used in this pass (except for textures used for lighting calculations such as normal maps).

3. decal: Passes categorised as ’decal’ add the final texture colour to the scene, which is modulated by the accumulated light built up from all the ambient and diffuse/specular passes.

In practice, Section 3.1.2 [Passes], page 20 rarely fall nicely into just one of these categories. For each Technique, Ogre compiles a list of ’Illumination Passes’, which are derived from the user defined passes, but can be split, to ensure that the divisions between illumination pass categories can be maintained. For example, if we take a very simple material definition:

material TestIllumination
{
    technique
    {
        pass
        {
            ambient 0.5 0.2 0.2
            diffuse 1 0 0
            specular 1 0.8 0.8 15
            texture_unit
            {
                texture grass.png
            }
        }
    }
}

Ogre will split this into 3 illumination passes, which will be the equivalent of this:


material TestIlluminationSplitIllumination
{
    technique
    {
        // Ambient pass
        pass
        {
            ambient 0.5 0.2 0.2
            diffuse 0 0 0
            specular 0 0 0
        }

        // Diffuse / specular pass
        pass
        {
            scene_blend add
            iteration once_per_light
            diffuse 1 0 0
            specular 1 0.8 0.8 15
        }

        // Decal pass
        pass
        {
            scene_blend modulate
            lighting off
            texture_unit
            {
                texture grass.png
            }
        }
    }
}

So as you can see, even a simple material requires a minimum of 3 passes when using this shadow technique, and in fact it requires (num_lights + 2) passes in the general sense. You can use more passes in your original material and Ogre will cope with that too, but be aware that each pass may turn into multiple ones if it uses more than one type of light contribution (ambient vs diffuse/specular) and / or has texture units. The main nice thing is that you get the full multipass lighting behaviour even if you don’t define your materials in terms of it, meaning that your material definitions can remain the same no matter what lighting approach you decide to use.

Manually Categorising Illumination Passes

Alternatively, if you want more direct control over the categorisation of your passes, you can use the [illumination stage], page 30 option in your pass to explicitly assign a pass unchanged to an illumination stage. This way you can make sure you know precisely how your material will be rendered under additive lighting conditions.


Pass Classification and Vertex Programs

Ogre is pretty good at classifying and splitting your passes to ensure that the multipass rendering approach required by additive lighting works correctly without you having to change your material definitions. However, there is one exception; when you use vertex programs, the normal lighting attributes ambient, diffuse, specular etc are not used, because all of that is determined by the vertex program. Ogre has no way of knowing what you’re doing inside that vertex program, so you have to tell it.

In practice this is very easy. Even though your vertex program could be doing a lot of complex, highly customised processing, it can still be classified into one of the 3 types listed above. All you need to do to tell Ogre what you’re doing is to use the pass attributes ambient, diffuse, specular and self_illumination, just as if you were not using a vertex program. Sure, these attributes do nothing (as far as rendering is concerned) when you’re using vertex programs, but it’s the easiest way to indicate to Ogre which light components you’re using in your vertex program. Ogre will then classify and potentially split your programmable pass based on this information - it will leave the vertex program as-is (so that any split passes will respect any vertex modification that is being done).

Note that when classifying a diffuse/specular programmable pass, Ogre checks to see whether you have indicated that the pass can be run once per light (iteration once_per_light). If so, the pass is left intact, including its vertex and fragment programs. However, if this attribute is not included in the pass, Ogre tries to split off the per-light part, and in doing so it will disable the fragment program, since in the absence of the ’iteration once_per_light’ attribute it can only assume that the fragment program is performing decal work and hence must not be used per light.

So clearly, when you use additive light masking as a shadow technique, you need to make sure that the programmable passes you use are properly set up so that they can be classified correctly. However, also note that the changes you have to make to ensure the classification is correct do not affect the way the material renders when you choose not to use additive lighting, so the principle that you should be able to use the same material definitions for all lighting scenarios still holds. Here is an example of a programmable material which will be classified correctly by the illumination pass classifier:

// Per-pixel normal mapping. Any number of lights, diffuse and specular
material Examples/BumpMapping/MultiLightSpecular
{
    technique
    {
        // Base ambient pass
        pass
        {
            // ambient only, not needed for rendering, but as information
            // to lighting pass categorisation routine
            ambient 1 1 1
            diffuse 0 0 0
            specular 0 0 0 0
            // Really basic vertex program
            vertex_program_ref Ogre/BasicVertexPrograms/AmbientOneTexture
            {
                param_named_auto worldViewProj worldviewproj_matrix
                param_named_auto ambient ambient_light_colour
            }
        }
        // Now do the lighting pass
        // NB we don't do decal texture here because this is repeated per light
        pass
        {
            // set ambient off, not needed for rendering, but as information
            // to lighting pass categorisation routine
            ambient 0 0 0
            // do this for each light
            iteration once_per_light

            scene_blend add

            // Vertex program reference
            vertex_program_ref Examples/BumpMapVPSpecular
            {
                param_named_auto lightPosition light_position_object_space 0
                param_named_auto eyePosition camera_position_object_space
                param_named_auto worldViewProj worldviewproj_matrix
            }

            // Fragment program
            fragment_program_ref Examples/BumpMapFPSpecular
            {
                param_named_auto lightDiffuse light_diffuse_colour 0
                param_named_auto lightSpecular light_specular_colour 0
            }

            // Base bump map
            texture_unit
            {
                texture NMBumpsOut.png
                colour_op replace
            }
            // Normalisation cube map
            texture_unit
            {
                cubic_texture nm.png combinedUVW
                tex_coord_set 1
                tex_address_mode clamp
            }
            // Normalisation cube map #2
            texture_unit
            {
                cubic_texture nm.png combinedUVW
                tex_coord_set 1
                tex_address_mode clamp
            }
        }

        // Decal pass
        pass
        {
            lighting off
            // Really basic vertex program
            vertex_program_ref Ogre/BasicVertexPrograms/AmbientOneTexture
            {
                param_named_auto worldViewProj worldviewproj_matrix
                param_named ambient float4 1 1 1 1
            }
            scene_blend dest_colour zero
            texture_unit
            {
                texture RustedMetal.jpg
            }
        }
    }
}

Note that if you’re using texture shadows you have the additional option of using [Integrated Texture Shadows], page 168, rather than being forced to use this explicit sequence - allowing you to compress the number of passes into a much smaller number at the expense of defining an upper number of shadow casting lights. In this case the ’additive’ aspect of the shadow technique just affects the colour of the shadow texture, and it’s up to you to combine the shadow textures in your receivers however you like.

Static Lighting

Despite their power, additive lighting techniques have an additional limitation; they do not combine well with pre-calculated static lighting in the scene. This is because they are based on the principle that shadow is an absence of light, but since static lighting in the scene already includes areas of light and shadow, additive lighting cannot remove light to create new shadows. Therefore, if you use the additive lighting technique you must either use it exclusively as your lighting solution (and you can combine it with per-pixel lighting to create a very impressive dynamic lighting solution), or you must use [Integrated Texture Shadows], page 168 to combine the static lighting according to your chosen approach.


8 Animation

OGRE supports a pretty flexible animation system that allows you to script animation for several different purposes:

Section 8.1 [Skeletal Animation], page 175
Mesh animation using a skeletal structure to determine how the mesh deforms.

Section 8.3 [Vertex Animation], page 176
Mesh animation using snapshots of vertex data to determine how the shape of the mesh changes.

Section 8.4 [SceneNode Animation], page 180
Animating SceneNodes automatically to create effects like camera sweeps, objects following predefined paths, etc.

Section 8.5 [Numeric Value Animation], page 180
Using OGRE’s extensible class structure to animate any value.

8.1 Skeletal Animation

Skeletal animation is a process of animating a mesh by moving a set of hierarchical bones within the mesh, which in turn moves the vertices of the model according to the bone assignments stored in each vertex. An alternative term for this approach is ’skinning’. The usual way of creating these animations is with a modelling tool such as Softimage XSI, Milkshape 3D, Blender, 3D Studio or Maya among others. OGRE provides exporters to allow you to get the data out of these modellers and into the engine. See Section 4.1 [Exporters], page 140.

There are many grades of skeletal animation, and not all engines (or modellers for that matter) support all of them. OGRE supports the following features:
• Each mesh can be linked to a single skeleton
• Unlimited bones per skeleton
• Hierarchical forward-kinematics on bones
• Multiple named animations per skeleton (e.g. ’Walk’, ’Run’, ’Jump’, ’Shoot’ etc)
• Unlimited keyframes per animation
• Linear or spline-based interpolation between keyframes
• A vertex can be assigned to multiple bones and assigned weightings for smoother skinning
• Multiple animations can be applied to a mesh at the same time, again with a blend weighting

Skeletons and the animations which go with them are held in .skeleton files, which are produced by the OGRE exporters. These files are loaded automatically when you create an Entity based on a Mesh which is linked to the skeleton in question. You then use Section 8.2 [Animation State], page 176 to set the use of animation on the entity in question.

Skeletal animation can be performed in software, or implemented in shaders (hardware skinning). Clearly the latter is preferable, since it takes some of the work away from the CPU and gives it to the graphics card, and also means that the vertex data does not need to be re-uploaded every frame. This is especially important for large, detailed models. You should try to use hardware skinning wherever possible; this basically means assigning a material which has a vertex program powered technique. See [Skeletal Animation in Vertex Programs] for more details. Skeletal animation can be combined with vertex animation, see Section 8.3.3 [Combining Skeletal and Vertex Animation], page 179.

8.2 Animation State

When an entity containing animation of any type is created, it is given an ’animation state’ object per animation to allow you to specify the animation state of that single entity (you can animate multiple entities using the same animation definitions; OGRE sorts the reuse out internally).

You can retrieve a pointer to the AnimationState object by calling Entity::getAnimationState. You can then call methods on this returned object to update the animation, probably in the frameStarted event. Each AnimationState needs to be enabled using the setEnabled method before the animation it refers to will take effect, and you can set both the weight and the time position (where appropriate) to affect the application of the animation using the corresponding methods. AnimationState also has a very simple method ’addTime’ which allows you to alter the animation position incrementally, and it will automatically loop for you. addTime can take positive or negative values (so you can reverse the animation if you want).
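As a minimal sketch (the entity, the ’Walk’ animation name and the cached member variable are assumptions for illustration, not part of the API):

    // Setup: fetch and enable the state once, e.g. after creating the Entity
    Ogre::AnimationState* mWalkState; // cached for use in the frame loop

    void setupAnimation(Ogre::Entity* entity)
    {
        mWalkState = entity->getAnimationState("Walk"); // assumed animation name
        mWalkState->setEnabled(true);  // no effect until enabled
        mWalkState->setWeight(1.0f);   // full influence when blending
    }

    // In your FrameListener:
    bool frameStarted(const Ogre::FrameEvent& evt)
    {
        // addTime loops automatically; a negative value plays in reverse
        mWalkState->addTime(evt.timeSinceLastFrame);
        return true;
    }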

8.3 Vertex Animation

Vertex animation is about using information about the movement of vertices directly to animate the mesh. Each track in a vertex animation targets a single VertexData instance. Vertex animation is stored inside the .mesh file since it is tightly linked to the vertex structure of the mesh.

There are actually two subtypes of vertex animation, for reasons which will be discussed in a moment.

Section 8.3.1 [Morph Animation], page 178
Morph animation is a very simple technique which interpolates mesh snapshots along a keyframe timeline. Morph animation has a direct correlation to old-school character animation techniques used before skeletal animation was widely used.

Section 8.3.2 [Pose Animation], page 178
Pose animation is about blending multiple discrete poses, expressed as offsets to the base vertex data, with different weights to provide a final result. Pose animation’s most obvious use is facial animation.

Why two subtypes?

So, why two subtypes of vertex animation? Couldn’t both be implemented using the same system? The short answer is yes; in fact you can implement both types using pose animation. But for very good reasons we decided to allow morph animation to be specified separately, since the subset of features that it uses is both easier to define and has lower requirements on hardware shaders, if animation is implemented through them. If you don’t care about the reasons why these are implemented differently, you can skip to the next part.


Morph animation is a simple approach where we have a whole series of snapshots of vertex data which must be interpolated, e.g. a running animation implemented as morph targets. Because this is based on simple snapshots, it’s quite fast to use when animating an entire mesh because it’s a simple linear change between keyframes. However, this simplistic approach does not support blending between multiple morph animations. If you need animation blending, you are advised to use skeletal animation for full-mesh animation, and pose animation for animation of subsets of meshes or where skeletal animation doesn’t fit - for example facial animation. For animating in a vertex shader, morph animation is quite simple and just requires the 2 vertex buffers (one the original position buffer) of absolute position data, and an interpolation factor. Each track in a morph animation references a unique set of vertex data.

Pose animation is more complex. Like morph animation, each track references a single unique set of vertex data, but unlike morph animation, each keyframe references 1 or more ’poses’, each with an influence level. A pose is a series of offsets to the base vertex data, and may be sparse - i.e. it may not reference every vertex. Because they’re offsets, they can be blended - both within a track and between animations. This set of features is very well suited to facial animation.

For example, let’s say you modelled a face (one set of vertex data), and defined a set of poses which represented the various phonetic positions of the face. You could then define an animation called ’SayHello’, containing a single track which referenced the face vertex data, and which included a series of keyframes, each of which referenced one or more of the facial positions at different influence levels - the combination of which over time made the face form the shapes required to say the word ’hello’. Since the poses are only stored once, but can be referenced many times in many animations, this is a very powerful way to build up a speech system.
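To make this concrete, the following hedged sketch builds a tiny version of such an animation in code; the pose names, vertex indices, offsets and timings are invented for illustration, and in practice this data normally comes from your modelling tool via the exporters:

    // Poses are sparse offsets to the base vertex data. Target 0 means the
    // shared geometry; submesh targets start at 1.
    Ogre::Pose* ah = mesh->createPose(0, "phoneme_ah");
    ah->addVertex(52, Ogre::Vector3(0, -0.5, 0));   // illustrative jaw vertex
    Ogre::Pose* oh = mesh->createPose(0, "phoneme_oh");
    oh->addVertex(52, Ogre::Vector3(0, -0.3, 0.2));

    // One track targeting the face vertex data, with keyframes that
    // reference the poses at varying influence levels
    Ogre::Animation* anim = mesh->createAnimation("SayHello", 1.0f);
    Ogre::VertexAnimationTrack* track = anim->createVertexTrack(0, Ogre::VAT_POSE);

    Ogre::VertexPoseKeyFrame* kf = track->createVertexPoseKeyFrame(0.0f);
    kf->addPoseReference(0, 0.0f);    // pose index, influence
    kf = track->createVertexPoseKeyFrame(0.5f);
    kf->addPoseReference(0, 1.0f);    // full ’ah’
    kf = track->createVertexPoseKeyFrame(1.0f);
    kf->addPoseReference(1, 0.8f);    // mostly ’oh’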

The downside of pose animation is that it can be more difficult to set up, requiring poses to be separately defined and then referenced in the keyframes. Also, since it uses more buffers (one for the base data, and one for each active pose), if you’re animating in hardware using vertex shaders you need to keep an eye on how many poses you’re blending at once. You define a maximum supported number in your vertex program definition, via the includes_pose_animation material script entry, see [Pose Animation in Vertex Programs].

So, by partitioning the vertex animation approaches into 2, we keep the simple morph technique easy to use, whilst still allowing all the powerful techniques to be used. Note that morph animation cannot be blended with other types of vertex animation on the same vertex data (pose animation or other morph animation); pose animation can be blended with other pose animation though, and both types can be combined with skeletal animation. This combination limitation applies per set of vertex data though, not globally across the mesh (see below). Also note that all morph animation can be expressed (in a more complex fashion) as pose animation, but not vice versa.

Subtype applies per track

It’s important to note that the subtype in question is held at a track level, not at the animation or mesh level. Since tracks map onto VertexData instances, this means that if your mesh is split into SubMeshes, each with their own dedicated geometry, you can have one SubMesh animated using pose animation, and others animated with morph animation (or not vertex animated at all).


For example, a common set-up for a complex character which needs both skeletal and facial animation might be to split the head into a separate SubMesh with its own geometry, then apply skeletal animation to both submeshes, and pose animation to just the head.

To see how to apply vertex animation, see Section 8.2 [Animation State], page 176.

Vertex buffer arrangements

When using vertex animation in software, vertex buffers need to be arranged such that vertex positions reside in their own hardware buffer. This is to avoid having to upload all the other vertex data when updating, which would quickly saturate the GPU bus. When using the OGRE .mesh format and the tools / exporters that go with it, OGRE organises this for you automatically. But if you create buffers yourself, you need to be aware of the layout arrangements.

To do this, you have a set of helper functions in Ogre::Mesh. See the API Reference entries for Ogre::VertexData::reorganiseBuffers() and Ogre::VertexDeclaration::getAutoOrganisedDeclaration(). The latter will turn a vertex declaration into one which is recommended for the usage you’ve indicated, and the former will reorganise the contents of a set of buffers to conform to that layout.
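As a hedged sketch of how the two calls fit together (check the exact signatures in the API reference for your OGRE version):

    // Reorganise a mesh’s shared geometry into the recommended layout,
    // e.g. with positions split out into their own hardware buffer
    Ogre::VertexData* vdata = mesh->sharedVertexData;

    Ogre::VertexDeclaration* newDecl =
        vdata->vertexDeclaration->getAutoOrganisedDeclaration(
            true,    // the mesh uses skeletal animation
            true);   // the mesh uses vertex animation

    vdata->reorganiseBuffers(newDecl);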

8.3.1 Morph Animation

Morph animation works by storing snapshots of the absolute vertex positions in each keyframe, and interpolating between them. Morph animation is mainly useful for animating objects which could not be adequately handled using skeletal animation; this is mostly objects that have to radically change structure and shape as part of the animation, such that a skeletal structure isn’t appropriate.
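As an illustration, a morph track can be set up along these lines (a hedged sketch; startPositions and squashedPositions are assumed to be position-only hardware vertex buffers created elsewhere):

    // Two absolute position snapshots, interpolated over 2 seconds;
    // handle 0 targets the shared geometry
    Ogre::Animation* anim = mesh->createAnimation("Squash", 2.0f);
    Ogre::VertexAnimationTrack* track = anim->createVertexTrack(0, Ogre::VAT_MORPH);

    track->createVertexMorphKeyFrame(0.0f)->setVertexBuffer(startPositions);
    track->createVertexMorphKeyFrame(2.0f)->setVertexBuffer(squashedPositions);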

Because absolute positions are used, it is not possible to blend more than one morph animation on the same vertex data; you should use skeletal animation if you want to include animation blending since it is much more efficient. If you activate more than one animation which includes morph tracks for the same vertex data, only the last one will actually take effect. This also means that the ’weight’ option on the animation state is not used for morph animation.

Morph animation can be combined with skeletal animation if required, see Section 8.3.3 [Combining Skeletal and Vertex Animation], page 179. Morph animation can also be implemented in hardware using vertex shaders, see [Morph Animation in Vertex Programs].

8.3.2 Pose Animation

Pose animation allows you to blend together potentially multiple vertex poses at different influence levels into a final vertex state. A common use for this is facial animation, where each facial expression is placed in a separate animation, and influences used to either blend from one expression to another, or to combine full expressions if each pose only affects part of the face.


In order to do this, pose animation uses a set of reference poses defined in the mesh, expressed as offsets to the original vertex data. It does not require that every vertex has an offset - those that don’t are left alone. When blending in software these vertices are completely skipped - when blending in hardware (which requires a vertex entry for every vertex), zero offsets for vertices which are not mentioned are automatically created for you.

Once you’ve defined the poses, you can refer to them in animations. Each pose animation track refers to a single set of geometry (either the shared geometry of the mesh, or dedicated geometry on a submesh), and each keyframe in the track can refer to one or more poses, each with its own influence level. The weight applied to the entire animation scales these influence levels too. You can define many keyframes which cause the blend of poses to change over time. The absence of a pose reference in a keyframe when it is present in a neighbouring one causes it to be treated as an influence of 0 for interpolation.

You should be careful how many poses you apply at once. When performing pose animation in hardware (see [Pose Animation in Vertex Programs]), every active pose requires another vertex buffer to be added to the shader, and when animating in software it will also take longer the more active poses you have. Bear in mind that if you have 2 poses in one keyframe, and a different 2 in the next, that actually means there are 4 active poses when interpolating between them.

You can combine pose animation with skeletal animation, see Section 8.3.3 [Combining Skeletal and Vertex Animation], page 179, and you can also hardware accelerate the application of the blend with a vertex shader, see [Pose Animation in Vertex Programs].

8.3.3 Combining Skeletal and Vertex Animation

Skeletal animation and vertex animation (of either subtype) can both be enabled on the same entity at the same time (see Section 8.2 [Animation State], page 176). The effect of this is that vertex animation is applied first to the base mesh, then skeletal animation is applied to the result. This allows you, for example, to facially animate a character using pose vertex animation, whilst performing the main movement animation using skeletal animation.

Combining the two is, from a user perspective, as simple as just enabling both animations at the same time. When it comes to using this feature efficiently though, there are a few points to bear in mind:

• [Combined Hardware Skinning], page 179

• [Submesh Splits], page 180

Combined Hardware Skinning

For complex characters it is a very good idea to implement hardware skinning by including a technique in your materials which has a vertex program which can perform the kinds of animation you are using in hardware. See [Skeletal Animation in Vertex Programs], [Morph Animation in Vertex Programs] and [Pose Animation in Vertex Programs].

When combining animation types, your vertex programs must support both types of animation that the combined mesh needs, otherwise hardware skinning will be disabled. You should implement the animation in the same way that OGRE does, i.e. perform vertex animation first, then apply skeletal animation to the result of that. Remember that the implementation of morph animation passes 2 absolute snapshot buffers of the ’from’ and ’to’ keyframes, along with a single parametric value, which you have to linearly interpolate, whilst pose animation passes the base vertex data plus ’n’ pose offset buffers, and ’n’ parametric weight values.

Submesh Splits

If you only need to combine vertex and skeletal animation for a small part of your mesh, e.g. the face, you could split your mesh into 2 parts, one which needs the combination and one which does not, to reduce the calculation overhead. Note that it will also reduce vertex buffer usage, since vertex keyframe / pose buffers will also be smaller. Note that if you use hardware skinning you should then implement 2 separate vertex programs, one which does only skeletal animation, and the other which does skeletal and vertex animation.

8.4 SceneNode Animation

SceneNode animation is created from the SceneManager in order to animate the movement of SceneNodes, to make any attached objects move around automatically. You can see this performing a camera swoop in Demo_CameraTrack, or controlling how the fish move around in the pond in Demo_Fresnel.

At its heart, scene node animation is mostly the same code which animates the underlying skeleton in skeletal animation. After creating the main Animation using SceneManager::createAnimation you can create a NodeAnimationTrack per SceneNode that you want to animate, and create keyframes which control its position, orientation and scale, which can be interpolated linearly or via splines. You use Section 8.2 [Animation State], page 176 in the same way as you do for skeletal/vertex animation, except you obtain the state from SceneManager instead of from an individual Entity. Animations are applied automatically every frame, or the state can be applied manually in advance using the applySceneAnimations() method on SceneManager. See the API reference for full details of the interface for configuring scene animations.
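A minimal sketch of a scene node sweep (the animation name, length, waypoints and node variable are all placeholders):

    // Create a 10 second animation with spline interpolation
    Ogre::Animation* anim = sceneMgr->createAnimation("CameraSweep", 10.0f);
    anim->setInterpolationMode(Ogre::Animation::IM_SPLINE);

    // One track per node you want to animate
    Ogre::NodeAnimationTrack* track = anim->createNodeTrack(0, node);
    track->createNodeKeyFrame(0.0f)->setTranslate(Ogre::Vector3(0, 0, 0));
    track->createNodeKeyFrame(5.0f)->setTranslate(Ogre::Vector3(100, 50, 0));
    track->createNodeKeyFrame(10.0f)->setTranslate(Ogre::Vector3(0, 0, 0));

    // Drive it exactly like any other animation, but via the SceneManager
    Ogre::AnimationState* state = sceneMgr->createAnimationState("CameraSweep");
    state->setEnabled(true);
    // each frame: state->addTime(evt.timeSinceLastFrame);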

8.5 Numeric Value Animation

Apart from the specific animation types, which may well comprise the most common uses of the animation framework, you can also use animations to alter any value which is exposed via the [AnimableObject], page 180 interface.


AnimableObject

AnimableObject is an abstract interface that any class can extend in order to provide access to a number of [AnimableValue]s, page 181. It holds a ’dictionary’ of the available animable properties which can be enumerated via the getAnimableValueNames method, and when its createAnimableValue method is called, it returns a reference to a value object which forms a bridge between the generic animation interfaces and the underlying specific object property.

One example of this is the Light class. It extends AnimableObject and provides AnimableValues for properties such as "diffuseColour" and "attenuation". Animation tracks can be created for these values and thus properties of the light can be scripted to change. Other objects, including your custom objects, can extend this interface in the same way to provide animation support to their properties.
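As a hedged sketch, animating a light’s diffuse colour might look like this (the light name, animation name and timings are illustrative):

    // Bridge object connecting the track to the light’s property
    Ogre::Light* light = sceneMgr->createLight("PulseLight");
    Ogre::AnimableValuePtr diffuse = light->createAnimableValue("diffuseColour");

    Ogre::Animation* anim = sceneMgr->createAnimation("LightPulse", 4.0f);
    Ogre::NumericAnimationTrack* track = anim->createNumericTrack(0, diffuse);

    track->createNumericKeyFrame(0.0f)->setValue(Ogre::AnyNumeric(Ogre::ColourValue::White));
    track->createNumericKeyFrame(2.0f)->setValue(Ogre::AnyNumeric(Ogre::ColourValue::Red));
    track->createNumericKeyFrame(4.0f)->setValue(Ogre::AnyNumeric(Ogre::ColourValue::White));

    Ogre::AnimationState* state = sceneMgr->createAnimationState("LightPulse");
    state->setEnabled(true);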

AnimableValue

When implementing custom animable properties, you also have to implement a number of methods on the AnimableValue interface - basically anything which has been marked as unimplemented. These are not pure virtual methods simply because you only have to implement the methods required for the type of value you’re animating. Again, see the examples in Light to see how this is done.
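For instance, a custom Real-valued property could be exposed along these lines (a hedged sketch; MyObject, getGlow and setGlow are hypothetical, and the exact set of virtuals should be checked against your OGRE version):

    class GlowValue : public Ogre::AnimableValue
    {
        MyObject* mOwner; // hypothetical owning class
    public:
        GlowValue(MyObject* owner)
            : Ogre::AnimableValue(REAL), mOwner(owner) {}

        // Record the current state as the base for delta application
        void setCurrentStateAsBaseValue(void)
        { setAsBaseValue(mOwner->getGlow()); }

        // Only the overloads matching our ValueType need implementing
        void setValue(Ogre::Real val) { mOwner->setGlow(val); }
        void applyDeltaValue(Ogre::Real delta)
        { mOwner->setGlow(mOwner->getGlow() + delta); }
    };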