

Object Orientated Course Manual

Unit 1: Introduction

Level: Beginner
Time: This unit should not take you more than 1 hour
Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).

• An image you can load into Definiens for this exercise. For this unit the Landsat 7 image orthol7_20423xs100999.img (Row: 204, Path: 23, Date: 10/09/1999) over North Wales, available from the Landmap service, is recommended.

By the end of this unit you should:

• Be aware of the purpose of the Definiens Developer software and the types of projects the software can be used for.

• Know how to create a new project within Definiens and the options available for creating a project.

• Know the main elements of the Definiens Developer user interface.

• Know how to change the viewing properties.

1.1. Background

This section will provide you with a brief outline of the purpose of the Definiens software and an overview of the collection of software tools Definiens have to offer. For more detail on Definiens products please visit the Definiens website http://www.definiens.com or contact them directly.

1.1.1 Purpose

The purpose of Definiens Developer is to facilitate the development of object oriented, rule based classification procedures. Rather than simply classifying the individual pixels within the scene independently, the image is split (segmented) into regions representing objects within the scene. Working with objects rather than pixels has numerous benefits over traditional pixel based analysis; for example, the spatial relationships between objects can be represented, or the shape of an object analysed. Definiens Developer provides an easy to use (although relatively steep learning curve) interface to represent the classification rules and a visual scripting interface (processes) to control the segmentation and classification process.

1.1.2 The Suite of Definiens Tools

Definiens Developer is one of a number of tools which Definiens produce, which together form the Definiens Enterprise Image Intelligence™ suite (Figure 1.1).


Figure 1.1. Definiens Enterprise Image Intelligence™ Suite, Client and Server software.

(Source: Definiens.com)

The software can be divided into three categories: End-user, Developer and Server-side, where the use of each depends on your role within the image processing chain. The end-user products include Definiens Architect, Definiens Analyst and Definiens Viewer. Definiens Viewer is the simplest of the three and is intended for a user to simply view results that have been previously processed. Definiens Analyst allows a user to import and execute fully automatic classification processes (previously developed) and view the results. Finally, Definiens Architect provides a framework within which a user can run fully and semi-automatic image analysis programs, written using Definiens processes, and provides a mechanism for the manual correction of results.

Definiens Developer encapsulates the functionality of the Viewer, Analyst and Architect products, with the additional functionality to develop the ruleware (classification algorithms) required to process image data. Finally, the Definiens eCognition™ Server provides functionality to process your images through a server infrastructure where the ruleware (developed using Definiens Developer) can be executed simultaneously across a series of servers. Additionally, this infrastructure supports automatic tiling and stitching of images and results, allowing very large images to be processed efficiently.

1.1.3. Help while using Definiens

Definiens provide customer (and user to user) support through their online forums (http://forum.definiens.com/index.php), where you can post your problems or read previous answers. Additionally, there is a section where you can download sample rulesets. Definiens also make a number of documents available online, including presentations, case studies, white papers and scientific papers (http://www.definiens.com/resource-center_61_24_0.html).


Additionally, your installation of Definiens Developer contains sample data, a user guide and a technical reference guide, which contain a wealth of information.

1.2. Technical Specification

For a full technical specification of the software please refer to the Definiens User Guide and Reference Guide. These documents are usually installed alongside the software in C:\Program Files\Definiens Developer 7.0\UserGuides.

1.3. Understanding the User Interface

1.3.1. Starting Definiens Developer

Definiens Developer is normally available from the Windows start menu: Start > All Programs > Definiens Developer 7.0

1.3.2 The User Interface

Once started, you will be presented with an interface similar to the one shown in Figure 1.2.

Figure 1.2. The User interface for Definiens Developer 7.0.


Where the interface differs from Figure 1.2, you will need to toggle the view buttons. The view buttons produce the following interfaces (Figure 1.3), but for this series of units you will only require the Developer interface (4), as shown in Figure 1.2.

1) The workspace interface 2) Analysis Interface

3) Results Interface 4) Developer Interface

Figure 1.3. The interface views available with Definiens Developer.

1.3.2.1 Components of the User Interface


Figure 1.4. The user interface with the components labelled.

Data Viewer: The image and classification data viewer. The viewer allows you to view the imagery you are classifying, including manipulating the band order and image stretching.

Process Tree: The window within which you develop your ruleset script.

Class Hierarchy: The window displaying the classes you develop.

Image Object Information: This window displays selected feature values for a selected object.

Feature View: This window displays a list of all the available features within Definiens Developer and allows the current image objects to be coloured (green for high values and blue for low values) according to their value for a selected feature.

1.3.2.2 Toolbar Icons

Table 1.1 provides a glossary of the icons available on the various toolbars within Definiens Developer.

Icon Description

File Toolbar

Create New Project

Open Existing Project


Save Project

New Workspace

Open Workspace

Save Workspace

Predefined Import

View Settings Toolbar

Workspace view

Analysis View

Results View

Developer View

View Image data

View Classification

View Samples (for Nearest Neighbour Classification)

Feature View

Toggle Object Means and Pixel Data

Toggle Object Outlines

Toggle Polygons

Toggle Skeletons

Toggle Image View and Project Pixel View.

Single Layer (Grey Scale)

Mix Three Layers RGB

Show Previous Layer

Show Next Layer

Select Layers to be Displayed and Image Stretch

View Navigation Toolbar

Delete Level

Select Level For Display

Down a Level

Up a Level

Tools Toolbar

Object Information

Object Table

Undo process

Redo process

Class hierarchy


Process tree

Feature View

Manage customised features

Toggle manual editing toolbar

Zoom Toolbar

Manual Editing Toolbar

Single Selection

Polygon Selection

Line Selection

Rectangle Selection

Cut Object

Merge Object Selection

Merge Selected Objects

Clear Merge Object Selection

Filter Classes for Multiple Image Object Selection

Classify Image Objects

Samples Toolbar

Select Samples

Drag and Click Brush to Assign Image Object Samples

Sample Editor

Sample Selection Information

Toggle Sample Navigation

Table 1.1. A glossary of toolbar icons.

1.4. Creating Your First Definiens Developer Project

The first step when using Definiens Developer is to load your data into the software. During these units this will always be done through the creation of an individual project. If you later need to process very large datasets or a large number of individual images, a workspace provides a convenient and efficient way to store and manage these data. Please refer to the Definiens Developer user guide for information about workspaces.

To create your new project either select File > New Project or select the new project icon ( ). You will be presented with the following dialog box (Figure 1.5a), through which you will enter your datasets and the parameters to create your project. In this example you will load the image orthol7_20423xs100999.img (Figure 1.5b) by selecting ‘Insert’ next to the Image Layer List. Once you have loaded your data you need to define the


layer aliases, as shown in Figure 1.5b, where bands 1-6 correspond with the aliases BLUE, GREEN, RED, NIR, SWIR1 and SWIR2, respectively. To bring up the layer properties dialog, double click on each image band in turn, or select the band and click the edit button.

a) Empty Create Project Dialog b) Create Project Dialog with Layer Properties

Figure 1.5. The create project dialog.

You can also add thematic information in the list below the image layers list, which can be used during your classification and segmentation, for example a polygon shapefile of buildings. The next step is to give your project a name, in this case ‘Example 1’, and to check that the projection information for your image has been correctly read. If this information is incorrect then you need to check the ‘Pixel size (unit)’ on the right-hand side. The ‘Pixel size (unit)’ should be set to ‘auto’, the unit to ‘meters’ and the ‘Use geocoding’ option ticked on. You also have the option of re-sampling your imagery to a resolution of your choice using the ‘Resolution (m/pxl)’ dialog box, and of selecting a subset using the ‘Subset Selection’ button, which presents a dialog similar to the one shown in Figure 1.6. To select a subset you can either draw a red box on the image in the dialog or provide the pixel limits of your subset. Before finalising the project and selecting OK we will subset the image (as shown in Figure 1.6), where the minimum X is set to 4600, the maximum X to 5200, the minimum Y to 4400 and the maximum Y to 5000.


Figure 1.6. Create a subset of the datasets for the project.
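In array terms, the subset just defined is a simple pixel-index crop. The sketch below is an illustration of that idea only (a hypothetical numpy array standing in for the scene, not Definiens code); it assumes a row-major array where rows correspond to Y and columns to X:

```python
import numpy as np

# Hypothetical stand-in for the scene: rows (Y) by columns (X).
scene = np.zeros((5000, 5200), dtype=np.uint8)

# The pixel limits entered in the Subset Selection dialog.
min_x, max_x = 4600, 5200
min_y, max_y = 4400, 5000

# The subset is a slice: rows first (Y), then columns (X).
subset = scene[min_y:max_y, min_x:max_x]

print(subset.shape)  # (600, 600) -> a 600 x 600 pixel subset
```

The resulting project therefore contains a 600 x 600 pixel image rather than the full scene, which keeps the number of objects (and the processing times) in the later units manageable.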

Now click OK to create the project and you will move back to the Definiens Developer interface, Figure 1.7.

Figure 1.7. The Definiens Developer interface once the project has been loaded.


1.5. Using the Display Options within Definiens Developer

1.5.1. Using the Zoom Toolbar

Once the project has been loaded you can pan and zoom around the data in the display region using the zoom toolbar, shown below in Figure 1.8.

Figure 1.8. Zoom Functions toolbar

If the zoom functions toolbar is not displayed you can turn it on using the View > Toolbars menu.

1.5.2. Selecting Bands for Display

To select the layer(s) to be displayed you need to use the ‘Edit Image Layer Mixing’ dialog (Figure 1.9), available via the following icon ( ).

Figure 1.9. The Edit Image Layer Mixing dialog.

Using the ‘Layer Mixing’ drop down menu you can select the number of layers to be mixed in the display, and then by selecting the individual layers you may turn them on and off (or increase their weight), Figure 1.20.

Page 11: Object Orientated Course Manuel

a) One layer grey level b) 3 layer RGB model c) Six layer mixing

Figure 1.20. Selecting the layers for display.

Also, you can adjust the equalisation (or stretch) of the data layers being displayed using the ‘Equalizing’ drop down menu. The available options are ‘Linear (1.00%)’, ‘Standard Deviation (3.00)’, ‘Gamma Correction (0.50)’, ‘Histogram’ and ‘Manual’.

1.5.3. Multiple Views

Definiens Developer also allows you to split your display, allowing you to have multiple views of the same data. This functionality is available from the Window menu (Figure 5.21).

Figure 5.21. The Window Menu.

Here the current display can be split horizontally and/or vertically and, once split, the views can be ‘linked’ so that they automatically move together. Once you have split your screen, by selecting the window you wish to change, the same tools as outlined above can be used to manipulate the display properties in each of the different views.

1.6. Conclusion

In summary, you should now be able to open Definiens Developer, create a project and manipulate the display to view the data as you wish. The following units will take you through the segmentation and classification of the imagery you have loaded into your project and some more advanced features of the Definiens software.

1.7. Exercises


1) Experiment with the layer properties, such that you can view each image band individually and then a number of 3 and 6 band mixings. Observe how the different land cover types visually change as you change the band mixings.

2) Using the layer combination of your choice (R: NIR, G: SWIR1, B: RED is recommended) experiment with the image equalisations available. Again, observe how the various land cover types respond to these changes.

3) Produce a four way split of the display (i.e., a vertical and horizontal split) and set each region to different viewing properties. Finally, link all four together (side by side).
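As background to exercise 2, the ‘Linear (1.00%)’ option is a standard linear percent stretch: the darkest and brightest 1% of pixel values are clipped and the remaining range is scaled linearly to the display range. The sketch below is a generic illustration of this idea, not Definiens’ exact implementation:

```python
import numpy as np

def linear_percent_stretch(band, percent=1.0):
    """Clip the lowest and highest `percent` of pixel values,
    then rescale the remainder linearly to the 0-255 display range."""
    low, high = np.percentile(band, [percent, 100.0 - percent])
    clipped = np.clip(band, low, high)
    return ((clipped - low) / (high - low) * 255.0).astype(np.uint8)

# Toy band: a gentle ramp plus two extreme outliers which would
# otherwise compress the displayed contrast of the ramp.
band = np.linspace(100.0, 200.0, 1000)
band[0], band[-1] = 0.0, 10000.0

stretched = linear_percent_stretch(band, percent=1.0)
print(stretched.min(), stretched.max())  # 0 255
```

Without the stretch, the two outliers would force the ramp into a narrow band of grey levels; clipping 1% at each end restores the visible contrast, which is why the different equalisations make the land cover types stand out so differently.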


Unit 2: Image Segmentation

Level: Beginner
Time: This unit should not take you more than 1.5 hours
Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).

• An image you can load into Definiens for this exercise. For this unit the multispectral Landsat 7 image orthol7_20423xs100999.img and its corresponding panchromatic scene o20423_pan.tif (Row: 204, Path: 23, Date: 10/09/1999) over North Wales, available from the Landmap Service, are recommended.

Processes:

• RulesetTemplate.dcp
• Chessboard_Segmentation.dcp
• Quadtree_Segmentation.dcp
• Multiresolution_Segmentation.dcp
• SpectralDifference_Segmentation.dcp
• ContrastSplit_Segmentation.dcp
• ContrastFilter_Segmentation.dcp

By the end of this unit you should:

• Be able to apply each of the segmentation techniques available with Definiens Developer to an image.

• Be aware of the difference between the various segmentation algorithms and the types of objects (size and shape) they each produce.

2.1. Introduction

Segmentation is always the first step of any process within Definiens Developer, as it generates the image objects on which the classification process will be performed. The important part is for the segmentation process to identify objects which are representative of the features you wish to classify and which are distinct in terms of the features available within Definiens (e.g., spectral values, shape, texture).

2.2. Setup a Project

As with all work within Definiens Developer, the first step is to create a project containing all the datasets required for the study. It is important, however, to keep an eye on the size of the images with which you are creating a project, as Definiens Developer can become very slow with very large datasets, due to


the number of objects generated during the segmentation process. Therefore, a subset of the input images will be created once again. Your project should have the same parameters as those shown in Figures 2.1a and 2.1b.

a) The project parameters b) The subset parameters

Figure 2.1. The parameters for setting up the Definiens Project.

Please note the order in which the image files have been loaded, i.e., the panchromatic band first, as this determines the image resolution for the project. In this case the 25 m multispectral Landsat 7 data will be resampled to the 15 m of the panchromatic data. Once you have matched your project window to those shown in Figure 2.1, select OK and create your project.

2.3. Setup your data display

For these exercises it is recommended that you split the display horizontally (Window menu), where one of the windows contains the panchromatic data (Figure 2.2a) and the other the multispectral data, using the band combination NIR, SWIR1 and RED as the red, green and blue components, respectively (Figure 2.2b, Figure 2.3).


a) The layer mixing properties for the panchromatic display b) The layer mixing properties for the multispectral display

Figure 2.2. The layer mixing properties for the split windows.

Figure 2.3. The Definiens Developer interface with the project and display parameters defined.

2.4. Setup your Process Tree

The process tree ( ) will contain the script that you produce to control which processes (algorithms) run and the order in which they are executed. It is important to keep the script that you produce during your segmentation and classification procedures as organised as possible, as this will allow you to understand what you have done when you come back to it. With this in mind, Figure 2.4 contains the template to which you should aim to adhere.


Figure 2.4. Template Process Tree.

To insert a process right-click within the process tree window and the following menu will appear, Figure 2.5. Select ‘Append New’ and the Edit Process dialog will appear, Figure 2.6.

Figure 2.5. Process tree context menu.

Figure 2.6. Edit process dialog


The ‘Edit Process’ dialog is made up of 6 elements, each of which will become clear as you move through the notes.

Name: The name of the process. This can either be manually entered or automatically provided by the software. A good convention is to only manually edit the name where nothing else within the process has been changed; otherwise use the automatic name. Finally, the note icon ( ) allows a comment to be written about the process.

Algorithm: The algorithm to execute. This drop down menu allows you to select the algorithm you wish to execute; there is an extensive list of algorithms, a number of which will be used during these units.

Image Object Domain: The image object domain defines the object(s) on which the algorithm will be executed. The drop down box and ‘Parameter’ box allow the level to be selected. The following button (‘all objects’) allows a class (or classes) to be defined, while the final button (‘no condition’) allows a rule to be used, for example, area > 20 m2.

Loops & Cycles: It is possible to allow a process to form a loop, often in the form of a while loop, and the tick box allows this to be selected.

Algorithm Description: A simple description of the algorithm you are using.

Algorithm Parameters: These are the parameters associated with the algorithm which has been selected.

To recreate the template shown in Figure 2.4, edit the name of the process to be ‘Process Template’, select the comments button and enter ‘This is a template ruleset which most process trees will adhere to.’; the rest of the process should be left unchanged. Select OK; you have now created your first process, which will simply execute any process which is created beneath it. To create the next process, ‘Segmentation’, right-click on the process you have just created and select ‘Insert Child’; this will create a new process under your previous process. Edit the name of this new process to be ‘Segmentation’ and select OK.
Select the ‘Segmentation’ process you have just created, right-click and select ‘Append New’, and edit the name of this process to be ‘Classification’. Now repeat this to add the processes ‘Merge’ and ‘Export’. To move processes you can drag and drop them while holding down the left mouse button. To place a process under another process, drag and drop while holding down the right mouse button.
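The template you have just built is simply a tree of named processes, where a parent executes each of its children in order (depth-first). As a mental model only (a hypothetical in-memory representation, not Definiens’ ruleset file format):

```python
# Each process is a (name, children) pair; children run in order.
template = ("Process Template", [
    ("Segmentation", []),
    ("Classification", []),
    ("Merge", []),
    ("Export", []),
])

def execute(process):
    """Run a process: a parent simply executes, in order,
    every process created beneath it."""
    name, children = process
    print(name)
    for child in children:
        execute(child)

execute(template)
# Process Template
# Segmentation
# Classification
# Merge
# Export
```

This is why the otherwise empty ‘Process Template’ parent is useful: executing it runs the whole workflow in order, while each child can still be executed on its own during development.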


Finally, you can save and load your process independently of your project (although your process is also saved within the project); this is done by right-clicking within the process tree window and selecting ‘Save Rule Set…’. Alongside the contents of your process tree, this will also save any classes or customised features you have created which are associated with your process.

2.5. Multi-Resolution Segmentation

The first and most general segmentation technique available within Definiens Developer is the multi-resolution segmentation. To insert this algorithm within your process tree, right-click on the ‘Segmentation’ process in the template you previously entered and select ‘Insert Child’. Within the following dialog box select the algorithm ‘multiresolution segmentation’. If this algorithm is not available, scroll to the bottom of the list, select ‘more’ and move the algorithms you wish to have in the list to the right-hand column. You should now be presented with the dialog shown in Figure 2.7. Table 2.1 describes the parameters available for this segmentation algorithm.

Figure 2.7. The Multi-Resolution segmentation process.

Table 2.1. An overview of parameters for the segmentation.

Level Name: The name of the level in the hierarchy created by the segmentation.

Image Layer Weights: Increases the weighting of a layer when calculating the heterogeneity measure used to decide whether pixels/objects are merged. Zero ignores the layer.

Thematic Layer Usage: If any thematic layers are available, allows thematic layers to be turned on and off individually for use within the segmentation.

Scale Parameter: Controls the amount of spectral variation within objects and therefore their resultant size. Has no unit.

Shape - Colour: A weighting between an object's shape and its spectral colour, whereby if 0 only the colour is considered, whereas if > 0 the object's shape is considered along with the colour and less fractal boundaries are produced. The higher the value, the more the shape is considered.

Compactness: A weighting representing the compactness of the objects formed during the segmentation.

The multiresolution segmentation creates objects using an iterative algorithm, whereby objects (starting with individual pixels) are grouped until a threshold representing the upper object variance is reached. The variance threshold (scale parameter) is weighted with shape parameters (with separate shape and compactness parameters) to minimise the fractal borders of the objects. By increasing the variance threshold larger objects will be created, although their exact size and dimensions depend on the underlying data.

To run the segmentation process, leave the parameters at their default values and click execute, although it is recommended that you give your level a suitable name. A common level naming convention is to number the levels, starting with Level 1. Once you are happy with the parameters and have executed the process you should have successfully completed your first segmentation.

Once the segmentation has been executed, select the ‘Show or Hide Outlines’ icon ( ) and the outlines of the objects (segments) created will be displayed over the image. Making sure you have the cursor in ‘cursor mode’ rather than ‘zoom mode’, select the objects (with the outlines turned either on or off) in turn. Using the ‘Image Object Information’ window ( ), you will see the values of the features associated with the selected object (e.g., band values) displayed. To select the features displayed in the ‘Image Object Information’ window, right click within the window and select ‘Select Features to Display’. By double clicking on a feature you can move it from one side of the displayed menu to the other. The right-hand side contains those features which will be displayed when selecting an object.

2.5.1 Simple Exercise

The aim of this exercise is to get used to the multi-resolution segmentation and to experiment with the parameters outlined above. The process is the same as the one you implemented above, but the segmentation parameters can be altered almost indefinitely; your task is to alter the parameters in such a way as to achieve the best segmentation over the scene. At this point in time, you may wish to ignore part of the image (e.g., the field areas) and


concentrate on the upland areas or forests, and then switch to concentrate on another area. You should find that each of these areas requires slightly different segmentation parameters, which we will deal with in later units. Spend approximately 30 minutes experimenting with different segmentation input parameters and observe the differences in the image objects created. Initially try the following:

• High Scale factor
• Low Scale factor
• High Shape weighting
• Low Shape weighting
• High Compactness weighting
• Low Compactness weighting
• Use just the panchromatic layer (set all other layers to a weight of zero).
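For intuition on what these parameters control, the merging behaviour can be sketched in a greatly simplified form: a one-dimensional "image", colour only (shape and compactness weightings ignored), with the scale parameter acting as an upper bound on the increase in size-weighted variance allowed at each merge. This is an illustrative toy, not the actual Definiens algorithm:

```python
def heterogeneity(segment):
    """Size-weighted variance (n * variance, i.e. the sum of
    squared deviations from the segment mean)."""
    mean = sum(segment) / len(segment)
    return sum((v - mean) ** 2 for v in segment)

def merge_cost(a, b):
    """Increase in heterogeneity caused by merging segments a and b."""
    return heterogeneity(a + b) - heterogeneity(a) - heterogeneity(b)

def segment_1d(pixels, scale):
    """Greedily merge adjacent segments, cheapest first, until the
    cheapest merge would exceed the scale parameter."""
    segments = [[v] for v in pixels]  # start from individual pixels
    while len(segments) > 1:
        cost, i = min((merge_cost(segments[i], segments[i + 1]), i)
                      for i in range(len(segments) - 1))
        if cost > scale:
            break
        segments[i:i + 2] = [segments[i] + segments[i + 1]]
    return segments

row = [10, 11, 10, 50, 52, 51, 90, 91]
print([len(s) for s in segment_1d(row, scale=10)])     # [3, 3, 2]
print([len(s) for s in segment_1d(row, scale=10000)])  # [8]
```

A small scale parameter stops the merging early, leaving many small, homogeneous objects; a large one allows almost everything to merge into a few large objects, which is exactly what you should observe with the high and low scale factors above.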

To remove your segmentation and try new parameters, you need to delete the level before re-executing your segmentation process; this is done using the "Delete Level" icon ( ). An example is shown in Figure 2.8. However, there is no definitive answer as to whether one segmentation is better than another, and the final selection depends upon whether you are satisfied that the objects you are interested in classifying are adequately delineated.

Figure 2.8. Segmentation of the Landsat imagery.

2.6. Quad-tree Segmentation


A quad-tree segmentation creates regular square objects, where the size is defined by the object's variation. Unlike the multi-resolution segmentation, the objects are created by dividing larger objects until the resultant objects are all within the upper boundary of allowed variation. As with the multi-resolution segmentation, the variation at which a final object is created is defined using a scale parameter.

2.6.1. Simple Exercise

To create a quad-tree segmentation, follow the same procedure used to create the multi-resolution segmentation but select the quad-tree segmentation algorithm instead. A scale factor of 20 is recommended for the segmentation, although others might be more appropriate. Try several scale factors for representing a) the fields, b) the uplands, c) the forests and d) a compromise for all surfaces. Write these down below.

Scale Factor Fields: ___________
Scale Factor Uplands: ____________
Scale Factor Forests: ______________
Scale Factor Compromise for all surfaces: _____________

2.7. Chessboard Segmentation

A chessboard segmentation is the simplest segmentation available, as it simply splits the image into square objects with a size predefined by the user. The segmentation does not consider the underlying data, and therefore when large objects are created the features within the data you are trying to classify will not be delineated. This segmentation tends to be used in more advanced processes where segmentation is undertaken in a number of steps combined with a classification. An example is the tree crown delineation algorithm created within Definiens Developer for the delineation of crowns within mixed forests in Australia (Bunting and Lucas 2006).

2.7.1. Simple Exercise

To perform a chessboard segmentation, set up the process in the same way as the multi-resolution and quad-tree segmentations, but select the chessboard segmentation.
To begin with, use a value of 1 for the segmentation (this will generate objects of 1 x 1 pixel) and then progressively increase this value. Notice, in each case, how the boundaries and spectral information of the underlying data are ignored.

2.8. Contrast Split Segmentation

The aim of this algorithm is to split bright and dark objects using a threshold that maximises the contrast between the resulting bright objects (consisting of


pixel values above the threshold) and dark objects (consisting of pixel values below the threshold). The algorithm aims to optimise this separation by considering different pixel values, within the range provided by the user parameters, with values selected based on the input step size and stepping parameter. Table 2.2 provides a list of the parameters for the algorithm.

Chessboard Tile Size: If no level is already present, a chessboard segmentation is undertaken to generate a set of large objects which are iterated through during the segmentation process.

Minimum Threshold: The minimum grey-level value that will be considered for splitting.

Maximum Threshold: The maximum grey-level value that will be considered for splitting.

Step Size: The size of the steps the algorithm will use to move from the minimum threshold to the maximum threshold. Large values will make the algorithm quicker to calculate but smaller values will tend to produce better results.

Stepping Type: Either: Add - calculate each step by adding the value in the scan step field; or Multiply - calculate each step by multiplying by the value in the scan step field.

Image Layer: The image layer on which the algorithm will be applied.

Class for Bright Objects: The class the brighter objects (above the threshold) will be given.

Class for Dark Objects: The class the darker objects (below the threshold) will be given.

Contrast Mode: The method the algorithm uses to calculate contrast between bright and dark objects. The algorithm uses the mean of possible bright border pixels and the mean of possible dark border pixels to calculate either the edge ratio or edge difference, which can be used to define the contrast between two objects. Alternatively, the object mean (of all pixels) within the bright and dark objects can be used.

Execute Splitting: If yes, objects will be split; if no, only the threshold value will be calculated.

Best Threshold: A variable (see Unit 8) to store the threshold that maximises the contrast.

Best Contrast: A variable to store the best contrast calculated.

Minimum Rel. Area Dark: The minimum (relative) area identified as dark objects for the segmentation to be performed. Range 0-1.

Minimum Rel. Area Bright: The minimum (relative) area identified as bright objects for the segmentation to be performed. Range 0-1.

Minimum Contrast: The minimum contrast threshold for the segmentation to occur.

Minimum Object Size: The minimum object size for the segmentation to take place.

Table 2.2. The parameters associated with the contrast split segmentation.

To execute this algorithm you will need to create two classes, one for the bright objects and one for the dark objects. To do this within Definiens Developer, right-click within the class hierarchy window and select New Class; you do not need to enter any parameters at this point, so just select OK.

2.8.1. Simple Exercise

As with the other segmentation algorithms, try to achieve the best segmentation of the landscape you can using this algorithm. Remember to investigate all the parameters to observe their effect on the final segmentation.

2.9. Spectral Difference Segmentation

This is a merging algorithm where neighbouring objects whose difference in spectral mean is below the given threshold (maximum spectral difference) will be merged to produce the final objects. To use this segmentation algorithm you are required to already have a segmentation (level) in place; you cannot create a new level using this algorithm.

2.9.1. Simple Exercise

Using one of the segmentations you have previously generated, extend your process to include a spectral difference segmentation process. Remember, to add a process right-click and select 'Append new process…', in this case on the previous segmentation process. Again, try to achieve the segmentation you think is best for the landscape within the scene.

2.10. Contrast Filter Segmentation


The contrast filter segmentation uses a combination of two pixel filters to create a thematic raster layer with the values: no object, object in first layer (filter response 1), object in second layer (filter response 2), object in both layers, and ignored by threshold. Finally, the raster segmentation is converted to Definiens image objects using a chessboard segmentation. The parameters for the algorithm are outlined in Tables 2.3, 2.4 and 2.5.

Chessboard Settings: The chessboard segmentation parameters for producing the final segmentation from the filter results.

Layer: The image layer on which the filters will be applied.

Scale 1-4: The scale parameters can be used to define membership to the various classes. See the reference manual for more information.

Gradient: A minimum gradient threshold.

Lower Threshold: Pixels with a filter response below this threshold will be assigned to the 'ignored by threshold' class.

Upper Threshold: Pixels with a filter response above this threshold will be assigned to the 'ignored by threshold' class.

Table 2.3. The main parameters for the contrast filter segmentation.

The contrast filter segmentation also allows shape parameters to be defined to control the outputted image objects; these are listed in Table 2.4.

Shape Criteria Value: Larger values reduce the inclusion of irregularly shaped objects.

Working on Class: The class (see class assignment) on which the operation will be applied.

Table 2.4. The shape parameters for the contrast filter segmentation.

Enable Class Assignment: If set to no, the remaining parameters are not used.

No Objects: The class that pixels with the value 'no objects' will be given.

Ignored by Threshold: The class that pixels with the value 'Ignored by Threshold' will be given.

Object in First Layer: The class that pixels with the value 'Object in First Layer' will be given.

Object in Second Layer: The class that pixels with the value 'Object in Second Layer' will be given.

Object in Both Layers: The class that pixels with the value 'Object in Both Layers' will be given.

Table 2.5. The classification parameters for the contrast filter segmentation.

2.10.1. Simple Exercise

As with the previous simple exercises, experiment with this algorithm to achieve a segmentation of the image provided. As you will see, however, this algorithm does not produce results on a par with the other algorithms outlined in this unit for the image subset provided.

2.11. Conclusion

Following the completion of this unit you should have knowledge of all the segmentation processes available within Definiens Developer and have implemented each of the algorithms on the image provided.

2.12. Exercises

1) Decide on the most appropriate segmentation algorithm for segmenting this scene. As you do this, think about which elements you consider make a good segmentation and how the different characteristics of the various algorithms could be used to achieve the segmentation you require.
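Before moving on, the threshold search underlying the contrast split segmentation (Section 2.8) can be sketched in Python. This is an illustrative simplification, not Definiens code: the pixel list, the parameter values and the simple mean-difference contrast measure (in the spirit of the edge difference mode in Table 2.2) are all invented for the example.

```python
def contrast_split_threshold(pixels, t_min, t_max, step, stepping="add"):
    """Scan candidate thresholds between t_min and t_max and return the
    one that maximises the contrast between bright and dark pixels.
    Contrast here is simply mean(bright) - mean(dark)."""
    best_t, best_contrast = None, float("-inf")
    t = t_min
    while t <= t_max:
        bright = [p for p in pixels if p >= t]
        dark = [p for p in pixels if p < t]
        if bright and dark:
            contrast = sum(bright) / len(bright) - sum(dark) / len(dark)
            if contrast > best_contrast:
                best_t, best_contrast = t, contrast
        # 'Add' steps linearly through the range; 'Multiply' geometrically.
        t = t + step if stepping == "add" else t * step
    return best_t, best_contrast

# Toy image layer: mostly dark pixels (value 10) with some bright ones (value 200).
pixels = [10] * 30 + [200] * 10
best_t, best_c = contrast_split_threshold(pixels, 20, 180, 20)
```

A smaller step size tests more candidate thresholds, which is exactly why, in the real algorithm, it is slower but tends to find a better split.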


Unit 3: Nearest Neighbour Classification

Level:

• Beginner

Time:

• This unit should not take you more than 1.5 hours

Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).

• The multispectral Landsat 7 image orthol7_20423xs100999.img and its corresponding panchromatic scene o20423_pan.tif (Row: 204 Path: 23 Date 10/09/1999) over North Wales, available from the Landmap service.

Processes:

• NN_Classification_Process.dcp

By the end of this unit you should:

• Be able to complete all the steps required in the process tree to complete a classification (using the nearest neighbour classifier) within Definiens Developer.

• Be aware of the parameters and features that aid the nearest neighbour classification.

• Be aware of the classification, merge and export processes.

3.1. Introduction

Within this worksheet you will create a nearest neighbour classification of a segmented Landsat 7 image of the area around Llyn Brenig, Denbigh Moors, North Wales (Figure 3.1). This area contains extensive tracts of upland heath and bog as well as coniferous forest plantations and grasslands at various levels of improvement.


Figure 3.1. Ordnance Survey map of the study area (http://www.ordnancesurvey.co.uk/getamap).

The Nearest Neighbour (NN) classifier is a supervised classification approach whereby, for each of the classes required, training samples are selected and used to classify all remaining (unknown) objects in the image. The NN classifier has been used successfully for many classification problems (e.g., tree species; Leckie et al., 2005).

3.2. Create a Project

As with all work within Definiens Developer, the first step is to create a project containing all the datasets required for the study. A subset of the inputted images will be created once again. Your project should have the same parameters as those shown in Figures 3.2a and 3.2b.


a) The project parameters b) The subset parameters Figure 3.2. The parameters for setting up the Definiens Project.

Please note the order in which the image files have been loaded, i.e., the panchromatic band first, as this determines the image resolution for the project. In this case the 25 m multispectral Landsat 7 data will be resampled to the 15 m resolution of the panchromatic data. Once you have matched your project windows to those shown in Figure 3.2, select OK and create your project.

3.3. Setup your Class Hierarchy

For classification, the first task is to create the classes you require and (in this case) to insert the Nearest Neighbour feature into each class. To create a class, you require the 'Class Hierarchy' window (shown in Figure 3.3) to be open. If the window is not already visible, then click on the icon.

Figure 3.3: Class Hierarchy Window before inserting any classes.

To insert a class, right click in the Class Hierarchy window and select ‘Insert Class’ (Figure 3.4).


Figure 3.4. Inserting a class into the class hierarchy.

This provides you with an empty ‘Class Description’ (Figure 3.5).

Figure 3.5. Empty Class Description.

The next step is to edit your class description by first giving your class a name. For example, give the class the name “Water” and assign a blue colour. When you have done this, insert and name new classes of “Forest”, “Other Vegetation” and “Not Vegetation”. You should then have four classes inserted and named:

• Water
• Forest
• Other Vegetation
• Not Vegetation

After giving each class a name, select an appropriate colour for each. This can be anything you wish, although the final classification will be easier to understand and interpret if you choose a logical colour (e.g., green for Forest). Next, the features (e.g., mean object spectral response) to be used for classification (in this case, the standard nearest neighbour algorithm) need to be inserted into the class. To do this, right-click on the ‘and (min)’ expression and select ‘Insert new Expression’ (Figure 3.6).

Figure 3.6. Inserting a new expression into the class.

This will present the window (Figure 3.7), where you need to select ‘Standard Nearest Neighbour’ and click Insert.


Figure 3.7. Selecting the expression to be used for the classification.

Your resulting class description should be similar to that shown in Figure 3.8 for the forest class.

Figure 3.8. The resulting class description to be used for the classification.

The same procedure now needs to be repeated for the remaining three classes so that you end up with a classification hierarchy similar to that shown in Figure 3.9.


Figure 3.9. The final class hierarchy.

To select the features used for the nearest neighbour classification, use the ‘Edit Standard NN Feature Space…’ function (Figure 3.10a); initially you should just use the mean spectral values of the objects (Figure 3.10b).

a) The menu for editing the NN feature space

b) The dialog for selecting the features within the NN feature space

Figure 3.10. Editing the features used within the nearest neighbour classification.

3.4. Setting up the Process Tree

3.4.1. Segmentation

As with all classifications in Definiens Developer, the first task is to perform a segmentation. In this case the use of a multi-resolution segmentation is recommended, although others could be investigated. As with previous units, it is recommended you create an outline within your process tree mirroring that shown in Figure 3.11. Remember, a process is created by right-clicking in the process tree window and selecting ‘Append Process’ or ‘Insert Child Process’.

Figure 3.11: Process outline and Segmentation process.

To create the process that performs the segmentation, right-click on the Segmentation process you have already created, select ‘Insert Child Process’, and then the algorithm ‘Multiresolution segmentation’. Choose the parameters shown in Figure 3.12 and once you have entered these parameters, click on ‘Execute’ to perform the segmentation.


Figure 3.12. Parameters used for the segmentation of the Landsat image.

Note that the layer weighting for the panchromatic band (PAN) has been increased to 2. This is to take advantage of the extra spatial resolution of the panchromatic band: 15 m rather than the 25 m of the multispectral bands.

3.4.2. Classification

To run the classification, you need to add a classification process to your process tree. This is achieved by right-clicking on the process you named ‘Classification’ and selecting ‘Insert Child Process’. Edit the new process so that it is similar in appearance to that shown in Figure 3.13. To select multiple classes, use the ‘Shift’ and ‘Control’ keys as you would in Windows Explorer.


Figure 3.13. The process parameters to be used for the classification.

After inputting the parameters into the process, click on the ‘OK’ button at the bottom; note that you will need to select samples before performing your classification. Your process tree should now be similar to that shown in Figure 3.14.

Figure 3.14: The process tree after the inclusion of the classification process.

3.4.3. Merge Result

The next step is to set up the processes which will merge your classification so that all neighbouring objects of the same class form single objects. It is important to merge your classification to identify complete objects; for example, once merged you can query the lake to find its complete area. To merge the result you will need to enter a merge process for each class (Figure 3.16; ‘Insert Child Process’); the merge parameters for the Forest class are shown in Figure 3.15. The class for merging is defined using the Image Object Domain; if you were to select multiple classes, all the selected classes would be merged together, removing the boundaries and classification of these objects.


Figure 3.15. The process parameters to merge the Forest class.

To save time, once you have created your first merge process you can copy and paste it (Ctrl-C, Ctrl-V, or right-click on the process) to duplicate it, and then edit the class you wish to merge.

Figure 3.16. The process tree including the merge processes.
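Conceptually, the merge step joins all touching objects of the same class into single objects. A minimal sketch of that idea on a small classified grid (a 4-connected flood fill; the grid and class names are invented for the example, and this is not the Definiens implementation) is:

```python
from collections import deque

def merge_objects(class_grid):
    """Group touching cells of the same class into single objects
    (4-connectivity flood fill). Returns a grid of object ids and a
    dict mapping object id -> (class name, area in cells)."""
    rows, cols = len(class_grid), len(class_grid[0])
    ids = [[None] * cols for _ in range(rows)]
    objects, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if ids[r][c] is not None:
                continue  # already part of a merged object
            cls = class_grid[r][c]
            queue, area = deque([(r, c)]), 0
            ids[r][c] = next_id
            while queue:
                y, x = queue.popleft()
                area += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and ids[ny][nx] is None
                            and class_grid[ny][nx] == cls):
                        ids[ny][nx] = next_id
                        queue.append((ny, nx))
            objects[next_id] = (cls, area)
            next_id += 1
    return ids, objects

grid = [["Water", "Water", "Forest"],
        ["Water", "Forest", "Forest"]]
ids, objects = merge_objects(grid)
```

Note how, after merging, the complete area of each object (e.g. a whole lake) can be queried directly, which is exactly why the merge step matters before any area analysis.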

3.4.4. Export Result

Finally, we usually wish to export the classification result from Definiens into a GIS for further processing or the production of a map. Therefore, our final process will be to export the classification to an ESRI shapefile (Figure 3.17).


Figure 3.17. Process parameters to export the classification as a shapefile.

To select the classes to export you again edit the Image Object Domain; remember, these parameters define the image objects the process will be applied to. The name of the outputted shapefile has been defined as ‘Classification’, while the features to be exported are the area (of the image object) and the class name. Area is found under Object Features > Shape > Generic, while class name is found under Class-Related Features > Relations to Classification > Class name. The class name feature needs to be created: right-click on ‘Create new Class name’ and select Create, leaving the parameters at their default values, and just select OK. The shapefile will be output to the directory in which your project is saved; if you have not yet saved your project, the shapefile will be output to the directory containing the input imagery. Your final process tree should then be the same as the one shown in Figure 3.18.

Figure 3.18. Final process tree.
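The export step writes one record per image object, carrying its area and class name. As a sketch of the attribute table this produces (a CSV stand-in for the shapefile’s attributes; the object list and the conversion from pixel counts at the 15 m project resolution are invented for the example, not the Definiens export API):

```python
import csv
import io

# Hypothetical merged image objects: (class name, area in pixels).
image_objects = [("Water", 412), ("Forest", 1280), ("Not Vegetation", 95)]

PIXEL_SIZE = 15.0  # m; the project resolution set by the panchromatic band

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["class_name", "area_m2"])
for cls, area_px in image_objects:
    # Area in map units = pixel count * pixel area.
    writer.writerow([cls, area_px * PIXEL_SIZE * PIXEL_SIZE])

table = buffer.getvalue()
```

Each row corresponds to one merged object, which is why merging before export matters: without it, a single lake would appear as many separate records.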

3.5. Select Samples to Train Classifier


The next stage is to select the samples for each of the four classes; you need to have executed the segmentation process before undertaking these steps. If you are unsure of the distribution of ground cover types, please refer to the shapefile LlynBrenig_BasicLandcover.shp. To create a sample, you first need to activate the tool for sample selection (Select Samples) as shown in Figure 3.19.

Figure 3.19. Activating sample selection.

Once you have activated sample selection, highlight the class you wish to create a sample for in the class hierarchy window. Either double-click on the objects you wish to select as samples, or hold down the Shift key and use a single click. To deselect a sample, repeat the selection process on the chosen object. To aid the selection of your samples, Definiens Developer offers two windows (both available from the menu in Figure 3.19) of information based on the selected samples: firstly the ‘Sample Editor’ window (Figure 3.20), and secondly the ‘Sample Selection Information’ window (Figure 3.21).


Figure 3.20. Sample Editor Window

The Sample Editor provides a visual comparison of two classes using a range of selected features. In Figure 3.20 the Forest and Water classes are compared using the object means from each spectral band of the Landsat data. When an object is selected, a red arrow is displayed to illustrate where the object mean fits in relation to the mean of the other samples. To change the displayed features, right-click within the main window and select ‘Features to Display…’ or, if you only want the features being used within the NN calculation, select ‘Display Standard Nearest Neighbour Features’.

Figure 3.21: Sample Selection Information Window

The Sample Selection Information window displays information on the NN membership boundaries and the distances to the other classes of the selected object from the selected samples. To select the classes to be displayed, right-click within the window and select ‘Select classes to Display’. Classes displayed in red have, for the selected object, an overlap in the distance measure and therefore the samples may need to be re-analysed and altered. To set the threshold at which a class is highlighted in red, right-click in the


Sample Selection Information window and select ‘Modify critical sample membership overlap’. The threshold ranges from 0 to 1, where 0 highlights any overlap. Once you have selected your samples, you should have an image similar in appearance to that shown in Figure 3.22. Bear in mind that there is no single correct selection of samples; just select the samples you consider most representative of the classes you wish to separate and which also give the best separation in the Sample Editor and Sample Selection Information windows.

Figure 3.22: Samples selected for the nearest neighbour classification.
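The rule the classifier applies once samples are selected can be sketched as follows: each unclassified object is given the class of its nearest sample in feature space. The sketch below uses a plain Euclidean distance on invented object means; Definiens’ standard nearest neighbour additionally converts distances into fuzzy membership values, which is omitted here.

```python
import math

def nearest_neighbour(samples, obj_features):
    """samples: list of (class_name, feature_vector) training objects.
    Returns the class of the sample closest to obj_features in
    feature space (Euclidean distance)."""
    best_cls, best_dist = None, float("inf")
    for cls, feats in samples:
        dist = math.dist(feats, obj_features)
        if dist < best_dist:
            best_cls, best_dist = cls, dist
    return best_cls

# Toy samples: hypothetical object mean reflectance in (red, near-infrared).
samples = [("Water", (10.0, 5.0)),
           ("Forest", (30.0, 90.0)),
           ("Not Vegetation", (80.0, 60.0))]
cls = nearest_neighbour(samples, (28.0, 85.0))
```

This makes clear why sample quality matters more than sample quantity: every unknown object inherits its label from whichever sample happens to be closest.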

3.6. Run your Process

Now execute the classification process you previously created (right-click on the classification process and select Execute). You should now have a classified image similar to that in Figure 3.18. If you are unhappy with the classification, repeat the procedure but select more or alternative samples before reclassifying the image. To re-run the classification, open the process and simply click on Execute, or select the process and press F5, or right-click on the process and select ‘Execute’.


Figure 3.18. Final classification of the Landsat data.

3.6.1. Merge the Image Objects

Once you are happy with your classification, execute the merge image objects processes you have previously created; your results should appear similar to those shown in Figure 3.19b.

a) Before merging b) After Merging

Figure 3.19. Before and after merging classes.


The purpose of merging is to create a final classification which is a closer representation of the objects in the scene. For example, we will now be able to calculate the area of the whole lake. But be aware that merging the image objects removes your samples, as the segmentation will have changed.

3.6.2. Export the Results

Finally, run the process to export the results. This will produce an ESRI shapefile and allow the creation of a map such as the one shown in Figure 3.20.

Figure 3.20. A map produced using ESRI ArcMap from the result of the Definiens classification.

3.7. Feature Space Optimization Tool

To refine the classification further, Definiens Developer offers the feature space optimization tool, which automatically identifies the features that ‘best’ separate the classes for which samples have been selected. To use this feature, delete your classification (delete level) and re-run the segmentation process. You will also need to re-select your samples, as these are deleted each time you change the segmentation (i.e., merge or delete a level).


Once you have selected your samples, open the feature space optimization tool (Figures 3.21 and 3.22).

Figure 3.21. The NN classification menu.

Figure 3.22. The Feature space optimization tool.

To use this tool, select the features you wish to compare; initially try the mean, standard deviation and the pixel ratio, but later try other combinations. Then select Calculate. Once the calculation has finished, select Advanced to see which features offered the best separation, and ‘Apply to Std. NN’ to use them within the classification. You can now run your classification step.
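The idea behind the tool can be sketched as a search over feature subsets, scoring each by how well it separates the class samples. The separability score below (smallest between-class distance divided by largest within-class distance) and the sample values are invented for illustration; they are not the exact statistic the tool reports.

```python
import math
from itertools import combinations

def best_feature_subset(samples, n_features, max_dim):
    """samples: list of (class_name, full_feature_vector).
    Score every feature subset of up to max_dim dimensions and return
    the one with the best separability."""
    def score(dims):
        def d(a, b):
            return math.dist([a[i] for i in dims], [b[i] for i in dims])
        pairs = list(combinations(samples, 2))
        # Smallest distance between samples of different classes...
        between = min(d(a, b) for (ca, a), (cb, b) in pairs if ca != cb)
        # ...relative to the largest spread within a single class.
        within = max(d(a, b) for (ca, a), (cb, b) in pairs if ca == cb)
        return between / within
    best, best_score = None, float("-inf")
    for k in range(1, max_dim + 1):
        for dims in combinations(range(n_features), k):
            s = score(dims)
            if s > best_score:
                best, best_score = dims, s
    return best, best_score

# Toy samples: feature 0 separates Water from Forest, feature 1 is noise.
samples = [("Water", (5.0, 50.0)), ("Water", (6.0, 52.0)),
           ("Forest", (40.0, 51.0)), ("Forest", (41.0, 49.0))]
dims, sep = best_feature_subset(samples, n_features=2, max_dim=2)
```

The exhaustive search explains why the maximum dimension setting matters in the real tool: the number of subsets to test grows rapidly with the number of candidate features.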


3.8. Conclusions

Following the completion of this unit you should now understand the basic process of classification within Definiens Developer; future units will simply build more complex classification and segmentation routines.

3.9. Exercises

1) Experiment with different segmentation parameters, both within the multi-resolution segmentation and the other segmentation algorithms. Be aware that you will have to select new samples each time you delete the level.

2) Experiment with different sets of features within the standard NN feature space (Classification > Nearest Neighbor > Edit Standard NN Feature Space…).

3) Experiment with different sets of features and maximum dimension levels within the feature space optimization tool (Classification > Nearest Neighbor > Feature Space Optimization).

References

Leckie, D.G., Gougeon, F.A., Tinis, S., Nelson, T., Burnett, C.N., & Paradine, D. (2005). Automated tree recognition in old growth conifer stands with high resolution digital imagery. Remote Sensing of Environment, 94, 311-326.

Page 44: Object Orientated Course Manuel

Unit 3: Nearest Neighbour Classification Level:

• Beginner Time:

• This unit should not take you more than 1.5 hours Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).

• The multispectral Landsat 7 image orthol7_20423xs100999.img and its corresponding panchromatic scene o20423_pan.tif (Row: 204 Path: 23 Date 10/09/1999) over North Wales and available from the Landmap service.

Processes:

• NN_Classification_Process.dcp By the end of this unit you should:

• Be able to complete all the steps require in the process tree to complete a classification (using the nearest neighbour classifier) within Definiens Developer.

• Be aware of the parameters and features to aid the nearest neighbour classification.

• Be aware of the classification, merge and export processes. 3.1. Introduction Within in this worksheet you will create a nearest neighbour classification of a segmented Landsat 7 image of the area around Llyn Brenig, Denbigh Moors, North Wales (Figure 3.1). This area contains extensive tracts of upland heath and bog as well as coniferous forest plantations and grasslands at various levels of improvement.

Page 45: Object Orientated Course Manuel

Figure 3.1. Ordnance Survey Map of the study area

(http://www.ordnancesurvey.co.uk/getamap). The Nearest Neighbour (NN) classifier is a supervised classification approach whereby for each of the classes required, training samples are located and used to classify all remaining (unknown) objects in the image. The NN classifier has been used successfully for many classification problems (e.g., tree species; Leckie et al., 2005). 3.2. Create a project As with all work within Definiens Developer the first step is to create a project containing all the datasets required for the study. A subset of the inputted images will be created once again. Your project should have the same parameters as that shown in Figures 3.2.a and 3.2.b.

Page 46: Object Orientated Course Manuel

a) The project parameters b) The subset parameters Figure 3.2. The parameters for setting up the Definiens Project.

Please note the order in which the image files have been loaded, i.e., the panchromatic band first, as this will decide on the image resolution for the project. In this case the 25 m multispectral Landsat 7 data will be resampled to the 15 m of the panchromatic data. Once you have matched your project window to those shown in Figure 3.2. select OK and create your project. 3.3. Setup your Class Hierarchy For classification, the first task is to create the classes you require and (in this case) to insert the Nearest Neighbour Feature into each class. To create a class, you require the ‘Class Hierarchy’ window (shown in Figure 3.3) to be

open. If the window is not already visible, then click on the icon.

Figure 3.3: Class Hierarchy Window before inserting any classes.

To insert a class, right click in the Class Hierarchy window and select ‘Insert Class’ (Figure 3.4).

Page 47: Object Orientated Course Manuel

Figure 3.4. Inserting a class into the class hierarchy.

This provides you with an empty ‘Class Description’ (Figure 3.5).

Figure 3.5. Empty Class Description.

The next step is to edit your class description by first giving your class a name. For example, give the class the name “Water” and assign a blue

Page 48: Object Orientated Course Manuel

colour. When you have done this, insert and name new classes of “Forest”, “Other vegetation” and “Not Vegetation”. You should then have four classes inserted and named:

• Water • Forest • Other Vegetation • Not Vegetation

After giving each class a name, select an appropriate colour for each. This can be anything you wish, although the final classification will be easier to understand and interpret if you chose a logical colour (e.g., Green for Forest). Next, the features (e.g., mean object spectral response) to be used for classification (in this case, the standard nearest neighbour algorithm) need to be inserted into the class. To do this, right-click on the ‘and (min)’ and select ‘Insert new Expression’ (Figure 3.6).

Figure 3.6. Inserting a new expression into the class.

This will present the window (Figure 3.7), where you need to select ‘Standard Nearest Neighbour’ and click Insert.

Page 49: Object Orientated Course Manuel

Figure 3.7. Selecting the expression to be used for the classification.

Your resulting class description should be similar to that shown in Figure 3.8 for the forest class.

Figure 3.8. The resulting class description to be used for the classification.

The same procedure now needs to be repeated for the remaining three classes so that you end up with a classification hierarchy similar to that shown in Figure 3.9.

Page 50: Object Orientated Course Manuel

Figure 3.9 The final class hierarchy. To select the features used for the nearest neighbour classification use the ‘Edit Standard NN feature Space…’ function, Figure 3.10a, where initially you should just use the mean spectral values of the objects, Figure 3.10b.

a) The menu for editing the NN feature space

b) The dialog for selecting the features within the NN feature space

Figure 3.10. Editing the features used within the nearest neighbour classification. 3.4. Setting up the Process Tree 3.4.1 Segmentation As with all classifications in Definiens Developer, the first task is to perform a segmentation. In this case the use of a multi-resolution segmentation is recommended, although others could be investigated. As with previous units, it is recommend you create an outline within your process tree mirroring that outlined in Figure 3.11. Remember, a process is created by right-clicking in the process tree window and select ‘Append Process’ or ‘Insert Child Process’.

Figure 3.11: Process outline and Segmentation process.

To create the process that performs the segmentation, right-click on the Segmentation process you have already created, select ‘Insert Child Process’, and then the algorithm ‘Multiresolution segmentation’. Choose the parameters shown in Figure 3.12 and once you have entered these parameters, click on ‘Execute’ to perform the segmentation.

Page 51: Object Orientated Course Manuel

Figure 3.12. Parameters used for the segmentation of the Landsat image.

Note, that the layer weighting for the panchromatic band (PAN) has been increased to 2. This is in take advantage of the extra spatial resolution of the panchromatic band, 15 m rather than 25 m of the multispectral. 3.4.2. Classification To run the classification, you need to add a classification process to your process tree. This is achieved by right-clicking on the process you named ‘Classification’ and selecting ‘Insert Child Process’. Edit the new process such that it is similar in appearance to that shown in Figure 3.13. To select multiple classes, use the ‘Shift’ and ‘Control’ keys as you would in Windows Explorer.

Page 52: Object Orientated Course Manuel

Figure 3.13. The process parameters to be used for the classification.

After inputting the parameters into the process, click on the ‘OK button at the bottom, you need to select samples before performing your classification. Your process tree should now be similar to that shown in Figure 3.14.

Figure 3.14: The process tree after the inclusion of the classification process.

3.4.3 Merge Result The next step is to setup the processes which will merge your classification so that all neighbouring objects of the same class will form single objects. It is important to merge your classification to identify complete objects. For example, once merged you can query the lake to find its complete area. To merge the result you will need to enter a merge process for each class (Figure 3.16; ‘Insert Child’), the merge parameters for the Forest class are shown in Figure 3.15. The class for merging is defined using the Image Object Domain where the class of interest is defined, if you were to select multiple classes all the select classes would be merged, removing the boundaries and classification of these objects.

Page 53: Object Orientated Course Manuel

Figure 3.15. The process parameters to merge the Forest class.

To save time, once you have created you first merge process you can copy-and-paste (ctrl-c, ctrl-v or right-click on the process) this process to duplicate it and then edit the class you wish to merge.

Figure 3.16. The process tree including the merge processes.

3.4.4. Export Result Finally, we usually wish to export the classification result from Definiens into a GIS for further processing or the production of a map. Therefore, our final process will be to export the classification to an ESRI shapefile, Figure 3.17.

Page 54: Object Orientated Course Manuel

Figure 3.17. Process parameters to export the classification as a shapefile.

To select the classes to export you again edit the Image Object Domain, remember these parameters define the image objects the process will be applied to. The name of the outputted shapefile has been defined as ‘Classification’ while the features to be exported are the area (of the image object) and the class name. Area is found under Object Features > Shape > Generic while class name is found under Class-Related features > Relations to Classification > Class name. For the class name feature you will need to create it, right-click on the ‘Create new Class name’ and select Create, leave the parameters as their default values and just select OK. The shapefile will output to the directory within which your project is saved, if you have not yet saved your project then the shapefile will be outputted to the directory containing the input imagery. You final process tree should then be the same as the one shown below in Figure 3.18.

Figure 3.18. Final process tree.

3.5. Select Samples to Train Classifier

Page 55: Object Orientated Course Manuel

The next stage is to select the samples for each of the four classes, you need to have executed the segmentation process before undertaking these steps If you are unsure of the distribution of ground cover types, please refer to the shapefile LlynBrenig_BasicLandcover.shp. To create a sample, you need to first activate the tool for sample selection (Select Samples) as shown in Figure 3.19.

Figure 3.19 Activating sample selection.

Once you have activated sample selection, highlight the class you wish to create a sample for in the class hierarchy window. Either double-click on the objects you wish to select as samples or hold down the Shift key and use a single click. To deselect a sample, repeat the selection for the chosen object. To aid the selection of your samples, Definiens Developer offers two windows (both available from the menu in Figure 3.19) of information based on the selected samples: firstly the ‘Sample Editor’ window (Figure 3.20) and secondly the ‘Sample Selection Information’ window (Figure 3.21).


Figure 3.20. Sample Editor Window

The Sample Editor provides a visual comparison of two classes using a range of selected features. In Figure 3.20 the Forest and Water classes are compared using the object means from each spectral band of the Landsat data. When an object is selected, a red arrow is displayed to illustrate where the object mean fits in relation to the mean of the other samples. To change the displayed features, right-click within the main window and select ‘Features to Display…’ or, if you only want the features being used within the NN calculation, select ‘Display Standard Nearest Neighbour Features’.

Figure 3.21: Sample Selection Information Window

The Sample Selection Information window displays information on the NN membership boundaries and, for the selected object, the distances to the other classes derived from the selected samples. To select the classes to be displayed, right-click within the window and select ‘Select classes to Display’. Classes displayed in red have, for the selected object, an overlap in the distance measure, and therefore the samples may need to be re-analysed and altered. To set the threshold at which a class is highlighted in red, right-click in the


Sample Selection Information window and select ‘Modify critical sample membership overlap’. The threshold ranges from 0 to 1, where 0 highlights any overlap. Once you have selected your samples, you should have an image similar in appearance to that shown in Figure 3.22. Bear in mind that there is no single correct selection of samples; just select the samples you consider most representative of the classes you wish to separate and which also give the best separation in the Sample Editor and Sample Selection Information windows.

Figure 3.22: Samples selected for the nearest neighbour classification.

3.6. Run your Process

Now execute the classification process you previously created (right-click on the classification process and select ‘Execute’). You should now have a classified image similar to that in Figure 3.18. If you are unhappy with the classification, select more or alternative samples before reclassifying the image. To re-run the classification, open the process and click on ‘Execute’, or select the process and press F5, or right-click on the process and select ‘Execute’.


Figure 3.18. Final classification of the Landsat data.

3.6.1. Merge the Image Objects

Once you are happy with your classification, execute the merge image objects processes you have previously created; your results should appear similar to those shown in Figure 3.19b.

a) Before merging b) After Merging

Figure 3.19. Before and after merging classes.


The purpose of merging is to create a final classification which is a closer representation of the objects in the scene. For example, we will now be able to calculate the area of the whole lake. Be aware, however, that merging the image objects removes your samples, as the segmentation will have changed.

3.6.2. Export the Results

Finally, run the process to export the results. This will produce an ESRI shapefile and allow the creation of a map such as the one shown in Figure 3.20.

Figure 3.20. A map produced using ESRI ArcMap from the result of the Definiens classification.

3.7. Feature Space Optimization Tool

To refine the classification further, Definiens Developer offers the Feature Space Optimization tool, which automatically identifies the features which ‘best’ separate the classes for which samples have been selected. To use this tool, delete your classification (delete the level) and re-run the segmentation process. You will also need to re-select your samples, as these are deleted each time you change the segmentation (i.e., merge objects or delete the level).


Once you have selected your samples open the feature space optimization tool, Figures 3.21 and 3.22.

Figure 3.21. The NN classification menu.

Figure 3.22. The Feature space optimization tool.

To use this tool, select the features you wish to compare; initially try the mean, standard deviation and the pixel ratio, but later try other combinations. Then select ‘Calculate’. Once the calculation has finished, select ‘Advanced’ to see which features offered the best separation, and ‘Apply to Std. NN’ to use them within the classification. You can now run your classification step.


3.8. Conclusions

Following the completion of this unit you should now understand the basics of the process of classification within Definiens Developer; future examples will simply build more complex classification and segmentation routines.

3.9. Exercises

1) Experiment with different segmentation parameters, both within the multi-resolution segmentation and the other segmentation algorithms. Be aware that you will have to select new samples each time you delete the level.

2) Experiment with different sets of features within the standard NN feature space (Classification > Nearest Neighbour > Edit Standard NN Feature Space…).

3) Experiment with different sets of features and maximum dimension levels within the Feature Space Optimization tool (Classification > Nearest Neighbour > Feature Space Optimization).


Unit 4: Rule Based Classification

Level:

• Beginner

Time:

• This unit should not take you more than 1 hour

Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).

• The multispectral Landsat 7 image orthol7_20423xs100999.img and its corresponding panchromatic scene o20423_pan.tif (Row: 204 Path: 23 Date 10/09/1999) over North Wales, available from the Landmap Service.

Processes:

• Rulebased_Classification_Process.dcp

By the end of this unit you should:

• Know how to create a rule-based classification within Definiens Developer.

• Know about the difference between absolute and fuzzy thresholds.

• Know how to create a customised feature within Definiens Developer to represent a band ratio or relationship (e.g., NDVI).

4.1. Introduction

Following on from the previous unit, you will now implement a more detailed rule-based classification by using thresholds manually defined within the class hierarchy rather than a nearest neighbour classification. This unit uses the same Landsat 7 subset of Llyn Brenig in the Denbigh Moors, North Wales, although it now aims to identify more classes to increase the detail of the habitat classification. The aim of the unit is to provide you with experience in entering thresholds for a rule-based classification and creating the corresponding processes. You are not expected to identify any thresholds, as these will be given; the next unit will cover the techniques commonly used to identify them.

4.2. How to Define a Threshold in Definiens Developer

Definiens Developer deals with two types of threshold:

1) Absolute Thresholds

2) Fuzzy Thresholds

where each is defined in a similar way within the class description of a class in the class hierarchy. The absolute threshold (e.g., > 20, < 30 or = 15)


is the simplest. To define one of these thresholds within the Class Description, right-click on ‘and(min)’ (as when including the NN classifier) and navigate through the tree structure to find the feature with which you wish to create a threshold. Right-click on this feature and select ‘Insert Threshold…’ (Figure 4.1) to create the threshold.

Figure 4.1: Inserting a threshold for a chosen feature.

When selected, you will be presented with the window shown in Figure 4.2. Here, you set the threshold and the operator (e.g., <, ≤, =, > or ≥).

Figure 4.2: Edit Threshold Condition Window.

Within the class description you can add as many of these thresholds as you require. You can also include ‘and’ and ‘or’ statements, as shown in Figure 4.3. By default, all the features you introduce are considered within an ‘and’ statement and therefore all thresholds have to be met for the object to be classified. On the other hand, if the statement is an ‘or’ statement, only one of the thresholds needs to be met for the object to be classified. By combining these statements (as shown in Figure 4.3), more complex class descriptions can be developed.
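The logic of combining thresholds under ‘and(min)’ and ‘or(max)’ operators can be sketched outside Definiens as follows (a plain-Python illustration; the feature names and example values are invented and are not part of the Definiens API):

```python
def and_min(*conditions):
    """All thresholds must be met, as with 'and(min)' in a class description."""
    return lambda obj: all(cond(obj) for cond in conditions)

def or_max(*conditions):
    """Only one threshold needs to be met, as with 'or(max)'."""
    return lambda obj: any(cond(obj) for cond in conditions)

def threshold(feature, op, value):
    """An absolute threshold on a single object feature."""
    ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b,
           "<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return lambda obj: ops[op](obj[feature], value)

# A nested description in the style of Figure 4.3:
# (mean_nir < 100 AND mean_swir1 < 40) OR mean_ndvi > 0.6
rule = or_max(
    and_min(threshold("mean_nir", "<", 100), threshold("mean_swir1", "<", 40)),
    threshold("mean_ndvi", ">", 0.6),
)

print(rule({"mean_nir": 90, "mean_swir1": 35, "mean_ndvi": 0.4}))  # True
```

Nesting `and_min` inside `or_max` mirrors adding an ‘and(min)’ operator beneath the ‘or(max)’ in the class description tree.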


Figure 4.3: A class description using both ‘and’ and ‘or’ statements.

To edit the ‘and(min)’ to ‘or(max)’, right-click on the ‘and(min)’ (Figure 4.4) and select ‘Edit Expression’.

Figure 4.4: Editing the ‘and(min)’ expression.

Within the resulting window (Figure 4.5), select ‘or(max)’ and click OK. To add ‘and(min)’ operators beneath the ‘or(max)’ (as in Figure 4.3), right-click on ‘or(max)’ as before and select ‘Insert new Expression’. From the list of features (see Figure 4.1), you will find the same operators (at the bottom) shown in Figure 4.5. By selecting ‘and(min)’ and then adding other features/thresholds under this operator, you can create structures similar to those in Figure 4.3.

Figure 4.5: Select Operator Expression for the class description.

Fuzzy thresholds differ from absolute thresholds as they allow a degree of uncertainty to be included. So, objects classified using fuzzy thresholds might be associated with the following values:

Forest = 0.8
Water = 0.7
Urban = 0.2


In this example, the object is assigned to the class Forest, but the membership values of the other classes (Water and Urban) are also recorded within Definiens Developer to give a fuller picture of the contents of the object. Since the introduction of processes into the functionality of Definiens Developer, careful consideration needs to be given to the use of fuzzy logic thresholds; therefore, for most of these units, only absolute thresholds are included. To create a fuzzy (membership function) threshold, follow the same process as outlined above but, rather than selecting ‘Insert Threshold’ (Figure 4.1), select ‘Insert Membership Function’. You will then be presented with a new window (Figure 4.6) where you can select a membership function (not all of these are fuzzy) and the corresponding thresholds.

Figure 4.6. Creating a Fuzzy Membership Function.
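To make the idea concrete, here is a minimal sketch (not Definiens code; the ramp shape and example values are illustrative, as Definiens offers several membership function shapes) of a linear membership function and of assigning the class with the highest membership, as in the Forest/Water/Urban example above:

```python
def linear_membership(value, lower, upper):
    """A simple ramp: 0 at or below 'lower', 1 at or above 'upper',
    linear in between (one of many possible membership shapes)."""
    if value <= lower:
        return 0.0
    if value >= upper:
        return 1.0
    return (value - lower) / (upper - lower)

def assign_class(memberships):
    """Assign the class with the highest membership value."""
    best = max(memberships, key=memberships.get)
    return best, memberships[best]

print(linear_membership(25, 20, 30))  # 0.5
print(assign_class({"Forest": 0.8, "Water": 0.7, "Urban": 0.2}))
```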

4.3. Create a project

As with all work within Definiens Developer, the first step is to create a project containing all the datasets required for the study. A subset of the input images will be created once again. Your project should have the same parameters as those shown in Figures 4.7a and 4.7b.


a) The project parameters. b) The subset parameters.
Figure 4.7. The parameters for setting up the Definiens Project.

Once you have matched your project window to those shown in Figure 4.7, select OK and create your project.

4.4. Setup a Customised Feature

The first step is to create a customised feature within Definiens Developer to calculate the Normalised Difference Vegetation Index (NDVI), Equation 4.1.

NDVI = (NIR − RED) / (NIR + RED)

Equation 4.1. The normalised difference vegetation index.
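Equation 4.1, applied to an image object's mean band values, can be sketched as follows (plain Python; the example reflectance values are invented):

```python
def ndvi(mean_nir, mean_red):
    """Normalised Difference Vegetation Index for an image object,
    computed from its mean NIR and red values (Equation 4.1)."""
    return (mean_nir - mean_red) / (mean_nir + mean_red)

print(round(ndvi(100.0, 40.0), 3))  # 0.429 -- bright vegetation
print(round(ndvi(30.0, 35.0), 3))   # negative -- water or bare surface
```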

To set up the customised feature, enter the Feature View (Object Features > Customised) and select ‘Create new Arithmetic Feature’. This will produce a dialog in which you enter your customised feature (Figure 4.8a). Enter the NDVI into this customised feature (Figure 4.8b) and select OK.


a) Empty Edit Customised Feature. b) Customised Feature for NDVI.
Figure 4.8. Edit Customised Feature.

4.5. Classification Hierarchy

The next step is to create the class hierarchy shown in Figure 4.9.

Figure 4.9. Class Hierarchy

Tables 4.1 – 4.6 give the thresholds for each class. Note that when both an upper and a lower boundary are required, a membership function (see the explanation of fuzzy logic above) can be used (Figure 4.10).

Acid Semi Improved Grassland
Leave empty – No rules

Table 4.1. Rules for the class Acid Semi Improved Grassland.

Bog/Heath
Mean GREEN > 30
Mean GREEN < 42

Table 4.2. Rules for the class Bog/Heath.

Forest
Mean NIR < 100
Mean SWIR1 < 40
Mean NDVI > 0.3
Mean NDVI < 0.6

Table 4.3. Rules for the class Forest.

Improved Grassland
Mean NIR > 100
Mean NDVI >= 0.5

Table 4.4. Rules for the class Improved Grassland.

Not Vegetation
Mean NDVI <= 0.275

Table 4.5. Rules for the class Not Vegetation.

Water
Mean NDVI <= 0.05

Table 4.6. Rules for the class Water.

Figure 4.10. Setting a membership function with an upper and lower bound.

4.6. Create Process Outline

4.6.1. Segmentation

As with all classifications in Definiens Developer, the first task is to perform a segmentation. In this case the use of a multi-resolution segmentation is recommended, although others could be investigated, and the segmentation parameters will be the same as those used in the previous unit.


As with previous units, it is recommended that you create an outline within your process tree mirroring that shown in Figure 4.11. Remember, a process is created by right-clicking in the Process Tree window and selecting ‘Append Process’ or ‘Insert Child Process’.

Figure 4.11: Process outline and Segmentation process.

To create the process that performs the segmentation, right-click on the Segmentation process you have already created, select ‘Insert Child Process’, and then the algorithm ‘Multiresolution segmentation’. Choose the parameters shown in Figure 4.12 and once you have entered these parameters, click on ‘Execute’ to perform the segmentation.

Figure 4.12. Parameters used for the segmentation of the Landsat image.

4.6.2. Classification

The classification process is similar to the previous unit, but here each class will be classified with a separate classification process, and the classification will only be performed on the objects that remain unclassified. Therefore, you need to update your process tree to appear like the one in Figure 4.13; please make sure you use the same order as shown, as the order is important for the classification to work correctly.


Figure 4.13. The process tree including the classification processes.

Figure 4.14 shows the parameters for the classes Water and Not Vegetation. Make sure that you match these parameters, paying attention to the Image Object Domain for the Not Vegetation classification process, which restricts the classification to only those objects which are currently unclassified.

a) The classification for Water. b) The classification for Not Vegetation.
Figure 4.14. Example classification process parameters.

By classifying the scene in this way, the aim is to initially remove those elements which can be easily identified and classified, in this case water, and remove them from the scene before classifying the next class.

4.6.3. Merge and Export Image Objects

The merge and export operations are, again, the same as those used in Unit 3 but with the inclusion of the extra classes. Therefore, your final process tree should be like the one shown in Figure 4.15.
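The strategy of classifying the easy classes first and restricting later rules to what is still unclassified, combined with the thresholds from Tables 4.2–4.6, can be sketched as follows (plain Python; the object dictionaries are illustrative stand-ins for image objects, not Definiens data structures):

```python
# Rules applied in order, mirroring the order of the classification
# processes in the process tree (Water first, since its NDVI threshold
# would otherwise also be caught by Not Vegetation).
RULES = [
    ("Water",              lambda o: o["ndvi"] <= 0.05),
    ("Not Vegetation",     lambda o: o["ndvi"] <= 0.275),
    ("Forest",             lambda o: o["nir"] < 100 and o["swir1"] < 40
                                     and 0.3 < o["ndvi"] < 0.6),
    ("Improved Grassland", lambda o: o["nir"] > 100 and o["ndvi"] >= 0.5),
    ("Bog/Heath",          lambda o: 30 < o["green"] < 42),
]

def classify(obj):
    for name, rule in RULES:
        if rule(obj):
            return name
    # Table 4.1: Acid Semi Improved Grassland has no rules -- it takes
    # whatever remains unclassified at the end.
    return "Acid Semi Improved Grassland"

lake = {"ndvi": 0.01, "nir": 20, "swir1": 10, "green": 15}
print(classify(lake))  # Water
```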


Figure 4.15. The final process tree.

4.7. Run Process

Once you have run the process, either step by step or by executing the parent process (which will execute the complete algorithm), you will have a result like the one shown in Figure 4.16a and a shapefile, which can be imported into a GIS (Figure 4.16b).

a) The classification shown in Definiens b) The classification shown in ESRI ArcMap

Figure 4.16. The classification result.

4.8. Conclusion

Following the completion of this unit you should now be aware of the ability to define an object oriented rule based classification within Definiens Developer. Using a rule based classification can allow you to encode your expert knowledge; for example, Lucas et al. (2007) developed an object oriented rule based classification for upland habitats within Wales using Definiens Developer to encode the expert knowledge of ecologists. One of the problems with rule based classification is defining the rules used within the


classification; the next unit will go through the techniques available within Definiens to aid the development of these rules.

4.9. Exercises

1) Experiment with different segmentation algorithms and parameters. You should not have to edit the thresholds you have already entered to reclassify the resulting segments, but you may notice varying levels of accuracy between different segmentations.

2) The classification produced during this unit is superficially OK but, when viewed in more detail, contains numerous errors. Try to improve the quality of the classification through refinement of the existing rules.

3) In addition to the rules used within the classification, there may be other features available within Definiens Developer which could aid the classification. Review the features available and try to include extra features (or remove currently used features) to try and improve the result. Please refer to the reference guide for details of other features.

References

Lucas, R.M., Rowlands, A., Brown, A., Keyworth, S. and Bunting, P. (2007). Rule-based classification of multi-temporal satellite imagery for habitat and agricultural land cover mapping. ISPRS Journal of Photogrammetry and Remote Sensing, 62(3), 165-185.


Unit 5: Threshold Identification

Level:

• Beginner

Time:

• This unit should not take you more than 2 hours

Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).

• A copy of the image ‘Identification_of_Thresholds.tif’ generated for this exercise.

Processes:

• ClassifyImage_example.dcp

• ClassifyImage.dcp

By the end of this unit you should:

• Be aware of the different tools and methods available within Definiens Developer to help you identify suitable thresholds to undertake a classification.

5.1. Introduction

Through this unit, you will go over a number of techniques to aid the identification of thresholds. To illustrate the techniques more easily, an artificial image (Figure 5.1) has been created and will be used throughout this unit. Afterwards, you can try these techniques on actual data acquired by remote sensing instruments.

Figure 5.1. Artificial image created to illustrate the different techniques of threshold identification.


5.2. Getting started within Definiens Developer

5.2.1. Create the project

As before, the first step is to create your project using the image “Identification_of_Thresholds.tif” as input, where Bands 1, 2 and 3 should be named Red, Green and Blue respectively. When viewing the image, apply no stretch in order to see the same image as shown in Figure 5.1.

5.2.2. Create process and perform segmentation

As with previous projects, the first step is to create your process outline (Figure 5.2) and perform a multi-resolution segmentation. The parameters for the segmentation are given in Figure 5.2, within the automatically generated process names.

Figure 5.2. Process outline and segmentation parameters.

5.2.3. Create and include a class hierarchy in the process tree

Initially, when creating the class hierarchy, create empty classes (without associated features) as shown in Figure 5.3. Through the processes outlined below, you will identify and create the required thresholds within the classes.

Figure 5.3. The class hierarchy to be duplicated within your project.

You also need to create new processes to perform the classification once you have created the rules within your class hierarchy. You can do this in two ways:

1) Create an individual classification process for each class, as in the previous unit, or

2) Create a single process, edit it while developing the rules, and finally select all classes and classify them in one process once the rules have been developed (Figure 5.4).


5.2.4. Create the merging processes

These are the same as the processes created in the previous unit and need to be created for each of the classes within the hierarchy, as shown in Figure 5.4.

Figure 5.4. The final process.

5.3. How to identify a threshold

The next consideration is how to develop the rules required for classification of the image. This seems difficult to start with but will become a lot easier with perseverance. To help identify thresholds, a series of functions/options are available:

• The Feature View
• The Sample Editor
• The Feature Space Optimization
• The Object Information Window
• The Sample Selection Information
• The order of classification defined in the process.

Also important is the extent to which you know your imagery, in terms of:

• The range of values.
• What you are seeing (e.g., what is vegetation type X likely to be doing at the time of image capture?).
• The nature of the objects you are trying to extract (e.g., in the form of a model, such as a ‘hill and valley’ model for tree crown delineation).
• Interpreting the colours you can see within the image. For example, if the object is yellow in the image, which bands need to be used for classification?

Above all, it comes down to experience! So, take your time going through the following exercises and consider how the features and options outlined above help. Experiment with each of these and decide which ones you are most comfortable with, then use those. Note that you will quite often produce a


different result using these different methods, but there is no ‘right’ answer; the most important consideration is that your classification works and is appropriate to your application.

5.3.1. The Feature View

The Feature View window (Figure 5.5) can be used to colour the objects (using a colour bar) within the scene based on a single feature. The upper (green) and lower (blue) bounds can be edited manually. Moving these upper and lower bounds until only the area of interest is within the coloured range allows suitable upper and lower bounds to be identified. These values can then be inserted as a rule into the appropriate class.

Figure 5.5. Feature view window.

To test this form of threshold identification, go to the Feature View window within your Definiens project. If the Feature View window is not already available, click on the Feature View icon to reveal the required window. Once you have identified the Feature View window, navigate to the ‘Brightness’ feature using: Object Features > Layer Values > Mean > Brightness. Upon identification of the Brightness feature, right-click and select ‘Update Range’, and then select the check-box at the lower left corner of the Feature View window (if not already ticked). Figure 5.6 shows the result of this operation; you should now try to identify a single threshold (or pair of thresholds) to separate as many of the colours in the image as possible.


Figure 5.6. Feature view selection and the image colouring

Write down the brightness thresholds in the table below. Note that not all classes may be identified using the brightness feature.

Object            Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background

The Brightness Feature: the brightness feature in Definiens Developer is defined as the sum of the mean values (c̄i) of each (selected) layer for the object, divided by the number of layers (nL):

b = (1/nL) · (c̄1 + c̄2 + … + c̄nL)

To edit the layers contributing to the brightness feature, navigate through the following menus: Classification > Advanced Setting > Select Image Layers for Brightness…
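The brightness definition above amounts to a simple average of the selected layers' mean values, e.g. (illustrative layer names and values, not tied to any particular scene):

```python
def brightness(layer_means, selected=None):
    """Sum of the (selected) layers' mean values divided by the
    number of layers, matching the definition above."""
    if selected is None:
        selected = list(layer_means)  # use all layers by default
    return sum(layer_means[name] for name in selected) / len(selected)

obj = {"Red": 200.0, "Green": 180.0, "Blue": 160.0}
print(brightness(obj))                    # 180.0
print(brightness(obj, ["Red", "Green"]))  # 190.0 -- restricted layer set
```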

Now, try identifying further thresholds for the objects which could not be separated using the brightness feature by using other features. Once you have identified thresholds for each of the classes, add these thresholds into the class descriptions and run the classification process.


5.3.2. Sample Editor

One of the disadvantages of the Feature View method of threshold identification is that it is only possible to examine features one at a time. By using the Sample Editor and selecting samples (in the same way as you did to train the NN classifier), multiple features can be compared. Before using the Sample Editor, delete your existing classification (using Classification > Class Hierarchy > Delete Classification). If you have merged the classes previously, delete the level and re-segment. Now, turn on ‘Sample Selection’ and select samples for each of the classes (in the same way as with the NN classification preparation). Once you have created your samples, open the Sample Editor window and select the features you wish to compare by right-clicking within the window and selecting ‘Select Features to Display…’. Then, move the features you wish to display to the right-hand side of the window and click OK. Now, using the drop-down boxes at the top of the window (Figure 5.7), select the two classes you wish to compare. Note that the features you wish to display need to match those specified for use by the NN classifier.

Figure 5.7. Sample Editor Window.

From Figure 5.7, you can see that the black and yellow classes have a good separation using the features ‘brightness’, ‘mean red’ and ‘mean green’ but a reduced separation in the ‘mean blue’ feature. Therefore, you can start to get a feel for where suitable thresholds may exist. By continuing the process through comparing the black class to all others, you should be able to identify feature rules or combinations of these that separate the classes of interest.
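A rough analogue of what the Sample Editor lets you judge visually is whether, for a given feature, the sample value ranges of two classes overlap (a plain-Python sketch; the sample values and feature names are made up for illustration):

```python
def range_overlap(samples_a, samples_b, feature):
    """True if the [min, max] sample ranges of two classes overlap
    for the given feature (overlap suggests reduced separation)."""
    lo_a = min(s[feature] for s in samples_a)
    hi_a = max(s[feature] for s in samples_a)
    lo_b = min(s[feature] for s in samples_b)
    hi_b = max(s[feature] for s in samples_b)
    return lo_a <= hi_b and lo_b <= hi_a

black  = [{"brightness": 10, "mean_blue": 12}, {"brightness": 15, "mean_blue": 18}]
yellow = [{"brightness": 200, "mean_blue": 14}, {"brightness": 220, "mean_blue": 20}]

print(range_overlap(black, yellow, "brightness"))  # False -> good separation
print(range_overlap(black, yellow, "mean_blue"))   # True  -> reduced separation
```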


Again, identify features with their thresholds to separate the given classes and list them below. These may differ from those you listed using the Feature View. After you have defined these, add the new thresholds to the hierarchy and classify your image.

Object            Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background

5.3.3. Feature Space Optimization

Within Definiens Developer, another useful function for identifying the features that provide the best separation of classes is the Feature Space Optimization tool. The function is available through its toolbar icon or by navigating through Classification > Nearest Neighbour > Feature Space Optimization. To use the function, you again need to create samples representing each of your classes and then run the Feature Space Optimization. The Feature Space Optimization window will be similar to that in Figure 5.8.

Figure 5.8. Feature Space Optimization Window.

The first step is to select the classes you wish to consider. In Figure 5.8, all of the available classes have been selected, but you can select a subset of classes if you want to focus on these. Second, select the features you wish to consider for the separation, and select the level (if appropriate) you wish to work on (levels are discussed in the next unit, so for the moment you don’t need to worry about this). Finally, you need to select the number of


dimensions you wish to consider, which equates to the maximum number of features you want to use together to identify the best separation of your classes. Once you have entered those parameters, click on ‘Calculate’ and you will notice that numbers appear in the lower left box. These indicate a) the best separation distance and b) the number of dimensions (features) used to arrive at that separation. To see which features were used to identify a separation, click on ‘Advanced’ and a dialog similar to that shown in Figure 5.9 will appear.

Figure 5.9. Advanced results window for the Feature Space Optimization.

In Figure 5.9, the most significant information is within the textbox where, as you saw in the previous window (Figure 5.8) and in the displayed graph, 5 dimensions produce the best separation of the classes. By scrolling down to the ‘Dimension 5’ information, you can discover the features which produced the separation. You could now, using the ‘Apply to Std. NN’ button, add these features to the standard nearest neighbour and use the nearest neighbour classifier, but here we are identifying thresholds so we will not do this. Now that you have identified the features which give the best separation for all classes, experiment to identify those features which are most suited to the separation of individual classes or groups of classes. Afterwards, use that knowledge and the two techniques above to refine the thresholds required for the classification.
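A toy analogue of what the Feature Space Optimization tool searches for can be sketched as an exhaustive search over feature subsets, scored by the distance between the worst-separated pair of class means (plain Python; the real tool works on the selected sample objects, and the class means below are invented for illustration):

```python
from itertools import combinations

def best_feature_subset(class_means, max_dim):
    """Score every feature subset up to 'max_dim' features by the
    smallest Euclidean distance between any two class means, and
    keep the subset with the largest such distance."""
    features = sorted(next(iter(class_means.values())))
    names = list(class_means)
    best_subset, best_score = None, -1.0
    for k in range(1, max_dim + 1):
        for subset in combinations(features, k):
            dists = []
            for i in range(len(names)):
                for j in range(i + 1, len(names)):
                    a, b = class_means[names[i]], class_means[names[j]]
                    dists.append(sum((a[f] - b[f]) ** 2 for f in subset) ** 0.5)
            score = min(dists)  # the worst-separated class pair
            if score > best_score:
                best_subset, best_score = subset, score
    return best_subset, best_score

means = {"Water":  {"ndvi": 0.0, "nir": 20.0},
         "Forest": {"ndvi": 0.5, "nir": 80.0},
         "Grass":  {"ndvi": 0.7, "nir": 120.0}}
subset, score = best_feature_subset(means, max_dim=2)
print(subset)
```

Like the real tool, the exhaustive search becomes expensive as the number of features and dimensions grows, which is why choosing a sensible maximum dimension matters.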


List these in the table below.

Object            Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background

5.3.4. Object Information Window

Another interface Definiens Developer offers for exploring the data is the Object Information window. When an object is selected, this window displays the values for the selected features. To select the features you wish to have displayed, right-click within the window and select ‘Features to Display…’. To help the identification of thresholds (once you have an initial classification), you can go through the objects you judge to be in error and find the reason for the errors. Through this approach, you can adjust your thresholds and subsequently refine your classification. Within one of your previous classifications, examine objects to see whether the classification is correct. If not, use the information extracted from the objects to refine your classification. Write down the new rules in the table below.

Object            Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background

5.3.5. Order of classification defined in the Process Tree

The order of classification does not directly aid the identification of thresholds, but it does provide another layer of logic that you can include within your classification hierarchy and associated processes. By using the classification process and limiting the objects being considered for classification, the rules within your hierarchy can be made simpler. For example, in the previous unit, the class description for ‘Acid Semi Improved Grassland’ was left empty but, by classifying objects to the other classes first and restricting the classification of ‘Acid Semi Improved Grassland’ to the currently unclassified


objects, this class can be identified. This class might otherwise be very difficult and complex to identify because of the variation in the data values associated with the broad range of vegetation types that is likely to exist within this class. 5.3.6. Knowing your imagery One of the most important aspects of classification is to know what you are viewing and equally what you are not viewing within the imagery. For example, in the Landsat 7 imagery for North Wales you have used for the previous units, the date of the imagery is important as the vegetation behaves differently at different times of the year and will therefore need a different set of rules at different times. Equally, with temporal data from different seasons these variations can be exploited for identifying and classifying the land cover. Also, in knowing your imagery and the objects you wish to classify you may be able to think of them in the form of a model. For example, when trying to identify tree crowns, it is useful to visualise the image as conforming to a ‘hill and valley’ model, where the crowns form the hills. This can be used to identify seeds at the crown tops (brightest parts of the image on the hill tops) which can be expanded to identify the crown edges (in the valleys). 5.3.7. Interpreting the colour within the image The image you are seeing on the screen is displayed (for the most part) as a Red, Green and Blue (RGB) composite and therefore, if the object looks red on the screen you know it must have a large contribution from the channel you are displaying as red. From this observation, you can use the channel in red in the classification. Figure 5.10 shows the RGB colour space and by considering the colours you observe in the image and in this figure, you can start to establish which channels are contributing to the appearance of the image as displayed in a particular colour combination. 
Note that when using this approach you should also consider the stretch being applied to enhance the display, as this can change the colours you see and the contrast between features.


Figure 5.10. RGB Colour model.
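The ‘hill and valley’ crown model described in Section 5.3.6 can be illustrated outside Definiens with a short Python sketch. The image values and the brightness threshold below are invented for illustration: pixels brighter than all of their eight neighbours act as crown-top seeds.

```python
# Sketch of the 'hill and valley' crown model: treat pixel brightness as
# terrain and take local maxima (hill tops) as candidate crown seeds.
# The toy image and the threshold are illustrative, not course data.

def crown_seeds(image, min_brightness=100):
    """Return (row, col) of pixels brighter than all 8 neighbours."""
    rows, cols = len(image), len(image[0])
    seeds = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            centre = image[r][c]
            if centre < min_brightness:
                continue
            neighbours = [image[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)]
            if all(centre > n for n in neighbours):  # a 'hill top'
                seeds.append((r, c))
    return seeds

# A tiny toy brightness image with two 'hills'
toy = [
    [10, 10, 10, 10, 10],
    [10, 120, 10, 10, 10],
    [10, 10, 10, 130, 10],
    [10, 10, 10, 10, 10],
]
print(crown_seeds(toy))  # [(1, 1), (2, 3)]
```

In Definiens itself this idea is realised through segmentation and classification processes rather than pixel loops, but the seeds-then-grow logic is the same.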

5.3.8. Experience

Finally, and perhaps most importantly, it takes experience to become good at identifying thresholds and developing the processes and methods which fit around those thresholds and form your classification. The more imagery you gain experience with, the better you will become at classifying, and you will be able to apply your knowledge from one set of imagery to the next. Another aspect to consider is the ability of your developed rule bases and processes to be applied to imagery other than that on which they were developed. Ideally, this should be possible with no or minimal adjustments.

5.4. Conclusions

Following the completion of this unit you should now be aware of the tools and concepts through which you can identify the thresholds you will require to classify a scene using a rule base.

5.5. Exercises

1) Experiment with different segmentation algorithms and parameters. You should not have to edit the thresholds you have already entered to reclassify the resulting segments.


Unit 6: Working with Levels

Level:

• Intermediate

Time:

• This unit should not take you more than 1.5 hours

Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).
• A copy of the image ‘crowns_forest_image.tif’ generated for this exercise.

Processes:

• LevelsExampleProcess.dcp

By the end of this unit you should:

• Be aware of the concept of levels within Definiens Developer and how they can be used to extend the concepts represented through the classification.
• Know how to use the ‘enclosed by class’ process.

6.1. Introduction

Within this unit you will learn how to use levels within Definiens Developer and some of the features which allow interaction between them. These features increase the knowledge available within the system, as different scales of information are used. To illustrate the use of levels you will use an artificial image created for this unit (Figure 6.1). Within this image the green objects represent trees (herein referred to as Level 1) and a second level (herein referred to as Level 2) will be created to represent the forest extent. To identify the forest extent, more complex processes will be required to fill in the gaps between the crowns and create the forest mask.


Figure 6.1. Image to be used for classification.

6.1.1 What is a Definiens Level?

In its simplest form, a level in Definiens describes a set of objects represented at a particular scale. Levels form a hierarchy where, as shown in Figure 6.2, the lowest level contains the smallest objects and higher levels progressively increase in scale (object size). Through this approach, hierarchical relationships can be described, such as “a tree is within a forest” or “a house is within a town”.

Figure 6.2. Hierarchical Structure of Definiens Developer Levels.
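The level hierarchy can be pictured as parent links between objects on adjacent levels. The following minimal Python sketch is purely illustrative (the class and attribute names are invented; Definiens maintains these links internally):

```python
# Minimal sketch of the level hierarchy: each object may have a
# super-object on the level above, so "a tree is within a forest"
# becomes a parent link that can be followed upwards.

class ImageObject:
    def __init__(self, name, level, super_object=None):
        self.name = name
        self.level = level
        self.super_object = super_object  # object on the level above

forest = ImageObject("forest", level=2)
tree = ImageObject("tree", level=1, super_object=forest)

# Follow the link upwards to answer "what is this tree within?"
print(tree.super_object.name)  # forest
```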

6.2. Getting started within Definiens Developer

6.2.1. Create the project

As before, the first step is to create your project using the image “crowns_forest_image.tif” as input, where bands 1, 2 and 3 should be named Red, Green and Blue respectively. Make sure the unit is set to pixels and geocoding is turned off. When viewing the image, use no stretch to see the same image as shown in Figure 6.1.


6.2.2. Create process and perform segmentation

As with previous projects, the first step is to create your process outline (Figure 6.3) and perform a multi-resolution segmentation to create Level 1. The parameters for the segmentation are given in Figure 6.3.

Figure 6.3. Process outline and multiresolution segmentation parameters.

6.2.3. Class hierarchy and setting of thresholds

Create a class hierarchy which contains classes representing background, crowns and forest (Figure 6.4). Note that parameters (Figure 6.5) are assigned only to the crown class.

Figure 6.4. Class Hierarchy.

Figure 6.5. Class Description of the Crown Class.

6.2.4. Classification and Merging at Level 1

Implement and execute the components shown in Figure 6.6 in your process tree under the ‘Classification > Level 1’ and ‘Merge and Tidy’ processes.


Figure 6.6. Process for classification and merging at level 1.

The new part of this process is the chessboard segmentation of the background following the fusion of the two classes. This is done to separate the background into small objects which can then be used within the upper level to fill in the gaps between crowns and form a “forest” region containing the crowns.

6.2.5. Creation of Level 2

The next process copies Level 1 to create Level 2. To create the process, insert a new process under the ‘Creation Level 2’ process and input the parameters shown in Figure 6.7.

Figure 6.7. Parameters from copying Level 1 to create Level 2.

6.2.6. Classification of Level 2

Reproduce the processes shown in Figure 6.8 in the process hierarchy under the ‘Classification > Level 2’ process, but notice that the first process has a restriction: an object needs a border to an object of the class crowns. To define this restriction, click on the ‘no condition’ button in the Edit Process dialog and edit the resultant dialog to match that shown in Figure 6.9.


Figure 6.8. The classification processes for Level 2.

Figure 6.9. The classification process to identify objects with a border to a crown.
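The idea behind the normalised border feature used in this restriction can be sketched outside Definiens. The following Python fragment (with illustrative numbers) shows why dividing the shared border length by the total border length gives a value that does not depend on object size:

```python
# Illustrative sketch (not Definiens code): 'Rel. border to X' divides the
# border length shared with class X by the object's total border length,
# giving a value in [0, 1] that does not depend on object size.

def rel_border(shared_border_length, total_border_length):
    return shared_border_length / total_border_length

# Two objects of very different sizes, each half-bordered by crowns:
small = rel_border(shared_border_length=10, total_border_length=20)
large = rel_border(shared_border_length=500, total_border_length=1000)
print(small, large)  # 0.5 0.5
```

Because both objects give the same value, a single threshold works across object sizes, which is exactly why the relative feature gives a more stable threshold than the absolute border length.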

The feature ‘Rel. border to Crowns’ is used rather than ‘Border to Crowns’ as it is normalised, independent of the object size and border length, and therefore creates a more stable threshold.

6.2.7. Merge and Tidy the result

The final part of the classification is to fuse and tidy the result. To do this, reproduce the processes shown in Figure 6.10.


Figure 6.10. Processes to tidy and merge to give the final result.

You should already be familiar with the fusion process, but take note of which processes require execution on Level 1 and which on Level 2. To switch the level, remember to use the ‘Parameter…’ button next to the drop down box. The new process here fills in the gaps within the areas of forest, so when executing it is worth stepping through the processes one at a time to observe how each works. The parameters required for the gap-filling process are given in Figure 6.11.


Figure 6.11. Parameters for the process which fills any gaps within the forested areas.

6.3. Results

Once the process has been executed you should see a result at each level, as shown in Figure 6.12.

a) The result at Level 1 b) The result at Level 2

Figure 6.12. The results of the classification process.

6.4. Conclusions

Following the completion of this unit you should be aware of the concept of levels within Definiens Developer, how to implement them and how to represent the relationships between them. You have also encountered another process, in this case the ‘fill enclosed by class’ process.

6.5. Exercises

1) Experiment with different segmentation strategies when creating a new level. The figure below demonstrates a multi-resolution segmentation process which will create a new level above the existing one.


Figure 6.13. A segmentation process which creates the segmented layer as a new level above the existing one.

2) Examine and experiment with the other features which allow interaction between objects within and between levels (e.g., Relations to sub-objects and Relations to super-objects). Note that super-objects are those on the level above, while sub-objects are those on the level below.

3) Explain below why the class background on Level 1 cannot be fully fused to create one large object.

____________________________________________________________________
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________


Unit 7: Putting it all together

Level:

• Intermediate

Time:

• This unit should not take you more than 4 hours

Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).
• The multispectral Landsat 7 image orthol7_20423xs100999.img and its corresponding panchromatic scene o20423_pan.tif (Row: 204 Path: 23 Date 10/09/1999) over North Wales, available from the Landmap Service.

Processes:

• LandcoverClassificationExample_provided.dcp
• LandcoverClassificationExample.dcp

By the end of this unit you should:

• Be able to put together a real world land use classification.
• Be able to demonstrate that you can calculate thresholds for classification for real world images.
• Be aware of the process to grow a class from an identified core.

7.1. Introduction

The aim of this unit is to allow you to pull together the skills you have developed within Definiens Developer to produce a single, more complex example. The process outline will be provided, with a segmentation and an initial classification of elements such as water to illustrate some more advanced features, but you will be required to identify and enter the thresholds for the classification of the scene.

7.2. Setup the Project

To set up the project you are required to load the layers shown in Figure 7.1a and create the subset shown in Figure 7.1b.


a) The project layers and aliases b) The project subset.

Figure 7.1. The project parameters.

Please note the inclusion of the two thematic layers. The first defines the area of the image where cloud is present; the second defines the upland and lowland areas of the scene and will be used for segmentation.

7.3. Setup the Process Tree

7.3.1. The segmentation

The first step is to set out your process tree with the default structure (Figure 7.2).

Figure 7.2. The default Process Tree.

The initial segmentation is performed with the parameters shown in Figure 7.3. Please notice the inclusion of the thematic layer to define the cloud covered area, and note the order of the layers in the figure.


Figure 7.3. Segmentation parameters.

Once executed, you should observe that the segmentation process has identified the areas defined within the shapefile delimiting the cloud cover. The next step is to classify these as such and ignore them for the remainder of the classification process. The classification is performed with reference to the thematic layer (Figure 7.4), resulting in the process tree shown in Figure 7.5.

a) The classification description b) The features which reference the thematic layers

Figure 7.4. The classification of cloud.


Figure 7.5. The process tree, including the cloud classification.

Once the cloud has been removed from the scene, the following segmentation process (Figure 7.6a) will be added to the process tree (Figure 7.6b). Please note the use of the second shapefile to separate the lowland and upland regions of the scene. Also note that the segmentation is performed at Level 1 and the Level Usage parameter is set to ‘Use current’.

a) The segmentation parameters b) The process tree, including the second segmentation step.

Figure 7.6. The second segmentation process to define the upland and lowland regions.

Once segmented, the upland and lowland classes need to be defined using the thematic layer (Figure 7.7).

a) The classification description b) The classification process.


Figure 7.7. The classification process and description to define the regions of upland and lowland.

The final part of the segmentation is to segment within the upland and lowland regions to produce the segments for classification. The process and parameters are shown in Figure 7.8.

a) The process tree b) The segmentation parameters

Figure 7.8. The process and segmentation parameters for the upland and lowland regions.

These steps result in a segmentation which varies depending on the land cover. In the lowlands, where land units are defined by fields and hedges, the segmentation needs to be coarser to pull out these broad, similar regions; in the upland regions, where the land units are much smaller, the segmentation is much finer, allowing these regions to be classified.

7.3.2. The classification process

The classification process will be defined under the ‘Classification of Landcover’ process (Figure 7.2). The first part of this process tree has been provided, below, but you are required to input the remaining processes and rules for the classes shown in Figure 7.9.


Figure 7.9. The classes to be identified.

Within the Groups tab of the class hierarchy, classes can be placed in a hierarchy allowing the relationships between the different classes to be defined. For example, all the forest classes have been placed under the Forest class. Therefore, Definiens Developer is aware that the Broadleaf Forest, Coniferous Forest and Young Coniferous Forest classes are all types of forest. Once classified, if you collapse the Forest group, all these forest regions will be coloured as forest. Be aware, however, that if you merge the Forest class all the sub-classes will be merged, forming only a single class and removing information from your classification. If you are unsure of the classes to be identified, please refer to the shapefile Landcover_classification.shp.

The first step within this classification is to identify the ‘Not Vegetation’ regions within both the upland and lowland regions. The NDVI has been calculated using a customised feature, which you will need to create (see earlier unit), and a threshold of NDVI < 0.25 has been identified (enter it within the class description of the class ‘Not Vegetation’) to separate the ‘Not Vegetation’ regions (Figure 7.10).
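For reference, the NDVI computed by the customised feature is (NIR - Red) / (NIR + Red). The following Python sketch uses invented band values purely for illustration; within Definiens the calculation is entered in the customised feature editor rather than as code.

```python
# Sketch of the customised NDVI feature: NDVI = (NIR - Red) / (NIR + Red).
# Band values below are illustrative, not taken from the course imagery.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Vegetated surfaces reflect strongly in the NIR relative to Red, so
# NDVI is high; the unit's rule classes objects with NDVI < 0.25 as
# 'Not Vegetation'.
vegetated = ndvi(nir=80, red=20)      # 0.6 -> vegetation
bare = ndvi(nir=30, red=25)           # ~0.09 -> 'Not Vegetation'
print(vegetated < 0.25, bare < 0.25)  # False True
```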


Figure 7.10. The process tree for classifying the ‘Not Vegetation’ regions.

Within the ‘Not Vegetation’ regions, the areas of ‘Water’ have been identified using the rules SWIR2 < 15 AND NDVI < 0.1, but when you run these rules you will notice that not all the areas of ‘Water’ are identified. This is because there are still some small regions of cloud over the lake. To correctly classify these regions we will grow the ‘Water’ class using the ‘Grow Water’ class. The ‘Grow Water’ class contains the rules shown in Figure 7.11, where the new rule ‘Rel. border to Water > 0’ specifies that, to be a member of the class ‘Grow Water’, an object needs a border to a ‘Water’ object.

Figure 7.11. The class description of the ‘Grow Water’ class.
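The water rule can be read as a simple boolean test. The following Python sketch uses invented object values; it also shows why the cloud-covered part of the lake fails the test, which is what motivates growing the ‘Water’ class.

```python
# Sketch of the 'Water' rule: an object is water if SWIR2 < 15 AND
# NDVI < 0.1. The object records below are invented for illustration.

def is_water(swir2, ndvi):
    return swir2 < 15 and ndvi < 0.1

objects = [
    {"name": "lake", "swir2": 8, "ndvi": -0.2},
    {"name": "field", "swir2": 40, "ndvi": 0.6},
    # bright cloud over the lake fails the SWIR2 test:
    {"name": "cloud over lake", "swir2": 60, "ndvi": 0.05},
]
water = [o["name"] for o in objects if is_water(o["swir2"], o["ndvi"])]
print(water)  # ['lake'] - the cloud-covered part of the lake is missed
```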

By defining the process tree as shown in Figure 7.12, the ‘Grow Water’ class is classified iteratively 10 times (10x: for all, Figure 7.13), with the identified ‘Grow Water’ objects assigned to the ‘Water’ class between iterations.

Figure 7.12. The process tree to classify the regions of water within the scene.


Figure 7.13. Using a process to loop a group of processes 10 times.
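The iterative grow can be sketched as a loop over an adjacency graph. The graph and object names below are invented for illustration; in Definiens the adjacency comes from the object borders, and the loop count matches the 10x process above.

```python
# Sketch of the iterative 'Grow Water' loop: for up to 10 passes, any
# candidate object touching a 'Water' object is reclassified as 'Water'.

def grow(water, candidates, neighbours, max_iterations=10):
    water = set(water)
    candidates = set(candidates)
    for _ in range(max_iterations):
        grow_water = {c for c in candidates
                      if any(n in water for n in neighbours[c])}
        if not grow_water:
            break                    # nothing left bordering water
        water |= grow_water          # assign 'Grow Water' to 'Water'
        candidates -= grow_water
    return water

# A chain of objects: lake - cloud1 - cloud2 - field
neighbours = {"lake": ["cloud1"], "cloud1": ["lake", "cloud2"],
              "cloud2": ["cloud1", "field"], "field": ["cloud2"]}
result = grow(water=["lake"], candidates=["cloud1", "cloud2"],
              neighbours=neighbours)
print(sorted(result))  # ['cloud1', 'cloud2', 'lake']
```

Each pass claims only objects directly bordering the current water extent, so the class advances one "ring" of objects per iteration, exactly as the repeated classify-then-assign processes do.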

Finally, the ‘Water’ regions are merged and the remainder of the classification concentrates on the vegetation within the scene, for which you are required to develop your own processes and rules.

7.3.3. Tidy and exportation

As with the previous processes, the final steps are to merge and tidy the classification classes and export the results for use within a GIS. Based on the knowledge gained within the previous units, develop these parts of the process.

7.4. Results

Once you have completed the classification and executed the tidy and exportation processes, you should have results similar to those shown in Figure 7.14.


a) The classification within Definiens b) The classification within a GIS.

Figure 7.14. The results of the classification.

Finally, it is recommended that you check your classification process against the model process provided (LandcoverClassificationExample.dcp). To do this, open another instance of Definiens Developer and set up the same project structure, then right-click within the process tree window and select ‘Load Rule Set…’ (Figure 7.15). You may now execute this process and should obtain the same result as that shown in Figure 7.14. Observe how each object can have membership of multiple classes (use the ‘membership to’ feature and the object information window), as fuzzy membership functions have been developed for each of the added classes.


Figure 7.15. Importing a saved rule set.

7.5. Conclusions

Following the classification of this scene you should now be confident in performing your own classifications, including several structured classes and many of the classification features available within Definiens. It is recommended that you look through the reference guide within your installation of Definiens to observe the large number of features available to you during classification.

7.6. Exercises

1) The classification rule set provided is a fuzzy classification; therefore each object has a membership to all the classes. Look up the classification stability feature and observe the objects which are on the border between two classes.


Unit 8: Calculating Image Thresholds

Level:

• Advanced

Time:

• This unit should not take you more than 3 hours

Resources:

• A licence of Definiens Developer (Version 7.0 was used to develop these units).
• The multispectral Landsat 7 image orthol7_20424xs240799.img (Row: 204 Path: 24 Date 24/07/1999) over South Wales, available from the Landmap Service.

Processes:

• CalculatingThresholdsExample.dcp

By the end of this unit you should:

• Be able to develop Definiens processes which calculate features from the image.
• Be able to implement iterative processes to loop through a series of objects and/or grow an object from its core.
• Be able to use variables during the classification.

8.1. Introduction

A limitation of the methods presented so far is that the thresholds for classification have been identified manually. This is time consuming, and thresholds can vary between images and even across a single image. With appropriate image pre-processing (atmospheric correction and topographic correction) many of these differences can be corrected for, but not all. Therefore, this unit will demonstrate how Definiens Developer can calculate thresholds from the imagery and use them for classification, in this case for cloud and shadow detection.

8.2. Setup the Project

To set up the project you need to define the same layer aliases and subset as those shown in Figure 8.1.


a) The layers and aliases b) The project subset

Figure 8.1. Setting up the project.

8.3. The Underlying Concept of the Classification

The methodology you are to develop is an iterative process that spatially breaks down the problem, allowing the thresholds used within the process to be calculated separately across the image. The first step is therefore to split the image into a large chessboard, where each segment (or tile) will be classified individually (1 to 42), Figure 8.2.

Figure 8.2. The iterative process of processing the image.

Once a segment of the chessboard has been selected, an upper and a lower quantile within the segment will be calculated and used as the thresholds for identifying seeds for the clouds and the shadows within the scene. A fine segmentation is then performed on the segment and, once these seeds (or


cores) have been identified, the remainder of the segment will be used to recalculate these threshold values. The seeds will then be grown to the limit of these new thresholds. Finally, the cloud and shadow objects will be merged and the process will move on to the next large segment, until all parts of the image are processed.

8.4. Setup the Process Tree

8.4.1. Setting up the looping process

The first step is to set up the large chessboard segmentation and iterate through the segments; the standard process tree is still used (Figure 8.3).

Figure 8.3. The standard process tree.

The segmentation process within this structure is a large chessboard segmentation, Figure 8.4.

Figure 8.4. The large chessboard segmentation process

Figure 8.5 shows the process tree which you should have up to this point.

Figure 8.5. The process tree.


You then need to define the iterative process which will allow each segment to be selected in turn (Figure 8.6). A while loop needs to be set up to loop while the number of ‘unclassified’ objects is greater than 0. Within this loop, the first process selects a single object. The ‘Find domain extreme’ process will be used (Figure 8.7), where the object with the maximum value of its Y location is selected. Where multiple objects have the same value, one will be selected at random, as the option ‘Accept Equal Extrema’ has been set to no; otherwise all objects with the same value would be selected together. The selected object will be given the class ‘_active’. The preceding underscore is used to denote a class which is used for processing rather than a classification class.

Figure 8.6. The iterative process.

Figure 8.7. The find domain extreme process and parameters.

Once an object has the class ‘_active’ it can be processed individually using the Image Object Domain filter within a process. Finally, once the processing has finished, all the remaining ‘_active’ objects need to be removed to allow the loop to terminate.
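The selection loop can be sketched in Python (tile coordinates are invented for illustration): while unclassified tiles remain, the tile with the maximum Y value is made ‘_active’, processed, and then marked as processed so the loop can terminate.

```python
# Sketch of the tile-selection loop: while unclassified tiles remain,
# 'find domain extreme' picks the one with the maximum Y position, it is
# given the temporary class '_active', processed, then retired.

tiles = [{"y": y, "state": "unclassified"} for y in (10, 30, 20)]
order = []

while any(t["state"] == "unclassified" for t in tiles):
    unclassified = [t for t in tiles if t["state"] == "unclassified"]
    active = max(unclassified, key=lambda t: t["y"])  # find domain extreme
    active["state"] = "_active"
    # ... per-tile segmentation and classification happens here ...
    order.append(active["y"])
    active["state"] = "_processed"  # remove '_active' so the loop can end

print(order)  # [30, 20, 10]
```

If the ‘_active’ tile were never reassigned, the while condition would stay true forever, which is why the final tidy step inside the loop is essential.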


8.4.2. Implementing the processing stage

The next stage is to set up the template within the loop to allow the processing of the individual segments (Figure 8.8).

Figure 8.8. The standard template within the loop.

Within the segmentation process of the template, a quad-tree segmentation with a scale parameter of 40 is used (Figure 8.9).

Figure 8.9. The quad-tree segmentation process.

The process you will use to calculate the thresholds is the ‘compute statistical value’ process, which allows the number, sum, minimum, maximum, mean, standard deviation, median and quantile to be calculated. The value from this calculation is output into a variable; Definiens Developer supports the concept of variables within the process tree. Definiens offers five variable types (Scene, Object, Class, Feature and Level), where the scope of the variable is defined by its type. For example, an object variable is created for each individual object, while a Level variable is defined for an individual level (i.e., a different value can be stored for each level) and a scene variable is defined for the whole project (i.e., only one value for the whole project). For this project you will only use scene variables and calculate the quantile from the ‘compute statistical value’ process. To calculate the quantile you first need to define the quantile you are interested in, for example the 90% quantile; to


increase the flexibility of the process, we will define a variable to store this value. The classification processes (Figure 8.10) start by defining the set of variables to be used during the classification. The first, ‘LowerQuantile’, contains the quantile used to calculate the threshold for the shadow seeds, while the second, ‘UpperQuantile’, defines the quantile for the cloud seeds. Finally, ‘LowerQuantileBrightness’ and ‘UpperQuantileBrightness’ need to be defined to store the brightness thresholds to be used for classification. To set up a variable the ‘update variable’ process needs to be used (Figure 8.11). The initial values for the variables are shown in Table 8.1.

Figure 8.10. The process tree to set up the variables.

Figure 8.11. The process parameters for setting up a variable.


Variable                  Initial Value
LowerQuantile             5
UpperQuantile             88
LowerQuantileBrightness   0
UpperQuantileBrightness   0

Table 8.1. The initial values of the required variables.

The next stage of the process (Figure 8.12) is to compute the threshold values into the LowerQuantileBrightness and UpperQuantileBrightness variables. Note that the Image Object Domain specifies the ‘_active’ class; the values are therefore only computed over objects with the class ‘_active’ and will vary across the scene as each chessboard segment is selected in turn.

a) The process tree b) The process parameters c) The process parameters for an ‘if statement’

Figure 8.12. Computing the threshold values.
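The per-tile quantile computation can be sketched in Python. The sketch below uses a simple nearest-rank quantile over invented tile values; the ‘compute statistical value’ process in Definiens may use a different quantile interpolation, so treat this only as an illustration of the idea.

```python
# Sketch of the 'compute statistical value' step: the brightness
# thresholds are the 5% and 88% quantiles of the values inside the
# currently '_active' tile, stored in scene variables.

def quantile(values, q):
    """Nearest-rank quantile, q given in percent."""
    ordered = sorted(values)
    index = min(len(ordered) - 1,
                int(round(q / 100.0 * (len(ordered) - 1))))
    return ordered[index]

LowerQuantile, UpperQuantile = 5, 88   # scene variables (Table 8.1)
tile_brightness = list(range(101))     # one toy tile: values 0..100

LowerQuantileBrightness = quantile(tile_brightness, LowerQuantile)
UpperQuantileBrightness = quantile(tile_brightness, UpperQuantile)
print(LowerQuantileBrightness, UpperQuantileBrightness)  # 5 88
```

Because the quantiles are recomputed for each ‘_active’ tile, a bright tile and a dark tile automatically get different brightness thresholds, which is the point of the spatial breakdown.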

Following the calculation of the thresholds (UpperQuantileBrightness and LowerQuantileBrightness), two ‘if statements’ are provided to identify particularly bright tiles and lower the quantile threshold to take this situation into account (Figure 8.12c). Otherwise, the threshold used for classification of the cloud cores would be too high, resulting in an underestimation of the cloud cover. Once these thresholds have been calculated, the classification of the classes ‘Cloud’ and ‘Shadow’ can be completed to produce their respective seeds. Figure 8.13 shows the class descriptions of the two classes. Note that the


variables UpperQuantileBrightness and LowerQuantileBrightness are used in place of fixed thresholds. Finally, a classification process is added to the process tree, followed by merging processes for the two classes (Figure 8.14).

a) Cloud Class Description b) Shadow Class Description

Figure 8.13. The class descriptions for Cloud and Shadow classes.


Figure 8.14. The process tree.

The next stage is to grow the cloud and shadow seeds to identify the full extent of the clouds and their shadows; this will require another loop. First, however, the thresholds UpperQuantileBrightness and LowerQuantileBrightness need to be recalculated to provide the thresholds used to terminate the loop which grows the seeds. The same process as before is used to calculate the new upper and lower quantiles of the ‘_active’ class. The new values will differ from those previously calculated because the identified cloud and shadow seeds no longer have the class ‘_active’ and are therefore not included in the calculation. To define the loop, create a new process below ‘calculate threshold’ and tick the ‘Loop while something changes’ option (Figure 8.15). The elements within the loop can now be inserted as child processes.


a) Process Tree b) Process parameters

Figure 8.15. The process tree and process parameters to set up the loop.

Once the loop has been defined, the classes ‘Cloud Grow’ and ‘Shadow Grow’ need to be created, where the class descriptions will be the same as Cloud and Shadow but for the inclusion of the relative border features, restricting classification to objects bordering the Shadow and Cloud features (Figure 8.16).

a) Cloud Grow Description b) Shadow Grow Description

Figure 8.16. The class descriptions for the Cloud Grow and Shadow Grow classes.

Following the classification of Cloud Grow and Shadow Grow, the two classes need to be assigned to the Cloud and Shadow classes before being merged, and any remaining ‘_active’ objects assigned to ‘_processed’ (Figure 8.17). These three steps (classification, assign and tidy) will happen on each iteration of the loop, which will continue until all the objects fitting the rules have been identified.


Figure 8.17. The process tree.

The next step is to tidy the classification, which consists of three stages. The first is to assign all the ‘_processed’ objects to unclassified and then merge them. The next is to fill any holes in the cloud or shadow objects with an area of less than 20000 m2. To do this the ‘fill enclosed by class’ process is used (Figure 8.18), where all the unclassified objects with an area of less than 20000 m2 enclosed by cloud are assigned (i.e., use class description = no) to the class cloud. This is repeated for the shadow class. Finally, the Shadow and Cloud classes are merged and exported (remember to export the class names), Figure 8.19.


Figure 8.18. The fill enclosed by class process parameters.

Figure 8.19. The process tree to tidy and export the classification.
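The ‘fill enclosed by class’ logic can be sketched as a filter over object records. The records below are invented for illustration; within Definiens the enclosure relationship comes from the object topology rather than an attribute.

```python
# Sketch of the 'fill enclosed by class' step: an unclassified object
# wholly enclosed by 'Cloud' and smaller than 20000 m2 is assigned to
# 'Cloud'. The object records are illustrative only.

MAX_HOLE_AREA = 20000  # m2, the limit used in the unit

def fill_enclosed(objects, enclosing_class):
    for o in objects:
        if (o["class"] == "unclassified"
                and o["enclosed_by"] == enclosing_class
                and o["area"] < MAX_HOLE_AREA):
            o["class"] = enclosing_class
    return objects

objects = [
    {"class": "unclassified", "enclosed_by": "Cloud", "area": 5000},
    {"class": "unclassified", "enclosed_by": "Cloud", "area": 50000},
    {"class": "unclassified", "enclosed_by": None, "area": 1000},
]
fill_enclosed(objects, "Cloud")
print([o["class"] for o in objects])
# ['Cloud', 'unclassified', 'unclassified']
```

Only the small enclosed hole is filled; large enclosed areas and objects not enclosed by cloud are left alone, matching the area condition in Figure 8.18.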

8.5. The Results

Once you have run the process you will have a map of the clouds and their shadows (Figure 8.20a) and a shapefile containing these objects (Figure 8.20b).


a) Within Definiens Developer b) From ArcGIS

Figure 8.20. The results.

Finally, to increase your understanding of the process you can make use of the ‘Update View’ option (right-click on a process), which is available on every process and updates the view in the data window after the process has executed. This allows you to watch the progress of your classification. Initially, select ‘Update View’ on the process which selects an active object, the merging of the cloud and shadow seeds, and the merging of the cloud and shadow during the grow. Now execute the process and you can watch your classification being performed. Beware of overusing this feature, as updating the view is slow and can significantly increase the processing time; for example, switching on the three updates as suggested will double the processing time of this algorithm.

8.6. Conclusions

From this worksheet you should be aware of some of the more advanced processes and functions available within Definiens, including growing a class, using variables and calculating thresholds.

8.7. Exercises

1) Although the method superficially works well, there are numerous small errors where areas of vegetation have been included in the cloud mask. Develop rules to remove this misclassification.

2) Currently, if a segment (from the chessboard segmentation) does not contain any cloud or shadow, objects will be identified as such regardless. Add extra ‘if’ statements to try to remove or reduce this problem.


3) Select a new subset and try the classification on it to check the robustness of the algorithm.