Transcript
Page 1: Computational Intelligence:  Methods and Applications

Computational Intelligence: Methods and Applications

Lecture 28: Non-parametric density modeling

Source: Włodzisław Duch; Dept. of Informatics, UMK; Google: W Duch

Page 2: Computational Intelligence:  Methods and Applications

Density from histograms

In 1-D or 2-D it is rather simple: histograms provide a piecewise-constant approximation. Since we do not assume any particular functional dependence, such estimation is called "nonparametric".

Histograms change depending on the size of the bin $B_i$ that measures the frequency $P(X \in B_i)$.

Smoothing histograms may be done by fitting some smooth functions, such as Gaussians.

How good is this approximation?
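A minimal NumPy sketch (not from the lecture; the data and bin counts are assumed for illustration) of how a histogram yields a piecewise-constant, nonparametric density estimate whose shape depends on the bin size:

```python
import numpy as np

# Piecewise-constant density estimate from a histogram, for two bin widths.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=500)    # sample data (assumed)

for bins in (5, 50):
    counts, edges = np.histogram(x, bins=bins)
    widths = np.diff(edges)
    density = counts / (counts.sum() * widths)  # normalize so the total area is 1
    # density[i] is the constant estimate of P(x) inside bin [edges[i], edges[i+1])
    print(bins, "bins -> total area:", np.sum(density * widths))
```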

Page 3: Computational Intelligence:  Methods and Applications

Why does histogram estimation work?

The probability that a data point comes from some region R (belongs to some category, etc.) is:

We are given n data points; what is the chance Pr(k) that k of these points come from region R? If n = k = 1 then Pr = P. In general Pr(k) is the number of combinations in which k points can be selected out of n, multiplied by the probability of selecting k points from R, i.e. $P^k$, and of selecting n-k points not from R, i.e. $(1-P)^{n-k}$; that is, the distribution is binomial:

$$P = \int_R P(\mathbf{X})\, d\mathbf{X}$$

$$\Pr(k) = \binom{n}{k} P^k (1 - P)^{n-k}$$

Expected value: $E[k] = nP$

Expected variance: $\operatorname{Var}[k] = E\!\left[(k - nP)^2\right] = nP(1 - P)$, so that $E[k/n] = P$ and $\operatorname{Var}[k/n] = P(1 - P)/n$.

Since $P \approx P(\mathbf{X})\, V_R$ and $P \approx k/n$, for a large number of samples n a small variance of k/n is expected; therefore $k/(n V_R)$ is a useful approximation to $P(\mathbf{X})$.
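A short simulation sketch (assumed setup, not from the lecture) illustrating why this works: the fraction k/n concentrates around P with variance P(1-P)/n:

```python
import numpy as np

# The fraction k/n of points falling in a region R concentrates around P as n grows.
rng = np.random.default_rng(1)
P = 0.3                                    # true probability mass of region R (assumed)
for n in (10, 100, 10_000):
    k = rng.binomial(n, P, size=2000)      # 2000 repeated experiments
    print(n, "samples: mean k/n =", (k / n).mean().round(3),
          " empirical var =", (k / n).var().round(5),
          " theory P(1-P)/n =", round(P * (1 - P) / n, 5))
```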

Page 4: Computational Intelligence:  Methods and Applications

Parzen windows 1D

Estimate the density using (for standardized data) a bin of size h (a window on the data) in each dimension.

In 1D the cumulative distribution function is CP(x) = (# observations < x)/n.

Density is given as a derivative of this function, estimated as:

$$P(x) \approx \frac{CP(x + h/2) - CP(x - h/2)}{h}$$

Number of points inside the window centered at $\mathbf{X}$:

$$k = \sum_{i=1}^{n} H\!\left(\frac{\mathbf{X} - \mathbf{X}^{(i)}}{h}\right)$$

For example, hyperrectangular windows with $H(\mathbf{u}) = 1$ for all $|u_j| < 0.5$ (and 0 otherwise) may be used, or a hard sphere with 1 inside and 0 outside.

Density estimate:

$$P(\mathbf{X}) = \frac{k}{nV} = \frac{1}{n h^d}\sum_{i=1}^{n} H\!\left(\frac{\mathbf{X} - \mathbf{X}^{(i)}}{h}\right)$$

$h$ is called a "smoothing" parameter.

The kernel should satisfy $H(\mathbf{u}) \ge 0$ and should integrate to 1.
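A sketch of the 1D Parzen-window estimate with the rectangular kernel H(u) = 1 for |u| < 0.5; the data, grid and h values are illustrative assumptions:

```python
import numpy as np

# 1D Parzen-window density: P(x) = (1/nh) * sum_i H((x - x_i)/h),
# with the rectangular kernel H(u) = 1 for |u| < 0.5, else 0.
def parzen_1d(x_grid, samples, h):
    u = (x_grid[:, None] - samples[None, :]) / h   # (grid, n) matrix of scaled offsets
    H = (np.abs(u) < 0.5).astype(float)            # rectangular window
    return H.sum(axis=1) / (len(samples) * h)

rng = np.random.default_rng(2)
samples = rng.normal(size=200)
grid = np.linspace(-4, 4, 201)
for h in (0.1, 0.5, 2.0):                          # small h: spiky; large h: oversmoothed
    p = parzen_1d(grid, samples, h)
    print(f"h={h}: integral of the estimate ~ {np.trapz(p, grid):.3f}")
```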

Page 5: Computational Intelligence:  Methods and Applications

Parzen windows 1D

Estimate the density using (for standardized data) a bin of size h (a window on the data) in each dimension. In 1D the cumulative distribution function is:

P(x < a) = (# observations with x < a)/n ≤ 1 (this is the probability that x < a).

Density is given as a derivative of this function, but for such staircase data it will be discontinuous: a series of spikes at the $x = x_i$ values corresponding to real observations.

Numerical estimation of the density at point $x = a$ is calculated as:

$$P(x = a) \approx \frac{P(x < a + h/2) - P(x < a - h/2)}{h}$$

Cumulative contributions from all points should sum up to 1; the contribution from each interval $[x_i - h/2,\, x_i + h/2]$ with a single observation $x_i$ inside is $1/n$.

For real data this is a staircase function.

Page 6: Computational Intelligence:  Methods and Applications

Parzen 1D kernels

We need a continuous density estimate, not spikes.

Introduce a kernel function indicating whether the variable is in the interval:

$$H(u) = \begin{cases} 1 & |u| \le 1/2 \\ 0 & |u| > 1/2 \end{cases}$$

The density in the window is constant $=1$, so integrating over each kernel:

$$\int H\!\left(\frac{x - x_i}{h}\right) dx = h$$

The density may now be written as:

$$P(x) = \frac{1}{nh}\sum_{i=1}^{n} H\!\left(\frac{x - x_i}{h}\right)$$

Integrating over all $x$ therefore gives total probability $=1$.

The smooth cumulative density for $x \le a$ is then:

$$P(x < a) = \int_{-\infty}^{a} P(x)\, dx$$

This is equal to $1/n$ times the number of $x_i \le a$, plus a fraction from the last interval $[x_i - h/2,\, a]$ if $a < x_i + h/2$.

Page 7: Computational Intelligence:  Methods and Applications

Parzen windows dD

The window moves with $\mathbf{X}$, which is at its center, therefore the density is smoothed. 1D generalizes easily to the dD situation:

Volume $V = h^d$ and the kernel (window) function:

$$H(\mathbf{u}) = H\!\left(\frac{\mathbf{X} - \mathbf{X}^{(i)}}{h}\right)$$

Typically hyperrectangular windows with $H(\mathbf{u}) = 1$ for all $|u_j| < 1$ are used, or hard-sphere windows with 1 inside and 0 outside, or some other localized functions.

Number of points inside:

$$k = \sum_{i=1}^{n} H\!\left(\frac{\mathbf{X} - \mathbf{X}^{(i)}}{h}\right)$$

Density estimate:

$$P(\mathbf{X}) = \frac{k}{nV} = \frac{1}{n h^d}\sum_{i=1}^{n} H\!\left(\frac{\mathbf{X} - \mathbf{X}^{(i)}}{h}\right)$$

$h$ is called a "smoothing" parameter.

Any function with $H(\mathbf{u}) \ge 0$ that integrates to 1 may be used as a kernel.
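A sketch of the d-dimensional estimate with a hyperrectangular window; here the half-width 0.5 is chosen so that the kernel volume is h^d and the estimate integrates to 1 (data are assumed):

```python
import numpy as np

# d-dimensional Parzen-window estimate: P(X) = 1/(n h^d) * sum_i H((X - X_i)/h).
def parzen_dd(x, samples, h):
    """Density estimate at a single point x (shape (d,)) from samples (shape (n, d))."""
    n, d = samples.shape
    u = (x - samples) / h                       # scaled offsets, shape (n, d)
    inside = np.all(np.abs(u) < 0.5, axis=1)    # hyperrectangular window H(u)
    k = inside.sum()                            # number of points inside the window
    return k / (n * h**d)

rng = np.random.default_rng(3)
samples = rng.normal(size=(1000, 2))            # 2-D standardized data (assumed)
print(parzen_dd(np.zeros(2), samples, h=0.5))   # estimate near the mode
```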

Page 8: Computational Intelligence:  Methods and Applications

Example with rectangles

With large h strong smoothing is achieved (imagine a window covering all the data ...).

Details are picked up when h is small, general shape when it is large.

Use as H(u) a smooth function, such as a Gaussian; if it is normalized then the final density is also normalized:

$$\int H(u)\, du = 1 \;\Rightarrow\; \int P(x)\, dx = \frac{1}{nh}\sum_{i=1}^{n} \int H\!\left(\frac{x - x_i}{h}\right) dx = 1$$
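A sketch with a normalized Gaussian kernel, showing how h controls smoothing; the two-cluster data are an illustrative assumption:

```python
import numpy as np

# Parzen estimate with a normalized Gaussian kernel H(u); since H integrates
# to 1, the resulting density also integrates to 1.
def gaussian_parzen(x_grid, samples, h):
    u = (x_grid[:, None] - samples[None, :]) / h
    H = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)       # normalized Gaussian kernel
    return H.sum(axis=1) / (len(samples) * h)

rng = np.random.default_rng(4)
samples = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
grid = np.linspace(-5, 5, 401)
for h in (0.05, 0.3, 2.0):          # too small: spiky; too large: the two modes merge
    p = gaussian_parzen(grid, samples, h)
    print(f"h={h}: integral ~ {np.trapz(p, grid):.3f}, max ~ {p.max():.3f}")
```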

Page 9: Computational Intelligence:  Methods and Applications

Example with Gaussians

The dispersion h is also called here a smoothing or regularization parameter.

A. Webb, Chapter 3.5 has a good explanation of Parzen windows.

Page 10: Computational Intelligence:  Methods and Applications

Idea

Assume that $P(\mathbf{X})$ is a combination of some smooth functions $\phi_i(\mathbf{X})$;

use an iterative algorithm that adapts the density to the incoming data.

Estimate density P(X|C) for each class separately.

Since the calculation of parameters may be done on a network of independent processors, this leads to basis set networks, such as radial basis function networks.

This may be used for function approximation, classification and discovery of logical rules by covering algorithms.

$$P(\mathbf{X}) = \sum_{i=1}^{m} W_i \phi_i(\mathbf{X})$$

Page 11: Computational Intelligence:  Methods and Applications

Computational Intelligence: Methods and Applications

Lecture 29: Approximation theory, RBF and SFN networks

Source: Włodzisław Duch; Dept. of Informatics, UMK; Google: W Duch

Page 12: Computational Intelligence:  Methods and Applications

Basis set functions

A combination of m functions

may be used for discrimination or for density estimation.

What type of functions are useful here?

Most basis functions are composed of two functions, $\phi(\mathbf{X}) = g(f(\mathbf{X}))$:

$f(\mathbf{X})$ is the activation, defining how to use the input features and returning a scalar $f$; $g(f)$ is the output, converting the activation into a new, transformed feature.

Example: multivariate Gaussian function, localized at R:

Activation f. computes distance, output f. localizes it around zero.

The combination of basis functions:

$$F(\mathbf{X}) = \sum_{i=1}^{m} W_i \phi_i(\mathbf{X})$$

The Gaussian example:

$$\phi(\mathbf{X}) = g(f(\mathbf{X})), \qquad f(\mathbf{X}) = \|\mathbf{X} - \mathbf{R}\|, \qquad g(f) = \exp(-f^2)$$
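A small sketch (illustrative, not from the slides) of the composition phi(X) = g(f(X)) for the Gaussian example above:

```python
import numpy as np

# phi(X) = g(f(X)): the activation f computes a distance to a center R,
# the output g localizes it around zero (Gaussian decay). R is an assumed center.
def activation(X, R):
    return np.linalg.norm(X - R)          # f(X) = ||X - R||

def output(f):
    return np.exp(-f**2)                  # g(f) = exp(-f^2)

def phi(X, R):
    return output(activation(X, R))       # phi(X) = g(f(X))

R = np.array([1.0, 2.0])
print(phi(np.array([1.0, 2.0]), R))       # 1.0 at the center
print(phi(np.array([3.0, 2.0]), R))       # decays with the distance from R
```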

Page 13: Computational Intelligence:  Methods and Applications

Radial functions

General form of the multivariate Gaussian:

$$\phi(\mathbf{X}; \mathbf{R}, \mathbf{\Sigma}) = g(f(\mathbf{X})), \qquad f(\mathbf{X}) = \left[(\mathbf{X} - \mathbf{R})^{\mathsf{T}} \mathbf{\Sigma}^{-1} (\mathbf{X} - \mathbf{R})\right]^{1/2}, \qquad g(f) = \exp\!\left(-\tfrac{1}{2} f^2\right)$$

This is a radial basis function (RBF), with the Mahalanobis distance and Gaussian decay, a popular choice in neural networks and approximation theory. Radial functions are spherically symmetric with respect to some center. Some examples of radial functions, with $r = \|\mathbf{X} - \mathbf{R}\|$:

Distance: $h(r) = r$

Multiquadratic: $h(r) = (\sigma^2 + r^2)^{\beta}, \; 0 < \beta < 1$

Inverse multiquadratic: $h(r) = (\sigma^2 + r^2)^{-\alpha}, \; \alpha > 0$

Thin-plate splines: $h(r) = (\sigma r)^2 \ln(\sigma r)$
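A sketch implementing the radial functions listed above as functions of r; the sigma, alpha and beta values are illustrative choices:

```python
import numpy as np

# Radial functions of r = ||X - R||.
def distance(r):
    return np.asarray(r, dtype=float)

def multiquadratic(r, sigma=1.0, beta=0.5):
    return (sigma**2 + np.asarray(r, dtype=float)**2) ** beta

def inverse_multiquadratic(r, sigma=1.0, alpha=1.0):
    return (sigma**2 + np.asarray(r, dtype=float)**2) ** (-alpha)

def thin_plate_spline(r, sigma=1.0):
    sr = sigma * np.asarray(r, dtype=float)
    safe = np.where(sr > 0, sr, 1.0)               # avoid log(0); h(0) is defined as 0
    return np.where(sr > 0, sr**2 * np.log(safe), 0.0)

r = np.linspace(0, 3, 7)
print(multiquadratic(r).round(3))
print(inverse_multiquadratic(r).round(3))
print(thin_plate_spline(r).round(3))
```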

Page 14: Computational Intelligence:  Methods and Applications

G + r functions

Multivariate Gaussian function and its contour: $G(r) = e^{-r^2}$, $r = \|\mathbf{X} - \mathbf{R}\|$.

Distance function and its contour: $h(r) = r = \|\mathbf{X} - \mathbf{R}\|$.

Page 15: Computational Intelligence:  Methods and Applications

Multiquadratic and thin spline

Multiquadratic and an inverse multiquadratic function:

$$h(r) = (\sigma^2 + r^2)^{1/2}, \qquad h(r) = (\sigma^2 + r^2)^{-1}$$

Thin-plate spline function:

$$h(r) = (\sigma r)^2 \ln(\sigma r)$$

All these functions are useful in the theory of function approximation.

Page 16: Computational Intelligence:  Methods and Applications

Scalar product activation

Radial functions are useful for density estimation and function approximation. For discrimination and the creation of decision borders, an activation function equal to a linear combination of the inputs is most useful:

$$f(\mathbf{X}; \mathbf{W}) = \sum_{i=1}^{N} W_i X_i = \mathbf{W} \cdot \mathbf{X}$$

Note that this activation may be presented as

$$\mathbf{W} \cdot \mathbf{X} = \tfrac{1}{2}\left(\|\mathbf{W}\|^2 + \|\mathbf{X}\|^2 - \|\mathbf{W} - \mathbf{X}\|^2\right) = \tfrac{1}{2}\left(L - D^2(\mathbf{W}, \mathbf{X})\right)$$

The first term L is constant if the lengths of W and X are fixed. This is true for standardized data vectors: the square of the Euclidean distance is equivalent (up to a constant) to the scalar product!

If $\|\mathbf{X}\| = 1$, replacing $\mathbf{W} \cdot \mathbf{X}$ by $\|\mathbf{W} - \mathbf{X}\|^2$ keeps the decision borders linear, but using various other distance functions instead of the Euclidean one will lead to non-linear decision borders!
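A quick numerical check (illustrative) of the identity relating the scalar product and the squared Euclidean distance:

```python
import numpy as np

# Check: W·X = 0.5 * (||W||^2 + ||X||^2 - ||W - X||^2), so for vectors of fixed
# length the scalar product and the squared distance differ only by a constant.
rng = np.random.default_rng(5)
W = rng.normal(size=8)
X = rng.normal(size=8)
lhs = W @ X
rhs = 0.5 * (np.dot(W, W) + np.dot(X, X) - np.dot(W - X, W - X))
print(np.isclose(lhs, rhs))   # True
```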

Page 17: Computational Intelligence:  Methods and Applications

More basis set functions

More sophisticated combinations of activation functions are useful, for example:

$$f(\mathbf{X}; \mathbf{W}, \mathbf{D}) = (\mathbf{W} \cdot \mathbf{X})\; \|\mathbf{X} - \mathbf{D}\|$$

This is a combination of distance-based activation with scalar-product activation, allowing very flexible PDF/decision-border shapes to be achieved. Another interesting choice is a separable activation function:

$$f(\mathbf{X}; \boldsymbol{\theta}) = \prod_{i=1}^{d} f_i(X_i; \theta_i)$$

Separable functions with Gaussian factors have a radial form, but the Gaussian is the only localized radial function that is also separable.

The $f_i(X_i; \theta_i)$ factors may represent probabilities (as in the Naive Bayes method), estimated from histograms using Parzen windows, or may be modeled using some functional form or a logical rule.

Page 18: Computational Intelligence:  Methods and Applications

Output functions

Gaussians and similar “bell shaped” functions are useful to localize output in some region of space.

For discrimination the weighted combination $f(\mathbf{X}; \mathbf{W}) = \mathbf{W} \cdot \mathbf{X}$ is filtered through a step function or, to create a gradual change, through a function with a sigmoidal shape (a "squashing function"), such as the logistic function:

$$\sigma(x; \beta) = \frac{1}{1 + \exp(-\beta x)} \in (0, 1)$$

The parameter $\beta$ sets the slope of the sigmoidal function.

Other commonly used functions are:

$\tanh(\beta f)$, similar to the logistic function;

the semi-linear function: first constant $-1$, then linear, and then constant $+1$.
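A sketch of these output functions; the slope parameter and test points are illustrative:

```python
import numpy as np

# Output ("squashing") functions mentioned above; beta controls the slope.
def logistic(x, beta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * x))          # values in (0, 1)

def tanh_output(x, beta=1.0):
    return np.tanh(beta * x)                        # values in (-1, 1)

def semi_linear(x):
    return np.clip(x, -1.0, 1.0)                    # constant, then linear, then constant

x = np.linspace(-3, 3, 7)
print(logistic(x, beta=2.0).round(3))
print(tanh_output(x).round(3))
print(semi_linear(x))
```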

Page 19: Computational Intelligence:  Methods and Applications

Convergence properties

• Multivariate Gaussians and weighted sigmoidal functions may approximate any function: such systems are universal approximators.

• The choice of functions determines the speed of convergence of the approximation and the number of functions needed for the approximation.

• The approximation error in d-dimensional spaces using weighted activation with sigmoidal functions does not depend on d. The rate of convergence with m functions is O(1/m).

• Polynomials, orthogonal polynomials, etc. need for reliable estimation a number of points that grows exponentially with d, making them useless for high-dimensional problems! The error convergence rate with n data points is:

$$O\!\left(n^{-1/d}\right)$$

In 2-D we need 10 times more data points to achieve the same error as in 1-D, but in 10-D we need about $10^{10}$ times more points!

Page 20: Computational Intelligence:  Methods and Applications

Radial basis networks (RBF)

An RBF network is a linear approximation in the space of radial basis functions:

$$F(\mathbf{X}; \mathbf{W}, \boldsymbol{\theta}) = \sum_{i=1}^{m} W_i \phi_i(\mathbf{X}; \boldsymbol{\theta}) = \sum_{i=1}^{m} W_i\, \phi\!\left(\left\|\mathbf{X} - \mathbf{X}^{(i)}\right\|\right)$$

Such computations are frequently presented in a network form:

input nodes: the $X_i$ values;

internal (hidden) nodes: the $\phi_i$ functions;

outgoing connections: the $W_i$ coefficients;

output node: summation.

Sometimes RBF networks are called "neural", due to the inspiration behind their development.


Page 21: Computational Intelligence:  Methods and Applications

RBF for approximation

RBF networks may be used for function approximation, or for classification with infinitely many classes. The function should pass through the points:

$$Y^{(i)} = F\!\left(\mathbf{X}^{(i)}; \mathbf{W}, \boldsymbol{\theta}\right), \qquad i = 1 \dots n$$

The approximation function should also be smooth to avoid high variance of the model, but not too smooth, to avoid high bias. Taking n identical functions centered at the data vectors:

$$F\!\left(\mathbf{X}^{(i)}\right) = \sum_{j=1}^{n} W_j\, \phi\!\left(\left\|\mathbf{X}^{(i)} - \mathbf{X}^{(j)}\right\|\right) = \sum_{j=1}^{n} W_j H_{ij} = Y^{(i)}$$

$$\mathbf{H}\mathbf{W} = \mathbf{Y} \;\Rightarrow\; \mathbf{W} = \mathbf{H}^{-1}\mathbf{Y}$$

If the matrix H is not too big and is non-singular this will work; in practice many iterative schemes for solving the approximation problem have been devised. For classification $Y^{(i)} = 0$ or $1$.
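A sketch of exact RBF interpolation with Gaussian functions centered at the data vectors, solving HW = Y directly; the data, target function and width h are assumptions:

```python
import numpy as np

# Exact RBF interpolation: n Gaussian functions centered at the data vectors,
# weights obtained from the linear system H W = Y.
rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=(20, 1))                 # training inputs (assumed)
Y = np.sin(X[:, 0])                                  # target values (assumed)
h = 1.0                                              # kernel width (assumed)

# H_ij = phi(||X_i - X_j||) with a Gaussian radial function
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
H = np.exp(-(D / h) ** 2)
W = np.linalg.solve(H, Y)                            # W = H^{-1} Y

def F(x_new):
    d = np.linalg.norm(x_new[None, :] - X, axis=1)
    return np.exp(-(d / h) ** 2) @ W

print(F(X[0]), "vs target", Y[0])                    # passes (almost exactly) through the data
```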

Page 22: Computational Intelligence:  Methods and Applications

Separable Function Networks (SFN)

For knowledge discovery and mixtures of Naive Bayes models separable functions are preferred. Each function component

$$f_j\!\left(\mathbf{X}; \boldsymbol{\theta}^{(j)}\right) = \prod_{i=1}^{d} f_i\!\left(X_i; \theta^{(j)}_i\right)$$

is represented by a single node and, if localized functions are used, may represent some local conditions.

A linear combination of these component functions

$$F_c(\mathbf{X}; \boldsymbol{\theta}, \mathbf{W}) = \sum_{j=1}^{n_c} W^{(c)}_j f_j\!\left(\mathbf{X}; \boldsymbol{\theta}^{(c)}_j\right)$$

specifies the output; several outputs $F_c$ are defined, for different classes, conclusions, class-conditional probability distributions, etc.


Page 23: Computational Intelligence:  Methods and Applications

SFN for logical rules

If the component functions are rectangular:

$$f_i\!\left(X_i; \theta^{(j)}_i, \theta'^{(j)}_i\right) = \begin{cases} 1 & \text{if } X_i \in \left[\theta^{(j)}_i,\ \theta'^{(j)}_i\right] \\ 0 & \text{if } X_i \notin \left[\theta^{(j)}_i,\ \theta'^{(j)}_i\right] \end{cases}$$

then the product function realized by the node is a hyperrectangle, and it may represent a crisp logic rule:

$$\text{IF}\;\; X_1 \in \left[\theta^{(j)}_1, \theta'^{(j)}_1\right] \wedge \dots \wedge X_i \in \left[\theta^{(j)}_i, \theta'^{(j)}_i\right] \wedge \dots \wedge X_d \in \left[\theta^{(j)}_d, \theta'^{(j)}_d\right] \;\;\text{THEN Fact}^{(j)} = \text{True}$$


Conditions that cover the whole data may be deleted.
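A sketch of a single SFN node with rectangular component functions acting as a crisp rule; the feature intervals are illustrative assumptions:

```python
import numpy as np

# The product of rectangular factors f_i is 1 only inside a hyperrectangle,
# i.e. it realizes a crisp logic rule over the features.
intervals = [(0.0, 1.0), (2.0, 5.0), (-1.0, 1.0)]   # [theta_i, theta'_i] per feature

def rectangular(x, lo, hi):
    return 1.0 if lo <= x <= hi else 0.0            # f_i(X_i; theta)

def rule_node(X):
    return np.prod([rectangular(x, lo, hi) for x, (lo, hi) in zip(X, intervals)])

print(rule_node([0.5, 3.0, 0.0]))   # 1.0 -> IF all conditions hold THEN Fact = True
print(rule_node([0.5, 6.0, 0.0]))   # 0.0 -> second condition violated
```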

Page 24: Computational Intelligence:  Methods and Applications

SFN rules

The final function is a sum of all rules for a given fact (class):

$$F_c(\mathbf{X}; \boldsymbol{\theta}, \mathbf{W}) = \sum_{j=1}^{n_c} W^{(c)}_j f_j\!\left(\mathbf{X}; \boldsymbol{\theta}^{(c)}_j\right)$$

The output weights are either:

all $W_j = 1$, so that all rules are on an equal footing;

or $W_j \sim$ rule precision (confidence), the ratio of the number of vectors correctly covered by the rule to the number of all vectors covered by it:

$$W_j = \frac{N(\mathbf{X} \in C \mid \text{Rule}_j)}{N(\mathbf{X} \in \text{Rule}_j)}$$

This may additionally be multiplied by the coverage of the rule:

$$W_j = \frac{N(\mathbf{X} \in C \mid \text{Rule}_j)^2}{N(\mathbf{X} \in \text{Rule}_j)\; N(\mathbf{X} \in C)}$$

W may also be fitted to the data to increase the accuracy of predictions.

Page 25: Computational Intelligence:  Methods and Applications

Rules with weighted conditions

Instead of rectangular functions Gaussian, triangular or trapezoidal functions may be used to evaluate the degree (not always equivalent to probability) of a condition being fulfilled.

A fuzzy rule based on "triangular membership functions" is a product of such functions (conditions):

$$f(\mathbf{X}; \mathbf{D}, \boldsymbol{\sigma}) = \prod_{i=1}^{d} f_i(X_i; D_i, \sigma_i)$$

For example, with triangular functions:

$$f_i(X_i; D_i, \sigma_i) = \max\!\left(0,\; 1 - \frac{|X_i - D_i|}{\sigma_i}\right)$$

The conclusion is highly justified in areas where $f(\mathbf{X})$ is large.
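A sketch of a fuzzy rule built from triangular membership functions; the centers D and widths sigma are illustrative assumptions:

```python
import numpy as np

# Degree of fulfillment of a fuzzy rule = product of per-feature triangular memberships.
D = np.array([1.0, 3.0])        # condition centers (assumed)
sigma = np.array([0.5, 1.0])    # condition widths (assumed)

def triangular(x, d, s):
    return np.maximum(0.0, 1.0 - np.abs(x - d) / s)      # f_i(X_i; D_i, sigma_i)

def fuzzy_rule(X):
    return np.prod(triangular(np.asarray(X), D, sigma))  # product over conditions

print(fuzzy_rule([1.0, 3.0]))    # 1.0: conclusion fully justified
print(fuzzy_rule([1.25, 3.5]))   # 0.25: conditions partially fulfilled
print(fuzzy_rule([2.0, 3.0]))    # 0.0: first condition not met
```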

Page 26: Computational Intelligence:  Methods and Applications

RBFs and SFNs

Many basis set expansions have been proposed in approximation theory.

In some branches of science and engineering such expansions have been widely used, for example in computational chemistry.

There is no particular reason why radial functions should be used, but nowadays all basis set expansions are mistakenly called RBFs ...

In practice Gaussian functions are used most often, and Gaussian approximators and classifiers have been used long before RBFs.

Gaussian functions are also separable, so RBF=SFN for Gaussians.

For other functions:

• SFNs have a natural interpretation in terms of fuzzy logic membership functions and can be trained as neurofuzzy systems.

• SFNs can be used to extract logical (crisp and fuzzy) rules from data.

• SFNs may be treated as an extension of Naive Bayes, with voting committees of NB models.

• SFNs may be used in combinatorial reasoning (see Lecture 31).

Page 27: Computational Intelligence:  Methods and Applications

but remember ...

that all this is just a poor approximation to Bayesian analysis.

It allows us to model situations where we have linguistic knowledge but no data; in Bayesian terms one may say that we guess prior distributions from rough descriptions and improve the results later by collecting real data.

Example: RBF regression

Neural Java tutorial: http://diwww.epfl.ch/mantra/tutorial/

Transfer function interactive tutorial.

Page 28: Computational Intelligence:  Methods and Applications

Computational Intelligence: Methods and Applications

Lecture 30: Neurofuzzy system FSM and covering algorithms

Source: Włodzisław Duch; Dept. of Informatics, UMK; Google: W Duch

Page 29: Computational Intelligence:  Methods and Applications

Training FSM network

Parameters of the network nodes may be estimated using maximum likelihood Expectation Maximization learning.

Computationally simpler iterative schemes have been proposed.

An outline of the FSM (Feature Space Mapping) separable function network algorithm implemented in GhostMiner:

• Select the type of functions and desired accuracy.

• Initialize network parameters: find main clusters, their centers and dispersions; include cluster rotations.

• Adaptation phase: read the training data; if an error is made, adapt the parameters of the network to reduce it: move the closest centers towards the data, increase dispersions.

• Growth phase: if accuracy cannot be improved further add new nodes (functions) in areas where most errors occur.

• Cleaning phase: remove functions with smallest coverage, retrain.

Page 30: Computational Intelligence:  Methods and Applications

Example 1: Wine rules

Select rectangular functions;

default initialization is based on histograms, looking for clusters around maxima in each dimension;

• Create the simplest model, starting from a low learning accuracy of 0.90.

FSM window shows the convergence and the number of neurons (logical rules for rectangular functions) created by the FSM system.

Different rules with similar accuracy (especially for low accuracy) exist, and the learning algorithm is stochastic (the data is presented in randomized order), so many rule sets are created.

Page 31: Computational Intelligence:  Methods and Applications

Experiments with Wine rules

Run FSM with different parameters and note how different sets of rules are generated.

FSM includes stochastic learning (samples are randomized).

Weakness: large variance for high-accuracy models.

Strength: many simple models may be generated; experts may like some more than others.

FSM may discover new, simple rules that trees will not find, for example:

if proline > 929.5 then class 1 (48 covered, 3 errors, but 2 corrected by other rules).

if color < 3.792 then class 2 (63 cases, 60 correct, 3 errors)

Trees generate a hierarchical path; FSM covers the data samples with rectangular functions, minimizing the number of features used.

Page 32: Computational Intelligence:  Methods and Applications

Example 2: Pyrimidines

QSAR: the Quantitative Structure-Activity Relationship problem.

Given a family of molecules try to predict their biological activity.

Pyrimidine family has a common template:

9 features are given per chemical group: name, polarity, size, hydrogen-bond donor, hydrogen-bond acceptor, pi-donor, pi-acceptor, polarizability, and the sigma effect. For a single pyrimidine 27 (= 3×9) features are given; evaluation of relative activity strength requires pair-wise comparisons:

A, B, True(A > B). There were 54 features (columns) and 2788 compared pairs (rows).

R3, R4, R5 are the places where chemical groups are substituted. A site may also be empty.

Page 33: Computational Intelligence:  Methods and Applications

Pyrimidine results

Since the ranking of activities is important, an appropriate measure of success is the Spearman rank-order correlation coefficient ($d_i$ is the distance in the ranking of pair $i$, $n$ the number of pairs):

$$r_S = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} \in [-1, +1]$$

5xCV results:

LDA            0.65
CART tree      0.50
FSM (Gauss)    0.77 ± 0.02 (86 nodes)
FSM (crisp)    0.77 ± 0.03 (41 nodes)

41 nodes with rectangular functions are equivalent to 41 crisp logic rules.

Perfect agreement gives +1, perfect disagreement gives $-1$; for example:

True ranking: $X_1, X_2, \dots, X_n$; predicted ranking: $X_n, X_{n-1}, \dots, X_1$.

Differences: $d_i = (n-1), (n-3), \dots, 1 \text{ or } 0, \dots, (n-3), (n-1)$ (the 0 appears for odd $n$).

The sum of $d_i^2$ is $n(n^2 - 1)/3$, so $r_S = 1 - 2 = -1$.
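A sketch computing the Spearman coefficient and checking the reversed-ranking case discussed above (ranks are assumed to be 1..n):

```python
import numpy as np

# Spearman rank-order correlation r_S = 1 - 6*sum(d_i^2) / (n(n^2-1)).
def spearman(rank_true, rank_pred):
    d = np.asarray(rank_true) - np.asarray(rank_pred)
    n = len(d)
    return 1.0 - 6.0 * np.sum(d**2) / (n * (n**2 - 1))

n = 7
true_ranks = np.arange(1, n + 1)
print(spearman(true_ranks, true_ranks))        # +1.0: perfect agreement
print(spearman(true_ranks, true_ranks[::-1]))  # -1.0: perfect disagreement (reversed ranking)
```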

Page 34: Computational Intelligence:  Methods and Applications

Covering algorithms

Many machine learning algorithms for learning rules try to cover as many "positive" examples as possible.

WEKA contains one such algorithm, called PRISM; a rough code sketch of its covering loop is given after the outline below.

For each class C

• E = training data

• Create a rule R: IF () THEN C (with empty conditions)

• Until there are no more features or R covers all C cases:

– For each feature A and each possible subset of its values (or an interval), consider adding a condition (A = v) or A ∈ [v, v'].

– Select the feature A and values v that maximize the rule precision N(C, R)/N(R): the number of samples from class C covered by R divided by the number of all samples covered by R (ties are broken by selecting the largest N(C, R)).

– Add to R: IF ( (A = v) ∧ ... ) THEN C

• Remove samples covered by R from E
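A rough Python sketch of this covering loop, for categorical features only; the data layout and helper names are assumptions, not the WEKA implementation:

```python
# PRISM-style covering: grow one high-precision rule at a time for a target class,
# then remove the covered examples and repeat.
def prism(examples, features, target_class):
    """examples: list of (feature_dict, label); returns a list of rules (condition dicts)."""
    rules, E = [], list(examples)
    while any(label == target_class for _, label in E):
        conditions, covered = {}, list(E)
        while True:
            best = None
            for A in features:
                if A in conditions:
                    continue
                for v in {x[A] for x, _ in covered}:
                    subset = [(x, y) for x, y in covered if x[A] == v]
                    correct = sum(1 for _, y in subset if y == target_class)
                    precision = correct / len(subset)
                    # keep the condition with highest precision, break ties by coverage
                    if best is None or (precision, correct) > (best[0], best[1]):
                        best = (precision, correct, A, v)
            if best is None or best[0] == 0.0:
                break
            conditions[best[2]] = best[3]
            covered = [(x, y) for x, y in covered if x[best[2]] == best[3]]
            if best[0] == 1.0:               # rule is pure: it covers only the target class
                break
        rules.append(conditions)
        E = [(x, y) for x, y in E
             if not all(x[a] == v for a, v in conditions.items())]
    return rules

# Hypothetical usage: rules = prism(data, ["color_bin", "proline_bin"], target_class="class_1")
```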

Page 35: Computational Intelligence:  Methods and Applications

PRISM for Wine

PRISM in the WEKA implementation cannot handle numerical attributes, so discretization is needed first; this is done automatically via the Filtered Classifier approach.

Run PRISM on Wine data and note that:

• PRISM has no parameters to play with!

• Discretization determines complexity of the rules.

• Perfect covering is achieved on whole data.

• Rules frequently contain a single condition, sometimes two, rarely more conditions.

• Rules require manual simplification.

• A large number of rules (>30) is produced.

• 10xCV accuracy is 86%, error is 7% and 7% of vectors remain unclassified: covering leaves gaps in the feature space.

