Computational Intelligence: Methods and Applications


TRANSCRIPT

  • Computational Intelligence: Methods and Applications. Lecture 28: Non-parametric density modeling

    Source: Włodzisław Duch; Dept. of Informatics, UMK; Google: W Duch

  • Density from histograms. In 1-D or 2-D it is rather simple: histograms provide a piecewise constant approximation. Since we do not assume any particular functional dependence, such estimation is called non-parametric. Histograms change depending on the size of the bin Bi that measures the frequency P(X ∈ Bi).

    Smoothing histograms may be done by fitting some smooth functions, such as Gaussians. How good is this approximation?
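    A minimal numpy sketch of this idea (not from the lecture; the sample, the number of bins and the smoothing kernel are arbitrary choices): build a piecewise constant histogram density and smooth the bin heights with a small normalized Gaussian kernel.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(size=500)                        # illustrative 1-D sample

      # Piecewise constant (histogram) density estimate: counts / (n * bin width)
      counts, edges = np.histogram(x, bins=20)
      width = edges[1] - edges[0]
      hist_density = counts / (len(x) * width)

      # Simple smoothing: convolve the bin heights with a normalized Gaussian kernel
      u = np.arange(-3, 4)
      kernel = np.exp(-0.5 * u ** 2)
      kernel /= kernel.sum()
      smoothed = np.convolve(hist_density, kernel, mode="same")

      centers = 0.5 * (edges[:-1] + edges[1:])
      for c, h, s in zip(centers, hist_density, smoothed):
          print(f"bin center {c:+.2f}: raw {h:.3f}, smoothed {s:.3f}")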

  • Why does histogram estimation work? The probability that a data point comes from some region R (belongs to some category, etc.) is P = ∫_R P(X) dX. Given n data points, what is the chance Pr that k of these points are from region R? If n = k = 1, this Pr = P; in general Pr is the number of combinations in which k points can be selected out of n, multiplied by the probability of selecting k points from R, i.e. P^k, and selecting n−k points not from R, i.e. (1−P)^(n−k). The distribution is therefore binomial: Pr(k) = C(n,k) P^k (1−P)^(n−k), with expected value E[k] = nP and variance Var(k/n) = P(1−P)/n. Since P(X)·V_R ≈ P ≈ k/n, for a large number of samples n a small variance of k/n is expected, therefore this is a useful approximation to P(X).
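    The binomial argument can be checked with a short simulation (the region probability P and the sample sizes below are illustrative assumptions): the fraction k/n of points falling into the region concentrates around P as n grows, with variance P(1−P)/n.

      import numpy as np

      rng = np.random.default_rng(1)
      P = 0.3                                          # assumed probability of region R

      for n in (10, 100, 10_000):
          k_over_n = rng.binomial(n, P, size=5000) / n  # k/n over many repeated samples
          print(f"n={n:6d}  mean k/n={k_over_n.mean():.4f}  "
                f"var={k_over_n.var():.6f}  theory P(1-P)/n={P * (1 - P) / n:.6f}")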

  • Parzen windows 1D. Estimate the density using (for standardized data) a bin of size h (a window on the data) in each dimension. For 1D the cumulative density function is CP(x) = (# observations x_i ≤ x)/n, and the density in a window of width h is estimated as P(x) ≈ [CP(x + h/2) − CP(x − h/2)]/h.
  • Parzen 1D kernels. We need continuous density estimation, not spikes. Introduce a kernel function H(u) indicating whether the scaled variable is inside the window, H(u) = 1 for |u| ≤ 1/2 and 0 otherwise. The density may then be written as P(x) = (1/(n·h)) Σ_i H((x − x_i)/h). The density inside each window is constant and each kernel integrates over x to h, therefore the total probability is 1. The smooth cumulative density for x ≤ a is then CP(a) = ∫ P(x) dx from −∞ to a; this is equal to 1/n times the number of x_i below a, plus a fraction from the last interval [x_i − h/2, a] if a < x_i + h/2.
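    A sketch of this 1-D Parzen estimator with a rectangular window, H(u) = 1 for |u| ≤ 1/2 (the data, window widths and evaluation grid are illustrative):

      import numpy as np

      def parzen_1d(x, data, h):
          """Rectangular-kernel Parzen estimate P(x) = (1/(n*h)) * sum_i H((x - x_i)/h),
          with H(u) = 1 for |u| <= 1/2 and 0 otherwise."""
          u = (x[:, None] - data[None, :]) / h
          H = (np.abs(u) <= 0.5).astype(float)
          return H.sum(axis=1) / (len(data) * h)

      rng = np.random.default_rng(2)
      data = rng.normal(size=200)                      # illustrative sample
      grid = np.linspace(-4, 4, 9)
      for h in (0.2, 1.0):                             # small h: spiky, large h: smooth
          print(f"h={h}:", np.round(parzen_1d(grid, data, h), 3))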

  • Parzen windows dD. The window moves with X, which is in its middle, therefore the density is smoothed. 1D generalizes to dD situations easily: the volume is V = h^d and the density is P(X) = (1/(n·V)) Σ_i H((X − X_i)/h), with the kernel (window) function typically a hyperrectangle: H(u) = 1 for all |u_j| ≤ 1/2, and 0 otherwise.
  • Example with rectangles. With large h strong smoothing is achieved (imagine a window covering all the data ...). Details are picked up when h is small, the general shape when it is large. A smooth function, such as a Gaussian, may be used as H(u); if it is normalized then the final density is also normalized.

  • Example with Gaussians. The dispersion h is also called here the smoothing or regularization parameter. A. Webb, Chapter 3.5, has a good explanation of Parzen windows.
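    Replacing the rectangular window with a normalized Gaussian kernel gives a smooth estimate; a small d-dimensional sketch (the 2-D sample and the two values of h are illustrative assumptions):

      import numpy as np

      def parzen_gauss(X, data, h):
          """Gaussian-kernel Parzen estimate in d dimensions:
          P(X) = (1/n) * sum_i N(X; X_i, h^2 I), each kernel integrating to 1."""
          n, d = data.shape
          diff = X[:, None, :] - data[None, :, :]       # shape (m, n, d)
          sq = (diff ** 2).sum(axis=2)
          norm = (2 * np.pi * h ** 2) ** (d / 2)
          return np.exp(-0.5 * sq / h ** 2).sum(axis=1) / (n * norm)

      rng = np.random.default_rng(3)
      data = rng.normal(size=(300, 2))                  # illustrative 2-D sample
      query = np.array([[0.0, 0.0], [2.0, 2.0]])
      for h in (0.3, 1.0):
          print(f"h={h}: density at (0,0) and (2,2) =",
                np.round(parzen_gauss(query, data, h), 4))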

  • Idea. Assume that P(X) is a combination of some smooth functions F(X); use an iterative algorithm that adapts the density to the incoming data. Estimate the density P(X|C) for each class separately.

    Since the calculation of parameters may be done on a network of independent processors, this leads to basis set networks, such as radial basis function networks.

    This may be used for function approximation, classification and discovery of logical rules by covering algorithms.

  • Computational Intelligence: Methods and Applications. Lecture 29: Approximation theory, RBF and SFN networks

    Source: Włodzisław Duch; Dept. of Informatics, UMK; Google: W Duch

  • Basis set functions. A combination of m functions may be used for discrimination or for density estimation. What type of functions are useful here? Most basis functions are composed of two functions, F(X) = g(f(X)): f(X) is the activation, defining how to use the input features and returning a scalar f; g(f) is the output, converting the activation into a new, transformed feature. Example: a multivariate Gaussian function localized at R, where the activation function computes the distance ||X − R|| and the output function localizes it around zero.

  • Radial functions. General form of the multivariate Gaussian: G(X; R, Σ) = exp(−(X − R)^T Σ^(−1) (X − R)/2). This is a radial basis function (RBF) with Mahalanobis distance and Gaussian decay, a popular choice in neural networks and approximation theory. Radial functions are spherically symmetric with respect to some center R. Some examples of RBF functions h(r), with r = ||X − R||: distance h(r) = r; multiquadratic h(r) = (r² + σ²)^(1/2); inverse multiquadratic h(r) = (r² + σ²)^(−1/2); thin plate splines h(r) = (σr)² ln(σr).

  • G + r functions. Multivariate Gaussian function and its contour; distance function and its contour.

  • Multiquadratic and thin spline. The multiquadratic function and its inverse; the thin spline function. All these functions are useful in the theory of function approximation.
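    These radial functions are easy to write down; a sketch using the usual textbook forms, with an assumed scale parameter sigma:

      import numpy as np

      def gaussian(r, sigma=1.0):
          return np.exp(-(r / sigma) ** 2)

      def multiquadric(r, sigma=1.0):
          return np.sqrt(r ** 2 + sigma ** 2)

      def inverse_multiquadric(r, sigma=1.0):
          return 1.0 / np.sqrt(r ** 2 + sigma ** 2)

      def thin_plate_spline(r, sigma=1.0):
          sr = np.maximum(sigma * r, 1e-12)             # avoid log(0); the limit at r=0 is 0
          return sr ** 2 * np.log(sr)

      r = np.linspace(0.0, 3.0, 7)                      # radial distance from the center
      for name, f in [("Gaussian", gaussian), ("multiquadric", multiquadric),
                      ("inverse multiquadric", inverse_multiquadric),
                      ("thin plate spline", thin_plate_spline)]:
          print(f"{name:21s}", np.round(f(r), 3))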

  • Scalar product activation. Radial functions are useful for density estimation and function approximation. For discrimination, i.e. the creation of decision borders, an activation function equal to a linear combination of inputs is most useful: f(X; W) = W·X = Σ_i W_i X_i. Note that this activation may be presented as W·X = ½(||W||² + ||X||² − ||W − X||²). The first term L = ½(||W||² + ||X||²) is constant if the lengths of W and X are fixed. This is true for standardized data vectors: the square of the Euclidean distance is then equivalent (up to a constant) to a scalar product! If ||X|| = 1, W·X may be replaced by ||W − X||² and the decision borders will still be linear; but using various other distance functions instead of the Euclidean one will lead to non-linear decision borders!
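    The equivalence rests on the identity W·X = ½(||W||² + ||X||² − ||W − X||²); a quick numerical check with arbitrary random vectors:

      import numpy as np

      rng = np.random.default_rng(4)
      W, X = rng.normal(size=5), rng.normal(size=5)

      dot = W @ X
      from_distance = 0.5 * (W @ W + X @ X - np.sum((W - X) ** 2))
      print(dot, from_distance)                         # the two values are identical

      # For normalized vectors (||W|| = ||X|| = 1) the first term is a constant, so
      # thresholding W.X and thresholding ||W - X||^2 give the same linear decision border.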

  • More basis set functions. More sophisticated combinations of activation functions are useful, e.g. a combination of distance-based activation with scalar product activation, allowing very flexible PDF/decision border shapes. Another interesting choice is a separable activation function: F(X) = Π_i f_i(X_i; θ_i). Separable functions with Gaussian factors have radial form, but the Gaussian is the only localized radial function that is also separable.

    The f_i(X_i; θ) factors may represent probabilities (as in the Naive Bayes method), estimated from histograms using Parzen windows, or may be modeled using some functional form or logical rule.

  • Output functions. Gaussians and similar bell-shaped functions are useful to localize the output in some region of space. For discrimination the weighted combination f(X; W) = W·X is filtered through a step function, or, to create a gradual change, through a function with sigmoidal shape (a squashing function), such as the logistic function σ(f) = 1/(1 + exp(−βf)). The parameter β sets the slope of the sigmoidal function.

    Other commonly used functions are: tanh(βf) ∈ (−1, +1), similar to the logistic function; and the semi-linear function: first constant −1, then linear, and then constant +1.
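    A sketch of the three output (squashing) functions mentioned above; the slope parameter beta and the evaluation points are arbitrary:

      import numpy as np

      def logistic(f, beta=1.0):
          return 1.0 / (1.0 + np.exp(-beta * f))        # output in (0, 1)

      def tanh_output(f, beta=1.0):
          return np.tanh(beta * f)                      # output in (-1, +1)

      def semi_linear(f):
          return np.clip(f, -1.0, 1.0)                  # constant -1, then linear, then constant +1

      f = np.linspace(-3, 3, 7)
      print("logistic   :", np.round(logistic(f), 3))
      print("tanh       :", np.round(tanh_output(f), 3))
      print("semi-linear:", np.round(semi_linear(f), 3))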

  • Convergence properties. Multivariate Gaussians and weighted sigmoidal functions may approximate any function: such systems are universal approximators. The choice of functions determines the speed of convergence of the approximation and the number of functions needed for the approximation. The approximation error in d-dimensional spaces using weighted activation with sigmoidal functions does not depend on d: the rate of convergence with m functions is O(1/m). Polynomials, orthogonal polynomials, etc. need for reliable estimation a number of points that grows exponentially with d, making them useless for high-dimensional problems! Their error convergence rate is only O(1/m^(2/d)): in 2-D we need roughly 10 times more data points to achieve the same error as in 1-D, but in 10-D we need about 10^10 times more points!

  • Radial basis networks (RBF). RBF is a linear approximation in the space of radial basis functions: F(X) = Σ_i W_i G(||X − R_i||). Such computations are frequently presented in a network form: input nodes provide the X_i values; internal (hidden) nodes compute the radial functions; outgoing connections carry the W_i coefficients; the output node performs the summation.

    Sometimes RBF networks are called neural, due to inspiration for their development.

  • RBF for approximation. RBF networks may be used for function approximation, or for classification with infinitely many classes. The function should pass through the points: F(X^(i)) = Y^(i), i = 1..n. The approximating function should also be smooth to avoid high variance of the model, but not too smooth, to avoid high bias. Taking n identical functions centered at the data vectors, F(X) = Σ_j W_j G(||X − X^(j)||; h), leads to the linear system H·W = Y with H_ij = G(||X^(i) − X^(j)||; h). If the matrix H is not too big and is non-singular this will work; in practice many iterative schemes to solve the approximation problem have been devised. For classification Y^(i) = 0 or 1.
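    A minimal sketch of such exact RBF interpolation (the target function sin(x), the Gaussian width h and the training points are illustrative assumptions): one Gaussian is placed on every data vector and the linear system H·W = Y is solved directly.

      import numpy as np

      rng = np.random.default_rng(5)
      X = np.sort(rng.uniform(-3, 3, size=15))          # 1-D training inputs (illustrative)
      Y = np.sin(X)                                     # target values to interpolate
      h = 0.8                                           # assumed width of the Gaussian basis

      def gauss_matrix(A, B, h):
          return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * h ** 2))

      H = gauss_matrix(X, X, h)                         # H_ij = G(||X_i - X_j||; h)
      W = np.linalg.solve(H, Y)                         # weights such that F(X_i) = Y_i exactly

      X_test = np.linspace(-3, 3, 7)
      F_test = gauss_matrix(X_test, X, h) @ W           # F(x) = sum_j W_j G(||x - X_j||; h)
      print(np.round(F_test, 3))
      print(np.round(np.sin(X_test), 3))                # close to the true values between the nodes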

  • Separable Function Networks (SFN). For knowledge discovery and mixtures of Naive Bayes models, separable functions are preferred. Several outputs F_c are defined, for different classes, conclusions, class-conditional probability distributions, etc. Each component function F_j(X) = Π_i f_i(X_i; θ_j) is represented by a single node and, if localized functions are used, may represent some local conditions. The output is a linear combination of these component functions: F_c(X) = Σ_j W_jc F_j(X).

  • SFN for logical rules. If the component functions are rectangular, f_i(X_i) = 1 for X_i ∈ [a_i, b_i] and 0 otherwise, then the product function realized by the node is a hyperrectangle, and it may represent a crisp logic rule: IF X_1 ∈ [a_1, b_1] AND ... AND X_d ∈ [a_d, b_d] THEN class C. Conditions that cover the whole data may be deleted.

  • SFN rules. The final function is a sum of all rules for a given fact (class): F_c(X) = Σ_j W_j R_j(X). This may additionally be multiplied by the coverage of the rule. The output weights are either: all W_j = 1, so that all rules are on an equal footing; or W_j ~ rule precision (confidence), the ratio of the number of vectors correctly covered by the rule to the number of all vectors covered. W may also be fitted to the data to increase the accuracy of predictions.
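    A sketch of a single-class SFN built from crisp rules, weighted by rule precision as described above (the intervals, data and labels are made up for illustration):

      import numpy as np

      def crisp_rule(X, intervals):
          """Product of rectangular conditions: 1 only if every feature X_i
          falls inside its interval [a_i, b_i]."""
          conditions = [(X[:, i] >= a) & (X[:, i] <= b) for i, (a, b) in enumerate(intervals)]
          return np.logical_and.reduce(conditions).astype(float)

      # Two illustrative rules (hyperrectangles) for one class in a 2-D feature space
      rules = [[(0.0, 1.0), (0.0, 2.0)],
               [(2.0, 3.0), (1.0, 3.0)]]

      X = np.array([[0.5, 1.0], [2.5, 2.0], [2.5, 0.5], [5.0, 5.0]])
      y = np.array([1, 1, 0, 0])                        # 1 = the class these rules describe

      # Weight each rule by its precision: correctly covered / all covered vectors
      W = [(crisp_rule(X, iv) * y).sum() / max(crisp_rule(X, iv).sum(), 1.0) for iv in rules]

      F = sum(w * crisp_rule(X, iv) for w, iv in zip(W, rules))   # F_c(X) = sum_j W_j R_j(X)
      print("rule weights:", np.round(W, 2))
      print("class output:", np.round(F, 2))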

  • Rules with weighted conditions. Instead of rectangular functions, Gaussian, triangular or trapezoidal functions may be used to evaluate the degree (not always equivalent to a probability) of a condition being fulfilled. A fuzzy rule based on triangular membership functions is a product of such functions (conditions), for example triangular functions f_i(X_i) = max(0, 1 − |X_i − c_i|/Δ_i). The conclusion is highly justified in areas where the product of the f_i() is large.
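    A sketch of a fuzzy rule as a product of triangular membership functions (the centers and widths are illustrative): the rule output is the degree to which all conditions are fulfilled, largest near the rule centers.

      import numpy as np

      def triangular(x, center, width):
          """Triangular membership: 1 at the center, falling linearly to 0 at center +/- width."""
          return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

      def fuzzy_rule(X, centers, widths):
          """Degree of fulfilment of the rule: product of the per-feature memberships."""
          degree = np.ones(len(X))
          for i, (c, w) in enumerate(zip(centers, widths)):
              degree *= triangular(X[:, i], c, w)
          return degree

      X = np.array([[1.0, 2.0], [1.5, 2.5], [3.0, 0.0]])
      print(np.round(fuzzy_rule(X, centers=[1.0, 2.0], widths=[1.0, 1.5]), 3))
      # highest degree for the first vector, which lies exactly at the rule centers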

  • RBFs and SFNs. Many basis set expansions have been proposed in approximation theory. In some branches of science a
