Neural Networks in Signal Processing

Neurocomputing 69 (2005) 1–2
www.elsevier.com/locate/neucom

Guest Editorial

This special issue on "Neural Networks for Signal Processing" features a selection of extended and updated versions of papers that were originally presented at the 2003 IEEE International Workshop on Neural Networks for Signal Processing (NNSP03) in Toulouse, France, September 17–19, 2003 (the workshop is now called the Workshop on Machine Learning for Signal Processing, MLSP). The authors were invited to contribute to this special issue on the basis of the originality, technical quality, and relevance of the papers they presented at the workshop. The invited papers were subjected to the usual rigorous peer review process by anonymous reviewers. The editors of this special issue are convinced that this selection of excellent papers provides the reader with an up-to-date account of what neural networks have to offer for today's challenging signal processing problems.

The papers can be grouped into four categories: (1) clustering, classification & regression, (2) adaptive filtering, noise estimation & system identification, (3) recursive learning, and (4) blind inverse problem solving.

1. Clustering, classification & regression: In the first paper of this category, Guerrero-Curieses and co-workers focus on multiclass decision problems and introduce a parametric family of loss functions that provides accurate estimates of the posterior class probabilities near the decision regions. Kaski and co-workers introduce a new method for clustering continuous data with the help of discrete-valued auxiliary data, in such a way that the discrete data effectively supervises the clustering. Lázaro and co-workers consider the problem of simultaneously approximating a function and its derivatives in the framework of support vector machines. They show that the problem can be solved with the ε-insensitive loss function and by introducing additional constraints in the approximation of the derivative.
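For readers unfamiliar with the loss function mentioned in the last paper of this category, a minimal sketch of Vapnik's ε-insensitive loss follows (illustrative only; Lázaro and co-workers additionally constrain the derivative approximation, which is not shown here):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Vapnik's eps-insensitive loss: residuals smaller than eps cost nothing,
    larger residuals are penalized linearly beyond the eps-tube."""
    residual = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return np.maximum(0.0, residual - eps)
```

Residuals inside the ε-tube incur no penalty, which is what produces the sparse support-vector solution in regression.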
2. Adaptive filtering, noise estimation & system identification: Yamazaki and Watanabe develop an algebraic geometrical method for analyzing non-regular and non-identifiable models and apply this to HMMs. Ypma and Heskes formulate the problem of inference in nonlinear dynamical systems and propose an algorithm that leads to an unscented Kalman smoother for which the dynamics need not be inverted explicitly. Pelckmans and co-workers develop a method for noise variance estimation, which they apply to model selection and to the parameter tuning of Least Squares Support Vector Machines.



Finally, Tipping and Lawrence show, for regression and ICA, that a variational approximation scheme enables the inference of the form of the noise distribution in the regression case, and of the form of the source distributions in the ICA case.
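The unscented Kalman smoother mentioned under category 2 rests on the unscented transform: a mean and covariance are propagated through a nonlinearity via a small set of deterministically chosen sigma points rather than via linearization. A minimal one-dimensional sketch, with illustrative parameter choices (α = 1, κ = 2, and the mean weights reused for the covariance); this is not the authors' implementation:

```python
import numpy as np

def unscented_transform_1d(m, P, f, alpha=1.0, kappa=2.0):
    """Propagate a scalar Gaussian N(m, P) through a nonlinearity f
    using 3 sigma points (state dimension n = 1)."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = np.sqrt((n + lam) * P)
    sigmas = np.array([m, m + spread, m - spread])
    wm = np.full(3, 1.0 / (2 * (n + lam)))  # weights for the outer points
    wm[0] = lam / (n + lam)                 # weight for the central point
    y = f(sigmas)
    mean = np.dot(wm, y)
    cov = np.dot(wm, (y - mean) ** 2)
    return mean, cov
```

For a linear f the transform is exact, which makes it easy to sanity-check: pushing N(0, 1) through f(x) = 2x + 1 must give mean 1 and variance 4.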

3. Recursive learning: Asirvadam and co-workers develop an efficient sequential learning algorithm by decomposing existing recursive learning algorithms on a layer-by-layer and neuron-by-neuron basis. Rao and co-workers derive a fast quasi-Newton type of recursive algorithm, based on their Error Whitening Criterion, for linear parameter estimation in the presence of additive white noise; they also extend their approach to colored noise. Finally, Maeda and Wakamura introduce a recursive learning scheme for Bidirectional Associative Memories and discuss its implementation in FPGAs.
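To give a flavor of what "recursive" means in this context, here is a sketch of plain recursive least squares for linear parameter estimation, updating the estimate one sample at a time (a textbook baseline, not the Error Whitening Criterion algorithm of Rao and co-workers):

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS: the weight vector w and the inverse correlation
    matrix P are updated after every (input, desired-output) pair."""

    def __init__(self, n_params, delta=100.0):
        self.w = np.zeros(n_params)
        self.P = delta * np.eye(n_params)  # large P encodes a weak prior

    def update(self, x, d):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)   # Kalman-style gain vector
        e = d - self.w @ x        # a priori error
        self.w = self.w + k * e
        self.P = self.P - np.outer(k, Px)
        return e
```

On clean data the estimate converges to the true parameters; with noise on the inputs, plain RLS becomes biased, which is the kind of problem a criterion such as error whitening is designed to address.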

4. Blind inverse problem solving: Luengo and co-workers present a unified view of three closely related blind inverse problems for sparse signals, namely blind deconvolution, blind equalization, and blind source separation (BSS), and propose an efficient algorithm.
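As a toy illustration of the BSS setting, the sketch below separates two linearly mixed signals by whitening followed by a brute-force search for the rotation angle that maximizes non-Gaussianity (absolute excess kurtosis). The sparse-signal algorithm proposed by Luengo and co-workers is different and far more efficient; this is only meant to make the problem concrete:

```python
import numpy as np

def separate_two_sources(X):
    """Toy BSS for a 2 x N array of mixed signals: whiten, then grid-search
    the rotation that maximizes the summed absolute excess kurtosis."""
    X = X - X.mean(axis=1, keepdims=True)
    # whitening via an eigendecomposition of the covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(X))
    Z = vecs @ np.diag(vals ** -0.5) @ vecs.T @ X

    def non_gaussianity(theta):
        c, s = np.cos(theta), np.sin(theta)
        Y = np.array([[c, -s], [s, c]]) @ Z
        kurt = (Y ** 4).mean(axis=1) - 3.0  # excess kurtosis per channel
        return np.sum(np.abs(kurt))

    thetas = np.linspace(0.0, np.pi / 2, 1000)
    best = max(thetas, key=non_gaussianity)
    c, s = np.cos(best), np.sin(best)
    return np.array([[c, -s], [s, c]]) @ Z
```

The recovered sources come out in arbitrary order, scale, and sign, which are the usual ambiguities of blind source separation.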

The editors would like to thank all the authors for their excellent papers, and the anonymous reviewers for their comments and useful suggestions, which improved the papers. Special thanks go to Dr. Tom Heskes for inviting us to edit this special issue, and to Vera Kamphuis from the Neurocomputing Editorial Office for her help in putting the issue together.

Marc M. Van Hulle
K.U.Leuven, Belgium
E-mail address: [email protected]

Jan Larsen
Technical University of Denmark, Denmark
E-mail address: jl@imm.dtu.dk

0925-2312/$ - see front matter © 2005 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2005.07.001