Research Article
A Novel Single Neuron Perceptron with Universal Approximation and XOR Computation Properties
Ehsan Lotfi1 and M-R Akbarzadeh-T2
1 Department of Computer Engineering, Torbat-e-Jam Branch, Islamic Azad University, Torbat-e-Jam, Iran
2 Electrical and Computer Engineering Departments, Center of Excellence on Soft Computing and Intelligent Information Processing, Ferdowsi University of Mashhad, Iran
Correspondence should be addressed to Ehsan Lotfi; esilotf@gmail.com
Received 9 February 2014; Accepted 7 April 2014; Published 28 April 2014
Academic Editor: Cheng-Jian Lin
Copyright © 2014 E. Lotfi and M-R Akbarzadeh-T. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We propose a biologically motivated, brain-inspired single neuron perceptron (SNP) with universal approximation and XOR computation properties. This computational model extends the input pattern and is based on excitatory and inhibitory learning rules inspired by neural connections in the human brain's nervous system. The resulting SNP architecture can be trained by supervised excitatory and inhibitory online learning rules. The main features of the proposed single layer perceptron are the universal approximation property and low computational complexity. The method is tested on 6 UCI (University of California Irvine) pattern recognition and classification datasets. Various comparisons with a multilayer perceptron (MLP) with the gradient descent backpropagation (GDBP) learning algorithm indicate the superiority of the approach in terms of higher accuracy, lower time and spatial complexity, and faster training. Hence, we believe the proposed approach can be generally applicable to various problems such as pattern recognition and classification.
1 Introduction
In various computer applications, such as pattern recognition, classification, and prediction, a learning module can be implemented by various approaches, including statistical, structural, and neural approaches. Among these methods, artificial neural networks (ANNs) are inspired by the physiological workings of the brain. They are based on the mathematical model of a single neural cell (neuron), named the single neuron perceptron (SNP), and try to resemble the actual networks of neurons in the brain. As a computational model, the SNP has particular characteristics, such as the ability to learn and generalize. Although the multilayer perceptron (MLP) can approximate any function [1, 2], the traditional SNP is not a universal approximator. The MLP can learn through the error backpropagation algorithm (EBP), whereby the error of the output units is propagated back to adjust the connecting weights within the network. In the MLP architecture, increasing the number of neurons in the input layer, the output layer, or the hidden layer(s) significantly increases the number of learning parameters and the computational complexity of the algorithm. This problem is usually referred to as the curse of dimensionality [3, 4]. So many researchers have tried to propose more powerful single layer architectures and faster algorithms, such as functional link networks (FLNs) and Levenberg-Marquardt (LM) and its modified and extended versions [5–20].
In contrast to the MLP, the SNP and FLNs do not impose high computational complexity and are far from the curse of dimensionality. But because they lack the universal approximation property, the SNP and FLNs are not very popular in applications. In contrast to the previous knowledge about the SNP, this paper proposes a novel SNP model that can solve the XOR problem, and we show that it can be a universal approximator. The proposed SNP can solve the XOR problem only if an additional nonlinear operator is used. As illustrated in the next section, the SNP universal approximation property can simply be achieved by extending the input patterns and using the nonlinear operator max. Like functional link networks
Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Volume 2014, Article ID 746376, 6 pages, http://dx.doi.org/10.1155/2014/746376
Input: initial random weights w_1, w_2, ..., w_n, w_{n+1} and input bias b
(1) Take the kth learning sample (the kth p and T)
(2) p_{n+1} = max_{j=1,...,n}(p_j)
(3) Calculate the final output E_o and the error:
      E_o = tansig(Σ_{j=1}^{n+1} w_j p_j + b),
      e = T − E_o
(4) Update the weights by the excitatory rule:
      w_j = w_j + α max(e, 0) p_j  for j = 1, ..., n + 1,
      b = b + α max(e, 0)
(5) Update the weights by the inhibitory rule:
      w_j = w_j − α max(−e, 0) p_j  for j = 1, ..., n + 1,
      b = b − α max(−e, 0)
(6) If k < the number of training patterns, then k = k + 1 and proceed to step (1)
(7) Let epoch = epoch + 1 and k = 1
(8) If the stop criterion has not been satisfied, proceed to step (1)

Algorithm 1: Proposed SNP algorithm.
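The steps of Algorithm 1 can be sketched in NumPy as follows. This is a minimal illustrative implementation, not the authors' released code: the class name, the seeding, and the per-sample interface are ours, and tansig is implemented as tanh (its standard definition).

```python
import numpy as np

def tansig(x):
    # tansig is the hyperbolic tangent sigmoid transfer function
    return np.tanh(x)

class SNP:
    """Single neuron perceptron with the max-extended input (Algorithm 1 sketch)."""

    def __init__(self, n_inputs, alpha=0.05, rng=None):
        rng = np.random.default_rng(rng)
        # n + 1 weights: one per attribute plus one for the max feature
        self.w = rng.random(n_inputs + 1)
        self.b = rng.random()
        self.alpha = alpha

    def _extend(self, p):
        # step (2): append p_{n+1} = max_j p_j to the pattern
        return np.append(p, np.max(p))

    def forward(self, p):
        # step (3): E_o = tansig(sum_j w_j p_j + b)
        return tansig(self._extend(p) @ self.w + self.b)

    def train_sample(self, p, T):
        s = self._extend(p)
        e = T - tansig(s @ self.w + self.b)
        # step (4): excitatory rule, active only for positive error
        self.w += self.alpha * max(e, 0.0) * s
        self.b += self.alpha * max(e, 0.0)
        # step (5): inhibitory rule, active only for negative error
        self.w -= self.alpha * max(-e, 0.0) * s
        self.b -= self.alpha * max(-e, 0.0)
        return e
```

Note that for any given sample exactly one of the two rules fires, depending on the sign of the error, which is what makes the update derivative-free.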
Figure 1: Proposed SNP.
(FLNs) [21], the proposed SNP does not include hidden units or greatly expand the input vector, but it guarantees universal approximation. FLNs are single-layer neural networks that can be considered an alternative approach in data mining to overcome the complexities associated with the MLP [22], but they do not guarantee universal approximation.
The paper is organized as follows. The proposed SNP and the universal approximation theorem are presented in Section 2. Section 3 presents the numerical results, where the proposed SNP is compared with the backpropagation MLP. There are various versions of backpropagation algorithms; in the classification problems we compare with gradient descent backpropagation (GDBP) [23], which is the standard basic algorithm. Finally, conclusions are drawn in Section 4.
2 Proposed Single Neuron Perceptron
Figure 1 shows the proposed SNP. In the figure, the model is presented as an (n+1)-input, single-output architecture. The variable p is the input pattern and the variable T is the related target applied in the learning process (3). Let us extend the input pattern as follows:

    p_{n+1} = max_{j=1,...,n}(p_j).    (1)

The max operation increases the input dimension to n + 1, so the new input pattern has n + 1 elements. In Figure 1 the input pattern is illustrated by the vector p_j, 1 ≤ j ≤ n + 1, and E_o, calculated by the following formula, is the final output:

    E_o(p) = f(Σ_{j=1}^{n+1} w_j p_j + b),    (2)

where f is the activation function and w_1, w_2, ..., w_{n+1} and b are adjustable weights. So the error can be obtained as follows:

    e = T − E_o,    (3)

and the learning weights can be adjusted by the following excitatory learning rule:

    w_j = w_j + α max(e, 0) p_j  for j = 1, ..., n + 1,    (4)

and then by the following inhibitory rule:

    w_j = w_j − α max(−e, 0) p_j  for j = 1, ..., n + 1,    (5)

where T is the target, E_o is the output of the network, e is the related error, and α is the learning rate. Also, b can be trained by

    b = b + α max(e, 0),
    b = b − α max(−e, 0).    (6)
It should be added that the max operation, applied to the input pattern and also in the learning phase, is motivated by computational models of the limbic system in the brain [24–26]. The limbic system is an emotional processor in the mammalian brain [27–29]. In these models [24–26], the max operator prepares the output and input of the main parts of the limbic system.
In summary, the feedforward computation and backward learning algorithm of the proposed SNP, in an online form and with the tansig activation function, are given in Algorithm 1.
In the algorithm, α can be picked empirically or changed adaptively during the learning process according to adaptive learning [30, 31].
The proposed SNP solves the XOR problem. Consider a 2-1 architecture with the hardlim activation function and the following weights: w_1 = −1, w_2 = −1, w_3 = 2, and b = −1. Thus

    f(1, 1) = hardlim(w_1 × 1 + w_2 × 1 + w_3 × max(1, 1) + b) = 0,
    f(0, 0) = hardlim(w_1 × 0 + w_2 × 0 + w_3 × max(0, 0) + b) = 0,
    f(1, 0) = hardlim(w_1 × 1 + w_2 × 0 + w_3 × max(1, 0) + b) = 1,
    f(0, 1) = hardlim(w_1 × 0 + w_2 × 1 + w_3 × max(0, 1) + b) = 1,    (7)

where hardlim is calculated by the following formula:

    hardlim(x) = 1 if x ≥ 0, and 0 otherwise.    (8)

Since f is in the form of (2), f based on the SNP can compute the XOR function. The proposed model has a lower computational complexity than other methods, such as spiking neural networks [32], that solve the XOR problem. The computational complexity of the proposed SNP is O(n), while it profits from very simple equations for adjusting the weights.
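The XOR truth table above can be checked directly. The sketch below uses a symmetric weight choice (w = [−1, −1, 2], b = −1) that reproduces the table; the function names are ours.

```python
def hardlim(x):
    # Hard-limit activation, formula (8): 1 if x >= 0, else 0
    return 1 if x >= 0 else 0

def snp_xor(p1, p2, w=(-1.0, -1.0, 2.0), b=-1.0):
    # Extend the 2-element pattern with its max, then apply the
    # linear unit followed by hardlim, as in (2) and (7)
    p = (p1, p2, max(p1, p2))
    return hardlim(sum(wj * pj for wj, pj in zip(w, p)) + b)

for a in (0, 1):
    for c in (0, 1):
        print(f"f({a}, {c}) = {snp_xor(a, c)}")
```

The max feature is what breaks the linear separability barrier: on (1, 0) and (0, 1) it contributes +2, lifting the sum to the threshold, while on (1, 1) the two negative weights dominate.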
In the following subsection, we prove that the SNP is a universal approximator and can approximate all real continuous functions.
2.1. Universal Approximation Theorem. Let us ignore the activation function of the model and rewrite (2) as

    E(p) = Σ_{j=1}^{n+1} w_j p_j + b.    (9)

Consider Y as the set of all functions of the form (9) and d_∞(E_1, E_2) = sup_{p ∈ U} |E_1(p) − E_2(p)| as a metric; then (Y, d_∞) is a metric space [33]. The following theorem shows that (Y, d_∞) is dense in (C[U], d_∞), where C[U] is the set of all real continuous functions defined on U.

SNP Universal Approximation Theorem. For any given real continuous function g on the compact set U ⊂ R^n and arbitrary ε > 0, there exists E ∈ Y such that

    sup_{p ∈ U} |g(p) − E(p)| < ε.    (10)

We use the following Stone-Weierstrass theorem to prove the theorem.

Stone-Weierstrass Theorem (see [33, 34]). Let Y be a set of real continuous functions on a compact set U. If (1) Y is an algebra, that is, the set Y is closed under scalar multiplication (closure under addition and multiplication is not necessary for real continuous functions [34]); (2) Y separates points on U, that is, for every x, y ∈ U such that x ≠ y, there exists E ∈ Y such that E(x) ≠ E(y); and (3) Y vanishes at no point of U, that is, for each x ∈ U there exists E ∈ Y such that E(x) ≠ 0; then the uniform closure of Y consists of all real continuous functions on U, that is, (Y, d_∞) is dense in (C[U], d_∞).
SNP Universal Approximation Proof. First we prove that (Y, d_∞) is an algebra. Let E ∈ Y; for arbitrary c ∈ R,

    E(p) = Σ_{j=1}^{n+1} w_j p_j + b,
    cE(p) = Σ_{j=1}^{n+1} (c w_j) p_j + c b,    (11)

which is again of the form (9). Thus cE ∈ Y, and (Y, d_∞) is an algebra.

Next we prove that (Y, d_∞) separates the points on U. We prove this by constructing a required E for arbitrary p^1, p^2 ∈ U such that p^1 ≠ p^2: we choose w_j = p^1_j for j = 1, ..., n and w_{n+1} = 0. Thus

    p^1 ≠ p^2  ⇒  p^1 · p^1 ≠ p^1 · p^2  ⇒  Σ_{j=1}^{n} w_j p^1_j ≠ Σ_{j=1}^{n} w_j p^2_j  ⇒  E(p^1) ≠ E(p^2).    (12)

Therefore (Y, d_∞) separates the points on U.

Finally, we prove that (Y, d_∞) vanishes at no point of U. We choose w_j = 0 for j = 1, ..., n + 1 and b = 1. Since for all p ∈ R^n

    Σ_{j=1}^{n+1} w_j p_j = 0    (13)

and b > 0, for all p ∈ R^n there exists E such that E(p) ≠ 0.

So the SNP, independently of the activation function, is a universal approximator.
3 Numerical Results
One parameter related to the computational complexity of a learning method is the number of learning weights in each epoch. A lower number of learning weights leads to a lower number of computations and lower computational complexity. To evaluate the number of learning weights of the proposed SNP
Table 1: Datasets and related learning information.

ID | Name       | Instances | Classes | Attributes | Learning rate | SNP architecture | SNP weights | MLP architecture | MLP weights | Rw (%)
1  | Diabetes   | 768       | 2       | 8          | 0.050         | 9-1              | 10          | 8-2-1            | 21          | 52
2  | Heart      | 270       | 2       | 13         | 0.050         | 14-1             | 15          | 13-2-1           | 31          | 51
3  | Pima       | 768       | 2       | 8          | 0.005         | 9-1              | 10          | 8-2-1            | 21          | 52
4  | Ionosphere | 351       | 2       | 34         | 0.050         | 35-1             | 36          | 34-2-1           | 73          | 50
5  | Sonar      | 208       | 2       | 60         | 0.0005        | 61-1             | 62          | 60-2-1           | 125         | 50
6  | Tic-tac    | 958       | 2       | 9          | 0.0005        | 10-1             | 11          | 9-2-2            | 23          | 52
with respect to the MLP, we propose a measure named the reducing ratio of the number of weights (Rw), as follows:

    Rw = (1 − (Number of learning weights of the SNP model) / (Number of learning weights of the MLP model)) × 100.    (14)
Rw is a measure that can be used to compare the computational complexity of the proposed SNP and the MLP. A higher Rw shows that the SNP has a smaller number of learning weights and thus fewer computations and lower computational complexity. Additionally, in classification problems, accuracy is a proper performance measure for evaluating the algorithms. This measure is generally expressed as follows:
    Accuracy = (Number of correct detections) / (Number of all samples).    (15)
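The two measures (14) and (15) can be sketched as plain functions. The helper names are ours; the weight counts in the usage line come from the Diabetes row of Table 1 (SNP 9-1: 10 weights; MLP 8-2-1: 21 weights).

```python
def rw(snp_weights: int, mlp_weights: int) -> float:
    # (14): reducing ratio of the number of weights, in percent
    return (1 - snp_weights / mlp_weights) * 100

def accuracy(correct_detections: int, all_samples: int) -> float:
    # (15): fraction of correctly classified samples
    return correct_detections / all_samples

print(round(rw(10, 21)))   # Diabetes row of Table 1: prints 52
```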
For all learning scenarios listed below, the training set contained 70% of the data, the testing set contained 15%, and the remainder was used for the validation set. Input patterns were normalized to [0, 1]. Output targets are binary digits (i.e., a single class is labeled by the digits "1" and "0", two classes are labeled as "01" and "10", three classes are labeled as "001", "010", and "100", and so on). Also, the initial weights were randomly selected in [0, 1].
Here, and prior to entering the comparative numerical studies, let us analyze the computational complexity. The proposed learning algorithm adjusts O(2n) weights for each learning sample, where n is the number of input attributes. In contrast, the computational time is O(cn) for the MLP, where c is the number of hidden neurons (the lowest c is 2). Additionally, the GDBP MLP compared here is based on derivative computations, which impose high complexity, while the proposed method is derivative-free. So the proposed method has lower computational complexity and higher efficiency with respect to the MLP. This improved computing efficiency can be important for online predictions, especially when the time interval between observations is small.
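The weight counts reported in Table 1 can be reproduced by counting parameters per layer. Below is a small sketch under our own assumptions: the helper names are ours, and the MLP count assumes a fully connected network with one bias per neuron.

```python
def snp_weight_count(n_attr: int) -> int:
    # n + 1 weights (one per attribute plus the max feature), plus the bias b
    return (n_attr + 1) + 1

def mlp_weight_count(layers: list[int]) -> int:
    # Fully connected layers: a*b connection weights plus b biases
    # for each consecutive pair of layer sizes (a, b)
    return sum(a * b + b for a, b in zip(layers, layers[1:]))

print(snp_weight_count(8), mlp_weight_count([8, 2, 1]))   # Diabetes: prints 10 21
```

These counts match the Diabetes, Heart, Ionosphere, and Sonar rows of Table 1, which is where the roughly 50% Rw reduction comes from.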
To test and assess the SNP in classification, 6 single-class datasets were downloaded from the UCI (University of California Irvine) Data Center. In all datasets the target
Figure 2: Comparison of average accuracy between the proposed SNP and the GDBP MLP on the six datasets.
labeling was binary. Table 1 shows the information related to the datasets, including the number of attributes and instances. Additionally, the SNP and MLP architectures, the numbers of learning weights, and Rw are presented in the table. As illustrated in Table 1, the SNP reduces the number of learning weights by approximately 50% for each dataset.
In the proposed SNP algorithm we consider the initial b = 0, and the learning parameter values are shown in Table 1. The activation function was tansig, and the stop criterion in the learning process was reaching the maximum number of epochs. The maximum and minimum values of each dataset were determined, and the scaled data (between 0 and 1) were used to adjust the weights. The training was repeated 10 times, and the average accuracy on the test set was recorded. Figure 2 presents the average accuracy and the confidence interval obtained from the SNP and the MLP. It is obvious that the SNP is more accurate than the MLP with the GDBP algorithm on some datasets. The results indicated in Figure 2 are based on Student's t-test with 95% confidence.
Although, according to Figure 2, it seems that GDBP is better in some cases, what is very important in the results is the number of learning epochs. Table 2 shows the learning epoch comparisons. According to Table 2, the MLP needs many epochs to reach the results of the SNP. This is the main feature of the proposed SNP: fast learning with lower computational complexity, which makes it suitable for use in various applications, especially online problems.
Table 2: Number of learning epochs comparison.

Dataset    | Proposed SNP  | GDBP MLP
Diabetes   | 60.91 ± 19.73 | 95.00 ± 10.46
Heart      | 38.33 ± 20.98 | 100.00 ± 0
Pima       | 62.45 ± 20.21 | 100.00 ± 0
Ionosphere | 26.32 ± 16.38 | 100.00 ± 0
Sonar      | 9.22 ± 10.27  | 91.11 ± 22.47
Tic-tac    | 6.28 ± 5.76   | 99.52 ± 0.98
Average    | 33.92         | 97.60
4 Conclusion
In this paper we proved that a single neuron perceptron (SNP) can solve the XOR problem and can be a universal approximator. These features are achieved by extending the input pattern and by using the max operator. The SNP with this extension ability is a novel computational model of a neural cell that is trained by excitatory and inhibitory rules. This new SNP architecture works with a smaller number of learning weights. Specifically, it only generates O(2n) learning weights and only requires O(2n) operations during each training iteration, where n is the size of the input vector. Furthermore, the universal approximation property is theoretically proved for this architecture. The source code of the proposed algorithm is accessible from http://www.bitools.ir/projects.html. In the numerical studies, the SNP was utilized to classify 6 UCI datasets. The comparisons between the proposed SNP and the backpropagation MLP support the following conclusions. Firstly, the number of learning parameters of the SNP is much lower with respect to the standard MLP. Secondly, in classification problems, the performance of the supervised excitatory and inhibitory learning algorithm is higher than that of gradient descent backpropagation (GDBP). Thirdly, the lower computational complexity resulting from the fewer learning parameters and the faster training of the proposed SNP make it suitable for real-time classification. In short, the SNP is a universal approximator with a simple structure and is motivated by neurophysiological knowledge of the human brain. We believe, based on the multiple case studies as well as the theoretical results in this report, that the SNP can be effectively used in pattern recognition and classification problems.
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
References
[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[2] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[3] S. Bengio and Y. Bengio, "Taking on the curse of dimensionality in joint distributions using neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 550–557, 2000.
[4] M. Verleysen and D. Francois, "The curse of dimensionality in data mining and time series prediction," in Computational Intelligence and Bioinspired Systems, vol. 3512, pp. 758–770, Springer, Berlin, Germany, 2005.
[5] B. Verma, "Fast training of multilayer perceptrons," IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 1314–1320, 1997.
[6] M. T. Hagan and M. B. Menhaj, "Training feedforward networks with the Marquardt algorithm," IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989–993, 1994.
[7] J.-X. Peng, K. Li, and G. W. Irwin, "A new Jacobian matrix for optimal learning of single-layer neural networks," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 119–129, 2008.
[8] J.-M. Wu, "Multilayer Potts perceptrons with Levenberg-Marquardt learning," IEEE Transactions on Neural Networks, vol. 19, no. 12, pp. 2032–2043, 2008.
[9] B. M. Wilamowski and H. Yu, "Improved computation for Levenberg-Marquardt training," IEEE Transactions on Neural Networks, vol. 21, no. 6, pp. 930–937, 2010.
[10] B. M. Wilamowski and H. Yu, "Neural network learning without backpropagation," IEEE Transactions on Neural Networks, vol. 21, no. 11, pp. 1793–1803, 2010.
[11] K. Y. Huang, L. C. Shen, K. J. Chen, and M. C. Huang, "Multilayer perceptron learning with particle swarm optimization for well log data inversion," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '12), pp. 1–6, Brisbane, Australia, June 2012.
[12] N. Ampazis and S. J. Perantonis, "Two highly efficient second-order algorithms for training feedforward networks," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1064–1074, 2002.
[13] C.-T. Kim and J.-J. Lee, "Training two-layered feedforward networks with variable projection method," IEEE Transactions on Neural Networks, vol. 19, no. 2, pp. 371–375, 2008.
[14] S. Ferrari and M. Jensenius, "A constrained optimization approach to preserving prior knowledge during incremental training," IEEE Transactions on Neural Networks, vol. 19, no. 6, pp. 996–1009, 2008.
[15] Y. Liu, J. A. Starzyk, and Z. Zhu, "Optimized approximation algorithm in neural networks without overfitting," IEEE Transactions on Neural Networks, vol. 19, no. 6, pp. 983–995, 2008.
[16] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Real-time learning capability of neural networks," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 863–878, 2006.
[17] A. Bortoletti, C. di Fiore, S. Fanelli, and P. Zellini, "A new class of quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263–273, 2003.
[18] V. V. Phansalkar and P. S. Sastry, "Analysis of the back-propagation algorithm with momentum," IEEE Transactions on Neural Networks, vol. 5, no. 3, pp. 505–506, 1994.
[19] A. Khashman, "A modified backpropagation learning algorithm with added emotional coefficients," IEEE Transactions on Neural Networks, vol. 19, no. 11, pp. 1896–1909, 2008.
[20] A. Khashman, "Modeling cognitive and emotional processes: a novel neural network architecture," Neural Networks, vol. 23, no. 10, pp. 1155–1163, 2010.
[21] A. Sierra, J. A. Macias, and F. Corbacho, "Evolution of functional link networks," IEEE Transactions on Evolutionary Computation, vol. 5, no. 1, pp. 54–65, 2001.
[22] B. B. Misra and S. Dehuri, "Functional link artificial neural network for classification task in data mining," Journal of Computer Science, vol. 3, no. 12, pp. 948–955, 2007.
[23] M. T. Hagan, H. B. Demuth, and M. H. Beale, Neural Network Design, PWS, Boston, Mass, USA, 1996.
[24] E. Lotfi and M.-R. Akbarzadeh-T., "Adaptive brain emotional decayed learning for online prediction of geomagnetic activity indices," Neurocomputing, vol. 126, pp. 188–196, 2014.
[25] E. Lotfi and M.-R. Akbarzadeh-T., "Brain emotional learning-based pattern recognizer," Cybernetics and Systems, vol. 44, no. 5, pp. 402–421, 2013.
[26] E. Lotfi and M.-R. Akbarzadeh-T., "Supervised brain emotional learning," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '12), pp. 1–6, Brisbane, Australia, June 2012.
[27] J. E. LeDoux, "Emotion circuits in the brain," Annual Review of Neuroscience, vol. 23, pp. 155–184, 2000.
[28] J. LeDoux, The Emotional Brain, Simon and Schuster, New York, NY, USA, 1996.
[29] D. Goleman, Emotional Intelligence: Why It Can Matter More Than IQ, Bantam, New York, NY, USA, 2006.
[30] B. Widrow and M. E. Hoff, "Adaptive switching circuits," in 1960 IRE WESCON Convention Record, pp. 96–104, IRE, New York, NY, USA, 1960.
[31] B. Widrow and S. D. Sterns, Adaptive Signal Processing, Prentice-Hall, New York, NY, USA, 1985.
[32] S. Ferrari, B. Mehta, G. di Muro, A. M. J. VanDongen, and C. Henriquez, "Biologically realizable reward-modulated Hebbian training for spiking neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '08), pp. 1780–1786, Hong Kong, June 2008.
[33] L.-X. Wang and J. M. Mendel, "Fuzzy basis functions, universal approximation, and orthogonal least-squares learning," IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 807–814, 1992.
[34] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York, NY, USA, 1976.
![Page 2: Research Article A Novel Single Neuron Perceptron with ...downloads.hindawi.com/journals/cin/2014/746376.pdf · e following theorem shows that ( , ) is dense in ( [ ], ),where [ ]](https://reader034.vdocuments.mx/reader034/viewer/2022042404/5f18758b7726e07b0309689d/html5/thumbnails/2.jpg)
2 Computational Intelligence and Neuroscience
Input Initial random weights w1w2 wnwn+1 and input bias b(1) Take 119896th learning sample (119896th 119901 and 119879)(2) 119901
119899+1= max
119895=1119899(119901119895)
(3) Calculate the final output 119864119900and error
119864119900= tan sig(
119899+1
sum
119895=1
119908119895times 119901119895+ 119887)
119890 = 119879 minus 119864119900
(4) Update the weights by using excitatory rule119908119895= 119908119895+ 120572max(119890 0)119904
119895 for 119895 = 1 119899 + 1
119887 = 119887 + 120572max(119890 0)(5) Update the weights by using inhibitory rule
119908119895= 119908119895minus 120572max(minus119890 0)119904
119895 for 119895 = 1 119899 + 1
119887 = 119887 minus 120572max(minus119890 0)(6) If 119896 lt number of training patterns then 119896 = 119896 + 1 and proceed to the first(7) Let epoch = epoch + 1 and 119896 = 1(8) If the stop criterion has not satisfied proceed to the first
Algorithm 1 Proposed SNP algorithm
Figure 1 Proposed SNP
(FLNs) [21] the proposed SNP does not include hiddenunits or expand the input vector but guarantees universalapproximation FLNs are single-layer neural networks thatcan be considered as an alternative approach in the datamining to overcome the complexities associated with MLP[22] but they do not guarantee universal approximation
The paper is organized as follows Proposed SNP anduniversal approximation theorem are proposed in Section 2Section 3 presents the numerical results where the proposedSNP is compared with backpropagation MLPThere are vari-ous versions of backpropagation algorithms In classificationproblems we compare with gradient descent backpropa-gation (GDBP) [23] that is the standard basic algorithmFinally conclusions are made in Section 4
2 Proposed Single Neuron Perceptron
Figure 1 shows the proposed SNP In the figure the modelis presented as 119899 + 1-inputs single-output architecture Thevariable 119901 is the input pattern and the variable 119879 is relatedtarget applied in the learning process (3) Let us extend theinput pattern as follows
119901119899+1
= max119895=1119899
(119901119895) (1)
Actually max operation increases the input dimension to119899 + 1
So the new input pattern has 119899 + 1 elements In Figure 1the input pattern is illustrated by vector 119901
1le119895le119899+1and the 119864
119900
calculated by the following formula is the final output
119864119900(119901) = 119891(
119899+1
sum
119895=1
119908119895times 119901119895+ 119887) (2)
where119891 is activation function and1199081 1199082 119908
119899+1 and b are
adjustable weights So error can be achieved as follows
119890 = 119879 minus 119864119900
(3)
and the learning weights can be adjusted by the followingexcitatory learning rule
119908119895= 119908119895+ 120572max (119890 0) 119901
119895 for 119895 = 1 119899 + 1 (4)
and then by the following inhibitory rule
119908119895= 119908119895minus 120572max (minus119890 0) 119901
119895 for 119895 = 1 119899 + 1 (5)
where 119879 is target 119864119900is output of network 119890 is related error
and 120572 is the learning rate Also 119887 can be trained by
119887 = 119887 + 120572max (119890 0)
119887 = 119887 minus 120572max (minus119890 0) (6)
It should be added that max operation applied on theinput pattern and also in the learning phase has beenmotivated from computationalmodels of limbic system in thebrain [24ndash26] Limbic system is an emotional processor in themammalian brain [27ndash29] In these models [24ndash26] the maxoperator prepares the output and input ofmain parts of limbicsystem
In summary the feedforward computation and backwardlearning algorithm of proposed SNP in an online form andwith tansig activation function is as in Algorithm 1
Computational Intelligence and Neuroscience 3
In the algorithm 120572 can be picked empirically or changedadaptively during the learning process according to theadaptive learning [30 31]
The proposed SNP solves theXORproblem Consider 2minus1architecture with hardlim activation function and by usingthe following weights 119908
1= minus1 119908
2= minus2 V
3= 2 and 119887 = minus1
thus_119891 (1 1) = hardlim (119908
1times 1 + 119908
2times 1 + 119908
3timesmax (1 1) + 119887)
= 0
_119891 (0 0) = hardlim (119908
1times 0 + 119908
2times 0 + 119908
3timesmax (0 0) + 119887)
= 0
_119891 (1 0) = hardlim (119908
1times 1 + 119908
2times 0 + 119908
3timesmax (1 0) + 119887)
= 1
_119891 (0 1) = hardlim (119908
1times 0 + 119908
2times 1 + 119908
3timesmax (0 1) + 119887)
= 1
(7)
where hardlim is calculated by the following formula:

    hardlim(x) = 1 if x ≥ 0, 0 otherwise.                   (8)
Since f is in the form of (2), f based on SNP can approximate the XOR function. The proposed model has a lower computational complexity than other methods, such as spiking neural networks [32], that solve the XOR problem. The computational complexity of the proposed SNP is O(n), while it benefits from very simple equations for adjusting the weights.
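The four computations in (7) can be checked mechanically. The sketch below is our own illustration (function names are ours); the third weight, acting on the extended input max(p_1, p_2), is taken as 3 so that all four hardlim cases come out as in the XOR truth table:

```python
def hardlim(x):
    # Hard-limit activation of Eq. (8): 1 if x >= 0, else 0.
    return 1 if x >= 0 else 0

def snp_xor(p1, p2, w=(-1.0, -2.0, 3.0), b=-1.0):
    # 2-1 SNP: the input pattern (p1, p2) is extended with max(p1, p2).
    return hardlim(w[0] * p1 + w[1] * p2 + w[2] * max(p1, p2) + b)

# Verify the truth table of Eq. (7).
assert [snp_xor(a, c) for a, c in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

The max term is what breaks the linear separability barrier of the classic single-layer perceptron: it contributes equally for (1, 0), (0, 1), and (1, 1), so the two "1" cases can be lifted above the threshold while (1, 1) is pushed back below it.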
In the next section, we prove that SNP is a universal approximator and can approximate all real continuous functions.
2.1. Universal Approximation Theorem. Let us drop the activation function from the model and rewrite (2) as

    E(s) = Σ_{j=1}^{n} w_j s_j + b.                         (9)
Consider Y as the set of all equations of form (9) and d_∞(E_1, E_2) = sup_{p∈U} |E_1(p) − E_2(p)| as the sup metric; then (Y, d_∞) is a metric space [33]. The following theorem shows that (Y, d_∞) is dense in (C[U], d_∞), where C[U] is the set of all real continuous functions defined on U.
SNP Universal Approximation Theorem. For any given real continuous function g on the compact set U ⊂ R^n and arbitrary ε > 0, there exists E ∈ Y such that

    sup_{p∈U} |g(p) − E(p)| < ε.                            (10)
We use the following Stone-Weierstrass theorem to prove this theorem.
Stone-Weierstrass Theorem (see [33, 34]). Let Y be a set of real continuous functions on a compact set U. If (1) Y is an algebra, that is, the set Y is closed under scalar multiplication (closure under addition and multiplication is not necessary for real continuous functions [34]); (2) Y separates points on U, that is, for every x, y ∈ U such that x ≠ y there exists E ∈ Y such that E(x) ≠ E(y); and (3) Y vanishes at no point of U, that is, for each x ∈ U there exists E ∈ Y such that E(x) ≠ 0; then the uniform closure of Y consists of all real continuous functions on U, that is, (Y, d_∞) is dense in (C[U], d_∞).
SNP Universal Approximation Proof. First, we prove that (Y, d_∞) is an algebra. Let E ∈ Y; for arbitrary c ∈ R,

    E(p) = Σ_{j=1}^{n} w_j p_j + b,
    cE(p) = Σ_{j=1}^{n} c w_j p_j + c b,                    (11)

which is again of form (9). Thus cE ∈ Y, and (Y, d_∞) is an algebra.

Next, we prove that (Y, d_∞) separates the points on U. We prove this by constructing a required E for arbitrary p^1, p^2 ∈ U such that p^1 ≠ p^2: we choose w, v such that w_j = p^1_j for j = 1, ..., n and v_{n+1} = w_{n+1} = 0. Thus

    p^1 ≠ p^2 ⟹ p^1 · p^1 ≠ p^1 · p^2
            ⟹ Σ_{j=1}^{n} w_j p^1_j ≠ Σ_{j=1}^{n} w_j p^2_j
            ⟹ E(p^1) ≠ E(p^2).                             (12)

Therefore, (Y, d_∞) separates the points on U.
Finally, we prove that (Y, d_∞) vanishes at no point of U. We choose w_j = 0 for j = 1, ..., n, v_{n+1} = 0, w_{n+1} > 0, and b = 1. Since for all p ∈ R^n

    Σ_{j=1}^{n} w_j p_j = 0                                 (13)

and b > 0, for all p ∈ R^n there exists E such that E(p) ≠ 0.
So SNP, independently of the activation function, is a universal approximator.
3. Numerical Results

One parameter related to the computational complexity of a learning method is the number of learning weights in each epoch. A lower number of learning weights means fewer computations and lower computational complexity. To evaluate the number of the proposed SNP's learning weights
Table 1: Datasets and related learning information.

| ID | Name | Instances | Classes | Attributes | Learning rate | SNP architecture | SNP weights | MLP architecture | MLP weights | Rw (%) |
|----|------------|-----|---|----|--------|------|-----|--------|-----|----|
| 1  | Diabetes   | 768 | 2 | 8  | 0.050  | 9-1  | 10  | 8-2-1  | 21  | 52 |
| 2  | Heart      | 270 | 2 | 13 | 0.050  | 14-1 | 15  | 13-2-1 | 31  | 51 |
| 3  | Pima       | 768 | 2 | 8  | 0.005  | 9-1  | 10  | 8-2-1  | 21  | 52 |
| 4  | Ionosphere | 351 | 2 | 34 | 0.050  | 35-1 | 36  | 34-2-1 | 73  | 50 |
| 5  | Sonar      | 208 | 2 | 60 | 0.0005 | 61-1 | 62  | 60-2-1 | 125 | 50 |
| 6  | Tic-tac    | 958 | 2 | 9  | 0.0005 | 10-1 | 11  | 9-2-1  | 23  | 52 |
with respect to the MLP, we propose a measure named the reducing ratio of the number of weights (Rw), as follows:

    Rw = (1 − (Number of Learning Weights of SNP Model) / (Number of Learning Weights of MLP Model)) × 100.   (14)
Rw is a measure that can be used to compare the computational complexity of the proposed SNP and MLP. A higher Rw shows that SNP has a lower number of learning weights, and thus fewer computations and lower computational complexity. Additionally, in classification problems, accuracy is a proper performance measure for evaluating the algorithms. This measure is generally expressed as

    Accuracy = Correct Detections / All Samples.            (15)
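As a concrete check of (14) and of the weight counts behind Table 1, the helpers below (names are ours) count the parameters of a fully connected MLP with biases and of the SNP with its single extended input:

```python
def mlp_weight_count(arch):
    # Fully connected MLP with biases: (8, 2, 1) -> (8+1)*2 + (2+1)*1 = 21.
    return sum((fan_in + 1) * fan_out for fan_in, fan_out in zip(arch, arch[1:]))

def snp_weight_count(n_attributes):
    # SNP: n inputs, one extended (max) input, and a bias.
    return n_attributes + 1 + 1

def rw(snp_weights, mlp_weights):
    # Reducing ratio of the number of weights, Eq. (14), in percent.
    return (1 - snp_weights / mlp_weights) * 100

# Diabetes row of Table 1: SNP 9-1 (10 weights) vs. MLP 8-2-1 (21 weights).
print(round(rw(snp_weight_count(8), mlp_weight_count((8, 2, 1)))))  # -> 52
```

Running this over each row reproduces the Rw column of Table 1 to within rounding, which is the sense in which SNP roughly halves the parameter count.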
For all learning scenarios listed below, the training set contained 70% of the data, the testing set contained 15%, and the remainder was used for the validation set. Input patterns were normalized to [0, 1]. Output targets are binary digits (i.e., a single class is labeled by the digits "1" and "0", two classes are labeled "01" and "10", three classes are labeled "001", "010", and "100", and so on). Also, the initial weights were randomly selected from [0, 1].
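The preprocessing just described (min-max normalization to [0, 1] and a 70/15/15 split) might look like the following sketch; the exact shuffling and rounding used by the authors are not specified, so treat this as one reasonable reading:

```python
import random

def normalize01(rows):
    # Min-max scale each attribute column to [0, 1].
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in rows]

def split_70_15_15(rows, seed=0):
    # Shuffle, then carve off 70% train and 15% test; the rest is validation.
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train, n_test = int(0.70 * n), int(0.15 * n)
    return (rows[:n_train],
            rows[n_train:n_train + n_test],
            rows[n_train + n_test:])
```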
Here, prior to entering the comparative numerical studies, let us analyze the computational complexity. The proposed learning algorithm adjusts O(2n) weights for each learning sample, where n is the number of input attributes. In contrast, the computational time is O(cn) for MLP, where c is the number of hidden neurons (the lowest c is 2). Additionally, the GDBP MLP compared here is based on derivative computations, which impose high complexity, while the proposed method is derivative-free. So the proposed method has lower computational complexity and higher efficiency than the MLP. This improved computing efficiency can be important for online predictions, especially when the time interval between observations is small.
To test and assess the SNP in classification, 6 single-class datasets were downloaded from the UCI (University of California Irvine) Data Center. In all datasets, the target
[Figure 2: Comparisons between SNP and MLP — average accuracy (%) of the proposed SNP and the GDBP MLP on the Diabetes, Heart, Pima, Ionosphere, Sonar, and Tic-tac datasets.]
labeling was binary. Table 1 shows the information related to the datasets, including the number of attributes and instances. Additionally, the SNP and MLP architectures, the number of learning weights, and Rw are presented in the table. As illustrated in Table 1, SNP reduces the number of learning weights by approximately 50% for each dataset.
In the proposed SNP algorithm we set b = 0, and the learning parameter values are shown in Table 1. The activation function was tansig, and the stop criterion in the learning process was reaching the maximum number of epochs. The maximum and minimum values of each dataset were determined, and the scaled data (between 0 and 1) were used to adjust the weights. The training was repeated 10 times, and the average accuracy on the test set was recorded. Figure 2 presents the average accuracy and the confidence intervals obtained from SNP and MLP. SNP is more accurate than MLP with the GDBP algorithm on some datasets. The results indicated in Figure 2 are based on Student's t-test with 95% confidence.
Although, according to Figure 2, GDBP seems better in some cases, what is very important in the results is the number of learning epochs. Table 2 shows the learning-epoch comparisons. According to Table 2, MLP needs many epochs to reach the results of SNP. This is the main feature of the proposed SNP: fast learning with lower computational complexity, which makes it suitable for various applications, especially online problems.
Table 2: Number of learning epochs comparison.

| Model      | Proposed SNP  | GDBP MLP       |
|------------|---------------|----------------|
| Diabetes   | 60.91 ± 19.73 | 95.00 ± 10.46  |
| Heart      | 38.33 ± 20.98 | 100.00 ± 0     |
| Pima       | 62.45 ± 20.21 | 100.00 ± 0     |
| Ionosphere | 26.32 ± 16.38 | 100.00 ± 0     |
| Sonar      | 9.22 ± 10.27  | 91.11 ± 22.47  |
| Tic-tac    | 6.28 ± 5.76   | 99.52 ± 0.98   |
| Average    | 33.92         | 97.60          |
4. Conclusion

In this paper we proved that a single neuron perceptron (SNP) can solve the XOR problem and can be a universal approximator. These features are achieved by extending the input pattern and by using the max operator. SNP with this extension is a novel computational model of a neural cell that is trained by excitatory and inhibitory rules. This new SNP architecture works with fewer learning weights. Specifically, it only has O(2n) learning weights and only requires O(2n) operations during each training iteration, where n is the size of the input vector. Furthermore, the universal approximation property is theoretically proved for this architecture. The source code of the proposed algorithm is accessible from http://www.bitools.ir/projects.html. In numerical studies, SNP was used to classify 6 UCI datasets. The comparisons between the proposed SNP and backpropagation MLP support the following conclusions. First, the number of learning parameters of SNP is much lower than that of the standard MLP. Second, in classification problems, the performance of the supervised excitatory and inhibitory learning algorithm is higher than that of gradient descent backpropagation (GDBP). Third, the lower computational complexity resulting from fewer learning parameters and the faster training of the proposed SNP make it suitable for real-time classification. In short, SNP is a universal approximator with a simple structure, motivated by neurophysiological knowledge of the human brain. Based on the multiple case studies as well as the theoretical results in this report, we believe that SNP can be effectively used in pattern recognition and classification problems.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[2] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[3] S. Bengio and Y. Bengio, "Taking on the curse of dimensionality in joint distributions using neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 550–557, 2000.
[4] M. Verleysen and D. Francois, "The curse of dimensionality in data mining and time series prediction," in Computational Intelligence and Bioinspired Systems, vol. 3512, pp. 758–770, Springer, Berlin, Germany, 2005.
[5] B. Verma, "Fast training of multilayer perceptrons," IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 1314–1320, 1997.
[6] M. T. Hagan and M. B. Menhaj, "Training feedforward networks with the Marquardt algorithm," IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989–993, 1994.
[7] J.-X. Peng, K. Li, and G. W. Irwin, "A new Jacobian matrix for optimal learning of single-layer neural networks," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 119–129, 2008.
[8] J.-M. Wu, "Multilayer Potts perceptrons with Levenberg-Marquardt learning," IEEE Transactions on Neural Networks, vol. 19, no. 12, pp. 2032–2043, 2008.
[9] B. M. Wilamowski and H. Yu, "Improved computation for Levenberg-Marquardt training," IEEE Transactions on Neural Networks, vol. 21, no. 6, pp. 930–937, 2010.
[10] B. M. Wilamowski and H. Yu, "Neural network learning without backpropagation," IEEE Transactions on Neural Networks, vol. 21, no. 11, pp. 1793–1803, 2010.
[11] K. Y. Huang, L. C. Shen, K. J. Chen, and M. C. Huang, "Multilayer perceptron learning with particle swarm optimization for well log data inversion," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '12), pp. 1–6, Brisbane, Australia, June 2012.
[12] N. Ampazis and S. J. Perantonis, "Two highly efficient second-order algorithms for training feedforward networks," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1064–1074, 2002.
[13] C.-T. Kim and J.-J. Lee, "Training two-layered feedforward networks with variable projection method," IEEE Transactions on Neural Networks, vol. 19, no. 2, pp. 371–375, 2008.
[14] S. Ferrari and M. Jensenius, "A constrained optimization approach to preserving prior knowledge during incremental training," IEEE Transactions on Neural Networks, vol. 19, no. 6, pp. 996–1009, 2008.
[15] Y. Liu, J. A. Starzyk, and Z. Zhu, "Optimized approximation algorithm in neural networks without overfitting," IEEE Transactions on Neural Networks, vol. 19, no. 6, pp. 983–995, 2008.
[16] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Real-time learning capability of neural networks," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 863–878, 2006.
[17] A. Bortoletti, C. di Fiore, S. Fanelli, and P. Zellini, "A new class of quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263–273, 2003.
[18] V. V. Phansalkar and P. S. Sastry, "Analysis of the back-propagation algorithm with momentum," IEEE Transactions on Neural Networks, vol. 5, no. 3, pp. 505–506, 1994.
[19] A. Khashman, "A modified backpropagation learning algorithm with added emotional coefficients," IEEE Transactions on Neural Networks, vol. 19, no. 11, pp. 1896–1909, 2008.
[20] A. Khashman, "Modeling cognitive and emotional processes: a novel neural network architecture," Neural Networks, vol. 23, no. 10, pp. 1155–1163, 2010.
[21] A. Sierra, J. A. Macias, and F. Corbacho, "Evolution of functional link networks," IEEE Transactions on Evolutionary Computation, vol. 5, no. 1, pp. 54–65, 2001.
[22] B. B. Misra and S. Dehuri, "Functional link artificial neural network for classification task in data mining," Journal of Computer Science, vol. 3, no. 12, pp. 948–955, 2007.
[23] M. T. Hagan, H. B. Demuth, and M. H. Beale, Neural Network Design, PWS, Boston, Mass, USA, 1996.
[24] E. Lotfi and M.-R. Akbarzadeh-T., "Adaptive brain emotional decayed learning for online prediction of geomagnetic activity indices," Neurocomputing, vol. 126, pp. 188–196, 2014.
[25] E. Lotfi and M.-R. Akbarzadeh-T., "Brain emotional learning-based pattern recognizer," Cybernetics and Systems, vol. 44, no. 5, pp. 402–421, 2013.
[26] E. Lotfi and M.-R. Akbarzadeh-T., "Supervised brain emotional learning," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '12), pp. 1–6, Brisbane, Australia, June 2012.
[27] J. E. LeDoux, "Emotion circuits in the brain," Annual Review of Neuroscience, vol. 23, pp. 155–184, 2000.
[28] J. LeDoux, The Emotional Brain, Simon and Schuster, New York, NY, USA, 1996.
[29] D. Goleman, Emotional Intelligence: Why It Can Matter More Than IQ, Bantam, New York, NY, USA, 2006.
[30] B. Widrow and M. E. Hoff, "Adaptive switching circuits," in 1960 IRE WESCON Convention Record, pp. 96–104, IRE, New York, NY, USA, 1960.
[31] B. Widrow and S. D. Sterns, Adaptive Signal Processing, Prentice-Hall, New York, NY, USA, 1985.
[32] S. Ferrari, B. Mehta, G. di Muro, A. M. J. VanDongen, and C. Henriquez, "Biologically realizable reward-modulated Hebbian training for spiking neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '08), pp. 1780–1786, Hong Kong, June 2008.
[33] L.-X. Wang and J. M. Mendel, "Fuzzy basis functions, universal approximation, and orthogonal least-squares learning," IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 807–814, 1992.
[34] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York, NY, USA, 1976.
[26] E Lotfi and M-R Akbarzadeh-T ldquoSupervised brain emotionallearningrdquo in Proceedings of the International Joint Conference onNeural Networks (IJCNN rsquo12) pp 1ndash6 Brisbane Australia June2012
[27] J E LeDoux ldquoEmotion circuits in the brainrdquo Annual Review ofNeuroscience vol 23 pp 155ndash184 2000
[28] J LeDouxTheEmotional Brain Simon and SchusterNewYorkNY USA 1996
[29] D Goleman Emotional Intelligence Why It Can Matter MoreThan IQ Bantam New York NY USA 2006
[30] BWidrow andM E Hoff ldquoAdaptive switching circuitsrdquo in 1960IRE WESCON Convention Record pp 96ndash104 IRE New YorkNY USA 1960
[31] BWidrowand SD SternsAdaptive Signal Processing Prentice-Hall New York NY USA 1985
[32] S Ferrari B Mehta G di Muro A M J VanDongen and CHenriquez ldquoBiologically realizable reward-modulated hebbiantraining for spiking neural networksrdquo in Proceedings of the IEEEInternational Joint Conference on Neural Networks (IJCNN rsquo08)pp 1780ndash1786 Hong Kong June 2008
[33] L-X Wang and J M Mendel ldquoFuzzy basis functions universalapproximation and orthogonal least-squares learningrdquo IEEETransactions on Neural Networks vol 3 no 5 pp 807ndash814 1992
[34] W Rudin Principles of Mathematical Analysis McGraw-HillNew York NY USA 1976
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
![Page 4: Research Article A Novel Single Neuron Perceptron with ...downloads.hindawi.com/journals/cin/2014/746376.pdf · e following theorem shows that ( , ) is dense in ( [ ], ),where [ ]](https://reader034.vdocuments.mx/reader034/viewer/2022042404/5f18758b7726e07b0309689d/html5/thumbnails/4.jpg)
Computational Intelligence and Neuroscience
Table 1: Datasets and related learning information.

ID  Name        Instances  Classes  Attributes  Learning rate  SNP architecture  SNP weights  MLP architecture  MLP weights  Rw (%)
1   Diabetes    768        2        8           0.050          9-1               10           8-2-1             21           52
2   Heart       270        2        13          0.050          14-1              15           13-2-1            31           51
3   Pima        768        2        8           0.005          9-1               10           8-2-1             21           52
4   Ionosphere  351        2        34          0.050          35-1              36           34-2-1            73           50
5   Sonar       208        2        60          0.0005         61-1              62           60-2-1            125          50
6   Tic-tac     958        2        9           0.0005         10-1              11           9-2-1             23           52
With respect to the MLP, we propose a measure named the reducing ratio of the number of weights (Rw), defined as follows:

Rw = (1 − (number of learning weights of SNP model) / (number of learning weights of MLP model)) × 100.  (14)
Rw can be used to compare the computational complexity of the proposed SNP and the MLP. A higher Rw shows that the SNP has fewer learning weights; it therefore performs fewer computations and has lower computational complexity. Additionally, in classification problems, accuracy is a proper performance measure for evaluating the algorithms. This measure is generally expressed as follows:
Accuracy = (number of correct detections) / (total number of samples).  (15)
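As a minimal sketch, the two measures in (14) and (15) can be computed as follows (the function names are ours, not from the paper):

```python
def rw(snp_weights, mlp_weights):
    """Reducing ratio of the number of weights, Eq. (14), in percent."""
    return (1 - snp_weights / mlp_weights) * 100

def accuracy(correct_detections, all_samples):
    """Classification accuracy, Eq. (15), as a fraction."""
    return correct_detections / all_samples

# Diabetes row of Table 1: 10 SNP weights versus 21 MLP weights.
print(round(rw(10, 21)))  # -> 52
```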
For all learning scenarios listed below, the training set contained 70% of the data, the testing set contained 15%, and the remaining 15% was used for validation. Input patterns were normalized to [0, 1]. Output targets are binary digits (i.e., a single class is labeled by the digits "1" and "0", two classes are labeled "01" and "10", three classes are labeled "001", "010", and "100", and so on). Also, the initial weights were randomly selected from [0, 1].
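The data preparation described above (70/15/15 split, min-max scaling to [0, 1]) might be sketched as follows; the helper names and the use of a fixed shuffle seed are our assumptions:

```python
import random

def minmax_scale(rows):
    # Column-wise min-max normalization into [0, 1].
    cols = list(zip(*rows))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in rows]

def split_70_15_15(rows, seed=0):
    # 70% training, 15% testing, 15% validation.
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    a, b = 70 * n // 100, 85 * n // 100  # integer cut points avoid float error
    return rows[:a], rows[a:b], rows[b:]
```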
Here, prior to the comparative numerical studies, let us analyze the computational complexity. The proposed learning algorithm adjusts O(2n) weights for each learning sample, where n is the number of input attributes. In contrast, the computational time is O(cn) for the MLP, where c is the number of hidden neurons (the lowest c is 2). Additionally, the GDBP MLP compared here is based on derivative computations, which impose high complexity, while the proposed method is derivative-free. So the proposed method has lower computational complexity and higher efficiency than the MLP. This improved computing efficiency can be important for online prediction, especially when the time interval between observations is small.
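Under the assumption of one bias weight per neuron, the parameter counts in Table 1 can be reproduced from the architectures (the helper names are ours):

```python
def snp_weight_count(n):
    # Extended input pattern (n + 1 inputs) plus one bias: O(n) parameters.
    return (n + 1) + 1

def mlp_weight_count(n, hidden=2, outputs=1):
    # Fully connected n-hidden-outputs MLP with biases: O(c*n) parameters.
    return (n + 1) * hidden + (hidden + 1) * outputs

# Diabetes (n = 8) and Sonar (n = 60), matching Table 1.
print(snp_weight_count(8), mlp_weight_count(8))    # 10 vs. 21 weights
print(snp_weight_count(60), mlp_weight_count(60))  # 62 vs. 125 weights
```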
To test and assess the SNP on classification, six binary-class datasets were downloaded from the UCI (University of California, Irvine) Data Center; in all datasets, the target labeling was binary. Table 1 shows the information related to the datasets, including the number of attributes and instances. Additionally, the SNP and MLP architectures, the number of learning weights, and Rw are presented in the table. As illustrated in Table 1, SNP reduces the number of learning weights by approximately 50% for each dataset.

[Figure 2: Comparisons between SNP and MLP — average test-set accuracy (%) of the proposed SNP and the GDBP MLP on the Diabetes, Heart, Pima, Ionosphere, Sonar, and Tic-tac datasets.]
In the proposed SNP algorithm we set b = 0, and the learning parameter values are shown in Table 1. The activation function was tansig, and the stop criterion of the learning process was reaching the maximum number of epochs. The maximum and minimum values of each dataset were determined, and the scaled data (between 0 and 1) were used to adjust the weights. The training was repeated 10 times, and the average accuracy on the test set was recorded. Figure 2 presents the average accuracy and the confidence interval obtained from SNP and MLP. SNP is more accurate than the MLP with the GDBP algorithm on several datasets. The results indicated in Figure 2 are based on Student's t-test with 95% confidence.
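The mean-and-interval figures over the 10 runs can be reproduced, for instance, with a two-sided 95% confidence interval; the hard-coded critical value t(0.025, 9) ≈ 2.262 and the function name are our assumptions:

```python
from math import sqrt
from statistics import mean, stdev

T_CRIT_95_DF9 = 2.262  # two-sided Student's t critical value, 9 degrees of freedom

def ci95_over_10_runs(accuracies):
    # Mean accuracy and half-width of the 95% confidence interval
    # over repeated training runs (10 runs -> 9 degrees of freedom).
    m = mean(accuracies)
    half_width = T_CRIT_95_DF9 * stdev(accuracies) / sqrt(len(accuracies))
    return m, half_width
```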
Although, according to Figure 2, GDBP appears better in some cases, what is most important in the results is the number of learning epochs. Table 2 shows the learning epoch comparisons. According to Table 2, the MLP needs many more epochs to reach the results of SNP. This is the main feature of the proposed SNP: fast learning with lower computational complexity, which makes it suitable for various applications, especially online problems.
Table 2: Number of learning epochs (mean ± standard deviation).

Dataset      Proposed SNP     GDBP MLP
Diabetes     60.91 ± 19.73     95.00 ± 10.46
Heart        38.33 ± 20.98    100.00 ± 0
Pima         62.45 ± 20.21    100.00 ± 0
Ionosphere   26.32 ± 16.38    100.00 ± 0
Sonar         9.22 ± 10.27     91.11 ± 22.47
Tic-tac       6.28 ± 5.76      99.52 ± 0.98
Average      33.92             97.60
4. Conclusion
In this paper we prove that a single neuron perceptron (SNP) can solve the XOR problem and can be a universal approximator. These features are achieved by extending the input pattern and by using the max operator. SNP with this extension is a novel computational model of a neural cell that is trained by excitatory and inhibitory rules. The new SNP architecture works with fewer learning weights; specifically, it generates only O(2n) learning weights and requires only O(2n) operations during each training iteration, where n is the size of the input vector. Furthermore, the universal approximation property is theoretically proved for this architecture. The source code of the proposed algorithm is accessible from http://www.bitools.ir/projects.html. In the numerical studies, SNP was used to classify 6 UCI datasets. The comparisons between the proposed SNP and the backpropagation MLP support the following conclusions. First, the number of learning parameters of SNP is much lower than that of the standard MLP. Second, in classification problems, the performance of the supervised excitatory and inhibitory learning algorithm is higher than that of gradient descent backpropagation (GDBP). Third, the lower computational complexity resulting from the fewer learning parameters and the faster training of the proposed SNP make it suitable for real-time classification. In short, SNP is a universal approximator with a simple structure, motivated by neurophysiological knowledge of the human brain. Based on the multiple case studies as well as the theoretical results in this report, we believe that SNP can be effectively used in pattern recognition and classification problems.
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014