

Research Article

Martial Arts Competitive Decision-Making Algorithm Based on Improved BP Neural Network

Huipeng Lv

Northeast Petroleum University, Daqing, Heilongjiang, 63318, China

Correspondence should be addressed to Huipeng Lv; lvhuipeng2021@163.com

Received 17 March 2021; Revised 5 April 2021; Accepted 13 April 2021; Published 11 May 2021

Academic Editor: Fazlullah Khan

Copyright © 2021 Huipeng Lv. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The main body of modern Chinese martial arts competition is strategy, while fighting has only recently entered sports competitions. Strategy and action correspond to each other and are practiced as a set. Therefore, constructing a decision-making algorithm for Chinese martial arts competition and perfecting the competition itself are intuitive and essential goals. The formulation of martial arts competition strategies requires scientific analysis of athletic data and more accurate predictions. Based on this observation, this paper draws on popular neural network technology to propose a novel BP neural network with additional momentum, elastic gradient descent, and an adaptive learning rate. The algorithm improves on the traditional BP neural network, which suffers from difficulties in selecting the learning step length, determining the size and direction of the weights, and controlling the learning rate. The experimental results show that the proposed algorithm improves both network scale and running time and can predict martial arts competition routines and support the formulation of scientific strategies.

1. Introduction

From the concept of martial arts [1, 2], martial art is a traditional sport with Chinese culture as its theoretical foundation, martial arts methods as its primary content, and routines, fighting, and exercises as its primary forms. The types of martial arts mainly include routines, fighting, actions, and exercises. In the context of the current globalization of competitive ideas, competitive sports are the general trend, so the routines and fighting discussed here refer to competitive events. The routines have taken shape, and only Sanda has become a formal fighting event. Gongfu is the essential skill of martial arts practice [3]. Both routines and fighting must be practiced; here, we focus on fighting. From the essential core of martial arts, athletes should be focused on fighting. For example, routines were formed by preserving some practical fighting techniques and movements and arranging them in fixed rules and sequences. Traces of fighting can also be clearly seen in some routine movements [4]. When routine movements are explained, the imagined movements strike the opponent's neck, eyes, ribs, crotch, and other parts to make the opponent lose combat effectiveness, preserving the original purpose of defeating the other party.

From Figure 1, we can see that the content of the routine is quite rich, while the content of the fighting is relatively sparse. Therefore, this paper argues that the Chinese competitive martial arts fighting system is not yet complete and that competitive fighting events have yet to be fully developed. From the comparison with the routine, it can be seen that Sanda includes the kicking, hitting, and throwing of martial arts, which can be compared with the various kinds of boxing in the routine [5]. The movement form of pushing hands should be regarded as the transition from the gong method to fighting, being a contest of both parties' strength. By comparison, the short soldier is likewise thin against the four types of equipment in the routine: long, short, double, and soft. Besides, strategic martial art is still the primary manifestation of martial arts.

However, the strategies of strategic martial arts are ever-changing, with various forms and styles [6]. The formulation of scientific and reasonable martial arts competition strategies can improve the level of competition. The rapid development of neural networks [7-9] and deep learning

Hindawi, Journal of Healthcare Engineering, Volume 2021, Article ID 9920186, 8 pages, https://doi.org/10.1155/2021/9920186

[10, 11] provides us with new ideas for formulating martial arts strategies. Because artificial neural networks do not need the mathematical equations of the input-output mapping to be determined in advance, they can learn the underlying rules through training alone [12], taking input values and producing results closest to the expected output. The core by which an artificial neural network realizes its function as an intelligent information-processing system is its algorithm. The BP neural network is a multilayer feed-forward network trained by error backpropagation [13, 14].

Based on the above observations, this paper combines popular neural network technology to propose a novel BP neural network with additional momentum, elastic gradient descent, and an adaptive learning rate. The algorithm improves on the traditional BP neural network, which suffers from difficulties in selecting the learning step length, determining the size and direction of the weights, and controlling the learning rate. The experimental results show that the proposed algorithm improves both network scale and running time and can predict martial arts competition routines and support the formulation of scientific strategies.

The following are the main contributions of this paper:

(1) A novel additional momentum-elastic gradient descent-adaptive learning rate BP neural network is proposed. The algorithm addresses the traditional BP neural network's difficulties in determining the learning step size and the size and direction of the weights, as well as its hard-to-control learning rate, among other issues.

(2) This paper preliminarily constructs a martial arts competitive strategy algorithm, introduces a neural network algorithm for martial arts strategy, and references related parts of competitive sports.

(3) We also constructed a dataset and conducted experiments. The experimental results show that the method in this paper is superior to some current advanced methods.

The rest of the paper is organized as follows. In Section 2, a background study is given, followed by the methodology in Section 3. In Section 4, the experimental setup and a discussion of the results are presented, followed by the conclusion in Section 5.

2. Background

The concept of martial arts is defined in "Cihai." Martial arts, also known as "Wuyi" and "National Martial Arts," are traditional Chinese sports based on offensive and defensive combat actions such as kicking, beating, falling, holding, smashing, hitting, and stabbing, composed into particular movement patterns. They are divided into two forms, routine and confrontation: the former is divided into boxing and equipment, while the latter can be divided into Sanshou, push hands, long soldiers, short soldiers, and so on [15]. According to the characteristics of sports and technical forms, there are internal and external schools, divided into long boxing and short boxing. The definition of sports is also explained in Cihai: physical exercise [16] is the primary method, combined with natural factors such as sunlight, air, water, and hygiene measures, in organized and planned exercise of body and mind, a type of social activity. Its purpose is to strengthen physical fitness, improve sports skills, enrich cultural life, and cultivate moral sentiment; it is an integral part of social and cultural education. In earlier times, martial arts were fighting skills used to kill opponents and protect oneself. Since the founding of New China, the combative nature of martial arts has been replaced by modern sports [17, 18], and martial arts have been defined as a sport. Competitiveness refers to competition in sports, and skill refers to sports skills; competitive sports have the characteristics of competition, standardization, fairness, clustering, openness, and appreciation.

Chinese martial arts [19] have a long history. Generally, the historical development of martial arts is divided into three stages: ancient, early modern, and modern. The essence of ancient Chinese martial arts is fighting, so the historical development of martial arts is the historical development of fighting. Wushu routines came into being after fighting, and routines were created in turn for better fighting. In modern times, competitive martial arts adopted "high, difficult, beautiful, new"

Figure 1: The schematic diagram of martial arts content classification. (Strategic martial arts comprise boxing, divided into optional, prescribed, and traditional boxing, and instruments, divided into long, short, double, and soft; fighting sports comprise Sanda, push hands, and the short pawn.)


as the criterion, martial arts were divided into two types: routines and fighting. The Confucian founder Confucius emphasized both the civil and the martial, with "benevolence" among the core theoretical foundations of martial arts [20]. Mo Di, founder of the Mohist school, advocated universal love and nonaggression, supported force, and used force to resist all aggression and injustice; he became the representative and origin of the "chivalrous" tradition of later generations and a typical chivalrous spirit in history. Some thoughts of the Taoist school have a significant influence on martial arts thought, especially the Taoist understanding of the origin of the universe, such as the understanding of Tao and Qi, the unity of nature and man, and the theory of rigidity and softness, which established the theory and philosophy of martial arts. There are also many places where martial arts absorb the doctrines of military strategists. For example, false moves and feint attacks are drawn from "soldiers are never tired of deceit," and the understanding of tactics and the grasp of timing are likewise borrowed from strategists. The five-elements and yin-yang theory developed from the Book of Changes contributed greatly to the establishment of Tai Chi, Xingyiquan, Baguazhang, and other types of boxing. This simple dialectics has enriched the theory of martial arts and led martial arts onto a broad road.

The artificial neural network is a nonlinear processing system with an input layer, hidden layer, and output layer, formed by many interconnected processing units. It is a mathematical model inspired by the neural network functions of humans or animals for processing information, and its application resembles the recognition, memory, data analysis, and information processing of the human brain [21]. In processing information, the input layer receives different input quantities x, each with an associated weight w. The input enters the network from the input layer, passes through the stimulation of the weight threshold and the weighted summation, and produces the output y. The output of each layer, under the perception of the activation function, becomes the input to the following layer. This continues until the input quantity is emitted from the output layer such that the error between the output value and the expected value is smallest. That is, the network has three essential elements: connection weights, a summation unit, and an activation function. The processing unit model is shown in Figure 2.

There are many classification standards for artificial neural networks. Based solely on network structure, ANNs can be classified into feed-forward networks and feedback networks. Feed-forward neural networks include single-layer and multilayer variants. Throughout the ANN application field, feed-forward neural networks are the most frequent and can successfully achieve the expected results. A feed-forward neural network's latest output is not related to its previous output state and is determined only by the current excitation function and weight matrix [22]. Compared with the feed-forward neural network, the feedback neural network is complex and advanced: its latest output is closely related to both the previous output and the latest input, and its intelligence is evident in its associative memory function, making it worthy of promotion and application under the guidance of artificial intelligence. The most commonly used and classic feed-forward neural network is the error backpropagation neural network, referred to as the BP neural network.

3. Methodology

In this section, the methodology is discussed in detail.

3.1. BP Neural Network. The BP neural network, the error backpropagation neural network, is the most widely used artificial neural network model. It was proposed by Kurkova in 1985, giving practical guidance for the learning and training problems of multilayer neural networks together with a comprehensive mathematical analysis and derivation [21]. The BP neural network's structure is simple, with mainly three layers: the input layer, hidden layer, and output layer, as shown in Figure 3.

The three-layer BP neural network is shown in Figure 3. In this figure, the number of input nodes is M, the number of output nodes is L, and the initial number of hidden nodes is u. The BP neural network's actual input is x_1, x_2, ..., x_M; the actual output is y_1, y_2, ..., y_L; the target output is T_k; and the output error is e_k. The connection weight from the input layer to the hidden layer is w_{ij} (such weights can also be called a weight vector), and the connection weight from the hidden layer to the output layer is w_{jk}; the size and direction of these two weights may differ. The improved BP neural network can modify them so that they develop toward the desired values.

The BP neural network can be vividly understood as a supervised teacher-student learning relationship. It consists of a first stage of forward signal propagation and a second stage of error backpropagation [22]. In forward propagation, the given signal first enters the network from the input layer, is pushed through the hidden layer to the output layer, and is finally emitted by the output layer, completing the first stage. The network weights are held constant during this transmission, and the state of the neurons in each layer is constrained only by the neurons of the previous layer. Suppose the result transmitted from the output layer is far from the expected result. In that case, the network immediately enters the second stage, the error backpropagation learning process, which is a process of reducing the error. The difference between the actual output value and the expected value is used as a new

Figure 2: Artificial neural network model diagram (inputs x_1, ..., x_n with weights W_1, ..., W_n feed a summation unit with threshold θ, whose result passes through the activation function Φ to give the output).


signal, input at the output layer and backpropagated layer by layer to the input layer, and the process continues until the final actual output is close to the expected value, that is, the error is minimized. During backpropagation, it is mainly the weights that are modified to obtain the desired result; changes in other factors also play a supporting role.

3.2. Improved BP Neural Network. Aiming at the shortcomings and limitations of the steepest-descent BP neural network and the momentum BP neural network, a new algorithm is proposed: the additional momentum-elastic gradient descent-adaptive learning rate BP neural network. First, the selection of the learning step and of the weight size and direction is improved, and the learning rate and momentum terms are bounded. Then the improved algorithm and the first two BP neural networks are compared and analyzed in simulation experiments, computing the accuracy, error, and prediction time, among other indicators. The new algorithm shows higher accuracy, smaller errors, fewer iterations, and better overall performance, laying a strong foundation for the prediction of martial arts strategies later in this paper.

3.2.1. SDBP Neural Network. The SDBP neural network's learning process [23] also consists of two stages: forward signal propagation and error backpropagation. The weight w_{ij} and the threshold a_j from the input layer to the hidden layer are continuously adjusted, and the weight w_{jk} and threshold b_k from the hidden layer to the output layer are modified, until the output result is almost the same as the expected result. The SDBP neural network structure is shown in Figure 4, assuming the numbers of nodes in the input, hidden, and output layers are M, u, and L. The calculation formulas for each layer are as follows.

Let k be the number of iterations; the weight update formula is

w_{k+1} = w_k − α g_k,  (1)

where w_k is the weight at the kth iteration and α is the learning rate, which can be set reasonably during training; the default initial learning rate is 0.01.


g_k = ∂E_k / ∂w_k,  (2)

where g_k is the gradient vector of the kth iteration and E_k is the total error performance function of the network output at the kth iteration. The output formula of the hidden layer is

H_j = Σ_{i=1}^{M} w_{ij} x_i + a_j,  (3)

where H_j is the hidden layer's actual output for the input vector x, a_j is the threshold from the input layer to the hidden layer, and w_{ij} is the weight from the input layer to the hidden layer. The output formula of the output layer is as follows:

O_k = Σ_{j=1}^{u} H_j w_{jk} + b_k,  (4)

where O_k is the output layer's actual output, b_k is the threshold from the hidden layer to the output layer, and w_{jk} is the weight from the hidden layer to the output layer. The total error is calculated as follows:

E_k = (1/2) Σ_{i=1}^{L} (Y_i − O_i)²,  (5)

where E_k is the total error performance function of the network output at the kth iteration, Y_i is the expected value at the ith output node, and O_i is the actual output of the ith output node.

From equations (1)-(5) and each layer's function, the gradient g_k of the total error surface at the kth iteration can be obtained and substituted into equation (1). The weight of each layer is revised so that the absolute error is reduced, developing step by step toward the minimum and thus meeting the expected error.

Figure 4: The structure of the SDBP neural network (inputs x_1, ..., x_n pass through the weights w_{ij} with thresholds a_j to the hidden layer, then through the weights w_{jk} with thresholds b_k to the outputs y_1, ..., y_k; each unit applies an activation function f).

Figure 3: The structure of the BP neural network (input layer, hidden layer, and output layer).


It can be seen from Figure 5 that there are many minimum points on the error surface, because the transfer function of the SDBP neural network is nonlinear.

3.2.2. MOBP Neural Network. The MOBP neural network introduces a momentum factor η (0 < η < 1) into the SDBP neural network:

Δw_{k+1} = η Δw_k − α(1 − η) ∂E_k/∂w_k,  (6)

where η is the momentum factor, w_k is the weight at the kth iteration, Δw_k is the weight correction at the kth iteration, α is the learning rate, and E_k is the total error performance function.

The new correction amount of the algorithm is closely related to, and restricted by, the previous correction result. Suppose the last correction was too large. In that case, the sign of the second term of equation (6) is opposite to that of the previous correction, reducing the current correction. If the last correction was too small, the second term of equation (6) has the same sign as the previous correction, so the correction is increased and the training speed of the network improves. Therefore, the MOBP neural network essentially increases the correction in the same gradient direction, that is, it adds "momentum" along the gradient by choosing the momentum factor η reasonably.

Compared with the SDBP neural network's learning rate, the MOBP neural network's learning rate can be increased to a greater extent without making network training impossible or paralyzed, for two reasons. On the one hand, when the correction becomes too large, the algorithm can shrink it, ensuring that the correction direction still maintains convergence. On the other hand, by introducing the momentum factor, the MOBP neural network reasonably increases the correction along the gradient direction as the learning rate grows. It follows that, while preserving the algorithm's stability, the MOBP neural network's learning time is shortened and its convergence speed is dramatically improved.
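Equation (6) can be sketched for a single weight. The quadratic error function, the iteration count, and all constants below are illustrative assumptions, not values from the paper:

```python
def mobp_updates(grad, steps=200, alpha=0.1, eta=0.9, w0=5.0):
    """Momentum (MOBP) weight updates per equation (6):
    dw_{k+1} = eta * dw_k - alpha * (1 - eta) * dE/dw,
    followed by w_{k+1} = w_k + dw_{k+1}.
    grad(w) supplies the derivative dE/dw at the current weight."""
    w, dw = w0, 0.0
    for _ in range(steps):
        dw = eta * dw - alpha * (1.0 - eta) * grad(w)
        w += dw
    return w

# E(w) = w^2 has gradient 2w and its minimum at w = 0; the momentum term
# smooths successive corrections while driving w toward that minimum.
w_final = mobp_updates(lambda w: 2.0 * w)
```

Because each new correction mixes in a fraction η of the previous one, corrections along a consistent gradient direction reinforce one another, which is the acceleration effect described above.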

3.2.3. Limitations of the BP Neural Network. When artificial neural network modeling is applied in related fields, most models use the BP neural network, the SDBP neural network, the MOBP neural network, or other improved forms. However, the BP neural network still has its limitations:

(1) Limitations of the network itself: the BP neural network is a static transmission network. It can only safely complete nonlinear static mappings and has no information-processing capability that changes dynamically with time or space.

(2) Selection of initial parameters: it is difficult for the BP neural network to select initial parameters, especially the size and direction of the initial weights and thresholds. Although there are correction formulas, once the initial values are determined, the network's convergence behavior is effectively determined as well. When network learning ends, it will generally converge to the local extremum closest to the initial value, directly affecting the gap between the local extremum's error and the ideal error.

(3) The choice of learning step length: the training step length in the BP neural network is a fixed value, and the path left by the gradient method zigzags when approaching the minimum point. Therefore, to reduce the error, the training step length should be set as small as possible; but if the step length is too small, the training time increases, which makes establishing the training step length difficult. Obtaining good learning results generally requires rich practical experience or repeated adjustment of the step length.

(4) The choice of learning rate: if the learning rate is too large, learning becomes unstable; on the contrary, if it is too small, the learning time increases and the network may become paralyzed. At present, no fast and principled method has been found to resolve the difficulty of selecting the BP neural network's learning rate.

3.3. Improved BP Neural Network. Because of the limitations of the SDBP, MOBP, and plain BP neural networks, this paper first improves the selection of the learning step size and of the size and direction of the weights, then improves the selection of the learning rate, yielding the new algorithm: the BP neural network with additional momentum, elastic gradient descent, and an adaptive learning rate.

The selection of the learning step length in the BP neural network is essential. If the step length is too large, the network converges quickly but learning becomes unstable; too small a step length avoids instability, but convergence slows. To resolve this contradiction, an "additional momentum term" is added to the SDBP and MOBP neural networks: in the correction of each

Figure 5: Error surface and error contour plot (sum squared error as a function of weight W and bias B).


layer's weights or thresholds, a correction value proportional to the previous weight or threshold change is added, and the new weight (or threshold) correction is recalculated according to the backpropagation method. The specific adjustment formulas are as follows:

Δw_{ij}(n) = λ Δw_{ij}(n − 1) + ζ δ_j(t) v_i(n),  (7)

Δb_i(n) = λ Δb_i(n − 1) + ζ δ_i(t),  (8)

where n is the number of trainings, λ is the momentum term, ζ is the learning step size, Δw_{ij}(n) is the weight correction of the nth training, Δb_i(n) is the threshold correction of the nth training, and δ_j(t)v_i(n) is the gradient of the nth training. Writing equation (7) as a time series with variable t ∈ [0, n], equation (7) can be regarded as a first-order difference equation in Δw_{ij}(n), whose solution is as follows:

Δw_{ij}(n) = ζ Σ_{t=0}^{n} λ^{n−t} δ_j(t) v_i(n),  δ_j(t) v_i(n) = −∂E(n)/∂w_{ij}(n),  (9)

where δ_j(t)v_i(n) is understood as the gradient vector of the nth training; then

Δw_{ij}(n) = −ζ Σ_{t=0}^{n} λ^{n−t} ∂E(n)/∂w_{ij}(n).  (10)

In the training process of the additional momentum method, to prevent the corrected weights from having an excessive influence on the error, the momentum term λ needs to be gated; that is, the judgment condition of the additional momentum method during training is

λ = 0, if E(n) > 1.04 E(n − 1);
λ = 0.95, if E(n) < E(n − 1);
λ unchanged, otherwise.  (11)

However, the hidden layer of the additional momentum BP neural network generally uses an S-type (sigmoid) activation function, often called a "squashing" function because it compresses an infinite input range into a limited output range. When the input is large, the slope approaches zero, which makes the gradient amplitude in the algorithm tiny and the correction process meaningless. In response to this problem, the elastic gradient descent method is introduced as a further improvement. The elastic gradient descent method does not study the amplitude of the partial derivative; only its sign is used to determine the weight's update direction, while a separate "update value" determines the size of the weight change. If the sign of the partial derivative of the objective function with respect to a weight is unchanged over two consecutive iterations, the corresponding update value is increased; otherwise, it is decreased. Using the additional momentum method and the elastic gradient descent method to improve the BP neural network at the same time constrains the range and size of the corrections, the number of weights, and the direction of the weight updates. It can also significantly prevent the training process from falling into a local minimum and speed up convergence.

4. Experiments

4.1. Experimental Environment. Since the experiments in this paper need to train a deep neural network, the scale is large, the structure is complex, and the computation is substantial. The programming language used is Python 3.6, the deep learning framework is Keras 2.1.5, and the IDE for program deployment is PyCharm. All experiments are conducted in the same environment, on a desktop PC with an Intel Core i7-8700 processor and an NVIDIA GeForce GTX 1080 Ti GPU.

4.2. Dataset Preprocessing. The dataset used in this paper was collected from martial arts competitions. Singular and unstable values were removed, and training and test sets were created.

4.3. Experimental Results. All experiments are performed on the same platform, and the experimental results are expressed in terms of error rate and prediction time. The experimental results comparing the algorithms are shown in Table 1 below.

It can be seen from Table 1 that the MOBP neural network adds a momentum term to the SDBP neural network to modify the weights and thresholds; its test error rate is about 2 percentage points lower than that of the SDBP neural network. The new algorithm in this paper improves the selection of the learning step size and of the size and direction of the weights, and bounds the choice of momentum and learning rate, making it more efficient: its error rate is significantly lower than that of the other algorithms, by nearly 5 percentage points. From the comparison of error rates in Table 1, it is clear that the error rate of the new BP neural network is significantly lower than that of the other two BP neural networks, giving a better prediction effect.

It can be seen from Table 2 that the improvement of the MOBP neural network over the SDBP neural network is the introduction of a momentum term; that is, a larger learning rate can be used without paralyzing the learning process. The MOBP neural network always accelerates the correction in the same gradient direction, so it converges faster than the SDBP neural network and its learning time is shorter. The new algorithm further improves the selection of the initial learning step, the size and direction of the weights, and the momentum term, and gives clear criteria for selecting the learning rate. That is, the new algorithm can fine-tune the correction of the weights and increase the learning rate within a reasonable range, accelerating convergence and thus reducing the prediction time. The comparison of prediction times clearly shows that the new algorithm's prediction time is significantly lower than that of the other two types of algorithms.

6 Journal of Healthcare Engineering

5. Conclusion

The formulation of martial arts competitive strategies requires scientific analysis of competitive data and more accurate predictions. Based on this observation, this paper combines popular neural network technology and proposes a novel additional momentum-elastic gradient descent-adaptive learning rate BP neural network. The algorithm improves on the traditional BP neural network in areas such as selecting the learning step size, the difficulty of determining the weights' size and direction, and the learning rate being hard to control. The experimental results show that this paper's algorithm achieves good improvements in both network scale and running time and can predict martial arts competition routines and help formulate scientific strategies.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] V. Romanenko, L. Podrigalo, W. J. Cynarski et al., "A comparative analysis of the short-term memory of martial arts athletes of different levels of sportsmanship," Journal of Martial Arts Anthropology, vol. 20, no. 3, pp. 18–24, 2020.

[2] A. Harwood-Gross, R. Feldman, O. Zagoory-Sharon, and Y. Rassovsky, "Hormonal reactivity during martial arts practice among high-risk youths," Psychoneuroendocrinology, vol. 121, Article ID 104806, 2020.

[3] R. B. Santos Jr., A. C. Utter, S. R. McAnulty et al., "Weight loss behaviors in Brazilian mixed martial arts athletes," Sport Sciences for Health, vol. 16, no. 1, pp. 117–122, 2020.

[4] X. Ning, W. Li, and W. Liu, "A fast single image haze removal method based on human retina property," IEICE Transactions on Information and Systems, vol. E100.D, no. 1, pp. 211–214, 2017.

[5] Z. L. Yang, S. Y. Zhang, Y. T. Hu, Z. W. Hu, and Y. F. Huang, "VAE-Stega: linguistic steganography based on variational auto-encoder," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 880–895, 2020.

[6] X. Ning, W. Li, and J. Xu, "The principle of homology continuity and geometrical covering learning for pattern recognition," International Journal of Pattern Recognition and Artificial Intelligence, vol. 32, no. 12, Article ID 1850042, 2018.

[7] W. Cai and Z. Wei, "Remote sensing image classification based on a cross-attention mechanism and graph convolution," IEEE Geoscience and Remote Sensing Letters, pp. 1–5, 2020.

[8] X. Ning, W. Li, and L. Sun, "Age estimation method based on generative adversarial networks," DEStech Transactions on Computer Science and Engineering, 2017.

[9] W. Cai, B. Liu, Z. Wei, M. Li, and J. Kan, "TARDB-Net: triple-attention-guided residual dense and BiLSTM networks for hyperspectral image classification," Multimedia Tools and Applications, vol. 80, no. 2, pp. 1–22, 2021.

[10] N. Xin, L. Weijun, L. Haoguang, and L. Wenjie, "Uncorrelated locality preserving discriminant analysis based on bionics," Journal of Computer Research and Development, vol. 53, no. 11, p. 2623, 2016.

[11] Z. Wang, C. Zou, and W. Cai, "Small sample classification of hyperspectral remote sensing images based on sequential joint deep learning model," IEEE Access, vol. 8, pp. 71353–71363, 2020.

[12] X. Zhang, Y. Yang, Z. Li, X. Ning, Y. Qin, and W. Cai, "An improved encoder-decoder network based on strip pool method applied to segmentation of farmland vacancy field," Entropy, vol. 23, no. 4, p. 435, 2021.

[13] W. Cai and Z. Wei, "PiiGAN: generative adversarial networks for pluralistic image inpainting," IEEE Access, vol. 8, pp. 48451–48463, 2020.

[14] X. Ning, X. Wang, S. Xu et al., "A review of research on co-training," Concurrency and Computation: Practice and Experience, 2021, in press.

[15] I. Portela-Pino, A. Lopez-Castedo, M. J. Martínez-Patiño, T. Valverde-Esteve, and J. Domínguez-Alonso, "Gender differences in motivation and barriers for the practice of physical exercise in adolescence," International Journal of Environmental Research and Public Health, vol. 17, no. 1, p. 168, 2020.

[16] A. De Sire, R. de Sire, V. Petito et al., "Gut-joint axis: the role of physical exercise on gut microbiota modulation in older people with osteoarthritis," Nutrients, vol. 12, no. 2, p. 574, 2020.

[17] V. Malkin, S. Serpa, A. Garcia-Mas, and E. Shurmanov, "A new paradigm in modern sports psychology," Revista de Psicología del Deporte (Journal of Sport Psychology), vol. 29, no. 2, pp. 149–152, 2020.

[18] R Summerley ldquoe development of sports a comparativeanalysis of the early institutionalization of traditional sports

Table 1 Test error rate of different types of martial arts BP neuralnetwork ()

Type SDBP MOBP OursOptional boxing 623 485 137Prescribed boxing 735 597 178Traditional boxing 769 623 203Short instrument 823 514 199Long instrument 658 469 187Double instrument 769 528 201Soft instrument 859 499 189

Table 2 e prediction time of BP neural network for differentmartial arts(s)

Type SDBP MOBP OursOptional boxing 373 236 172Prescribed boxing 345 245 183Traditional boxing 359 229 152Short instrument 365 238 163Long instrument 359 249 152Double instrument 375 258 172Soft instrument 359 249 183

Journal of Healthcare Engineering 7

and E-sportsrdquo Games and Culture vol 15 no 1 pp 51ndash722020

[19] Q. S. Han, M. Theeboom, and D. Zhu, "Chinese martial arts and the Olympics: analysing the policy of the International Wushu Federation," International Review for the Sociology of Sport, 2020, in press.

[20] C. Hongtu and Z. Cai, "The formal evolution of Chinese martial arts action movies and the spread of core values," The Frontiers of Society, Science and Technology, vol. 2, no. 4, 2020.

[21] V. Kurkova, "Kolmogorov's theorem and multilayer neural networks," Neural Networks, vol. 5, no. 3, pp. 501–506, 1992.

[22] W. Jin, Z. J. Li, L. S. Wei, and H. Zhen, "The improvements of BP neural network learning algorithm," in Proceedings of WCC 2000-ICSP 2000: 5th International Conference on Signal Processing, 16th World Computer Congress, vol. 3, pp. 1647–1649, IEEE, Beijing, China, August 2000.

[23] S. W. Piche, "Steepest descent algorithms for neural network controllers and filters," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 198–212, 1994.



Artificial neural networks [10, 11] provide us with new ideas for formulating martial arts strategies. Because artificial neural networks do not need the mathematical equations of the mapping relationship between input and output to be determined in advance, they can learn the underlying rules through training alone [12]: given an input value, they produce the result closest to the expected output value. The core of an artificial neural network, as an intelligent information processing system, is its algorithm. The BP neural network is a multilayer feed-forward network trained by error backpropagation [13, 14].

Based on the above observations, this paper combines popular neural network technology to propose a novel additional momentum-elastic gradient descent BP neural network with an adaptive learning rate. The algorithm improves on the traditional BP neural network in areas such as selecting the learning step length, the difficulty of determining the size and direction of the weights, and the learning rate being hard to control. The experimental results show that this paper's algorithm improves both network scale and running time and can predict martial arts competition routines and help formulate scientific strategies.

The following are the main contributions of this paper:

(1) A novel additional momentum-elastic gradient descent-adaptive learning rate BP neural network is proposed. This algorithm addresses problems of the traditional BP neural network, such as the difficulty of determining the learning step size and the size and direction of the weights, and the learning rate being hard to control.

(2) This paper preliminarily constructs a martial arts competitive strategy algorithm, introduces a neural network algorithm for martial arts strategy, and draws on related parts of competitive sports.

(3) We also constructed a dataset and conducted experiments. The experimental results show that the method in this paper is superior to some current advanced methods.

The rest of the paper is organized as follows. In Section 2, a background study is given, followed by the methodology in Section 3. In Section 4, the experimental setup and result discussion are presented, followed by the conclusion in Section 5.

2. Background

The concept of martial arts is defined in "Cihai". Martial arts, also known as "Wuyi" and "National Martial Arts", are traditional Chinese sports based on offensive and defensive combat actions such as kicking, beating, falling, holding, smashing, hitting, and stabbing, organized into particular movement patterns. They are divided into two forms, routine and confrontation: the former is divided into boxing and equipment, while the latter can be divided into Sanshou, push hands, long soldiers, short soldiers, and so on [15]. According to the characteristics of sports and technical forms, there are internal and external schools, divided into long boxing and short boxing. The definition of sports is also explained in Cihai: physical exercise [16] is the primary method, combined with natural factors such as sunlight, air, water, and hygiene measures, for organized and planned exercise of the body and mind; it is a type of social activity. Its purpose is to strengthen physical fitness, improve sports skills, enrich cultural life, and cultivate moral sentiment, and it is an integral part of social and cultural education. In earlier times, martial arts were fighting skills used to kill opponents and protect oneself. Since the founding of New China, the combative nature of martial arts has been replaced by modern sports [17, 18], and martial arts have been defined as a sport. Competitiveness refers to competition in sports, and skill refers to sports skills; competitive sports have the characteristics of competition, standardization, fairness, clustering, openness, and appreciation.

Chinese martial arts [19] have a long history. Generally, the historical development of martial arts is divided into three stages: ancient, modern, and contemporary. The essence of ancient Chinese martial arts is fighting, so the historical development of martial arts is the historical development of fighting. Wushu routines came into being after fighting, and the creation of routines is also for better fighting. When modern competitive martial arts took "high, difficult, beautiful, new"

Figure 1: The schematic diagram of martial arts content classification. (The diagram divides martial arts content into strategic martial arts, comprising boxing (optional, prescribed, and traditional) and instruments (short, long, double, and soft), and fighting sports, comprising Sanda, push hands, and short soldiers.)


as the criterion, it was divided into two types: martial arts routines and fighting. Confucius, the founder of Confucianism, emphasized both the civil and the martial; "benevolence" is part of the theoretical awareness of martial arts [20] and one of its core contents. Mo Di, the founder of the Mohist school, advocated universal love and nonaggression, supported force, and used force to resist all aggression and injustice; he became the representative and origin of the "chivalrous" tradition of later generations and a typical chivalrous spirit in history. Some thoughts of the Taoist school have a significant influence on martial arts thought, especially the Taoist understanding of the origin of the universe, such as the understanding of Tao and Qi, the unity of nature and man, and the theory of rigidity and softness, which established the theory and philosophy of martial arts. There are also many places where martial arts absorb the doctrines of military strategists. For example, false moves and feint attacks are drawn from "soldiers are not tired of deceit", and the understanding of tactics and the grasp of timing are borrowed from strategists. The yin-yang and five-elements theory developed from the Book of Changes also contributed greatly to the establishment of Tai Chi, Xingyiquan, Baguazhang, and other types of boxing. These simple dialectics have enriched the theory of martial arts and led martial arts onto a broad road.

The artificial neural network is a nonlinear processing system with an input layer, hidden layer, and output layer, formed by many interconnected processing units. It is a mathematical model inspired by the neural network functions of humans or animals to process information, and its application is similar to the recognition, memory, data analysis, and information processing of the human brain [21]. In processing information, the input layer has different input quantities x, and each input quantity x has an associated weight w corresponding to it. The input enters the neural network from the input layer, passes through the stimulation of the weight, threshold, and weighted summation, and produces the output y. Each layer's output, perceived through the activation function, then plays its role in the following layer. This continues until the quantity is output from the output layer and the error between the output value and the expected value is smallest. That is, the network has three essential elements: connection weights, a summation unit, and an activation function. The processing unit model is shown in Figure 2.
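The processing-unit model just described (weighted summation plus threshold, passed through an activation function) can be sketched as follows; the sigmoid activation and the sign convention for the threshold θ are assumptions made for illustration:

```python
import numpy as np

def neuron_output(x, w, theta):
    """Processing-unit model of Figure 2: the summation unit forms a
    weighted sum of the inputs plus the threshold, and the result is
    squeezed by an S-type (sigmoid) activation function phi(u)."""
    u = np.dot(w, x) + theta          # summation unit
    return 1.0 / (1.0 + np.exp(-u))  # activation function
```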

There are quite a few classification standards for artificial neural networks. Based solely on network structure, ANNs can be classified into feed-forward networks and feedback networks. Feed-forward neural networks include single-layer and multilayer feed-forward networks; throughout the ANN application field, they are the most frequently used and can successfully achieve the expected results. A feed-forward neural network's latest output is not related to its previous output state and is determined only by the current excitation function and weight matrix [22]. Compared with the feed-forward neural network, the feedback neural network is complex and advanced: its latest output is closely related to both its previous output and its latest input, and its intelligence is evident in its associative memory function, making it worthy of promotion and application under the guidance of artificial intelligence. The most commonly used and classic feed-forward neural network is the error backpropagation neural network, referred to as the BP neural network.

3. Methodology

In this section, the methodology is discussed in detail.

3.1. BP Neural Network

The BP neural network, that is, the error backpropagation neural network, is the most widely used artificial neural network model. It was proposed by Kurkova in 1985 and gave practical guidance for solving the learning and training problems of multilayer neural networks, with a comprehensive mathematical analysis and derivation [21]. The BP neural network's structure is simple, with three main layers: the input layer, hidden layer, and output layer, as shown in Figure 3.

The three-layer BP neural network is shown in Figure 3. In this figure, the number of input nodes is M, the number of output nodes is L, and the number of hidden-layer nodes is u. The BP neural network's actual inputs are x1, x2, ..., xM, the actual outputs are y1, y2, ..., yL, the target output is Tk, and the output error is ek. The connection weight from the input layer to the hidden layer is w_ij (such a weight can also be called a weight vector), and the connection weight from the hidden layer to the output layer is w_jk; the size and direction of these two weights may differ, and the improved BP neural network can modify them so that they develop toward the desired size.

The BP neural network can be vividly understood as a supervised, teacher-student learning relationship. It consists of the forward propagation of the signal in the first stage and the backpropagation of the error in the second stage [22]. In the forward propagation, the given signal first enters the network from the input layer, is pushed through the hidden layer to the output layer, and is finally emitted by the output layer, completing the first stage. The network's weights are held constant during this forward pass, and the state of the neurons in each layer is constrained only by the neurons of the previous layer. If the result transmitted from the output layer is far from the expected result, the process immediately enters the second stage, the error backpropagation learning process, which is a process of reducing the error. The actual output value differs from the expected value, and the difference is used as a new

Figure 2: Artificial neural network model diagram. (Inputs x1, x2, ..., xn are weighted by w1, w2, ..., wn, summed with the threshold θ in the summation unit, and passed through the activation function Φ(·) to produce the output.)


signal, which is backpropagated from the output layer, layer by layer, back to the input layer. This continues until the final actual output is close to the expected value; that is, the error is minimized. In the backpropagation process, the weights are the main quantities modified to obtain the desired result, while changes in the other factors play a supporting role.

3.2. Improved BP Neural Network

Aiming at the shortcomings and limitations of the steepest descent BP neural network and the momentum BP neural network, a new algorithm is proposed: the additional momentum-elastic gradient descent-adaptive learning rate BP neural network. First, the selection of the learning step and of the weight size and direction is improved, and the learning rate and momentum terms are constrained. Then the improved new algorithm and the first two BP neural networks are compared and analyzed through simulation experiments, and the accuracy, error, prediction time, and other indicators are calculated, showing that the new algorithm has higher accuracy, smaller errors, fewer iterations, and better overall performance. Finally, the improved BP neural network lays a strong foundation for predicting martial arts strategies in the next chapter of this paper.

3.2.1. SDBP Neural Network

The SDBP neural network's learning process [23] also consists of two stages: the forward propagation of the signal and the backpropagation of the error. The weights w_ij and thresholds a_j from the input layer to the hidden layer, and the weights w_jk and thresholds b_k from the hidden layer to the output layer, are continuously adjusted until the output is almost the same as the expected result. The SDBP neural network structure is shown in Figure 4, assuming the numbers of nodes in the input layer, hidden layer, and output layer are M, u, and L, respectively. The calculation formulas for each layer are as follows.

Let k be the number of iterations; the weight update formula is

w_{k+1} = w_k - \alpha g_k,   (1)

where w_k is the weight of the k-th iteration and \alpha is the learning rate, which can be set appropriately during training; the initial default learning rate is 0.01.


g_k = \partial E_k / \partial w_k,   (2)

where g_k is the gradient vector of the k-th iteration and E_k is the total error performance function of the neural network output at the k-th iteration. The output formula of the hidden layer is

H_j = \sum_{i=1}^{n} w_{ij} x_i + a_j,   (3)

where H_j is the hidden layer's actual output, a_j is the threshold from the input layer to the hidden layer, and w_{ij} is the weight from the input layer to the hidden layer. The output formula of the output layer is as follows:

O_k = \sum_{j=1}^{l} H_j w_{jk} + b_k,   (4)

where O_k is the output layer's actual output, b_k is the threshold from the hidden layer to the output layer, and w_{jk} is the weight from the hidden layer to the output layer. The error calculation formula is as follows:

E_k = (1/2) \sum_{i=1}^{k} (Y_i - O_i)^2,   (5)

where E_k is the total error performance function of the neural network output at the k-th iteration, Y_i is the expected value of the i-th output, and O_i is the actual output of the i-th output.

From equations (1)-(5) and each layer's function, the gradient g_k of the total error surface at the k-th iteration can be obtained and substituted into equation (1). The weights of each layer are revised to ensure that the absolute error decreases along the descent direction until it reaches the minimum, in line with the expected error.
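As an illustration of equations (1)-(5), the following minimal sketch follows the printed formulas literally (linear layers, steepest descent applied only to the hidden-to-output weights for brevity); the toy layer sizes, the initialization, and the restriction to one weight matrix are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes in the notation of Figures 3 and 4: M inputs, u hidden, L outputs.
M, u, L = 3, 4, 2
w_ih = rng.normal(size=(u, M)) * 0.1   # weights w_ij, input -> hidden
a = np.zeros(u)                        # thresholds a_j
w_ho = rng.normal(size=(L, u)) * 0.1   # weights w_jk, hidden -> output
b = np.zeros(L)                        # thresholds b_k
alpha = 0.01                           # default learning rate

def forward(x):
    H = w_ih @ x + a                   # eq. (3): hidden-layer output H_j
    O = w_ho @ H + b                   # eq. (4): output-layer output O_k
    return H, O

def sdbp_step(x, Y):
    """One steepest-descent update of w_jk using eqs (1), (2), and (5)."""
    global w_ho
    H, O = forward(x)
    E = 0.5 * np.sum((Y - O) ** 2)     # eq. (5): total error E_k
    g = -np.outer(Y - O, H)            # eq. (2): gradient dE/dw_jk
    w_ho = w_ho - alpha * g            # eq. (1): w_{k+1} = w_k - alpha * g_k
    return E
```

Repeatedly calling `sdbp_step` on the same sample drives the error E downward, which is the steepest-descent behavior the text describes.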

Figure 4: The structure of the SDBP neural network. (Inputs x1, ..., xn pass through activation functions f(·) in the hidden and output layers; w_ij and a_j are the input-to-hidden weights and thresholds, w_jk and b_k are the hidden-to-output weights and thresholds, and the outputs are y1, ..., yk.)

Figure 3: The structure of the BP neural network. (Input layer, hidden layer, and output layer.)


It can be seen from Figure 5 that there are many minimum points on the error surface, because the transfer function of the SDBP neural network is nonlinear.

3.2.2. MOBP Neural Network

The MOBP neural network introduces a momentum factor η (0 < η < 1) into the SDBP neural network:

\Delta w_{k+1} = \eta \Delta w_k - \alpha (1 - \eta) \partial E_k / \partial w_k,   (6)

where η is the momentum factor, w_k is the weight of the k-th iteration, Δw_k is the weight correction of the k-th iteration, α is the learning rate, and E_k is the total error performance function.

The algorithm's new correction amount is closely related to, and constrained by, the previous correction result. If the last correction was too large, the sign of the second term of equation (6) is opposite to that of the previous correction amount, reducing the current correction; if the last correction was too small, the second term of equation (6) has the same sign as the previous correction, so the correction is increased and the network's training speed improves. Therefore, the MOBP neural network essentially increases the correction in the same gradient direction; that is, it increases the "momentum" in the same gradient direction by reasonably increasing the momentum factor η.

Compared with the SDBP neural network's learning rate, the MOBP neural network's learning rate can be increased to a greater extent, yet the increased learning rate will not make network training impossible or paralyzed, for two reasons. On the one hand, when the degree of correction is too large, the algorithm can reduce the correction, ensuring that the correction direction still maintains the direction of convergence. On the other hand, by introducing the momentum factor, the MOBP neural network reasonably increases the correction amount along the gradient direction, which allows the learning rate to be raised. From this it can be concluded that, while the algorithm's stability is ensured, the MOBP neural network's learning time is shortened and the convergence speed is dramatically improved.
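A minimal sketch of the momentum update of equation (6); the minus sign on the gradient term (descent direction, consistent with equation (1)) and the sample values of α and η are assumptions for illustration:

```python
def mobp_step(w, delta_prev, grad, alpha=0.01, eta=0.9):
    """MOBP update, eq. (6): the new correction blends the previous
    correction (the momentum term) with the current gradient step,
    taken here in the descent direction as in eq. (1)."""
    delta = eta * delta_prev - alpha * (1.0 - eta) * grad
    return w + delta, delta
```

Applying the step twice with the same gradient shows the "acceleration in the same gradient direction": the second correction is larger in magnitude than the first.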

3.2.3. Limitations of the BP Neural Network

When applying artificial neural network modeling in related fields, most neural network models use the BP neural network, the SDBP neural network, the MOBP neural network, and other improved forms. However, the BP neural network still has its limitations:

(1) Limitations of the network itself: the BP neural network is a static transmission network. It can only complete nonlinear static mappings and does not have an information processing function that changes dynamically with time or space.

(2) Selection of initial parameters: it is difficult for the BP neural network to select initial parameters, especially the size and direction of the initial weights and thresholds. Although there are correction formulas, once the initial values are determined, the network's convergence is essentially determined as well. When network learning ends, it will generally converge to the local extremum closest to the initial values, directly affecting the gap between the local extremum's error and the ideal error.

(3) The choice of learning step length: the training step length in the BP neural network is a fixed value, and the path left by the gradient method zigzags when approaching the minimum point; the closer to the minimum point, the slower the convergence. Therefore, to reduce the error, the training step length should be set as small as possible, but if the step length is too small, the training time increases. This makes establishing the training step length difficult: obtaining good learning results generally requires rich practical experience or repeated adjustment of the step length.

(4) The choice of learning rate: if the learning rate is too large, learning becomes unstable; on the contrary, if the learning rate is too small, the learning time may increase and the network may become paralyzed. At present, no fast and reasonable method has been found to solve the difficulty of selecting the learning rate of the BP neural network.

3.3. Improved BP Neural Network

Because of the limitations of the SDBP, MOBP, and BP neural networks, this paper first improves the selection of the learning step size and of the size and direction of the weights, and then improves the selection of the learning rate, yielding the new algorithm: the BP neural network with additional momentum, elastic gradient descent, and an adaptive learning rate.
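The paper gives no explicit formula for the adaptive-learning-rate component, so the following sketch uses a common heuristic assumed here for illustration (the function name, the factors 1.05 and 0.7, and the 4% tolerance are not from the paper): grow the rate while the error keeps falling and shrink it sharply when the error rises by more than the tolerance.

```python
def adapt_learning_rate(lr, err, prev_err,
                        inc=1.05, dec=0.7, tol=1.04):
    """Adaptive learning rate heuristic: keep the rate as large as the
    error trajectory allows without destabilizing training."""
    if err > tol * prev_err:
        return lr * dec      # error grew too much: back off
    if err < prev_err:
        return lr * inc      # making progress: speed up
    return lr                # otherwise leave the rate unchanged
```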

The selection of the learning step length in the BP neural network is essential. If the step length is too large, the network converges quickly but network learning becomes unstable; too small a step length avoids instability, but the convergence speed slows down. To resolve this contradiction, it is proposed to add an "additional momentum term" to the SDBP and MOBP neural networks: add a correction value proportional to the previous weight or threshold change in the correction of each

Figure 5: Error surface plot. (Two panels show the sum squared error as an error surface and as an error contour over weight W and bias B; the surface has multiple local minima.)


layer's weights or thresholds, and recalculate the new weight (or threshold) correction amount according to the backpropagation method. The specific adjustment formulas are as follows:

\Delta w_{ij}(n) = \lambda \Delta w_{ij}(n-1) + \zeta \delta_j(t) v_i(n),   (7)

\Delta b_i(n) = \lambda \Delta b_i(n-1) + \zeta \delta_i(t),   (8)

where n is the number of training iterations, λ is the momentum term, ζ is the learning step size, \Delta w_{ij}(n) is the correction of the weight at the n-th training iteration, \Delta b_i(n) is the correction of the threshold at the n-th training iteration, and \delta_j(t) v_i(n) is the gradient at the n-th training iteration. Writing equation (7) as a time series with t as a variable, t \in [0, n], equation (7) can be regarded as a first-order difference equation in \Delta w_{ij}(n), whose solution is as follows:

\Delta w_{ij}(n) = \zeta \sum_{t=0}^{n} \lambda^{n-t} \delta_j(t) v_i(n),  with  \delta_j(t) v_i(n) = -\partial E(n) / \partial w_{ij}(n),   (9)

where \delta_j(t) v_i(n) is understood as the gradient vector of the n-th training iteration; then

\Delta w_{ij}(n) = -\zeta \sum_{t=0}^{n} \lambda^{n-t} \partial E(n) / \partial w_{ij}(n).   (10)

In the training process of the additional momentum method, to prevent the corrected weights from having an excessive influence on the error, the momentum term λ needs to be adjusted; that is, the judgment condition of the additional momentum method during training is

\lambda = 0,       if E(n) > 1.04 E(n-1),
\lambda = 0.95,    if E(n) < E(n-1),
\lambda unchanged, otherwise.   (11)
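The judgment condition of equation (11) can be written directly as code (a literal transcription; the function and argument names are illustrative):

```python
def momentum_term(err, prev_err, lam):
    """Judgment condition of eq. (11) for the additional-momentum method:
    drop the momentum term when the error grows by more than 4%, set it
    to 0.95 when the error falls, and otherwise keep the current value."""
    if err > 1.04 * prev_err:
        return 0.0
    if err < prev_err:
        return 0.95
    return lam
```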

However the additional momentum BP neural networkalgorithmrsquos hidden layer generally uses an S-type activationfunction e S-type function is often called a ldquosqueezerdquofunction because it can compress an infinite input range to alimited output range When the input is significant the slopeis almost close It will cause the gradient amplitude in thealgorithm to be tiny making the correction processmeaningless In response to this problem the elastic gradientdescent method is introduced to improve again e elasticgradient descent method does not study the amplitude of thepartial derivative Only the partial derivative sign is studiedto determine the weightrsquos update direction and the revisedldquoupdate value determines the size of the weight changerdquo Ifthe sign of the partial derivative of the objective function to acertain weight is in two consecutive iterations and if themedium is unchanged increase the corresponding ldquoupdatevaluerdquo otherwise decrease the corresponding ldquoupdatevaluerdquo Use the additional momentum method and theelastic gradient descent method to improve the BP neuralnetwork at the same time It will provide the range the size

of the correction the number of weights and the direction ofupdating the weights It can also significantly prevent thetraining process from falling into a local minimum andspeed up the convergence speed

4 Experiments

41 Experimental Environment Since the experiment in thispaper needs to train a deep neural network the scale is largethe structure is more complex and the calculation scale isenormous e programming language used is Python theversion is 36 the deep learning framework used is Keras215 and the IDE for program deployment is Pycharm Allexperiments are conducted in the same environment All ourexperiments have been conducted on a desktop PC with anIntel Core i7-8700 processor and an NVIDIA GeForce GTX1080ti GPU

42 Datasets Preprocessing e data set used in this paper iscollected in martial arts competitions is paper removessingular values and unstable values and creates training setsand test sets

43 Experimental Results All experiments are performed onthe same platform and the experimental results areexpressed in terms of error rate and prediction time eexperimental results comparing various algorithms areshown in Table 1 below

It can be seen from Table 1 that the MDBP neuralnetwork adds a momentum term based on the SDBP neuralnetwork to modify the weight and threshold Its test errorrate is lower than that of the DBP neural network About 2percentage points reduce the error rate e new algorithmin this paper has improved the selection of learning step sizeand the size and direction of weights and limited the choiceof momentum and learning rate making the new algorithmmore efficient e error rate is significantly lower than thatof other algorithms and nearly 5 percentage points sig-nificantly reduce the new algorithm From the comparisonof error rates in Table 1 it can be fully seen that the error rateof the new BP neural network is significantly lower than theother two BP neural network lines and has a better pre-diction effect

It can be seen from Table 2 that the improvement of theMOBP neural network over the SDBP neural network is theintroduction of a momentum term that is a more signifi-cant learning rate can be used without paralyzing thelearning process e MOBP neural network always accel-erates the same gradient direction e correction amountthat is the MABP neural network has a faster convergencerate than the SDBP neural network and the learning time isshorter e new algorithm is more aimed at the selection ofthe first learning step e size and direction of the weightsare improved and the momentum term is improved ereare clear criteria for selecting learning rate that is the newalgorithm can fine-tune the correction of weights and in-crease the learning rate within a reasonable range and at thesame time accelerate the convergence speed that is reduce

6 Journal of Healthcare Engineering

the prediction time. The comparison of prediction times clearly shows that the new algorithm's prediction time is significantly lower than that of the other two types of algorithms.

5. Conclusion

The formulation of martial arts competitive strategies requires scientific analysis of competitive data and more accurate predictions. Based on this observation, this paper combines popular neural network technology and proposes a novel additional momentum-elastic gradient descent-adaptive learning rate BP neural network. The algorithm improves on the traditional BP neural network in areas such as selecting the learning step size and the difficulty of determining the weights' size and direction, where the learning rate is not easy to control. The experimental results show that this paper's algorithm achieves good improvements in both network scale and running time and can predict martial arts competition routines and formulate scientific strategies.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] V. Romanenko, L. Podrigalo, W. J. Cynarski et al., "A comparative analysis of the short-term memory of martial arts athletes of different levels of sportsmanship," Journal of Martial Arts Anthropology, vol. 20, no. 3, pp. 18–24, 2020.

[2] A. Harwood-Gross, R. Feldman, O. Zagoory-Sharon, and Y. Rassovsky, "Hormonal reactivity during martial arts practice among high-risk youths," Psychoneuroendocrinology, vol. 121, Article ID 104806, 2020.

[3] R. B. Santos Jr., A. C. Utter, S. R. McAnulty et al., "Weight loss behaviors in Brazilian mixed martial arts athletes," Sport Sciences for Health, vol. 16, no. 1, pp. 117–122, 2020.

[4] X. Ning, W. Li, and W. Liu, "A fast single image haze removal method based on human retina property," IEICE Transactions on Information and Systems, vol. E100.D, no. 1, pp. 211–214, 2017.

[5] Z. L. Yang, S. Y. Zhang, Y. T. Hu, Z. W. Hu, and Y. F. Huang, "VAE-Stega: linguistic steganography based on variational auto-encoder," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 880–895, 2020.

[6] X. Ning, W. Li, and J. Xu, "The principle of homology continuity and geometrical covering learning for pattern recognition," International Journal of Pattern Recognition and Artificial Intelligence, vol. 32, no. 12, Article ID 1850042, 2018.

[7] W. Cai and Z. Wei, "Remote sensing image classification based on a cross-attention mechanism and graph convolution," IEEE Geoscience and Remote Sensing Letters, pp. 1–5, 2020.

[8] X. Ning, W. Li, and L. Sun, "Age estimation method based on generative adversarial networks," DEStech Transactions on Computer Science and Engineering, 2017.

[9] W. Cai, B. Liu, Z. Wei, M. Li, and J. Kan, "TARDB-Net: triple-attention-guided residual dense and BiLSTM networks for hyperspectral image classification," Multimedia Tools and Applications, vol. 80, no. 2, pp. 1–22, 2021.

[10] N. Xin, L. Weijun, L. Haoguang, and L. Wenjie, "Uncorrelated locality preserving discriminant analysis based on bionics," Journal of Computer Research and Development, vol. 53, no. 11, p. 2623, 2016.

[11] Z. Wang, C. Zou, and W. Cai, "Small sample classification of hyperspectral remote sensing images based on sequential joint deeping learning model," IEEE Access, vol. 8, pp. 71353–71363, 2020.

[12] X. Zhang, Y. Yang, Z. Li, X. Ning, Y. Qin, and W. Cai, "An improved encoder-decoder network based on strip pool method applied to segmentation of farmland vacancy field," Entropy, vol. 23, no. 4, p. 435, 2021.

[13] W. Cai and Z. Wei, "PiiGAN: generative adversarial networks for pluralistic image inpainting," IEEE Access, vol. 8, pp. 48451–48463, 2020.

[14] X. Ning, X. Wang, S. Xu et al., "A review of research on co-training," Concurrency and Computation: Practice and Experience, 2021, in press.

[15] I. Portela-Pino, A. López-Castedo, M. J. Martínez-Patiño, T. Valverde-Esteve, and J. Domínguez-Alonso, "Gender differences in motivation and barriers for the practice of physical exercise in adolescence," International Journal of Environmental Research and Public Health, vol. 17, no. 1, p. 168, 2020.

[16] A. De Sire, R. de Sire, V. Petito et al., "Gut-joint axis: the role of physical exercise on gut microbiota modulation in older people with osteoarthritis," Nutrients, vol. 12, no. 2, p. 574, 2020.

[17] V. Malkin, S. Serpa, A. Garcia-Mas, and E. Shurmanov, "A new paradigm in modern sports psychology," Revista de Psicología del Deporte (Journal of Sport Psychology), vol. 29, no. 2, pp. 149–152, 2020.

[18] R. Summerley, "The development of sports: a comparative analysis of the early institutionalization of traditional sports

Table 1: Test error rate (%) of the BP neural networks for different types of martial arts.

Type                 SDBP   MOBP   Ours
Optional boxing      6.23   4.85   1.37
Prescribed boxing    7.35   5.97   1.78
Traditional boxing   7.69   6.23   2.03
Short instrument     8.23   5.14   1.99
Long instrument      6.58   4.69   1.87
Double instrument    7.69   5.28   2.01
Soft instrument      8.59   4.99   1.89

Table 2: The prediction time (s) of the BP neural networks for different types of martial arts.

Type                 SDBP   MOBP   Ours
Optional boxing      3.73   2.36   1.72
Prescribed boxing    3.45   2.45   1.83
Traditional boxing   3.59   2.29   1.52
Short instrument     3.65   2.38   1.63
Long instrument      3.59   2.49   1.52
Double instrument    3.75   2.58   1.72
Soft instrument      3.59   2.49   1.83


and e-sports," Games and Culture, vol. 15, no. 1, pp. 51–72, 2020.

[19] Q. S. Han, M. Theeboom, and D. Zhu, "Chinese martial arts and the Olympics: analysing the policy of the International Wushu Federation," International Review for the Sociology of Sport, 2020, in press.

[20] C. Hongtu and Z. Cai, "The formal evolution of Chinese martial arts action movies and the spread of core values," The Frontiers of Society, Science and Technology, vol. 2, no. 4, 2020.

[21] V. Kůrková, "Kolmogorov's theorem and multilayer neural networks," Neural Networks, vol. 5, no. 3, pp. 501–506, 1992.

[22] W. Jin, Z. J. Li, L. S. Wei, and H. Zhen, "The improvements of BP neural network learning algorithm," in Proceedings of the WCC 2000-ICSP 2000 (5th International Conference on Signal Processing, 16th World Computer Congress), vol. 3, pp. 1647–1649, IEEE, Beijing, China, August 2000.

[23] S. W. Piche, "Steepest descent algorithms for neural network controllers and filters," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 198–212, 1994.



as the criterion, it was divided into two types: martial arts routines and fighting. The Confucian founder Confucius emphasized that both civil and martial arts should be "benevolent in terms of the theoretical awareness of martial arts" [20], one of the core contents of martial arts. Mo Di, the founder of the Mohist School, advocated universal love and nonaggression, supported force, and used force to resist all aggression and injustice. He became the representative and origin of the "chivalrous" in later generations and a typical chivalrous spirit in history. Some thoughts of the Taoist school have a significant influence on martial arts thought, especially the Taoist understanding of the origin of the universe, such as the understanding of Tao and Qi, the unity of nature and man, and the theory of rigidity and softness, which established the theory and philosophy of martial arts. There are also many places where martial arts absorb the doctrines of military strategists. For example, false moves and feint attacks are drawn from "soldiers are not tired of deceit," and the understanding of tactics and the grasp of timing are all borrowed from strategists. The yin-yang and five-elements theory, developed from the Book of Changes at that time, also contributed a lot to the establishment of Tai Chi, Xingyiquan, Baguazhang, and other types of boxing. These simple dialectics have enriched the theory of martial arts and led martial arts onto a broad road.

The artificial neural network is a nonlinear processing system with an input layer, hidden layer, and output layer, formed by many interconnected processing units. The artificial neural network is a mathematical model inspired by the neural network functions of humans or animals to process information. This mathematical model's application is similar to the recognition, memory, data analysis, and information processing of the human brain [21]. In processing information, the input layer has different input quantities x, and each input quantity x has an associated weight w

corresponding to it. The input volume enters the neural network from the input layer, passes through the stimulation of the weight, threshold, and weighted summation, and obtains the output y. The output results from the layer under the activation function's perception and continues to play its role in the following layer. This continues until the input quantity is output from the output layer and the error between the output value and the expected value is at its smallest. That is, it has three essential elements: connection weights, a summation unit, and an activation function. The processing unit model is shown in Figure 2.
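The three elements above (connection weights, summation unit, activation function) can be written down in a few lines; the sigmoid choice for the activation Φ and all the numbers below are illustrative assumptions, not values from the paper.

```python
import math

def neuron(x, w, theta):
    """A single processing unit: weighted summation of the inputs,
    offset by the threshold theta, passed through an activation
    function Phi (a sigmoid is assumed here for illustration)."""
    u = sum(wi * xi for wi, xi in zip(w, x))      # summation unit
    return 1.0 / (1.0 + math.exp(-(u - theta)))   # activation Phi(u - theta)

y = neuron(x=[0.5, -1.0, 2.0], w=[0.8, 0.1, 0.3], theta=0.2)
```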

There are quite a lot of classification standards for artificial neural networks. ANNs can be classified into feed-forward networks and feedback networks based solely on the network structure. Feed-forward neural networks include single-layer and multilayer feed-forward networks. Throughout the ANN application field, feed-forward neural networks are the most frequently used and can successfully achieve the expected results. The feed-forward neural network's latest output is not related to the previous output state and is determined only by the current excitation function and weight matrix [22]. Compared with the feed-forward neural network, the feedback neural network is complex and advanced. Its latest output is closely related to the previous

output and the latest input, and its intelligence is evident in its associative memory function. It is worthy of promotion and application under the guidance of artificial intelligence. The most commonly used and classic feed-forward neural network is the error backpropagation neural network, referred to as the BP neural network.

3. Methodology

In this section, the methodology is discussed in detail.

3.1. BP Neural Network. The BP neural network, the error backpropagation neural network, is the most widely used artificial neural network model. It was proposed by Kurkova in 1985 and gave practical guidance in solving the learning and training problems of multilayer neural networks, with a comprehensive analysis and derivation in mathematics [21]. The BP neural network's structure is simple and mainly has three layers: the input layer, hidden layer, and output layer, as shown in Figure 3.

The three-layer BP neural network is shown in Figure 3. In this figure, the number of input nodes is M, the number of output nodes is L, and the initial number of hidden-layer nodes is u. The BP neural network's actual input is x1, x2, ..., xM, the actual output is y1, y2, ..., yL, the target output is Tk, and the output error is ek. Secondly, the connection weight from the input layer to the hidden layer is wij; such a weight can also be called a weight vector. The connection weight from the hidden layer to the output layer is wjk. The size and direction of these two weights may be different; the improved BP neural network can modify them so that they develop toward the desired size.

The BP neural network can be vividly understood as a supervised teacher-student learning relationship. It consists of the forward propagation of the signal in the first stage and the backpropagation of the error in the second stage [22]. In the first, forward-propagation stage, the given signal first enters the network from the input layer, is pushed by the hidden layer to the output layer, and is finally transmitted by the output layer; the first stage is then complete. The weights of the network are kept constant during the entire transmission of this early stage, and the neurons' state in the next layer is constrained only by the previous layer's neurons. Suppose the result of the signal transmitted from the output layer is far from the expected result. In that case, the network immediately enters the second stage, the error backpropagation learning process. This process is a process of reducing the error: the actual output value differs from the expected value, and the difference is used as a new

Figure 2: Artificial neural network model diagram (inputs x1, x2, ..., xn with weights w1, w2, ..., wn, a summation unit, threshold θ, activation function Φ(·), and output y).


signal input, backpropagated from the output layer, layer by layer, to the first input layer, and the process continues. It continues until the final actual output is close to the expected value, that is, until the error is minimized. In the process of backpropagation, it is mainly the weights that are modified to obtain the desired result; the changes of other factors also play a supporting role.
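The two stages, forward propagation of the signal and backpropagation of the error, can be sketched in plain NumPy on a toy regression problem; the network size, learning rate, target function, and iteration count here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (32, 2))              # toy inputs
T = 0.7 * X[:, :1] - 0.3 * X[:, 1:] + 0.1    # toy expected output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)   # input -> hidden (wij, aj)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)   # hidden -> output (wjk, bk)
alpha = 0.1                                        # learning rate

mse0 = float(((sigmoid(X @ W1 + b1) @ W2 + b2 - T) ** 2).mean())
for _ in range(5000):
    # stage 1: forward propagation of the signal (weights held fixed)
    H = sigmoid(X @ W1 + b1)
    Y = H @ W2 + b2
    # stage 2: backpropagation of the error, modifying the weights
    dY = (Y - T) / len(X)
    dW2, db2 = H.T @ dY, dY.sum(0)
    dH = dY @ W2.T * H * (1 - H)
    dW1, db1 = X.T @ dH, dH.sum(0)
    W2 -= alpha * dW2; b2 -= alpha * db2
    W1 -= alpha * dW1; b1 -= alpha * db1

mse = float(((sigmoid(X @ W1 + b1) @ W2 + b2 - T) ** 2).mean())
```

After training, the mean squared error is far below its value at random initialization, which is the whole point of the two-stage loop.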

3.2. Improved BP Neural Network. Aiming at the shortcomings and limitations of the steepest descent BP neural network and the momentum BP neural network, a new algorithm is proposed: the additional momentum-elastic gradient descent-adaptive learning rate BP neural network. First, the selection of the learning step and the size and direction of the weights are improved, and the learning rate and momentum terms are limited. Then, the improved new algorithm and the first two BP neural networks are compared and analyzed through simulation experiments, and the correctness, error, and prediction time are calculated. These indicators show that the new algorithm has higher accuracy, smaller errors, fewer iterations, and better overall performance. Finally, the improved BP neural network lays a strong foundation for predicting martial arts strategies in the next chapter of this paper.

3.2.1. SDBP Neural Network. The SDBP neural network's learning process [23] is also composed of two stages: the forward propagation of the signal in the first stage and the backpropagation of the error in the second stage. The weight wij and the threshold aj from the input layer to the hidden layer are continuously adjusted, and the weight wjk and the threshold bk from the hidden layer to the output layer are modified, until the output result is almost the same as the expected result. The SDBP neural network structure is shown in Figure 4, assuming the numbers of nodes in the input layer, hidden layer, and output layer are M, u, and L. The calculation formulas for each layer are as follows.

Let k be the number of iterations; the calculation formula for the weight update is

$$w_{k+1} = w_k - \alpha g_k, \quad (1)$$

where $w_k$ is the weight of the $k$th iteration and $\alpha$ is the learning rate, which can be set reasonably during training; the initial default learning rate is 0.01.


$$g_k = \frac{\partial E_k}{\partial w_k}, \quad (2)$$

where $g_k$ is the gradient vector of the $k$th iteration and $E_k$ is the total error performance function of the neural network output at the $k$th iteration. The output formula of the hidden layer is

$$H_j = \sum_{i=1}^{n} w_{ij} x_i + a_j, \quad (3)$$

where $H_j$ is the hidden layer's actual output for the input vector, $a_j$ is the threshold from the input layer to the hidden layer, and $w_{ij}$ is the weight from the input layer to the hidden layer. The output formula of the output layer is as follows:

$$O_k = \sum_{j=1}^{l} H_j w_{jk} + b_k, \quad (4)$$

where $O_k$ is the output layer's actual output, $b_k$ is the threshold from the hidden layer to the output layer, and $w_{jk}$ is the weight from the hidden layer to the output layer. The error calculation formula is as follows:

$$E_k = \frac{1}{2} \sum_{i=1}^{k} (Y_i - O_i)^2, \quad (5)$$

where $E_k$ is the total error performance function of the neural network output of the $k$th iteration, $Y_i$ is the expected value of the $i$th iteration, and $O_i$ is the actual output of the $i$th iteration.

From equations (1)-(5) and each layer's function, the error-surface gradient $g_k$ of the total error at the $k$th iteration can be obtained and substituted into equation (1). The weight of each layer is revised to ensure that the absolute error is reduced along the descent direction until it reaches the minimum, in line with the expected error.

Figure 4: The structure of the SDBP neural network (inputs x1, ..., xn; input-to-hidden weights wij and thresholds aj; hidden-to-output weights wjk and thresholds bk; activation functions f(·); outputs y1, ..., yk).

Figure 3: The structure of the BP neural network (input layer, hidden layer, output layer).


It can be seen from Figure 5 that there are too many minimum points on the error surface, because the transfer function of the SDBP neural network is nonlinear.

3.2.2. MOBP Neural Network. The MOBP neural network introduces a momentum factor η (0 < η < 1) into the SDBP neural network:

$$\Delta w_{k+1} = \eta \Delta w_k - \alpha (1 - \eta) \frac{\partial E_k}{\partial w_k}, \quad (6)$$

where $\eta$ is the momentum factor, $w_k$ is the weight of the $k$th iteration, $\Delta w_k$ is the weight correction of the $k$th iteration, $\alpha$ is the learning rate, and $E_k$ is the total error performance function.

The algorithm's new correction amount is closely related to, and restricted by, the previous correction result. Suppose the last correction amount was too large. In that case, the sign of the second term of equation (6) is opposite to that of the previous correction amount, reducing the current correction. If the last revision was too small, the second term of equation (6) has the same sign as the previous correction, so the correction is increased and the training speed of the network is improved. Therefore, the MOBP neural network essentially increases the correction in the same gradient direction; that is, it increases the "momentum" in the same gradient direction by reasonably increasing the momentum factor η.

Compared to the SDBP neural network's learning rate, the MOBP neural network's learning rate can be increased to a greater extent, and yet the increased learning rate will not make network training impossible or paralyzed. There are two reasons. On the one hand, when the degree of correction is too large, the algorithm can reduce the correction, ensuring that the correction direction still maintains the direction of convergence. On the other hand, by introducing the momentum factor, the MOBP neural network reasonably increases the correction amount in the gradient direction, allowing the learning rate to be increased. From this, it can be concluded that, while ensuring the algorithm's stability, the MOBP neural network's learning time is shortened and the convergence speed is dramatically improved.
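A one-dimensional sketch of the momentum update of equation (6), written with the gradient term as a descent (negative) correction so that it agrees with equation (1); η = 0.9 and α = 0.1 are illustrative values, and the toy quadratic error stands in for the network's real error function.

```python
# Momentum update of equation (6) on the toy error E(w) = 0.5 * (w - w_star)**2.
eta, alpha = 0.9, 0.1          # illustrative momentum factor and learning rate
w, w_star, dw = 4.0, 1.5, 0.0  # start, optimum, initial correction
for k in range(300):
    g = w - w_star                         # dE/dw
    dw = eta * dw - alpha * (1 - eta) * g  # equation (6)
    w = w + dw
```

Because consecutive corrections in the same gradient direction reinforce each other, the iterate reaches the optimum in far fewer steps than plain steepest descent with the same effective step size.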

3.2.3. Limitations of the BP Neural Network. When applying artificial neural network modeling in related fields, most neural network models use the BP neural network, SDBP neural network, MOBP neural network, and other improved forms. However, the BP neural network still has its limitations:

(1) The limitations of the network itself: the BP neural network is a static transmission network. It can only safely complete nonlinear static mapping and does not have an information processing function that changes dynamically with time or space.

(2) Selection of initial parameters: it is difficult for the BP neural network to select initial parameters, especially the size and direction of the initial weights and thresholds. Although there are correction formulas, once the initial values are determined, the network's convergence is effectively determined. When network learning is over, it will generally converge to the local extreme value closest to the initial value, directly affecting the difference between the local extreme point's error and the ideal error.

(3) The choice of learning step length: the training step length in the BP neural network is a fixed value, and the path left by the gradient method zigzags when approaching the minimum point; the closer to the minimum point, the slower the convergence. Therefore, to reduce the error, the training step length should be set as small as possible, but if the step length is too small, the training time increases, which makes the training step length difficult to establish. Obtaining good learning results generally requires rich practical experience or repeated adjustments of the step length.

(4) The choice of learning rate: if the learning rate is too large, learning becomes unstable; on the contrary, if the learning rate is too small, the learning time increases, which may cause network paralysis. At present, no fast and reasonable method has been found to solve the difficulty of selecting the learning rate of the BP neural network.

3.3. Improved BP Neural Network. Because of the limitations of the SDBP neural network, the MOBP neural network, and the BP neural network, this paper first improves the selection of the learning step size and the size and direction of the weights, and then enhances the selection of the learning rate; the result is the new algorithm, a BP neural network with additional momentum, elastic gradient descent, and an adaptive learning rate.

The selection of the learning step length in the BP neural network is essential. If the step length is too large, the network will converge quickly but network learning will become unstable; too small a step length can avoid instability, but the convergence speed will slow down. To resolve this contradiction, it is proposed to add an "additional momentum term" to the SDBP and MOBP neural networks: a correction value proportional to the previous weight or threshold value is added to the correction of each

Figure 5: Error surface plot (sum-squared error over weight W and bias B, shown as an error surface and an error contour).


layer weight or threshold, and the new weight (or threshold) correction amount is recalculated according to the backpropagation method. The specific adjustment formulas are as follows:

$$\Delta w_{ij}(n) = \lambda \Delta w_{ij}(n-1) + \zeta \delta_j(t) v_i(n), \quad (7)$$

$$\Delta b_i(n) = \lambda \Delta b_i(n-1) + \zeta \delta_i(t), \quad (8)$$

where $n$ is the number of training steps, $\lambda$ is the momentum term, $\zeta$ is the learning step size, $\Delta w_{ij}(n)$ is the correction of the weight at the $n$th training step, $\Delta b_i(n)$ is the correction of the threshold at the $n$th training step, and $\delta_j(t) v_i(n)$ is the gradient at the $n$th training step. Writing equation (7) as a time series with $t$ as a variable, $t \in [0, n]$, equation (7) can be regarded as a first-order difference equation in $\Delta w_{ij}(n)$. The calculation equation is as follows:

$$\Delta w_{ij}(n) = \zeta \sum_{t=0}^{n} \lambda^{n-t} \delta_j(t) v_i(n), \qquad \delta_j(t) v_i(n) = -\frac{\partial E(n)}{\partial w_{ij}(n)}, \quad (9)$$

where $\delta_j(t) v_i(n)$ is understood as the gradient vector of the $n$th training step; then

$$\Delta w_{ij}(n) = -\zeta \sum_{t=0}^{n} \lambda^{n-t} \frac{\partial E(n)}{\partial w_{ij}(n)}. \quad (10)$$

In the training process of the additional momentum method, to prevent the corrected weight from having an excessive influence on the error, the momentum term $\lambda$ needs to be constrained; that is, the judgment condition of the additional momentum method during training is

$$\lambda = \begin{cases} 0, & E(n) > 1.04\,E(n-1), \\ 0.95, & E(n) < E(n-1), \\ \lambda, & \text{otherwise}. \end{cases} \quad (11)$$

However, the hidden layer of the additional-momentum BP neural network generally uses an S-type activation function. The S-type function is often called a "squashing" function because it compresses an infinite input range into a limited output range. When the input is large, the slope approaches zero, which causes the gradient amplitude in the algorithm to be tiny and makes the correction process meaningless. In response to this problem, the elastic gradient descent method is introduced as a further improvement. The elastic gradient descent method does not study the amplitude of the partial derivative; only the sign of the partial derivative is studied, to determine the weight's update direction, while a separate "update value" determines the size of the weight change. If the sign of the partial derivative of the objective function with respect to a certain weight is unchanged over two consecutive iterations, the corresponding "update value" is increased; otherwise, the corresponding "update value" is decreased. Using the additional momentum method and the elastic gradient descent method to improve the BP neural network at the same time constrains the range and size of the corrections, the weights, and the direction of the weight updates. It can also significantly prevent the training process from falling into a local minimum and speed up convergence.
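A sketch of the sign-only elastic-gradient update described above; the increase/decrease factors 1.2 and 0.5, the step bounds, and the toy quadratic error are conventional Rprop-style choices assumed for illustration, not values given by the paper.

```python
import numpy as np

def elastic_step(w, grad, prev_grad, step, up=1.2, down=0.5,
                 step_max=1.0, step_min=1e-6):
    """One elastic-gradient-descent update: only the SIGN of each partial
    derivative is used. If the sign is unchanged over two consecutive
    iterations, the corresponding 'update value' grows; otherwise it shrinks."""
    same_sign = np.sign(grad) == np.sign(prev_grad)
    step = np.where(same_sign,
                    np.minimum(step * up, step_max),
                    np.maximum(step * down, step_min))
    return w - np.sign(grad) * step, step

# toy quadratic error E(w) = 0.5 * ||w - w_star||^2, gradient = w - w_star
w_star = np.array([1.5, -0.5])
grad_fn = lambda w: w - w_star
w = np.array([4.0, -3.0])
step = np.full(2, 0.1)        # initial per-weight update values
g_prev = grad_fn(w)
for _ in range(100):
    g = grad_fn(w)
    w, step = elastic_step(w, g, g_prev, step)
    g_prev = g
```

Because the update size is decoupled from the gradient magnitude, the method keeps making progress even where a squashing activation makes the gradient amplitude tiny.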

4. Experiments

4.1. Experimental Environment. Since the experiments in this paper need to train a deep neural network, the scale is large, the structure is relatively complex, and the amount of calculation is enormous. The programming language used is Python, version 3.6; the deep learning framework used is Keras 2.1.5; and the IDE for program deployment is PyCharm. All experiments are conducted in the same environment, on a desktop PC with an Intel Core i7-8700 processor and an NVIDIA GeForce GTX 1080 Ti GPU.

42 Datasets Preprocessing e data set used in this paper iscollected in martial arts competitions is paper removessingular values and unstable values and creates training setsand test sets

43 Experimental Results All experiments are performed onthe same platform and the experimental results areexpressed in terms of error rate and prediction time eexperimental results comparing various algorithms areshown in Table 1 below

It can be seen from Table 1 that the MDBP neuralnetwork adds a momentum term based on the SDBP neuralnetwork to modify the weight and threshold Its test errorrate is lower than that of the DBP neural network About 2percentage points reduce the error rate e new algorithmin this paper has improved the selection of learning step sizeand the size and direction of weights and limited the choiceof momentum and learning rate making the new algorithmmore efficient e error rate is significantly lower than thatof other algorithms and nearly 5 percentage points sig-nificantly reduce the new algorithm From the comparisonof error rates in Table 1 it can be fully seen that the error rateof the new BP neural network is significantly lower than theother two BP neural network lines and has a better pre-diction effect

It can be seen from Table 2 that the improvement of theMOBP neural network over the SDBP neural network is theintroduction of a momentum term that is a more signifi-cant learning rate can be used without paralyzing thelearning process e MOBP neural network always accel-erates the same gradient direction e correction amountthat is the MABP neural network has a faster convergencerate than the SDBP neural network and the learning time isshorter e new algorithm is more aimed at the selection ofthe first learning step e size and direction of the weightsare improved and the momentum term is improved ereare clear criteria for selecting learning rate that is the newalgorithm can fine-tune the correction of weights and in-crease the learning rate within a reasonable range and at thesame time accelerate the convergence speed that is reduce

6 Journal of Healthcare Engineering

the prediction time e comparison can clearly show thatthe new algorithmrsquos prediction time is significantly lowerthan that of the other two types of algorithms through theprediction time

5 Conclusion

e formulation of martial arts competitive strategies re-quires scientific analysis of competitive data and more ac-curate predictions Based on this observation this papercombines the popular neural network technology It pro-poses a novel additional momentum-elastic gradient de-scent-adaptive learning rate BP neural network ealgorithm is improved for the traditional BP neural networksuch as selecting learning step size and the difficulty ofdetermining the weightrsquos size and direction e learningrate is not easy to controle experimental results show thatthis paperrsquos algorithm has made good improvements in bothin-network scale and running time and can predict martialarts competition routines and formulate scientific strategies

Data Availability

e data used to support the findings of this study are in-cluded within the article

Conflicts of Interest

All the authors do not have any possible conflicts of interest

References

[1] V Romanenko L Podrigalo W J Cynarski et al ldquoAcomparative analysis of the short-termmemory of martial artsathletes of different levels of sportsmanshiprdquo Journal ofMartial Arts Anthropology vol 20 no 3 pp 18ndash24 2020

[2] A Harwood-Gross R Feldman O Zagoory-Sharon andY Rassovsky ldquoHormonal reactivity during martial artspractice among high-risk youthsrdquo Psychoneuroendocrinologyvol 121 Article ID 104806 2020

[3] R B Santos Jr A C Utter S R McAnulty et al ldquoWeight lossbehaviors in Brazilian mixed martial arts athletesrdquo SportSciences for Health vol 16 no 1 pp 117ndash122 2020

[4] X Ning W Li and W Liu ldquoA fast single image haze removalmethod based on human retina propertyrdquo IEICE Transactionson Information and Systems vol E100D no 1 pp 211ndash2142017

[5] Z L Yang S Y Zhang Y T Hu Z W Hu and Y F HuangldquoVAE-stega linguistic steganography based on variationalauto-encoderrdquo IEEE Transactions on Information Forensicsand Security vol 16 pp 880ndash895 2020

[6] X Ning W Li and J Xu ldquoe principle of homologycontinuity and geometrical covering learning for patternrecognitionrdquo International Journal of Pattern Recognition andArtificial Intelligence vol 32 no 12 Article ID 1850042 2018

[7] W Cai and Z Wei ldquoRemote sensing image classification basedon a cross-attention mechanism and graph convolutionrdquo IEEEGeoscience and Remote Sensing Letters pp 1ndash5 2020

[8] X Ning W Li and L Sun ldquoAge estimation method based ongenerative adversarial networksrdquo DEStech Transactions onComputer Science and Engineering 2017

[9] W Cai B Liu Z Wei M Li and J Kan ldquoTARDB-net triple-attention-guided residual dense and BiLSTM networks forhyperspectral image classificationrdquo Multimedia Tools andApplications vol 80 no 2 pp 1ndash22 2021

[10] N Xin LWeijun L Haoguang and LWenjie ldquoUncorrelatedlocality preserving discriminant analysis based on bionicsrdquoJournal of Computer Research and Development vol 53no 11 p 2623 2016

[11] Z Wang C Zou and W Cai ldquoSmall sample classification ofhyperspectral remote sensing images based on sequential jointdeeping learning modelrdquo IEEE Access vol 8 pp 71353ndash71363 2020

[12] X Zhang Y Yang Z Li X Ning Y Qin and W Cai ldquoAnimproved encoder-decoder network based on strip poolmethod applied to segmentation of farmland vacancy fieldrdquoEntropy vol 23 no 4 p 435 2021

[13] W Cai and Z Wei ldquoPiiGAN generative adversarial networksfor pluralistic image inpaintingrdquo IEEE Access vol 8pp 48451ndash48463 2020

[14] X Ning X Wang S Xu et al ldquoA review of research on co-trainingrdquo Concurrency and Computation Practice and Ex-perience 2021 In press

[15] I Portela-Pino A Lopez-Castedo M J Martınez-PatintildeoT Valverde-Esteve and J Domınguez-Alonso ldquoGenderdifferences in motivation and barriers for the practice ofphysical exercise in adolescencerdquo International Journal ofEnvironmental Research and Public Health vol 17 no 1p 168 2020

[16] A De Sire R de Sire V Petito et al ldquoGut-joint axis the roleof physical exercise on gut microbiota modulation in olderpeople with osteoarthritisrdquo Nutrients vol 12 no 2 p 5742020

[17] V Malkin S Serpa A Garcia-Mas and E Shurmanov ldquoAnew paradigm in modern sports psychologyrdquo Revista dePsicologıa del Deporte (Journal of Sport Psychology) vol 29no 2 pp 149ndash152 2020

[18] R Summerley ldquoe development of sports a comparativeanalysis of the early institutionalization of traditional sports

Table 1 Test error rate of different types of martial arts BP neuralnetwork ()

Type SDBP MOBP OursOptional boxing 623 485 137Prescribed boxing 735 597 178Traditional boxing 769 623 203Short instrument 823 514 199Long instrument 658 469 187Double instrument 769 528 201Soft instrument 859 499 189

Table 2 e prediction time of BP neural network for differentmartial arts(s)

Type SDBP MOBP OursOptional boxing 373 236 172Prescribed boxing 345 245 183Traditional boxing 359 229 152Short instrument 365 238 163Long instrument 359 249 152Double instrument 375 258 172Soft instrument 359 249 183

Journal of Healthcare Engineering 7


The error signal is propagated backward from the output layer, layer by layer, to the input layer, and this continues until the final actual output is close to the expected value, that is, until the error is minimized. During backpropagation it is mainly the weights that are modified to obtain the desired result; changes to the other factors play a supporting role.

3.2. Improved BP Neural Network. Aiming at the shortcomings and limitations of the steepest-descent BP neural network and the momentum BP neural network, a new algorithm is proposed: the additional momentum-elastic gradient descent-adaptive learning rate BP neural network. First, the selection of the learning step and of the size and direction of the weights is improved, and the learning rate and momentum terms are constrained. Then the improved algorithm and the first two BP neural networks are compared and analyzed in simulation experiments, in which the accuracy, error, prediction time, and other indicators are calculated. The new algorithm is found to have higher accuracy, smaller errors, fewer iterations, and better overall performance. Finally, the improved BP neural network lays a solid foundation for predicting martial arts strategies in the next chapter of this paper.

3.2.1. SDBP Neural Network. The learning process of the SDBP neural network [23] also consists of two stages: the forward propagation of the signal in the first stage and the backpropagation of the error in the second stage. The weights w_ij and thresholds a_j from the input layer to the hidden layer are continuously adjusted, and the weights w_jk and thresholds b_k from the hidden layer to the output layer are modified, until the output is almost the same as the expected result. The SDBP network structure is shown in Figure 4; the numbers of nodes in the input layer, hidden layer, and output layer are M, u, and L, respectively. The calculation formulas for each layer are as follows.

Let k be the number of iterations; the weight update formula is

    w_{k+1} = w_k − α·g_k,  (1)

where w_k is the weight at the k-th iteration and α is the learning rate, which can be set reasonably during training; the initial default learning rate is 0.01. The gradient is


    g_k = ∂E_k/∂w_k,  (2)

where g_k is the gradient vector of the k-th iteration and E_k is the total error performance function of the network output at the k-th iteration. The output formula of the hidden layer is

    H_j = Σ_{i=1}^{n} w_{ij}·x_i + a_j,  (3)

where x_i is the input vector, H_j is the actual output of the hidden layer, a_j is the threshold from the input layer to the hidden layer, and w_ij is the weight from the input layer to the hidden layer. The output formula of the output layer is as follows:

    O_k = Σ_{j=1}^{l} H_j·w_{jk} + b_k,  (4)

where O_k is the actual output of the output layer, b_k is the threshold from the hidden layer to the output layer, and w_jk is the weight from the hidden layer to the output layer. The error is calculated as follows:

    E_k = (1/2)·Σ_{i=1}^{k} (Y_i − O_i)²,  (5)

where E_k is the total error performance function of the network output of the k-th iteration, Y_i is the expected value of the i-th iteration, and O_i is the actual output of the i-th iteration.

From equations (1)-(5) and each layer's transfer function, the gradient g_k of the total error surface at the k-th iteration can be obtained and substituted into equation (1). The weights of each layer are revised so that the error decreases along the direction of steepest descent until it meets the expected error.
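The forward pass of equations (3)-(5) and the steepest-descent update of equations (1)-(2) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code: the network is the linear form given in the extracted equations (no activation function appears in them), and the 3-4-2 toy shape, seed, and sample values are arbitrary assumptions.

```python
import numpy as np

def sdbp_step(x, y, w_ij, a_j, w_jk, b_k, alpha=0.01):
    """One steepest-descent (SDBP) iteration; names follow the paper's
    notation (w_ij/a_j: input->hidden, w_jk/b_k: hidden->output)."""
    # Forward pass: hidden layer (eq. 3) and output layer (eq. 4).
    H = w_ij.T @ x + a_j              # H_j = sum_i w_ij * x_i + a_j
    O = w_jk.T @ H + b_k              # O_k = sum_j H_j * w_jk + b_k
    E = 0.5 * np.sum((y - O) ** 2)    # total error (eq. 5)

    # Backward pass: gradients g = dE/dw (eq. 2), then w <- w - alpha*g (eq. 1).
    dO = O - y                        # dE/dO
    g_jk = np.outer(H, dO)            # dE/dw_jk
    g_bk = dO
    dH = w_jk @ dO                    # dE/dH
    g_ij = np.outer(x, dH)            # dE/dw_ij
    g_aj = dH

    w_ij -= alpha * g_ij
    a_j  -= alpha * g_aj
    w_jk -= alpha * g_jk
    b_k  -= alpha * g_bk
    return E

# Toy usage: a 3-4-2 network repeatedly updated on one sample.
rng = np.random.default_rng(0)
w_ij, a_j = rng.normal(size=(3, 4)) * 0.1, np.zeros(4)
w_jk, b_k = rng.normal(size=(4, 2)) * 0.1, np.zeros(2)
x, y = np.array([0.5, -0.2, 0.1]), np.array([1.0, 0.0])
errors = [sdbp_step(x, y, w_ij, a_j, w_jk, b_k) for _ in range(50)]
```

With a small learning rate the error decreases monotonically on this toy problem, which is exactly the steepest-descent behavior the section describes.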

Figure 4: The structure of the SDBP neural network (inputs x_1, ..., x_n; hidden nodes with activation f(·), weights w_ij, and thresholds a_j; outputs y_1, ..., y_k via weights w_jk and thresholds b_k).

Figure 3: The structure of the BP neural network (input layer, hidden layer, output layer).


As can be seen from Figure 5, there are many local minimum points on the error surface, because the transfer function of the SDBP neural network is nonlinear.

3.2.2. MOBP Neural Network. The MOBP neural network introduces a momentum factor η (0 < η < 1) into the SDBP neural network:

    Δw_{k+1} = η·Δw_k − α·(1 − η)·∂E_k/∂w_k,  (6)

where η is the momentum factor, w_k is the weight of the k-th iteration, Δw_k is the weight correction of the k-th iteration, α is the learning rate, and E_k is the total error performance function.

The algorithm's new correction amount is closely tied to, and restricted by, the previous correction result. If the last correction was too large, the sign of the second term of equation (6) is opposite to that of the previous correction, so the current correction is reduced. If the last correction was too small, the second term of equation (6) has the same sign as the previous correction, so the correction is increased and network training is accelerated. The MOBP neural network therefore essentially increases the correction in the same gradient direction, that is, it adds "momentum" along that direction, by choosing the momentum factor η reasonably.

Compared with the learning rate of the SDBP neural network, the learning rate of the MOBP neural network can be increased to a much greater extent, yet the increased rate will not make training impossible or paralyze the network. There are two reasons. On the one hand, when a correction becomes too large, the algorithm can reduce it, ensuring that the correction still maintains the direction of convergence. On the other hand, by introducing the momentum factor, the MOBP network reasonably enlarges the correction along the gradient direction, which is what permits the larger learning rate. Thus, while preserving the algorithm's stability, the learning time of the MOBP neural network is shortened and the convergence speed is greatly improved.
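The momentum update can be demonstrated on a one-dimensional quadratic. This is an illustrative sketch, not the paper's code: the objective, parameter values, and step count are assumptions, and the sign convention is chosen so that the step descends the gradient, consistent with the steepest-descent update.

```python
def mobp_updates(grad_fn, w0, eta=0.9, alpha=0.1, steps=100):
    """Momentum (MOBP) weight updates:
    dw_{k+1} = eta*dw_k - alpha*(1 - eta)*dE/dw, then w += dw."""
    w, dw, path = w0, 0.0, []
    for _ in range(steps):
        dw = eta * dw - alpha * (1.0 - eta) * grad_fn(w)
        w += dw
        path.append(w)
    return path

# Toy usage: minimise E(w) = w^2 (gradient 2w) starting from w0 = 5.
path = mobp_updates(lambda w: 2.0 * w, 5.0, eta=0.9, alpha=0.1)
```

Because each step blends the previous correction with the new gradient, the iterate accelerates along a consistent descent direction while remaining stable, which is the behavior the paragraph above attributes to MOBP.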

3.2.3. Limitations of the BP Neural Network. When artificial neural network modeling is applied in related fields, most models use the BP neural network, the SDBP neural network, the MOBP neural network, or other improved forms. However, the BP neural network still has limitations:

(1) Limitations of the network itself: the BP neural network is a static transmission network. It can only perform nonlinear static mapping and has no information-processing capability that changes dynamically with time or space.

(2) Selection of initial parameters: it is difficult for a BP neural network to select initial parameters, especially the size and direction of the initial weights and thresholds. Although there are correction formulas, once the initial values are determined, the network's convergence is essentially determined as well. When learning ends, the network generally converges to the local extremum closest to the initial values, which directly affects the gap between the local extremum's error and the ideal error.

(3) Choice of learning step length: the training step length in a BP neural network is a fixed value, and the path left by the gradient method zigzags when approaching the minimum point. Therefore, to reduce the error, the training step length should be set as small as possible; but if the step length is too small, the training time increases. This makes the training step length difficult to establish: obtaining good learning results generally requires rich practical experience or repeated adjustment of the step length.

(4) Choice of learning rate: if the learning rate is too large, learning becomes unstable; conversely, if it is too small, learning time increases and the network may become paralyzed. At present, no fast and reasonable method has been found to resolve the difficulty of selecting the learning rate of a BP neural network.
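Limitation (4) is easy to demonstrate on a toy quadratic: with the fixed update w ← w − α·g, a small learning rate converges only slowly, while an overly large one makes the iterate grow at every step. This is an illustrative sketch with assumed values, not an experiment from the paper.

```python
def descend(alpha, w=1.0, steps=30):
    """Fixed-step gradient descent on E(w) = w^2, whose gradient is 2w."""
    for _ in range(steps):
        w -= alpha * 2.0 * w      # w_{k+1} = w_k - alpha * g_k
    return w

slow = descend(alpha=0.01)   # stable, but |w| shrinks slowly
fast = descend(alpha=1.1)    # overshoots: |w| grows every step
```

After 30 steps the small rate has only reduced |w| by about half, while the large rate has blown the iterate up by orders of magnitude, which is exactly the instability/slowness trade-off described above.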

3.3. Improved BP Neural Network. Because of the limitations of the SDBP, MOBP, and BP neural networks, this paper first improves the selection of the learning step size and of the size and direction of the weights, and then improves the selection of the learning rate, yielding the new algorithm: a BP neural network with additional momentum, elastic gradient descent, and an adaptive learning rate.

The selection of the learning step length in the BP neural network is essential. If the step length is too large, the network converges quickly but learning becomes unstable; too small a step length avoids instability but slows convergence. To resolve this contradiction, an "additional momentum term" is added on the basis of the SDBP and MOBP neural networks: a correction value proportional to the previous weight or threshold change is added to the correction of each

Figure 5: Error surface plot (sum squared error as a function of weight W and bias B, shown as an error surface and an error contour).


layer's weight or threshold, and the new weight (or threshold) correction is recalculated according to the backpropagation method. The specific adjustment formulas are as follows:

    Δw_{ij}(n) = λ·Δw_{ij}(n − 1) + ζ·δ_j(t)·v_i(n),  (7)

    Δb_i(n) = λ·Δb_i(n − 1) + ζ·δ_i(t),  (8)

where n is the number of trainings, λ is the momentum term, ζ is the learning step size, Δw_{ij}(n) is the weight correction of the n-th training, Δb_i(n) is the threshold correction of the n-th training, and δ_j(t)·v_i(n) is the gradient vector of the n-th training. Writing equation (7) as a time series with variable t ∈ [0, n], it can be regarded as a first-order difference equation in Δw_{ij}(n), whose solution is as follows:

    Δw_{ij}(n) = ζ·Σ_{t=0}^{n} λ^{n−t}·δ_j(t)·v_i(n),   δ_j(t)·v_i(n) = −∂E(n)/∂w_{ij}(n),  (9)

where δ_j(t)·v_i(n) is understood as the gradient vector of the n-th training; then

    Δw_{ij}(n) = −ζ·Σ_{t=0}^{n} λ^{n−t}·∂E(n)/∂w_{ij}(n).  (10)

In the training process of the additional momentum method, to prevent the corrected weights from having an excessive influence on the error, the momentum term λ must be constrained; that is, the judgment condition of the additional momentum method during training is

    λ = ⎧ 0,     E(n) > 1.04·E(n − 1),
        ⎨ 0.95,  E(n) < E(n − 1),       (11)
        ⎩ λ,     otherwise.

However, the hidden layer of the additional-momentum BP neural network generally uses an S-type (sigmoid) activation function. The S-type function is often called a "squeeze" function because it compresses an infinite input range into a limited output range. When the input is large in magnitude, the slope approaches zero, which makes the gradient amplitude in the algorithm tiny and renders the correction process meaningless. In response to this problem, the elastic gradient descent method is introduced as a further improvement. The elastic gradient descent method does not consider the amplitude of the partial derivative; only the sign of the partial derivative is used to determine the direction of the weight update, while a revised "update value" determines the size of the weight change. If the sign of the partial derivative of the objective function with respect to a given weight is unchanged over two consecutive iterations, the corresponding "update value" is increased; otherwise it is decreased. Using the additional momentum method and the elastic gradient descent method to improve the BP neural network at the same time governs the range and size of the corrections, as well as the direction of the weight updates. It can also largely prevent the training process from falling into a local minimum and speed up convergence.
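The two mechanisms described above can be sketched together: the momentum judgment rule of equation (11), and a sign-based elastic (resilient) gradient step. The increase/decrease factors 1.2 and 0.5 and the step bounds are conventional Rprop choices assumed here, not values given in the paper, and the toy quadratic objective is only for illustration.

```python
import numpy as np

def momentum_term(lam, E_now, E_prev):
    """Judgment condition of equation (11) for the momentum term."""
    if E_now > 1.04 * E_prev:
        return 0.0            # error grew too much: switch momentum off
    if E_now < E_prev:
        return 0.95           # error fell: use strong momentum
    return lam                # otherwise keep the current value

def elastic_step(delta, g_now, g_prev, inc=1.2, dec=0.5,
                 d_min=1e-6, d_max=50.0):
    """Elastic gradient descent: only the sign of the partial derivative
    is used; inc/dec are conventional Rprop factors (assumed here)."""
    same = g_now * g_prev
    delta = np.where(same > 0, np.minimum(delta * inc, d_max), delta)
    delta = np.where(same < 0, np.maximum(delta * dec, d_min), delta)
    return delta, -np.sign(g_now) * delta  # new step sizes, weight update

# Toy usage on E(w) = sum(w^2), whose gradient is 2w.
w = np.array([3.0, -2.0])
delta = np.full_like(w, 0.1)
g_prev = np.zeros_like(w)
lam, E_prev = 0.95, float("inf")
for _ in range(60):
    g = 2.0 * w
    delta, dw = elastic_step(delta, g, g_prev)
    w, g_prev = w + dw, g
    E = float(np.sum(w ** 2))
    lam, E_prev = momentum_term(lam, E, E_prev), E
```

Because the update direction depends only on the gradient's sign, the step sizes grow while the error keeps falling and shrink as soon as a weight overshoots its minimum, which is the behavior the section attributes to the elastic method.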

4. Experiments

4.1. Experimental Environment. The experiments in this paper train a deep neural network; the scale is large, the structure is relatively complex, and the amount of computation is considerable. The programming language used is Python 3.6, the deep learning framework is Keras 2.1.5, and the IDE used for deployment is PyCharm. All experiments are conducted in the same environment, on a desktop PC with an Intel Core i7-8700 processor and an NVIDIA GeForce GTX 1080 Ti GPU.

4.2. Dataset Preprocessing. The data set used in this paper was collected from martial arts competitions. Singular values and unstable values are removed, and training and test sets are created.

4.3. Experimental Results. All experiments are performed on the same platform, and the results are expressed in terms of error rate and prediction time. The results comparing the algorithms are shown in Table 1.

As can be seen from Table 1, the MOBP neural network adds a momentum term to the SDBP neural network to modify the weights and thresholds; its test error rate is about 2 percentage points lower than that of the SDBP neural network. The new algorithm in this paper improves the selection of the learning step size and of the size and direction of the weights, and constrains the choice of momentum and learning rate, making it more efficient: its error rate is significantly lower than those of the other algorithms, by nearly 5 percentage points. The comparison of error rates in Table 1 fully shows that the error rate of the new BP neural network is significantly lower than those of the other two BP neural networks, giving a better prediction effect.

As can be seen from Table 2, the improvement of the MOBP neural network over the SDBP neural network is the introduction of a momentum term; that is, a larger learning rate can be used without paralyzing the learning process. The MOBP network always accelerates the correction along the same gradient direction, so it converges faster than the SDBP network and its learning time is shorter. The new algorithm further improves the selection of the initial learning step, the size and direction of the weights, and the momentum term, and it has clear criteria for selecting the learning rate; that is, the new algorithm can fine-tune the weight corrections and increase the learning rate within a reasonable range, thereby accelerating convergence and reducing


the prediction time. The comparison of prediction times clearly shows that the new algorithm's prediction time is significantly lower than that of the other two algorithms.

5. Conclusion

The formulation of martial arts competitive strategies requires scientific analysis of competitive data and more accurate predictions. Based on this observation, this paper combines popular neural network technology and proposes a novel additional momentum-elastic gradient descent-adaptive learning rate BP neural network. The algorithm improves on the traditional BP neural network in areas such as the selection of the learning step size, the difficulty of determining the size and direction of the weights, and the hard-to-control learning rate. The experimental results show that this paper's algorithm achieves good improvements in both network scale and running time, and can predict martial arts competition routines and support the formulation of scientific strategies.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] V. Romanenko, L. Podrigalo, W. J. Cynarski et al., "A comparative analysis of the short-term memory of martial arts athletes of different levels of sportsmanship," Journal of Martial Arts Anthropology, vol. 20, no. 3, pp. 18–24, 2020.

[2] A. Harwood-Gross, R. Feldman, O. Zagoory-Sharon, and Y. Rassovsky, "Hormonal reactivity during martial arts practice among high-risk youths," Psychoneuroendocrinology, vol. 121, Article ID 104806, 2020.

[3] R. B. Santos Jr., A. C. Utter, S. R. McAnulty et al., "Weight loss behaviors in Brazilian mixed martial arts athletes," Sport Sciences for Health, vol. 16, no. 1, pp. 117–122, 2020.

[4] X. Ning, W. Li, and W. Liu, "A fast single image haze removal method based on human retina property," IEICE Transactions on Information and Systems, vol. E100.D, no. 1, pp. 211–214, 2017.

[5] Z. L. Yang, S. Y. Zhang, Y. T. Hu, Z. W. Hu, and Y. F. Huang, "VAE-Stega: linguistic steganography based on variational auto-encoder," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 880–895, 2020.

[6] X. Ning, W. Li, and J. Xu, "The principle of homology continuity and geometrical covering learning for pattern recognition," International Journal of Pattern Recognition and Artificial Intelligence, vol. 32, no. 12, Article ID 1850042, 2018.

[7] W. Cai and Z. Wei, "Remote sensing image classification based on a cross-attention mechanism and graph convolution," IEEE Geoscience and Remote Sensing Letters, pp. 1–5, 2020.

[8] X. Ning, W. Li, and L. Sun, "Age estimation method based on generative adversarial networks," DEStech Transactions on Computer Science and Engineering, 2017.

[9] W. Cai, B. Liu, Z. Wei, M. Li, and J. Kan, "TARDB-Net: triple-attention-guided residual dense and BiLSTM networks for hyperspectral image classification," Multimedia Tools and Applications, vol. 80, no. 2, pp. 1–22, 2021.

[10] N. Xin, L. Weijun, L. Haoguang, and L. Wenjie, "Uncorrelated locality preserving discriminant analysis based on bionics," Journal of Computer Research and Development, vol. 53, no. 11, p. 2623, 2016.

[11] Z. Wang, C. Zou, and W. Cai, "Small sample classification of hyperspectral remote sensing images based on sequential joint deeping learning model," IEEE Access, vol. 8, pp. 71353–71363, 2020.

[12] X. Zhang, Y. Yang, Z. Li, X. Ning, Y. Qin, and W. Cai, "An improved encoder-decoder network based on strip pool method applied to segmentation of farmland vacancy field," Entropy, vol. 23, no. 4, p. 435, 2021.

[13] W. Cai and Z. Wei, "PiiGAN: generative adversarial networks for pluralistic image inpainting," IEEE Access, vol. 8, pp. 48451–48463, 2020.

[14] X. Ning, X. Wang, S. Xu et al., "A review of research on co-training," Concurrency and Computation: Practice and Experience, 2021, in press.

[15] I. Portela-Pino, A. Lopez-Castedo, M. J. Martínez-Patiño, T. Valverde-Esteve, and J. Domínguez-Alonso, "Gender differences in motivation and barriers for the practice of physical exercise in adolescence," International Journal of Environmental Research and Public Health, vol. 17, no. 1, p. 168, 2020.

[16] A. De Sire, R. de Sire, V. Petito et al., "Gut-joint axis: the role of physical exercise on gut microbiota modulation in older people with osteoarthritis," Nutrients, vol. 12, no. 2, p. 574, 2020.

[17] V. Malkin, S. Serpa, A. Garcia-Mas, and E. Shurmanov, "A new paradigm in modern sports psychology," Revista de Psicología del Deporte (Journal of Sport Psychology), vol. 29, no. 2, pp. 149–152, 2020.

[18] R. Summerley, "The development of sports: a comparative analysis of the early institutionalization of traditional sports and e-sports," Games and Culture, vol. 15, no. 1, pp. 51–72, 2020.

[19] Q. S. Han, M. Theeboom, and D. Zhu, "Chinese martial arts and the Olympics: analysing the policy of the International Wushu Federation," International Review for the Sociology of Sport, 2020, in press.

[20] C. Hongtu and Z. Cai, "The formal evolution of Chinese martial arts action movies and the spread of core values," The Frontiers of Society, Science and Technology, vol. 2, no. 4, 2020.

[21] V. Kurkova, "Kolmogorov's theorem and multilayer neural networks," Neural Networks, vol. 5, no. 3, pp. 501–506, 1992.

[22] W. Jin, Z. J. Li, L. S. Wei, and H. Zhen, "The improvements of BP neural network learning algorithm," in Proceedings of the WCC 2000 - ICSP 2000 (5th International Conference on Signal Processing / 16th World Computer Congress), vol. 3, pp. 1647–1649, IEEE, Beijing, China, August 2000.

[23] S. W. Piche, "Steepest descent algorithms for neural network controllers and filters," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 198–212, 1994.

8 Journal of Healthcare Engineering

Page 5: Martial Arts Competitive Decision-Making Algorithm Based

It can be seen from Figure 5 that there are too manyminimum points on the error surface because the transferfunction of the SDBP neural network is nonlinear

322 MOBP Neural Network MOBP neural network in-troduces momentum factor η(0lt ηlt 1) based on SDBPneural network

Δwk+1 ηΔwk + α(1 minus η)zEk

zwk

(6)

where η is the momentum factor wk is the weight of the k thiteration Δwk is the weight correction of the k th iteration αis the learning rate and Ek is the total error performancefunction

e new correction amount of the algorithm is closelyrelated to the previous correction result and restrictedSuppose the last amount correction is too significant In thatcase the sign of the second term of equation 6 is the oppositeof the previous correction amount resulting in the currentcorrection If the last revision is too minor the second termof equation 6 is the same as the previous correction sign As aresult the correction is increased and the training speed willbe improved for the network erefore the MOBP neuralnetwork essentially increases the correction in the samegradient direction that is increases the ldquomomentumrdquo in thesame gradient direction by increasing the momentum factorη reasonably

In the MOBP neural network compared to the SDBPneural networkrsquos learning rate the MOBP neural networkrsquoslearning rate can be increased to a greater extent Still theincreased learning rate will not make the network trainingimpossible and paralyzedere are two reasons On the onehand when the degree of correction is too large the algo-rithm can correct to decrease ensuring that the correctiondirection still maintains the direction of convergence On theother hand MOBP neural network does the same by in-troducing momentum factor e correction amount of thegradient direction is reasonably increased to increase thelearning rate From this it can be obtained that based onensuring the algorithmrsquos stability the MOBP neural net-workrsquos learning time is shortened and the convergencespeed has been dramatically improved

323 Limitations of BP Neural Network When applyingartificial neural network modeling in related fields mostneural network models use BP neural network SDBP neuralnetwork MOBP neural network and other improved formsHowever BP neural network still has its limitations

(1) e limitations of the network itself the BP neuralnetwork is a static transmission network It can onlycomplete nonlinear static mapping safely and doesnot have the information processing function thatchanges dynamically with time or space

(2) Selection of initial parameters It is difficult for BPneural network to select initial parameters especiallythe size and direction of initial weights andthresholds Although there are correction formulas

once the initial values are determined it is equivalentto determining the network convergence Whennetwork learning is over it will generally converge tothe local extreme value closest to the initial valuedirectly affecting the difference between the localextreme pointrsquos error and the ideal error

(3) e choice of learning step length e training steplength in BP neural network is a fixed value and thepath left by the gradient method is zigzag whenapproaching the minimum point e closer to theminimum point the faster the convergence speederefore to reduce the error the training steplength should be set as small as possible If the steplength is too small the training time will increasewhich leads to the difficulty of establishing thetraining step length To obtain relevant learningresults it generally needs to be rich practical expe-rience or repeated adjustments to the step length

(4) e choice of learning rate If the learning rate isselected too large it will make learning unstable onthe contrary if the learning rate is chosen too smallit may increase the learning time and cause networkparalysis At present a fast and reasonable methodhas not been explored to solve the difficulty ofselecting the learning rate of BP neural network

33 ImprovedBPNeuralNetwork Because of the limitationsof the SDBP neural network that is MOBP neural networkand BP neural network this paper first improves the se-lection of learning step size and the size and direction ofweights then enhances the selection of learning rate that isthe new algorithm that is BP neural network with addi-tional momentum-elastic gradient descent-adaptive learn-ing rate

e selection of the learning step length in the BP neuralnetwork is essential If the step length is too large thenetwork will converge quickly but it will cause instability innetwork learning too small a step length can avoid insta-bility but the convergence speed will slow down Solving thiscontradiction it is proposed to add an ldquoadditional mo-mentum termrdquo based on SDBP neural network and MOBPneural network add a correction value proportional to theprevious weight or threshold value in the correction of each

Error surface Error contour

Sum sq

uared error

Weight W

Bias B

Bias B

060402

ndash02ndash04ndash06

0

ndash4ndash2

ndash2ndash2

0

0

00

2

2

2

2 44

4

ndash4 ndash4ndash4

ndash3

ndash2

ndash1

1

3

4

Figure 5 Error surface plot

Journal of Healthcare Engineering 5

layer weight or threshold and recalculate the new weight (orthreshold) correction amount according to the back-propagation method e specific adjustment formula is asfollows

Δwij(n) λΔwij(n minus 1) + ζδj(t)vi(n) (7)

Δbi(n) λΔbi(n minus 1) + ζδi(t) (8)

where n is the number of trainings λ is the momentum termζ is the learning step size Δwij(n) is the correction of theweight of the n th trainingΔbi(n) is the correction of the n thtraining threshold and δj(t)vi(n) is the n th training gra-dient vector Write equation 8 as a time series t as a variablet isin [0 n] then equation 8 can be regarded as the first-orderdifference equation Δwij(n) e calculation equation is asfollows

Δwij(n) ζ 1113944n

t0λnminus 1δj(t)vi(n)

δj(t)vi(n) minuszE(n)

zwij(n)

(9)

where δj(t)vi(n) is understood as the gradient vector of the n

th training then

Δwij(n) minusζ 1113944n

t0λnminus t zE(n)

zwij(n) (10)

In the training process of the additional momentummethod to prevent the corrected weight from having anexcessive influence on the error the momentum term λneeds to be strengthened that is the judgment condition ofthe additional momentum method in the training process is

λ

0 E(n)gt 104E(n minus 1)

095 E(n)ltE(n minus 1)

λ otherwise

⎧⎪⎪⎨

⎪⎪⎩(11)

However the additional momentum BP neural networkalgorithmrsquos hidden layer generally uses an S-type activationfunction e S-type function is often called a ldquosqueezerdquofunction because it can compress an infinite input range to alimited output range When the input is significant the slopeis almost close It will cause the gradient amplitude in thealgorithm to be tiny making the correction processmeaningless In response to this problem the elastic gradientdescent method is introduced to improve again e elasticgradient descent method does not study the amplitude of thepartial derivative Only the partial derivative sign is studiedto determine the weightrsquos update direction and the revisedldquoupdate value determines the size of the weight changerdquo Ifthe sign of the partial derivative of the objective function to acertain weight is in two consecutive iterations and if themedium is unchanged increase the corresponding ldquoupdatevaluerdquo otherwise decrease the corresponding ldquoupdatevaluerdquo Use the additional momentum method and theelastic gradient descent method to improve the BP neuralnetwork at the same time It will provide the range the size

of the correction the number of weights and the direction ofupdating the weights It can also significantly prevent thetraining process from falling into a local minimum andspeed up the convergence speed

4 Experiments

41 Experimental Environment Since the experiment in thispaper needs to train a deep neural network the scale is largethe structure is more complex and the calculation scale isenormous e programming language used is Python theversion is 36 the deep learning framework used is Keras215 and the IDE for program deployment is Pycharm Allexperiments are conducted in the same environment All ourexperiments have been conducted on a desktop PC with anIntel Core i7-8700 processor and an NVIDIA GeForce GTX1080ti GPU

42 Datasets Preprocessing e data set used in this paper iscollected in martial arts competitions is paper removessingular values and unstable values and creates training setsand test sets

43 Experimental Results All experiments are performed onthe same platform and the experimental results areexpressed in terms of error rate and prediction time eexperimental results comparing various algorithms areshown in Table 1 below

It can be seen from Table 1 that the MDBP neuralnetwork adds a momentum term based on the SDBP neuralnetwork to modify the weight and threshold Its test errorrate is lower than that of the DBP neural network About 2percentage points reduce the error rate e new algorithmin this paper has improved the selection of learning step sizeand the size and direction of weights and limited the choiceof momentum and learning rate making the new algorithmmore efficient e error rate is significantly lower than thatof other algorithms and nearly 5 percentage points sig-nificantly reduce the new algorithm From the comparisonof error rates in Table 1 it can be fully seen that the error rateof the new BP neural network is significantly lower than theother two BP neural network lines and has a better pre-diction effect

It can be seen from Table 2 that the improvement of theMOBP neural network over the SDBP neural network is theintroduction of a momentum term that is a more signifi-cant learning rate can be used without paralyzing thelearning process e MOBP neural network always accel-erates the same gradient direction e correction amountthat is the MABP neural network has a faster convergencerate than the SDBP neural network and the learning time isshorter e new algorithm is more aimed at the selection ofthe first learning step e size and direction of the weightsare improved and the momentum term is improved ereare clear criteria for selecting learning rate that is the newalgorithm can fine-tune the correction of weights and in-crease the learning rate within a reasonable range and at thesame time accelerate the convergence speed that is reduce

6 Journal of Healthcare Engineering

The comparison of prediction times clearly shows that the new algorithm's prediction time is significantly lower than that of the other two algorithms.
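The acceleration behaviour described above — the momentum term keeps enlarging the step while the gradient direction is unchanged — can be seen in a few lines. The learning rate and momentum coefficient below are illustrative values, not the paper's settings.

```python
def momentum_step(v, g, lr=0.01, beta=0.9):
    """Update the velocity: it accumulates past gradients, so the
    effective step grows while the gradient g keeps the same sign."""
    return beta * v - lr * g

# With a constant gradient g = 1, the step size grows from lr toward
# lr / (1 - beta), i.e. ten times the plain gradient-descent step.
v = 0.0
sizes = []
for _ in range(100):
    v = momentum_step(v, 1.0)
    sizes.append(-v)
print(sizes[0], sizes[-1])  # 0.01 at first, approaching 0.1
```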

5. Conclusion

The formulation of martial arts competitive strategies requires scientific analysis of competitive data and more accurate predictions. Based on this observation, this paper combines popular neural network technology and proposes a novel additional momentum-elastic gradient descent-adaptive learning rate BP neural network. The algorithm improves on the traditional BP neural network in the areas where it is weak, such as selecting the learning step size, determining the size and direction of the weights, and controlling the learning rate. The experimental results show that the proposed algorithm improves on both network scale and running time and can predict martial arts competition routines and support the formulation of scientific strategies.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The author declares that there are no conflicts of interest.

References

[1] V. Romanenko, L. Podrigalo, W. J. Cynarski et al., "A comparative analysis of the short-term memory of martial arts athletes of different levels of sportsmanship," Journal of Martial Arts Anthropology, vol. 20, no. 3, pp. 18–24, 2020.

[2] A. Harwood-Gross, R. Feldman, O. Zagoory-Sharon, and Y. Rassovsky, "Hormonal reactivity during martial arts practice among high-risk youths," Psychoneuroendocrinology, vol. 121, Article ID 104806, 2020.

[3] R. B. Santos Jr., A. C. Utter, S. R. McAnulty et al., "Weight loss behaviors in Brazilian mixed martial arts athletes," Sport Sciences for Health, vol. 16, no. 1, pp. 117–122, 2020.

[4] X. Ning, W. Li, and W. Liu, "A fast single image haze removal method based on human retina property," IEICE Transactions on Information and Systems, vol. E100.D, no. 1, pp. 211–214, 2017.

[5] Z. L. Yang, S. Y. Zhang, Y. T. Hu, Z. W. Hu, and Y. F. Huang, "VAE-Stega: linguistic steganography based on variational auto-encoder," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 880–895, 2020.

[6] X. Ning, W. Li, and J. Xu, "The principle of homology continuity and geometrical covering learning for pattern recognition," International Journal of Pattern Recognition and Artificial Intelligence, vol. 32, no. 12, Article ID 1850042, 2018.

[7] W. Cai and Z. Wei, "Remote sensing image classification based on a cross-attention mechanism and graph convolution," IEEE Geoscience and Remote Sensing Letters, pp. 1–5, 2020.

[8] X. Ning, W. Li, and L. Sun, "Age estimation method based on generative adversarial networks," DEStech Transactions on Computer Science and Engineering, 2017.

[9] W. Cai, B. Liu, Z. Wei, M. Li, and J. Kan, "TARDB-Net: triple-attention-guided residual dense and BiLSTM networks for hyperspectral image classification," Multimedia Tools and Applications, vol. 80, no. 2, pp. 1–22, 2021.

[10] N. Xin, L. Weijun, L. Haoguang, and L. Wenjie, "Uncorrelated locality preserving discriminant analysis based on bionics," Journal of Computer Research and Development, vol. 53, no. 11, p. 2623, 2016.

[11] Z. Wang, C. Zou, and W. Cai, "Small sample classification of hyperspectral remote sensing images based on sequential joint deeping learning model," IEEE Access, vol. 8, pp. 71353–71363, 2020.

[12] X. Zhang, Y. Yang, Z. Li, X. Ning, Y. Qin, and W. Cai, "An improved encoder-decoder network based on strip pool method applied to segmentation of farmland vacancy field," Entropy, vol. 23, no. 4, p. 435, 2021.

[13] W. Cai and Z. Wei, "PiiGAN: generative adversarial networks for pluralistic image inpainting," IEEE Access, vol. 8, pp. 48451–48463, 2020.

[14] X. Ning, X. Wang, S. Xu et al., "A review of research on co-training," Concurrency and Computation: Practice and Experience, 2021, in press.

[15] I. Portela-Pino, A. Lopez-Castedo, M. J. Martínez-Patiño, T. Valverde-Esteve, and J. Domínguez-Alonso, "Gender differences in motivation and barriers for the practice of physical exercise in adolescence," International Journal of Environmental Research and Public Health, vol. 17, no. 1, p. 168, 2020.

[16] A. De Sire, R. de Sire, V. Petito et al., "Gut-joint axis: the role of physical exercise on gut microbiota modulation in older people with osteoarthritis," Nutrients, vol. 12, no. 2, p. 574, 2020.

[17] V. Malkin, S. Serpa, A. Garcia-Mas, and E. Shurmanov, "A new paradigm in modern sports psychology," Revista de Psicología del Deporte (Journal of Sport Psychology), vol. 29, no. 2, pp. 149–152, 2020.

Table 1. Test error rate (%) of the BP neural networks for each martial arts event type.

Type | SDBP | MOBP | Ours
Optional boxing | 6.23 | 4.85 | 1.37
Prescribed boxing | 7.35 | 5.97 | 1.78
Traditional boxing | 7.69 | 6.23 | 2.03
Short instrument | 8.23 | 5.14 | 1.99
Long instrument | 6.58 | 4.69 | 1.87
Double instrument | 7.69 | 5.28 | 2.01
Soft instrument | 8.59 | 4.99 | 1.89

Table 2. Prediction time (s) of the BP neural networks for each martial arts event type.

Type | SDBP | MOBP | Ours
Optional boxing | 3.73 | 2.36 | 1.72
Prescribed boxing | 3.45 | 2.45 | 1.83
Traditional boxing | 3.59 | 2.29 | 1.52
Short instrument | 3.65 | 2.38 | 1.63
Long instrument | 3.59 | 2.49 | 1.52
Double instrument | 3.75 | 2.58 | 1.72
Soft instrument | 3.59 | 2.49 | 1.83

[18] R. Summerley, "The development of sports: a comparative analysis of the early institutionalization of traditional sports and e-sports," Games and Culture, vol. 15, no. 1, pp. 51–72, 2020.

[19] Q. S. Han, M. Theeboom, and D. Zhu, "Chinese martial arts and the Olympics: analysing the policy of the International Wushu Federation," International Review for the Sociology of Sport, 2020, in press.

[20] C. Hongtu and Z. Cai, "The formal evolution of Chinese martial arts action movies and the spread of core values," The Frontiers of Society, Science and Technology, vol. 2, no. 4, 2020.

[21] V. Kurkova, "Kolmogorov's theorem and multilayer neural networks," Neural Networks, vol. 5, no. 3, pp. 501–506, 1992.

[22] W. Jin, Z. J. Li, L. S. Wei, and H. Zhen, "The improvements of BP neural network learning algorithm," in Proceedings of the WCC 2000-ICSP 2000: 5th International Conference on Signal Processing, 16th World Computer Congress, vol. 3, pp. 1647–1649, IEEE, Beijing, China, August 2000.

[23] S. W. Piche, "Steepest descent algorithms for neural network controllers and filters," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 198–212, 1994.


