
trainlm (Neural Network Toolbox)

trainlm
Levenberg-Marquardt backpropagation

Syntax
[net,TR] = trainlm(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainlm(code)

Description
trainlm is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization.

trainlm(net,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,

  net - Neural network.
  Pd  - Delayed input vectors.
  Tl  - Layer target vectors.
  Ai  - Initial input delay conditions.
  Q   - Batch size.
  TS  - Time steps.
  VV  - Either an empty matrix [] or a structure of validation vectors.
  TV  - Either an empty matrix [] or a structure of test vectors.

and returns,

  net - Trained network.
  TR  - Training record of various values over each epoch:
    TR.epoch - Epoch number.
    TR.perf  - Training performance.
    TR.vperf - Validation performance.
    TR.tperf - Test performance.
    TR.mu    - Adaptive mu value.

Training occurs according to trainlm's training parameters, shown here with their default values:

  net.trainParam.epochs      100   Maximum number of epochs to train
  net.trainParam.goal          0   Performance goal
  net.trainParam.max_fail      5   Maximum validation failures
  net.trainParam.mem_reduc     1   Factor to use for memory/speed tradeoff
  net.trainParam.min_grad  1e-10   Minimum performance gradient
  net.trainParam.mu        0.001   Initial mu
  net.trainParam.mu_dec      0.1   mu decrease factor
  net.trainParam.mu_inc       10   mu increase factor
  net.trainParam.mu_max     1e10   Maximum mu
  net.trainParam.show         25   Epochs between showing progress
  net.trainParam.time        inf   Maximum time to train in seconds
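
These defaults can be inspected and overridden on any network whose trainFcn is 'trainlm'. A minimal sketch; the particular values chosen here are arbitrary illustrations, not recommendations:

  net.trainFcn = 'trainlm';       % selecting trainlm loads its default parameters
  net.trainParam.epochs   = 300;  % allow more iterations than the default 100
  net.trainParam.goal     = 1e-5; % stop once performance reaches 1e-5
  net.trainParam.max_fail = 10;   % tolerate more validation failures in a row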

Dimensions for these variables are:

  Pd - No x Ni x TS cell array, each element Pd{i,j,ts} is a Dij x Q matrix.
  Tl - Nl x TS cell array, each element Tl{i,ts} is a Vi x Q matrix.
  Ai - Nl x LD cell array, each element Ai{i,k} is an Si x Q matrix.

where

  Ni  = net.numInputs
  Nl  = net.numLayers
  LD  = net.numLayerDelays
  Ri  = net.inputs{i}.size
  Si  = net.layers{i}.size
  Vi  = net.targets{i}.size
  Dij = Ri * length(net.inputWeights{i,j}.delays)
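
For a concrete feel for these sizes, consider a hypothetical static two-layer network with one input and no delays. The newff call uses the pre-R2010 syntax current when this page was written:

  net = newff([0 1], [4 1], {'tansig','purelin'}, 'trainlm');
  net.numInputs       % Ni = 1
  net.numLayers       % Nl = 2
  net.numLayerDelays  % LD = 0 for this static network
  net.layers{1}.size  % S1 = 4 (hidden layer size)
  net.layers{2}.size  % S2 = 1 (output layer size)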

If VV or TV is not [], it must be a structure of vectors:

  VV.PD, TV.PD - Validation/test delayed inputs.
  VV.Tl, TV.Tl - Validation/test layer targets.
  VV.Ai, TV.Ai - Validation/test initial input conditions.
  VV.Q,  TV.Q  - Validation/test batch size.
  VV.TS, TV.TS - Validation/test time steps.

Validation vectors are used to stop training early if the network performance on the validation vectors fails to improve or remains the same for max_fail epochs in a row. Test vectors are used as a further check that the network is generalizing well, but do not have any effect on training.
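
In practice you rarely build these structures by hand; train assembles the delayed inputs internally. A sketch of early stopping using the old train calling sequence train(net,P,T,Pi,Ai,VV,TV), where VV takes user-level P and T fields; the data and validation split here are made up:

  P  = 0:0.05:1;   T  = sin(2*pi*P);       % made-up training data
  VV.P = 0:0.1:1;  VV.T = sin(2*pi*VV.P);  % made-up validation set
  net = newff([0 1], [5 1], {'tansig','purelin'}, 'trainlm');
  [net, tr] = train(net, P, T, [], [], VV);  % stops early if validation
                                             % performance stalls for
                                             % max_fail epochs in a row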
trainlm(code) returns useful information for each code string:

  'pnames'    - Names of training parameters.
  'pdefaults' - Default training parameters.
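
For example, to query both from the command line:

  defaults = trainlm('pdefaults')  % structure of default training parameters
  names    = trainlm('pnames')     % the corresponding parameter names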

Network Use

You can create a standard network that uses trainlm with newff, newcf, or newelm.

To prepare a custom network to be trained with trainlm:

1. Set net.trainFcn to 'trainlm'. This will set net.trainParam to trainlm's default parameters.
2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network will train the network with trainlm.
See newff, newcf, and newelm for examples.
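
Putting the pieces together, a minimal end-to-end sketch using the pre-R2010 newff syntax; the data and layer sizes are arbitrary:

  P = [0 1 2 3 4 5];             % made-up inputs
  T = [0 1 4 9 16 25];           % made-up targets
  net = newff(minmax(P), [6 1], {'tansig','purelin'}, 'trainlm');
  net.trainParam.epochs = 200;   % override a default before training
  net.trainParam.show   = 10;    % report progress every 10 epochs
  net = train(net, P, T);        % train dispatches to trainlm
  Y = sim(net, P);               % simulate the trained network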

Algorithm

trainlm can train any network as long as its weight, net input, and transfer functions have derivative functions.

Backpropagation is used to calculate the Jacobian jX of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to Levenberg-Marquardt,

  jj = jX * jX
  je = jX * E
  dX = -(jj + I*mu) \ je

where E is all errors and I is the identity matrix.
The adaptive value mu is increased by mu_inc until the change above results in a reduced performance value. The change is then made to the network, and mu is decreased by mu_dec.
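
Written out, one Levenberg-Marquardt step looks roughly like the sketch below. With the Jacobian J stored in the usual errors-by-variables orientation the products need a transpose, which the shorthand above omits; calc_jacobian and perf_at are placeholders, not toolbox functions:

  % One hypothetical LM iteration (J: numErrors x numVariables)
  [J, E] = calc_jacobian(net, X);      % placeholder for backpropagation
  jj = J' * J;                         % approximate Hessian
  je = J' * E;                         % gradient of the squared-error performance
  I  = eye(size(jj));
  while true
      dX = -(jj + I*mu) \ je;          % candidate LM step
      if perf_at(X + dX) < perf_at(X)  % placeholder performance check
          X  = X + dX;                 % accept the step
          mu = mu * mu_dec;            % trust the Gauss-Newton direction more
          break
      else
          mu = mu * mu_inc;            % fall back toward gradient descent
          if mu > mu_max, break, end   % give up if mu explodes
      end
  end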
The parameter mem_reduc indicates how to trade off memory and speed when calculating the Jacobian jX. If mem_reduc is 1, then trainlm runs the fastest, but can require a lot of memory. Increasing mem_reduc to 2 cuts some of the memory required by a factor of two, but slows trainlm somewhat. Higher values continue to decrease the amount of memory needed and increase training times.
Training stops when any of these conditions occurs:

  The maximum number of epochs (repetitions) is reached.
  The maximum amount of time has been exceeded.
  Performance has been minimized to the goal.
  The performance gradient falls below min_grad.
  mu exceeds mu_max.
  Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
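
The training record TR returned above makes it easy to see which condition fired; a small sketch, assuming the TR fields listed earlier on this page:

  [net, TR] = train(net, P, T);
  plot(TR.epoch, TR.perf)  % training performance per epoch
  TR.mu(end)               % final mu; a value near mu_max points to that stop
  length(TR.epoch)         % epochs actually run vs. net.trainParam.epochs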

See Also

newff, newcf, traingd, traingdm, traingda, traingdx
