Change the input of tansig with a scalar multiplier

Hi, I want to train my neural network using MATLAB commands, but the problem is that the transfer function in my network is tanh(m*x), where m may be 0.01, 0.001, or 2. The derivative of the function is m*(1 - tanh(m*x).^2).
Any suggestions?
Thanks in advance.

2 Comments

I don't understand your problem.
What version of the NNTBX are you using?
Details?
Code??
mangood UK on 5 Jan 2013
Edited: mangood UK on 5 Jan 2013
Dear Greg, I have MATLAB 2012. I have this neural network: net = newff([0 1;0 1;0 1],[4 1],{'tansig','purelin'})
The problem is that I want to replace tansig with tanh(3*x) and train the network using net = train(net,I,O);
Can we do this in MATLAB? Any suggestion to solve the problem?


 Accepted Answer

Greg Heath on 6 Jan 2013
Edited: Greg Heath on 8 Jan 2013
What you want to do is totally unnecessary. What is your reason for wanting to do this?
tanh = tansig.
The weight initialization logic in init automatically determines an optimal range for the initial weights. The final weights needed are automatically determined by the training algorithm.
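The reason the multiplier is redundant can be checked numerically: a scalar m inside the transfer function factors into the input weights and bias, which train adjusts anyway. A minimal sketch with made-up weight and input values (all numbers here are hypothetical, for illustration only):

```matlab
% A scalar m inside tanh is equivalent to rescaling the weights and bias:
% tanh( m*(IW*x + b) ) == tanh( (m*IW)*x + (m*b) )
m  = 0.01;
IW = [ 0.5 -1.2 0.3 ];         % example input weights (1 hidden node, 3 inputs)
b  = 0.7;                      % example bias
x  = [ 0.2; 0.9; -0.4 ];       % example input vector
h1 = tanh( m*( IW*x + b ) );   % tanh(m*n), as proposed in the question
h2 = tanh( (m*IW)*x + (m*b) ); % same value with rescaled weights and bias
maxdiff = abs( h1 - h2 )       % agrees to machine precision
```

So whatever m the project requires, training the standard tansig layer can reach the same function by absorbing m into IW and b.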
Since your very obsolete version of newff does not automatically normalize the input to the range [-1,1], you can test what happens if you keep the same target but multiply all of the inputs by the same constant (equivalent to using the multiplier in tanh).
[ x0, t ] = simplefit_dataset;
[ O, N ] = size(t)                       % O = 1 output
MSE00 = mean(var(t',1))                  % Reference MSE
H = 6
nmax = 10
Ntrials = 10
rng(0)
for n = 1:nmax
    x = n*x0;                            % scale the inputs by the multiplier n
    for i = 1:Ntrials
        net = newff( minmax(x), [ H O ], {'tansig' 'purelin'});
        net.trainParam.goal = 0.01*MSE00;
        [ net, tr ] = train(net,x,t);
        time(i,n)    = tr.time(end);
        Nepochs(i,n) = tr.epoch(end);
        MSE          = tr.perf(end);
        R2(i,n)      = 1 - MSE/MSE00;
    end
end
n = 1:nmax
time = time
Nepochs = Nepochs
R2 = R2
Does looking at the summary statistics of time, Nepochs and R2 indicate any advantage to using a multiplier?
Hope this helps.
Thank you for formally accepting my answer.
Greg
P.S. If you have 2012, why are you using the very obsolete version of newff?
Does it have anything to do with the fact that the more recent obsolete version of newff ( net = newff(x,t,H) ) and the current function fitnet ( fitnet(H) ) normalize the inputs, by default, to the range [-1,1]?
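That default normalization is done by mapminmax, which can also be called directly. A small sketch with an arbitrary example vector, showing the mapping to [-1,1] and how to undo it:

```matlab
% mapminmax is the input-processing function the newer newff/fitnet
% apply by default; it rescales each input row to [-1,1].
x = [ 0 0.5 3 10 ];                  % example raw inputs (arbitrary values)
[ xn, ps ] = mapminmax(x);           % xn is x linearly mapped to [-1,1]
range = [ min(xn) max(xn) ]          % [-1 1]
xback = mapminmax('reverse',xn,ps);  % recover the original scale from ps
```

This is why manually multiplying the inputs (or the tansig argument) by a constant buys nothing with the current functions: the scaling is redone automatically.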

5 Comments

What is your reason for wanting to do this? -- I use tanh(m*x); my project requires it.
Does looking at the summary statistics of time, Nepochs and R2 indicate any advantage to using a multiplier? -- This code is complex for me.
I don't follow. The default algorithm already is
h = tanh(IW*x + b1);
y = LW*h + b2;
where IW is optimized to minimize mse(t-y).
Replace the x in the 1st equation by x0.
[ x0, t ] = simplefit_dataset;
Then cut and paste the code into the command line.
The results for 100 designs are tabulated in 3 10X10 matrices.
Columns show the results of varying the multiplier n in x = n*x0.
Rows show the effect of changing the random initial weights.
You can just peruse the columns, or calculate the min, mean, median, and max of the matrices, to see if there is any indication that increasing the scale of x (equivalent to increasing the scale of IW) is of any importance.
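One way to tabulate those summary statistics, assuming the time, Nepochs and R2 matrices from the loop above are still in the workspace (this is a sketch, not part of the original answer):

```matlab
% Each column of R2 corresponds to one multiplier n (x = n*x0);
% each row to one random weight initialization.
% Stacking per-column statistics gives one summary row per statistic:
R2summary = [ min(R2); mean(R2); median(R2); max(R2) ]
% If all columns look alike, the multiplier n has no real effect
% on the achievable fit. The same can be done for time and Nepochs:
timeSummary = [ min(time); mean(time); median(time); max(time) ]
```

Reading across a summary row compares multipliers; near-constant rows mean the rescaling is irrelevant, which is Greg's point.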
@Jan Is that sufficient or would you like me to do something more?


More Answers (0)
