trainrp

Resilient backpropagation


net.trainFcn = 'trainrp' sets the network trainFcn property.


[trainedNet,tr] = train(net,...) trains the network with trainrp.

trainrp is a network training function that updates weight and bias values according to the resilient backpropagation algorithm (Rprop).

Training occurs according to trainrp training parameters, shown here with their default values:

  • net.trainParam.epochs — Maximum number of epochs to train. The default value is 1000.

  • net.trainParam.show — Epochs between displays (NaN for no displays). The default value is 25.

  • net.trainParam.showCommandLine — Generate command-line output. The default value is false.

  • net.trainParam.showWindow — Show training GUI. The default value is true.

  • net.trainParam.goal — Performance goal. The default value is 0.

  • net.trainParam.time — Maximum time to train in seconds. The default value is inf.

  • net.trainParam.min_grad — Minimum performance gradient. The default value is 1e-5.

  • net.trainParam.max_fail — Maximum validation failures. The default value is 6.

  • net.trainParam.lr — Learning rate. The default value is 0.01.

  • net.trainParam.delt_inc — Increment to weight change. The default value is 1.2.

  • net.trainParam.delt_dec — Decrement to weight change. The default value is 0.5.

  • net.trainParam.delta0 — Initial weight change. The default value is 0.07.

  • net.trainParam.deltamax — Maximum weight change. The default value is 50.0.


Examples

This example shows how to train a feed-forward network with the trainrp training function to solve a problem with inputs p and targets t.

Create the inputs p and targets t for the problem you want the network to solve.

p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];

Create a two-layer feed-forward network with two hidden neurons and this training function.

net = feedforwardnet(2,'trainrp');

Train and test the network.

net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = net(p)

For more examples, see help feedforwardnet and help cascadeforwardnet.

Input Arguments


net — Input network, specified as a network object. To create a network object, use, for example, feedforwardnet or narxnet.

Output Arguments


trainedNet — Trained network, returned as a network object.

tr — Training record (epoch and perf), returned as a structure whose fields depend on the network training function (net.trainFcn). It can include fields such as:

  • Training, data division, and performance functions and parameters

  • Data division indices for training, validation, and test sets

  • Data division masks for training, validation, and test sets

  • Number of epochs (num_epochs) and the best epoch (best_epoch)

  • A list of training state names (states)

  • Fields for each state name recording its value throughout training

  • Performances of the best network (best_perf, best_vperf, best_tperf)

More About


Network Use

You can create a standard network that uses trainrp with feedforwardnet or cascadeforwardnet.

To prepare a custom network to be trained with trainrp,

  1. Set net.trainFcn to 'trainrp'. This sets net.trainParam to trainrp’s default parameters.

  2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network trains the network with trainrp.

Resilient Backpropagation

Multilayer networks typically use sigmoid transfer functions in the hidden layers. These functions are often called “squashing” functions, because they compress an infinite input range into a finite output range. Sigmoid functions are characterized by the fact that their slopes must approach zero as the input gets large. This causes a problem when you use steepest descent to train a multilayer network with sigmoid functions, because the gradient can have a very small magnitude and, therefore, cause small changes in the weights and biases, even though the weights and biases are far from their optimal values.
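The vanishing slope is easy to see numerically. This short sketch (illustrative only, not part of the toolbox) evaluates the derivative of tanh, a common sigmoid transfer function, near and far from the origin:

```python
import math

def tanh_deriv(x):
    """Slope of the tanh sigmoid: d/dx tanh(x) = 1 - tanh(x)^2."""
    return 1.0 - math.tanh(x) ** 2

# Near the origin the slope is close to 1, but a few units away it has
# almost vanished, so a steepest-descent step barely moves the weight.
print(tanh_deriv(0.0))  # 1.0
print(tanh_deriv(5.0))  # roughly 1.8e-4
```

A weight sitting on this flat region receives an almost-zero gradient regardless of how far it is from its optimal value, which is exactly the problem Rprop addresses.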

The purpose of the resilient backpropagation (Rprop) training algorithm is to eliminate these harmful effects of the magnitudes of the partial derivatives. Only the sign of the derivative can determine the direction of the weight update; the magnitude of the derivative has no effect on the weight update. The size of the weight change is determined by a separate update value. The update value for each weight and bias is increased by a factor delt_inc whenever the derivative of the performance function with respect to that weight has the same sign for two successive iterations. The update value is decreased by a factor delt_dec whenever the derivative with respect to that weight changes sign from the previous iteration. If the derivative is zero, the update value remains the same. Whenever the weights are oscillating, the weight change is reduced. If the weight continues to change in the same direction for several iterations, the magnitude of the weight change increases. A complete description of the Rprop algorithm is given in [1].

The following code creates a network similar to the previous one and trains it using the Rprop algorithm. The training parameters for trainrp are epochs, show, goal, time, min_grad, max_fail, delt_inc, delt_dec, delta0, and deltamax. The first eight parameters have been previously discussed. The last two are the initial step size and the maximum step size, respectively. The performance of Rprop is not very sensitive to the settings of the training parameters. In the example below, the training parameters are left at their default values:

p = [-1 -1 2 2;0 5 0 5];
t = [-1 -1 1 1];
net = feedforwardnet(3,'trainrp');
net = train(net,p,t);
y = net(p)

Rprop is generally much faster than the standard steepest descent algorithm. It also has the nice property that it requires only a modest increase in memory: you need to store the update value for each weight and bias, which is equivalent to storing the gradient.


Algorithms

trainrp can train any network as long as its weight, net input, and transfer functions have derivative functions.

Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to the following:

dX = deltaX.*sign(gX);

where the elements of deltaX are all initialized to delta0, and gX is the gradient. At each iteration the elements of deltaX are modified. If an element of gX changes sign from one iteration to the next, then the corresponding element of deltaX is decreased by delt_dec. If an element of gX maintains the same sign from one iteration to the next, then the corresponding element of deltaX is increased by delt_inc. See [1].
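The update rule above can be sketched in a few lines (a simplified, illustrative variant of Rprop; it omits the backtracking step some implementations add, and the parameter names mirror trainrp's):

```python
def rprop_step(x, g, g_prev, delta,
               delt_inc=1.2, delt_dec=0.5, deltamax=50.0):
    """One element-wise Rprop update: each step size in delta adapts to the
    gradient's sign history, and only sign(g) sets the step direction."""
    x_new, delta_new = [], []
    for xi, gi, gpi, di in zip(x, g, g_prev, delta):
        if gi * gpi > 0:                    # same sign twice: grow the step
            di = min(di * delt_inc, deltamax)
        elif gi * gpi < 0:                  # sign change (oscillation): shrink it
            di = di * delt_dec
        sign = (gi > 0) - (gi < 0)
        x_new.append(xi - di * sign)        # step against the gradient sign
        delta_new.append(di)
    return x_new, delta_new

# Minimize f(x) = x^2 (gradient 2x), starting each step size at delta0 = 0.07:
x, g_prev, delta = [4.0], [0.0], [0.07]
for _ in range(60):
    g = [2.0 * xi for xi in x]
    x, delta = rprop_step(x, g, g_prev, delta)
    g_prev = g
```

Note how the step sizes grow geometrically while the gradient keeps its sign, then halve once the iterate overshoots the minimum and the sign flips, so the magnitude of the derivative never enters the update.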

Training stops when any of these conditions occurs:

  • The maximum number of epochs (repetitions) is reached.

  • The maximum amount of time is exceeded.

  • Performance is minimized to the goal.

  • The performance gradient falls below min_grad.

  • Validation performance (validation error) has increased more than max_fail times since the last time it decreased (when using validation).
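These stopping conditions amount to a simple per-epoch check. A hypothetical sketch (the names and defaults mirror the trainrp parameters listed above):

```python
def should_stop(epoch, elapsed, perf, grad, val_fail,
                max_epochs=1000, max_time=float('inf'),
                goal=0.0, min_grad=1e-5, max_fail=6):
    """Return the reason training should stop, or None to continue."""
    if epoch >= max_epochs:
        return 'max epochs reached'
    if elapsed > max_time:
        return 'max time exceeded'
    if perf <= goal:
        return 'performance goal met'
    if grad < min_grad:
        return 'minimum gradient reached'
    if val_fail >= max_fail:
        return 'validation stop'
    return None
```

For example, with the defaults, an epoch count of 1000 or a gradient magnitude below 1e-5 ends training even when the performance goal has not been met.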


References

[1] Riedmiller, M., and H. Braun, “A direct adaptive method for faster backpropagation learning: The RPROP algorithm,” Proceedings of the IEEE International Conference on Neural Networks, 1993, pp. 586–591.

Version History

Introduced before R2006a