Deep Learning - Sequence-to-Sequence Regression with normalized error

Hi everyone,
I have an application based on the MATLAB example: Sequence-to-Sequence Regression Using Deep Learning.
In that example, the loss function used to train the neural network is the mean squared error (MSE).
Since my application has multiple outputs with different ranges of values, training with MSE results in:
  • the optimization being dominated by the output elements with the wider range of values;
  • a poor fit on the output elements with the smaller range of values.
This issue is well explained here, where the proposed solution is to set the performance function's normalization parameter to 'standard', but that option cannot be applied to a sequence-to-sequence regression case.
How can I optimize my regression application so that it fits all output elements equally well in a relative sense?
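
For now, the workaround I am considering is to standardize each response channel myself before training and undo the scaling on the predictions, along the lines of the minimal sketch below (assuming YTrain is a cell array of numResponses-by-numTimeSteps sequences, as in the example; net and XTest are placeholders):

% Per-channel standardization of the responses, computed on the training set
Yall = cat(2, YTrain{:});                 % concatenate the time steps of all training sequences
muY    = mean(Yall, 2);                   % per-channel mean
sigmaY = std(Yall, 0, 2);                 % per-channel standard deviation
sigmaY(sigmaY == 0) = 1;                  % guard against division by zero for constant channels

% Standardized responses used for training instead of YTrain
YTrainNorm = cellfun(@(Y) (Y - muY) ./ sigmaY, YTrain, 'UniformOutput', false);

% ... train the network on XTrain / YTrainNorm ...

% After prediction, map the outputs back to the original scale
YPred     = predict(net, XTest);
YPredOrig = cellfun(@(Y) Y .* sigmaY + muY, YPred, 'UniformOutput', false);

Would this manual standardization be a reasonable approach for the sequence-to-sequence case, or is there a built-in option I am missing?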
Thank you
Fabio

Answers (0)
