fmincon does not produce similar results due to automatic scaling?

I'm starting to believe that somewhere within fmincon, values are scaled using the objective function value. Whether it's in the constraint function handling or in the step-length computation, I don't know. I'm wondering whether this is true and how I should deal with it. Am I approaching it the wrong way, or should I implement my own scaling?
I'll share my observations with you. The optimization I perform is confidential, so I cannot say much about it, other than that the size of the input can differ per optimization run, in the range of 20-by-3 to 100-by-3. In my question I say there's no similar result because I observe that if I filter my input differently, though I'm expecting roughly the same result, the result is completely different and doesn't satisfy the constraint function anymore. What's more, what I tried is to initialize the objective function such that its value is divided by the value at the initial guess, so the value always starts at 1 and should only ever go down. Then, I arbitrarily multiply the value of the objective function at the end by 100; this shouldn't change the objective function value, because the initialization scales it so the initial value is always 1. However, this gave me an entirely different result, which can only happen if another function used by fmincon is scaled using the objective function value, right?
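Roughly, my setup looks like this (myObjective, myConstraints, and x0 are placeholders, since the real problem is confidential):

f0 = myObjective(x0);                   % objective value at the initial guess
fun = @(x) myObjective(x) / f0;         % normalized so fun(x0) == 1
% fun = @(x) 100 * myObjective(x) / f0; % the x100 variant: same minimizer
opts = optimoptions('fmincon');
xOpt = fmincon(fun, x0, [], [], [], [], [], [], @myConstraints, opts);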
Is this an obvious result? Should I deal with it differently? What's the proper approach?

Accepted Answer

Matt J
Matt J on 30 May 2022
Edited: Matt J on 30 May 2022
With such a general description, only general things can be said. However, here are some remarks:
What's more, what I tried is to initialize the objective function such that its value is divided by the value at the initial guess, so the value always starts at 1 and should only ever go down.
fmincon is not guaranteed to be monotonically descending.
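You can verify this yourself by logging the objective value at each iteration with an OutputFcn. A quick sketch on a toy constrained problem (illustrative only, not your problem):

function demoNonmonotone
% Record f(x) at every fmincon iteration; the sequence need not decrease.
history = [];
fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;   % Rosenbrock objective
nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 1, []);      % stay inside the unit disk
opts = optimoptions('fmincon', 'OutputFcn', @record);
fmincon(fun, [-0.7; 0.6], [], [], [], [], [], [], nonlcon, opts);
plot(history, '-o'); xlabel('iteration'); ylabel('f(x)');

    function stop = record(~, optimValues, ~)
        history(end+1) = optimValues.fval;   % log the current objective value
        stop = false;                        % never request an early stop
    end
end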
I observe that if I filter my input differently, though I'm expecting roughly the same result, the result is completely different and doesn't satisfy the constraint function anymore.
There's no reason to think the optimization result is a smooth function of the input; it depends on how well-conditioned the problem is. Also, if the "input" you are speaking of is the initial x0, a slightly different initial guess can send the iteration sequence on a very different trajectory if x0 is close to the edge of a capture basin.
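As a toy illustration of the capture-basin effect (not your actual problem), two starting points on either side of a local maximum land in different minima:

fun = @(x) (x^2 - 1)^2;           % double well: minima at x = -1 and x = +1
opts = optimoptions('fmincon', 'Display', 'off');
xA = fmincon(fun, -0.01, [], [], [], [], -2, 2, [], opts)   % converges near -1
xB = fmincon(fun, +0.01, [], [], [], [], -2, 2, [], opts)   % converges near +1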
Then, I arbitrarily multiply the value of the objective function at the end by 100; this shouldn't change the objective function value
I think you mean that it shouldn't change the optimum. That's true, but it does affect when the stopping tolerances (StepTolerance, FunctionTolerance, OptimalityTolerance) are triggered.
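If you rescale the objective by a factor s (like your x100 test), one rough compensation is to scale the tolerances too. The 1e-6 defaults below are the documented interior-point values; note that the exact stopping tests vary by algorithm (some are relative, some absolute):

s = 100;                                  % scale factor applied to the objective
opts = optimoptions('fmincon', ...
    'OptimalityTolerance', 1e-6 * s, ...  % first-order measure scales with s
    'FunctionTolerance',   1e-6 * s, ...  % function-change test scales with s
    'StepTolerance',       1e-10);        % step length does not scale with f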
  1 Comment
Steven H
Steven H on 31 May 2022
Yeah, I know the description is very general, but I was hoping the observation would ring a bell. I might play with the stopping tolerance parameters and see if that makes a difference.



Release

R2017b
