Updating constraints in fmincon inside a while loop

Hello everyone,
I have a script implementing the sequential linear programming (SLP) method, and at the end of every iteration I need to add one more constraint to my existing set of constraints.
I then pass the new minimization problem to fmincon via a function handle (@).
I can't find a way to update the set of constraints automatically; the only way I have found is to do it manually at the end of every iteration.
Does anyone know how to make this fully automatic, so I don't have to stop at every iteration?
One thought was to write all the constraints to a .txt file, update it at the end of each iteration, read it back, save the text into a new .m file, and then pass that to fmincon with @.
Any ideas on how I can make that possible?
Thanks in advance!

2 Comments

If you are using linear programming, are the new constraints linear inequality constraints? If so, just append them to the A matrix and b vector. For linear equality constraints, append them to Aeq and beq.
If they are nonlinear constraints, what form do they take?
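The append-to-A-and-b approach can be sketched as a toy loop (the objective and the cuts below are illustrative placeholders, not from the original script):

```matlab
% Toy cutting-plane loop: minimize (x1-2)^2 + (x2-2)^2 while appending
% one new linear inequality x1 + x2 <= k to A and b on each pass.
cost = @(x) (x(1)-2)^2 + (x(2)-2)^2;
A = [];                     % linear inequalities A*x <= b, empty at start
b = [];
x = [0; 0];
for k = 3:-1:1              % each pass tightens the feasible set
    A = [A; 1 1];           % append a row to A ...
    b = [b; k];             % ... and the matching entry to b
    x = fmincon(cost, x, A, b);
end
disp(x)                     % minimum subject to the tightest cut, x1+x2 <= 1
```

No .txt or .m file juggling is needed: A and b are ordinary workspace variables, so growing them inside the loop is enough.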
I was stuck on the idea that I had to somehow add the new constraints to the .m file and pass them with @ as the nonlcon argument. My constraints are linear inequalities; I did it your way with the A matrix and b vector and it works fine.
Thanks a lot mate :D


 Accepted Answer

Bruno Luong
Bruno Luong on 8 Sep 2019
Edited: Bruno Luong on 8 Sep 2019
The number of nonlinear constraints can be changed dynamically during a run of FMINCON.
You can set an OutputFcn, track the iteration count via optimValues.iteration, and update the NONLCON setup accordingly with the new constraint added. You need to make those two functions speak to each other.
Otherwise, nothing prevents you from calling FMINCON iteratively with MaxIterations = 1. But it might not converge, since I don't know whether that matches the requirements of the theory behind your sequential linear programming.
The same remark applies if you hack FMINCON to do the dirty work instead of implementing the method exactly as SLP theory describes: you would get a dirty result, and it might not converge at all.
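The OutputFcn approach above might look like the following sketch. The function name, the toy objective, and the constraint family are placeholders I made up for illustration; the shared-state pattern (a nested scope visible to both callbacks) is one way to make the two functions "speak to each other", and per the comment below the dynamic resizing of NONLCON appears to require the 'interior-point' algorithm.

```matlab
function x = slpDemo()
% Sketch: OutputFcn and NONLCON share the variable nCuts through the
% nested-function scope, so the constraint set grows as iterations advance.
nCuts = 1;                                % shared state for both callbacks
opts = optimoptions(@fmincon, 'Algorithm', 'interior-point', ...
                    'OutputFcn', @addCut);
x = fmincon(@(x) sum(x.^2), [2; 2], [], [], [], [], [], [], @cuts, opts);

    function stop = addCut(~, optimValues, ~)
        stop = false;
        if optimValues.iteration > 0      % one extra constraint per iteration
            nCuts = min(nCuts + 1, 5);
        end
    end

    function [c, ceq] = cuts(x)
        k = (1:nCuts).';                  % first nCuts of a fixed family
        c = k/10 - (x(1) + x(2));         % enforce x1 + x2 >= k/10
        ceq = [];
    end
end
```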

2 Comments

Walter, the dynamic changing only works for the 'interior-point' algorithm. I'm not sure whether it's documented, but for example this code (fitting data on x = [0,1] with a 2nd-order polynomial under the constraint P(x) <= 1 for x in [0,1]) works:
x = linspace(0,1);
P = rand(3,1);
y = polyval(P,x);
y = y + 0.05*randn(size(y)); % add noise
options = optimoptions(@fmincon, 'Algorithm', 'interior-point');
cost = @(P) norm(polyval(P,x)-y).^2;
P = fmincon(cost, 0*P, [], [], [], [], [], [], @maxcon, options);
close all
figure
yfit = polyval(P,x);
plot(x, y, '.r', x, yfit, '-b');
%%
function [c, ceq] = maxcon(P)
% Enforce P(x) <= 1 for all x in [0,1] by constraining the candidate maxima
a = P(1);
b = P(2);
xb = -b/(2*a);               % stationary point of the quadratic
if a >= 0 || ~(xb >= 0 && xb <= 1)
    XB = [0; 1];             % maximum attained at the interval bounds
else
    XB = xb;                 % maximum attained inside the interval
end
c = polyval(P, XB) - 1;      % P(x) <= 1 for all x in [0,1]
ceq = [];
end


More Answers (0)

Asked: on 7 Sep 2019
Edited: on 3 Nov 2020
