How do I implement these constraints in the cost function?

I am working on a constrained PSO program (S is the decision variable) where the constraints require that the absolute value of every element of two arrays be less than or equal to 0.4:
1) dx = [S(1) - 0.3, diff(S)];
2) ds = [0.3 - S(1), S(1:end-1) - S(2:end)];
abs(dx) <= 0.4
abs(ds) <= 0.4
In simpler terms, dx and ds should be arrays whose elements are less than or equal to 0.4 in absolute value. I tried this:
le(abs(dx), 0.4)
le(abs(ds), 0.4)
but when running the main PSO I don't see constrained results.
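A common way to enforce such constraints inside the cost function itself is a static penalty term. The following is only a sketch, not from the thread: `baseCost` stands in for the original objective, and the penalty weight `1e6` is an assumption that may need tuning.

```matlab
% Save as penalizedCost.m. Sketch only: baseCost and the 1e6 weight
% are assumptions, not from the thread.
function F = penalizedCost(S)
    dx = [S(1) - 0.3, diff(S)];          % the first constraint array
    violation = max(0, abs(dx) - 0.4);   % zero wherever |dx(k)| <= 0.4
    penalty = 1e6 * sum(violation.^2);   % large weight pushes PSO back inside
    F = baseCost(S) + penalty;           % baseCost = the original objective
end
```

Since ds = -dx here, penalizing dx covers both constraint arrays at once.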

2 Comments

I tried this
Where? How? pso() doesn't have an input for such constraints. Maybe use ga() or patternsearch(). Also, the constraints are linear, so you should put them in the form A*S<=b.
Also, from your expressions we can see that dx=-ds. Therefore, the constraints, abs(dx)<=0.4 and abs(ds)<=0.4 are the same thing.


Answers (1)

pso() doesn't have an input for such constraints. Maybe use ga() or patternsearch().
Here's how it would look when solved with ga():
nvars = 10;            % number of decision variables (assumed; not given above)
fun = @costfunction;   % handle to the objective (assumed)
lb = -inf(nvars,1);
ub = -lb;
lb(1) = -0.1;          % |S(1)-0.3| <= 0.4  =>  -0.1 <= S(1) <= 0.7
ub(1) = 0.7;
% constraint matrices for |diff(S)| <= 0.4
e = 0.4*ones(nvars-1,1);
D = diff(eye(nvars));
A = [D; -D];
b = [e; e];
% run the optimization
S = ga(fun, nvars, A, b, [], [], lb, ub)

12 Comments

As a project requirement, I am obliged to solve it with PSO.
I am currently not using the particleswarm tool; I am using Yarpiz's implementation. I wonder if there's a way to implement those constraints in the objective function's script, and if so, how? Thanks a lot.
I'm not familiar with Yarpiz. Maybe if you provide a link to the syntax documentation...
For sure, here's the PSO script:
CostFunction =@costfunction; % Cost Function
nVar = 10; % Number of Decision Variables
VarSize = [1 nVar]; % Size of Decision Variables Matrix
VarMin = 0.30; % Lower Bound of Variables
VarMax = 0.80; % Upper Bound of Variables
%% PSO Parameters
MaxIt = 1000; % Maximum Number of Iterations
nPop = 100; % Population Size (Swarm Size)
% PSO Parameters
w = 1; % Inertia Weight
wdamp = 0.99; % Inertia Weight Damping Ratio
c1 = 1.5; % Personal Learning Coefficient
c2 = 2.0; % Global Learning Coefficient
% Velocity Limits
VelMax = 0.1*(VarMax-VarMin);
VelMin = -VelMax;
%% Initialization
empty_particle.Position = [];
empty_particle.Cost = [];
empty_particle.Velocity = [];
empty_particle.Best.Position = [];
empty_particle.Best.Cost = [];
particle = repmat(empty_particle, nPop, 1);
GlobalBest.Cost = inf;
for i = 1:nPop
    % Initialize Position
    particle(i).Position = unifrnd(VarMin, VarMax, VarSize);
    % Initialize Velocity
    particle(i).Velocity = zeros(VarSize);
    % Evaluation
    particle(i).Cost = costfunction(particle(i).Position);
    % Update Personal Best
    particle(i).Best.Position = particle(i).Position;
    particle(i).Best.Cost = particle(i).Cost;
    % Update Global Best
    if particle(i).Best.Cost < GlobalBest.Cost
        GlobalBest = particle(i).Best;
    end
end
BestCost = zeros(MaxIt, 1);
%% PSO Main Loop
for it = 1:MaxIt
    for i = 1:nPop
        % Update Velocity
        particle(i).Velocity = w*particle(i).Velocity ...
            + c1*rand(VarSize).*(particle(i).Best.Position - particle(i).Position) ...
            + c2*rand(VarSize).*(GlobalBest.Position - particle(i).Position);
        % Apply Velocity Limits
        particle(i).Velocity = max(particle(i).Velocity, VelMin);
        particle(i).Velocity = min(particle(i).Velocity, VelMax);
        % Update Position
        particle(i).Position = particle(i).Position + particle(i).Velocity;
        % Velocity Mirror Effect
        IsOutside = (particle(i).Position < VarMin | particle(i).Position > VarMax);
        particle(i).Velocity(IsOutside) = -particle(i).Velocity(IsOutside);
        % Apply Position Limits
        particle(i).Position = max(particle(i).Position, VarMin);
        particle(i).Position = min(particle(i).Position, VarMax);
        % Evaluation
        particle(i).Cost = CostFunction(particle(i).Position);
        % Update Personal Best
        if particle(i).Cost < particle(i).Best.Cost
            particle(i).Best.Position = particle(i).Position;
            particle(i).Best.Cost = particle(i).Cost;
            % Update Global Best
            if particle(i).Best.Cost < GlobalBest.Cost
                GlobalBest = particle(i).Best;
            end
        end
    end
    BestCost(it) = GlobalBest.Cost;
    disp(['Iteration ' num2str(it) ': Best Cost = ' num2str(BestCost(it))]);
    w = w*wdamp;
end
BestSol = GlobalBest;
For sure, here's the PSO script:
That doesn't help us, I'm afraid. You made it sound like it's a 3rd party package. Don't they provide user documentation with it? Where is the section of the documentation which talks about how to set up a problem and constraints?
I don't think so. Anyway, thanks a lot for your time, man; I appreciate it. I'd be glad for any recommendations for this case.
I don't know if PSO is designed for constraints more complicated than simple bounds.
If you have no other constraints than what we've discussed, and a small enough number of unknowns S(i), you can find the vertices V of your constraint set using this FEX download (lcon2vert).
For example, with nvars=10 and if I assume all -100<=S(i)<=100,
nvars=10;
lb=-100*ones(nvars,1);
ub=-lb;
e=0.4*ones(nvars-1,1);
D=diff(eye(nvars));
%constraint matrices
lb(1)=-0.1;
ub(1)=0.7;
A=[D;-D];
b=[e;e];
[A,b]=addBounds(A,b,[],[],lb,ub);
V=lcon2vert(A,b)
>> whos V
  Name      Size          Bytes  Class     Attributes
  V         1024x10       81920  double
This means that all feasible vectors S have the form S = V'*x where all x(j) are bounded according to 0 <= x(j) <= 1. Thus you can rewrite your objective function f(S) as a function of x with the change of variables f(V.'*x). But since all the x(j) have only simple bounds, they can be optimized using PSO in this transformed space.
A problem may be that you now have 1024 variables x(j) whereas before, you had only 10 variables S(i). I'm not sure whether that's a big number for PSO.
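The change of variables described above can be sketched as follows. This assumes V from lcon2vert is already in the workspace and that costfunction (defined later in the thread) expects a row vector; it is an illustration, not a tested solution.

```matlab
% Sketch of PSO in the transformed space x, where S = V'*x.
nX   = size(V,1);                % one variable x(j) per vertex (1024 here)
lbx  = zeros(nX,1);              % simple bounds 0 <= x(j) <= 1
ubx  = ones(nX,1);
objx = @(x) costfunction((V.'*x(:)).');   % evaluate f at S = V'*x (row vector)
x = particleswarm(objx, nX, lbx, ubx);    % PSO over simple bounds only
S = (V.'*x(:)).';                % map the solution back to S
```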
Everything is set up now. Can you please clarify further:
How exactly is the change to f(V.'*x) done?
The lb and ub are 0.3 and 0.8; do lb(1) = -0.1 and ub(1) = 0.7 allow negative values for S(1)?
Thanks a lot, man.
How exactly is the change to f(V.'*x) done?
If you have f(S), then you can do
f=@(x) f(V.'*x)
Do lb(1) = -0.1 and ub(1) = 0.7 allow negative values for S(1)?
The constraint on S(1) that you posted is |S(1)-0.3|<=0.4, which is the same as -0.1<=S(1)<=0.7.
Can you please show me how it's done in the script, if this is the last detail to work out? It keeps giving the error 'Failure in initial objective function evaluation. PARTICLESWARM cannot continue'.
Here's where I've actually arrived:
%Cost function script:
function F = costfunction(S)
    dx = [S(1) - 0.3, diff(S)];
    ds = [0.3 - S(1), S(1:end-1) - S(2:end)];
    V = 384;
    Q = 60;
    N = 0.4;
    R = [7, 12, 8, 11, 9, 13, 7, 12, 10, 6];
    Sbk = [1,-1,1,-1,1,-1,1,-1,1,-1];
    C = (V * dx * Q .* R * (1 / N)) .* ((1 + Sbk) / 2) .* Sbk.^2;
    D = (V * ds * Q .* R) .* ((1 + Sbk) / 2) .* Sbk.^2;
    J = C + D;
    F = sum(J);
end
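A quick sanity check, assuming the function above is saved on the path: evaluate it at a feasible point, where linspace keeps every step of |diff(S)| well under 0.4.

```matlab
% Evaluate costfunction at a feasible point (steps of ~0.056 <= 0.4).
S0 = linspace(0.3, 0.8, 10);     % 10 values from 0.3 to 0.8
dx = [S0(1) - 0.3, diff(S0)];
assert(all(abs(dx) <= 0.4))      % constraints satisfied at S0
F0 = costfunction(S0)            % scalar objective value
```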
%PSO script
nvars = 10;
lb = ones(1,nvars)*0.3;
ub = ones(1,nvars)*0.8;
S = particleswarm(@(x)costfunction(V.'*x),nvars, lb, ub) ;
e=0.4*ones(nvars-1,1);
D=diff(eye(nvars));
%constraint matrices
lb(1)=-0.1;
ub(1)=0.7;
A=[D;-D];
b=[e;e];
[A,b]=addBounds(A,b,[],[],lb,ub);
V=lcon2vert(A,b);
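Note that the 'Failure in initial objective function evaluation' error is consistent with V being used on the particleswarm line before it is defined below, and with lb/ub bounding S rather than the transformed variables x. A reordered sketch, assuming addBounds and lcon2vert from the File Exchange are on the path and that, per the earlier comments, PSO optimizes x in [0,1] after the change of variables:

```matlab
% Sketch: build the vertex matrix V first, then run particleswarm over x.
nvars = 10;
lb = 0.3*ones(nvars,1);  lb(1) = -0.1;   % |S(1)-0.3| <= 0.4
ub = 0.8*ones(nvars,1);  ub(1) = 0.7;
e = 0.4*ones(nvars-1,1);
D = diff(eye(nvars));
A = [D; -D];  b = [e; e];                % |diff(S)| <= 0.4
[A,b] = addBounds(A,b,[],[],lb,ub);      % fold the bounds into A,b
V = lcon2vert(A,b);                      % vertices, one per row
nX = size(V,1);                          % PSO now optimizes x, not S
x = particleswarm(@(x) costfunction((V.'*x(:)).'), nX, zeros(nX,1), ones(nX,1));
S = (V.'*x(:)).'                         % map back to the original variables
```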
Thanks a lot, man.
The objective function came out of your brain. If it's returning the wrong value, I have no way of knowing what it should be returning instead.
Here is the original syntax of my function :
I don't know if there's a way around this last step, but this was really helpful; I mean, I made real progress with setting up those constraints. Thanks a lot, Matt J. I appreciate it.


Release

R2021a

Asked:

on 16 Nov 2022

Commented:

on 17 Nov 2022
