# f(x)=0 MATLAB code for fixed point

soufiane on 22 Mar 2023
Edited: John D'Errico on 22 Mar 2023
Hello, I tried to write MATLAB code that solves f(x) = 0 with the fixed point method.
I want the user to give a function, an interval, and an initial guess x0,
and I want the code to find x = g(x) from f(x). I tried my best and I used ChatGPT, and nothing works: either the code runs in an endless loop or it does not give the real solution.
Example: f(x) = log(x) - x^2 + 2, interval 0.1 to 0.5, x0 = 0.3, and I want the code to find g(x) = x.
this is my try
clc;
clear all;
syms x;
f_str = input('Enter the function f(x): ');
f = matlabFunction(f_str);
g = solve(x == f(x) + x, x);
g_func = matlabFunction(g);
a = input('Enter the lower bound of the interval: ');
b = input('Enter the upper bound of the interval: ');
x0 = input('Enter an initial approximation x0: ');
N = 0;
x = x0;
x1 = g_func(x0);
while abs(x1 - x) > eps
    x = x1;
    x1 = g_func(x);
    N = N + 1;
end
fprintf('The solution is: %f\n', x1);
fprintf('Number of iterations: %d\n', N);
This is my second try:
clc;
clear all;
syms x;
h = input('Enter the function f(x) = '); % log(x) - x^2 + 2
f = matlabFunction(h);
%k = x - f(x);
%g = matlabFunction(k);
a = input('Enter the lower bound a: '); % 0.1
b = input('Enter the upper bound b: '); % 0.5
if f(a) * f(b) >= 0
    error('the interval is not well specified')
end
x0 = input('Enter an approximate root, preferably inside [a, b]: ');
N = 0;
x = x0;
x1 = g(x0);
while abs(x1 - x) > eps
    x = x1;
    x1 = g(x);
    N = N + 1;
end
fprintf('The solution is: %f\n', x1);
fprintf('Number of iterations: %d\n', N);
figure(1), fplot(f, [a, b]);
grid on

John D'Errico on 22 Mar 2023
Edited: John D'Errico on 22 Mar 2023
You have been given a function f(x), and you now want to write a fixed point solver that will first automatically create g(x) by adding x to both sides of the equation f(x) = 0, so g(x) = f(x) + x.
And it seems this is not converging for you. SURPRISE!
One problem is, very often, just adding x to both sides does NOT create a convergent function for a fixed point solver!
So I'm not sure if this is homework for you to do, but just adding x to both sides is arguably a bad idea. Are there better choices? Perhaps. As well, you seem to think such a solver can be made to work on some fixed interval. Again, that may fail too.
Why is there a problem at all? The issue is, a fixed point solver will not converge if abs(g'(x)) > 1 in the vicinity of the solution. (I should add a link where that is discussed, but it is not difficult to show.)
I'm not really sure those links are that easy to read for a student though. I could probably have found better choices. Sigh. I can add more discussion about why a solver will diverge if needed.
For example, can you solve the problem
f1 = @(x) x^2 - 5;
So by your logic, we would create g(x) as
g1 = @(x) f1(x) + x;
The solution lies at x=sqrt(5). But the derivative of this function is 2*x+1, which at x=sqrt(5) is significantly greater than 1 in absolute value.
x = 2.5;
x = g1(x)
x = 3.7500
x = g1(x)
x = 12.8125
Do you see what is happening? This is a seriously DIVERGENT process! I started out at a point that is actually quite close, and things go crazy very fast.
But instead, if we knew that the approximate slope of the curve in the vicinity of the starting point was 2*x, so the slope was approximately 5 at x==2.5, then can we improve things, making this process a convergent one?
dydx = 5;
g2hat = @(x) x - f1(x)/(2*dydx);
x = 2.5;
x = g2hat(x)
x = 2.3750
x = g2hat(x)
x = 2.3109
sqrt(5)
ans = 2.2361
It seems like the above process is going in the correct direction. What I did requires at least an approximation to the local slope of the function near the solution. But if dydx is a (SUFFICIENTLY) decent approximation to that slope, then the above alteration should pretty much always converge to a fixed point.
I can think of a simple way to do this, since you already seem to be providing a bracket for the solution. That is, if the bracket is [a,b], where you believe a root lies, then you might do this:
dydx = (f(b) - f(a))/(b-a);
For REALLY LARGE brackets, this might fail. But then anything can fail. Crap in, crap out is a basic rule of computing. If you want good results, you need to start with something good.
For example, we can try this with the example function I gave, having written a very basic fixed point solver...
f = @(x) x.^2 - 5;
ab = [0,10];
format long g
[x,stuff] = fpsolver(f,ab,1e-12,1000)
x =
   2.23606797750308
stuff = struct with fields:
          iter: 106
    finalxdiff: 9.48574552239734e-13
          fval: 1.47286627338872e-11
So after 106 iterations, it converged to a very respectable approximation to sqrt(5).
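As a side note on why it took that many iterations: a quick sketch (using the g'(x) = 1 - f'(x)/(2*dydx) formula derived further down, and the exact root sqrt(5)) shows that the wide [0,10] bracket makes the secant slope overestimate the slope at the root, pushing abs(g') close to 1, so convergence is slow but still assured.

```matlab
% Sketch: why the [0,10] bracket converges slowly. The secant slope over a
% wide bracket overestimates f'(x) at the root, so abs(g') ends up near 1.
f    = @(x) x.^2 - 5;
dydx = (f(10) - f(0)) / (10 - 0);    % = 10, versus f'(sqrt(5)) ~ 4.47
gp   = 1 - (2*sqrt(5)) / (2*dydx);   % g'(x) = 1 - f'(x)/(2*dydx) at the root
fprintf('g''(root) ~ %.3f\n', gp)    % ~ 0.776: inside (-1,1), but barely
```

Since the error shrinks by roughly a factor of 0.776 each step, reaching 1e-12 takes on the order of a hundred iterations, consistent with the 106 reported above.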
f = @(x) log(x) - x.^2 + 2;
[xfinal,stuff] = fpsolver(f,[0.1 0.5],1e-12,1000)
xfinal =
   0.13793482556525
stuff = struct with fields:
          iter: 9
    finalxdiff: 3.99985600196828e-13
          fval: 5.06261699229071e-14
Will the trick I show here always work? Of course not. Again, if you choose a wide bracket, where the initial estimate of dydx is a poor approximation to the slope in the vicinity of the true solution, then expect problems. But you might want to think about why what I did makes the problem convergent. What, for example, is the derivative of your function near the solution?
syms X
vpa(subs(diff(f(X)+X),X,xfinal))
ans =
7.9739310855728454092986033439228
Do you see that if you just use f(x)+x in the vicinity of the solution, this fixed point process will not converge? I'll let you go back and read your notes. Your teacher should have discussed these ideas, or perhaps this homework was just motivation for the next class, where your teacher will discuss why/when fixed point iterations can diverge.
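The same conclusion follows from a quick finite-difference sketch, with no Symbolic Toolbox needed (the root value below is the one fpsolver reported above):

```matlab
% Sketch: finite-difference check of the contraction condition for the
% naive map g(x) = f(x) + x near the root found above.
f  = @(x) log(x) - x.^2 + 2;
g  = @(x) f(x) + x;
r  = 0.13793482556525;            % root reported by fpsolver above
h  = 1e-6;
gp = (g(r+h) - g(r-h)) / (2*h);   % central-difference estimate of g'(r)
fprintf('g''(r) ~ %.4f\n', gp)    % ~ 7.97, far outside (-1, 1), so the
                                  % naive iteration diverges near the root
```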
I did make some claims above, that g(x), as I have built it, is generally sufficient for a convergent fixed point iteration. We can check that simply: if f(x) is the function we wish to find a root for, thus f(x) == 0, then if fp is an approximation to the derivative of f, see what happens:
syms f(x) fp
g = x - f(x)/(2*fp)
dgdx = diff(g, x)
dgdx(x) = 1 - diff(f(x), x)/(2*fp)
But if fp is a reasonable approximation to df/dx near the root, then dg/dx will be approximately 1 - fp/(2*fp) = 1/2. And since abs(1/2) is significantly smaller than 1, g(x) will always yield a convergent sequence.
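A minimal numeric sketch of that claim, using the same secant slope the solver below uses (the root value is the one reported above):

```matlab
% Sketch: numeric check that the damped map x - f(x)/(2*fp) is a
% contraction at the root, with fp taken as the secant slope on [0.1, 0.5].
f  = @(x) log(x) - x.^2 + 2;
r  = 0.13793482556525;                 % root reported above
fp = (f(0.5) - f(0.1)) / (0.5 - 0.1);  % secant slope over the bracket
g  = @(x) x - f(x)/(2*fp);
h  = 1e-6;
gp = (g(r+h) - g(r-h)) / (2*h);        % central-difference estimate of g'(r)
fprintf('g''(r) ~ %.4f\n', gp)
% With fp exactly f'(r), g'(r) would be exactly 1/2; the secant slope gives
% a different value, but still comfortably inside (-1, 1), so it converges.
```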
function [x,stuff] = fpsolver(fun,xbrak,xtol,itmax)
% Fixed point solver, given an initial bracket as a guess
dydx = diff(fun(xbrak))/diff(xbrak);  % secant slope over the bracket
g = @(x) x - fun(x)/(2*dydx);
% Starting point will be just the midpoint of the interval provided
x = mean(xbrak);
stuff.iter = 0;
xdiff = inf;
while (stuff.iter < itmax) && (xdiff > xtol)
    stuff.iter = stuff.iter + 1;
    xnew = g(x);
    xdiff = abs(xnew - x);
    x = xnew;
end
stuff.finalxdiff = xdiff;
stuff.fval = fun(x);
end
I can surely think of several ways to improve that solver, making it more robust, more stable, even more rapidly convergent. The problem is, you need to have a good transformation from f(x) to g(x), that will yield a convergent sequence.
Anyway, despite the fact that this was homework, if you have gotten to this point in reading what I wrote, AND you have thought about why the trick I used to create g(x) works, then you will have learned something from this assignment.
