Finding the points where Bessel functions are equivalent

Hello,
I am trying to reproduce some results from a research paper using Matlab, but I am having trouble finding some propagation constants. I suspect Matlab is capable of doing this, but I'm not sure how to go about it. The value I am trying to find is the propagation constant β, which must satisfy k*nClad < β < k*nCore.
The values for k, n1 (= nCore), n0 (= nClad) and a are known. Only β is unknown. I am trying to find β by determining the values for u and w,
u = (a/2)*sqrt(β^2 - k^2*nClad^2),  w = (a/2)*sqrt(k^2*nCore^2 - β^2),
which must satisfy the characteristic equation
J1(u)/(u*J0(u)) = -K1(w)/(w*K0(w)),
where J1 and J0 are Bessel functions of the first kind and K1 and K0 are modified Bessel functions of the second kind. I have tried solving for this symbolically as follows:
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
syms x
w = (a/2)*sqrt(k^2*nCore^2 - x^2);
u = (a/2)*sqrt(x^2 - k^2*nClad^2);
eqn = (besselj(1, u)/(u*besselj(0, u))) == -1*(besselk(1, w)/(w*besselk(0, w)));
S = vpasolve(eqn, x);
This returns the following:
Warning: Unable to find explicit solution. For options, see help.
> In sym/solve (line 317)
In besselTest (line 24)
The results here are well-known, so I think the issue is with my execution of the problem.
Is there another way to go about this without using symbolic logic?
Thanks!
  2 Comments
Walter Roberson on 23 Feb 2025

The error message does not match the code: the error message refers to solve(), but the code uses vpasolve().

Matthew on 23 Feb 2025
Hey Walter,
You're absolutely correct - I tried it both ways and accidentally wrote it incorrectly here. My mistake.


Accepted Answer

David Goodmanson on 23 Feb 2025
Edited: David Goodmanson on 25 Feb 2025
Hi Matthew,
If you scale this problem properly, the process is much clearer, and in this case it also allows finding the roots without knowing any starting points.
Factoring k out of the square root and setting
b = beta/k (1)
leads to the following:
J1(k*a*cj)/(cj*J0(k*a*cj)) = -K1(k*a*ck)/(ck*K0(k*a*ck)) (2)
(a common factor of k*a cancels out on each side)
where
cj = sqrt(ncore^2 - b^2) ck = sqrt(b^2-nclad^2)
This has the distinct advantage that k*a = 33.2401 is a recognizable, sensible value when inserted into the Bessel function arguments, and that the dimensionless b can be compared directly to ncore and nclad. To get real square roots, b is restricted to the narrow range
nclad <= b <= ncore
so, if done in the normal way:
% um = 1e-6;
ncore = 1.4682;
nclad = 1.446;
lam = 1.55e-6;
k = 2*pi/lam;
a = 8.2e-6;
% A = 125*um;
cj = @(b) sqrt(ncore^2-b.^2);
ck = @(b) sqrt(b.^2 - nclad^2);
fj = @(b) besselj(1,a*k*cj(b)) ./(cj(b).*besselj(0,a*k*cj(b)));
fk = @(b) -besselk(1,a*k*ck(b),1)./(ck(b).*besselk(0,a*k*ck(b),1));
% plot both sides
b = nclad:1e-4:ncore;
plotj = fj(b);
plotk = fk(b);
plotj(abs(plotj)>100) = nan; % don't plot large peaks, where denom --> 0
figure(1)
plot(b,plotj,b,plotk)
grid on
There are two roots.
% find the zeros
f = @(b) (fj(b)-fk(b));
format long
b1 = fzero(f,[1.455 1.458]) % provide search domains
b2 = fzero(f,[1.464 1.466])
format
b1 = 1.456322586904848
b2 = 1.464598125862584
Then of course (1) produces beta.
A better way to find the roots in this case is to take both denominators of (2) over to the other side so that their zero crossings do not produce large peaks in the plot. That way the upper and lower bounds for the search range of fzero can be looser, and it allows finding the roots without providing a detailed search range at all.
Since K drops off exponentially, this leads to small values on the plot. The code below uses the '1' option which outputs K(z)/exp(-z) instead, and that quantity is in the 'seeable range'. This works since the original expression involves K1/K0 and the common factor of exp(-z) does not affect the answer.
f1 = @(b) besselj(1,a*k*cj(b)).* ck(b).*besselk(0,a*k*ck(b),1);
f2 = @(b) -besselk(1,a*k*ck(b),1).*cj(b).*besselj(0,a*k*cj(b));
figure(2)
plot(b,f1(b),b,f2(b))
grid on
(The apparent zero on the right end of the plot is just an artifact due to the altered equations, not a real zero).
One could use fzero the same way as before, but now since the functions are better behaved you can do
f12 = @(b) (f1(b)-f2(b));
b = nclad+1e-8 : 1e-8 : ncore-1e-8;
diffsign = diff(sign(f12(b)));
ind = find(diffsign ~= 0);
format long
b_roots = b(ind) % the result
format
b_roots = 1.456322580000000 1.464598120000000
which agrees with the previous b1 and b2 to 8 sig figs (and could be improved by making the b spacing finer).
No initial starting points are required since we are exploring the entire range of b.
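If useful, the coarse sign-change scan and fzero can be combined: each detected sign change supplies a bracketing interval for fzero, so no hand-picked search ranges are needed and the roots come out to full double precision rather than grid precision. A sketch under the same scaled equations (denominators multiplied through, scaled besselk):

```matlab
% sketch: locate sign changes on a coarse grid, then refine each with fzero
ncore = 1.4682;  nclad = 1.446;
k = 2*pi/1.55e-6;  a = 8.2e-6;
cj = @(b) sqrt(ncore^2 - b.^2);
ck = @(b) sqrt(b.^2 - nclad^2);
% f1 - f2 from above, i.e. with both denominators multiplied through
f12 = @(b) besselj(1,a*k*cj(b)).*ck(b).*besselk(0,a*k*ck(b),1) ...
         + besselk(1,a*k*ck(b),1).*cj(b).*besselj(0,a*k*cj(b));
b = nclad+1e-8 : 1e-6 : ncore-1e-8;      % coarse grid is now sufficient
ind = find(diff(sign(f12(b))) ~= 0);     % grid indices bracketing a root
b_roots = arrayfun(@(m) fzero(f12, [b(m) b(m+1)]), ind)
% approx. 1.456322586904848 and 1.464598125862584, matching b1 and b2
```

This keeps the grid cheap (a few thousand points) while the bracketed fzero calls recover full precision.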
  1 Comment
Matthew on 25 Feb 2025
Thank you so much! This really demonstrated the use of function handles and how to couch them in an efficient manner.


More Answers (3)

Alan Stevens on 23 Feb 2025
Like this?
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
w = @(x) (a/2)*sqrt(k^2*nCore^2 - x^2);
u = @(x) (a/2)*sqrt(x^2 - k^2*nClad^2);
eqn = @(x) (besselj(1, u(x))/(u(x)*besselj(0, u(x)))) + 1*(besselk(1, w(x))/(w(x)*besselk(0, w(x))));
x0 = k*(nCore+nClad)/2;
x = fzero(eqn,x0);
disp(x)
5.8909e+06

Sam Chak on 23 Feb 2025
You might consider approaching the problem as finding the minimum point of a function that is locally convex around the root. I have used both ga() and fmincon() for this purpose. You should exercise caution when dealing with the square root functions to prevent the generation of complex numbers in this context. Both terms inside the domains of the square roots, k^2*nCore^2 and k^2*nClad^2, are very large values, on the order of 10^13. Therefore, it is essential to estimate the range of β to solve the problem more efficiently.
The β I found using this approach is around 5,908,196.505.
format long
%% initial beta search
lb = 5.905e6; % lower bound of beta search
ub = 5.910e6; % upper bound of beta search
beta0 = ga(@costfcn, 1, [], [], [], [], lb, ub)
ga stopped because the average change in the fitness value is less than options.FunctionTolerance.
beta0 =
5.908196504897320e+06
%% refined beta search
options = optimoptions('fmincon', 'Display', 'iter', 'OptimalityTolerance', 1e-8);
[bestbeta, fval] = fmincon(@costfcn, beta0, [], [], [], [], lb, ub, [], options)
                                            First-order      Norm of
 Iter F-count            f(x)  Feasibility   optimality         step
    0       2    9.532660e-21    0.000e+00    1.394e-10
Initial point is a local minimum that satisfies the constraints.
Optimization completed because at the initial point, the objective function is non-decreasing in feasible directions to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
bestbeta =
5.908196504897320e+06
fval =
9.532660447575216e-21
%% plot result
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
beta = linspace(5.90e6, 5.92e6, 10001);
w = (a/2)*sqrt(k^2*nCore^2 - beta.^2);
u = (a/2)*sqrt(beta.^2 - k^2*nClad^2);
Leqn = (besselj(1, u)./(u.*besselj(0, u))); % left side of equation
Reqn = -1*(besselk(1, w)./(w.*besselk(0, w))); % right side of equation
plot(beta, [Leqn; Reqn]), grid on, xlabel('\beta')
xline(bestbeta, '--', sprintf('best beta: %.3f', bestbeta), 'color', '#7F7F7F', 'LabelVerticalAlignment', 'bottom')
legend('Leqn', 'Reqn', 'best \beta', 'location', 'east')
%% cost function to find the minimum
function J = costfcn(beta)
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
% to find the range
% ub = k^2*nCore^2
% lb = k^2*nClad^2
% beta = linspace(sqrt(lb+100), sqrt(ub-100), 10001);
% beta = linspace(5.90e6, 5.92e6, 10001);
w = (a/2)*sqrt(k^2*nCore^2 - beta.^2);
u = (a/2)*sqrt(beta.^2 - k^2*nClad^2);
Leqn = (besselj(1, u)./(u.*besselj(0, u)));
Reqn = -1*(besselk(1, w)./(w.*besselk(0, w)));
J = (Leqn - Reqn).^2;
end
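As a side note, since the cost here is a scalar function of β, the refined search can also be done with fminbnd from base MATLAB, with no toolbox required, provided the bounds isolate a single root. A sketch using the same lb/ub bracket assumed above:

```matlab
% sketch: bounded scalar minimization of the same squared-residual cost
nCore = 1.4682;  nClad = 1.446;
k = 2*pi/1.55e-6;  a = 8.2e-6;
w = @(beta) (a/2)*sqrt(k^2*nCore^2 - beta.^2);
u = @(beta) (a/2)*sqrt(beta.^2 - k^2*nClad^2);
J = @(beta) (besselj(1,u(beta))./(u(beta).*besselj(0,u(beta))) ...
           + besselk(1,w(beta))./(w(beta).*besselk(0,w(beta)))).^2;
bestbeta = fminbnd(J, 5.905e6, 5.910e6, optimset('TolX', 1e-10))
% approx. 5.9082e6, consistent with the ga/fmincon result
```

Because J is smooth with a single zero minimum inside the bracket, the golden-section/parabolic search in fminbnd converges without any initial guess.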
  2 Comments
Matthew on 23 Feb 2025
Using this method, it seems like you kind of need to know a starting point to begin with. Is there a way around that, or a way to find multiple roots?
Sam Chak on 24 Feb 2025
You do not need to supply an initial guess when using the genetic algorithm (ga) method, as it automatically generates a random initial population of solutions to initiate the search process. However, it does not return the same solution each time it is executed because it incorporates randomness in its operations to find a 'just good enough' solution.
Most optimization methods require an initial guess from the user, as this allows for guiding the iterative process from a desired starting point toward a potential optimum. This enables the optimizer to navigate the solution space more efficiently and converge to a result more quickly. You may refer to @David Goodmanson's excellent answer for further details.



Sam Chak on 23 Feb 2025
I revisited your approach using vpasolve(), but I provided an initial guess that is close to the solution, as illustrated in the graph.
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
syms x
w = (a/2)*sqrt(k^2*nCore^2 - x^2);
u = (a/2)*sqrt(x^2 - k^2*nClad^2);
% eqn = (besselj(1, u)/(u*besselj(0, u))) == -1*(besselk(1, w)/(w*besselk(0, w)));
eqnLeft = (besselj(1, u)/(u*besselj(0, u)));
eqnRight = -1*(besselk(1, w)/(w*besselk(0, w)));
lb = 5.90e6; % lower bound
ub = 5.92e6; % upper bound
fplot([eqnLeft eqnRight], [lb, ub]), grid on
legend('Leqn', 'Reqn', 'location', 'east')
x0 = (lb + ub)/2;
bestbeta = vpasolve(eqnLeft == eqnRight, x, x0)
bestbeta = 
5908196.5048948658927662166292494
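One further option, in case it is useful: vpasolve also accepts a search range [lb ub] for the variable instead of a single starting point, so a close initial guess is not needed. A sketch with the same setup:

```matlab
% sketch: vpasolve over a bounded search range rather than a point guess
nCore = 1.4682;  nClad = 1.446;
k = 2*pi/1.55e-6;  a = 8.2e-6;
syms x
w = (a/2)*sqrt(k^2*nCore^2 - x^2);
u = (a/2)*sqrt(x^2 - k^2*nClad^2);
eqnLeft  = besselj(1, u)/(u*besselj(0, u));
eqnRight = -besselk(1, w)/(w*besselk(0, w));
bestbeta = vpasolve(eqnLeft == eqnRight, x, [5.90e6 5.92e6])
% returns a root inside the interval, approx. 5.9082e6
```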

Release

R2024b
