Hello all
I write The MATLAB Blog and have covered various enhancements to MATLAB's ODE capabilities over the last couple of years. Here are a few such posts:
Everyone in this community has engaged deeply with these posts and given me lots of ideas for future enhancements, which I've dutifully added to our internal enhancement request database.
Because I've asked for so much in this area, I was recently asked if there's anything else we should consider for ODEs. Since all my best ideas come from all of you, I'm asking here...
So. If you could ask for new and improved functionality for solving ODEs with MATLAB, what would it be and (ideally) why?
Cheers,
Mike
Gregory Vernon
Last activity on 8 Oct 2025 at 13:32

Something that I periodically wonder about is whether an integration with the Rubi integration rules package would improve symbolic integration in MATLAB's Symbolic Math Toolbox. The project is open-source and MIT-licensed, has a Mathematica implementation, and SymPy is reportedly working on an implementation. Much of my interest comes from this 2022 report that compared the previous version of Rubi (4.16.1) against various CAS systems, including MATLAB R2021a (MuPAD):
While not really an official metric for Rubi, this does "feel" similar to my experience computing symbolic integrals in MATLAB's Symbolic Math Toolbox vs. Maple/Mathematica. What do y'all think?
Collin
Last activity on 5 Oct 2025 at 14:04

Yesterday I had an urgent service call with MATLAB tech support. The MathWorks technician on call, Ivy Ngyuen, helped fix the problem. She was very patient, and I truly appreciate her efforts, which resolved the issue. Thank you.
Benjamin
Last activity on 2 Oct 2025 at 17:03

Excited to learn more about MathWorks.
Nermin
Last activity on 2 Oct 2025 at 15:23

Looking forward to the Expo!
I saw an interesting problem on a Reddit math forum today. The question was to find a number x as close as possible to r = 3.6, with the requirement that both x and 1/x be representable in a finite number of decimal places.
The problem, of course, is that 3.6 = 18/5, and 18/5 has the inverse 5/18, which does not have a finite representation in decimal form.
In order for a number and its inverse to both be representable in a finite number of decimal places (using base 10), it must be of the form 2^p*5^q, where p and q are integers, either positive or negative. If that is not clear to you intuitively, suppose we have a number of the form
2^p*5^-q
where p and q are both positive. All you need do is multiply that number by 10^q. All this does is shift the decimal point, since you are just multiplying by powers of 10. But now the result is
2^(p+q)
and that is clearly an integer, so the original number could be represented using a finite number of digits as a decimal. The same general idea applies if p were negative, or if both exponents were negative.
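To see the claim in action, here is a quick check of my own (using the Symbolic Math Toolbox) on 3.2 = 2^4*5^-1:
x = sym(2)^4 * sym(5)^-1;   % 16/5 = 3.2, which is of the form 2^p*5^q
vpa(x,30)     % 3.2 -- a finite decimal
vpa(1/x,30)   % 0.3125 -- the inverse is also finite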
Now, to return to the problem at hand... We can obviously adjust the number r to be 20/5 = 4, or 16/5 = 3.2. In both cases, since the fraction is now of the desired form, we are happy. But neither of them is really close to 3.6. My goal will be to find a better approximation, but hopefully, I can avoid a horrendous amount of trial and error. It would seem the trick might be to take logs, to get us closer to a solution. That is, suppose I take logs, to the base 2?
log2(3.6)
ans = 1.8480
I used log2 here because that makes the problem a little simpler, since log2(2^p)=p. Therefore we want to find a pair of integers (p,q) such that
log2(3.6) + delta = p + log2(5)*q
where delta is as close to zero as possible. Thus delta is the error in our approximation to 3.6. And since we are working in logs, delta can be viewed as a proportional error term. Again, p and q may be any integers, either positive or negative. The two cases we have seen already have (p,q) = (2,0), and (4,-1).
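As a quick sanity check (a snippet of my own), here are the vertical errors for those two known points:
delta = @(p,q) p + log2(5)*q - log2(3.6);   % the error term defined above
delta(2,0)     % about  0.1520, the point giving 4
delta(4,-1)    % about -0.1699, the point giving 3.2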
Do you see the general idea? The equation
log2(3.6) = p + log2(5)*q
represents a line in the (p,q) plane, and we want to find a point on the integer lattice (p,q) where the line passes as closely as possible.
% plot the integer lattice and the line log2(3.6) = p + log2(5)*q
[Xl,Yl] = meshgrid(-10:10);
plot(Xl,Yl,'k.')
hold on
fimplicit(@(p,q) -log2(3.6) + p + log2(5)*q,[-10,10,-10,10],'g-')
plot([2 4],[0,-1],'ro')   % the two candidates we already found: (2,0) and (4,-1)
hold off
Now, some might think in terms of orthogonal distance to the line, but really, we want the vertical distance to be minimized. Again, minimize abs(delta) in the equation:
log2(3.6) + delta = p + log2(5)*q
where p and q are integer.
Can we do that using MATLAB? The skill in mathematics often lies in formulating a word problem, and then turning the word problem into a problem of mathematics that we know how to solve. We are almost there now. I next want to formulate this into a problem that intlinprog can solve. The catch is that intlinprog cannot handle absolute value objectives directly. The trick is to employ slack variables, a terribly useful tool on this class of problem.
Rewrite delta as:
delta = Dpos - Dneg
where Dpos and Dneg are real variables, but both are constrained to be positive.
prob = optimproblem;
p = optimvar('p',lower = -50,upper = 50,type = 'integer');
q = optimvar('q',lower = -50,upper = 50,type = 'integer');
Dpos = optimvar('Dpos',lower = 0);
Dneg = optimvar('Dneg',lower = 0);
Our goal for the ILP solver will now be to minimize Dpos + Dneg. Since both are constrained to be nonnegative, this minimizes the absolute value of delta. One of them will always be zero at the optimum: if both were positive, we could subtract the smaller from each, keeping the constraint satisfied while reducing the objective.
r = 3.6;
prob.Constraints = log2(r) + Dpos - Dneg == p + log2(5)*q;
prob.Objective = Dpos + Dneg;
The solve is now a simple one. I'll tell it to use intlinprog, even though it would probably figure that out by itself. (Note: if I do not tell solve which solver to use, it does use intlinprog. It also finds the correct solution when I tell it to use GA offline.)
solve(prob,solver = 'intlinprog')
Solving problem using intlinprog.
Running HiGHS 1.7.1: Copyright (c) 2024 HiGHS under MIT licence terms
Coefficient ranges:
  Matrix [1e+00, 2e+00]
  Cost   [1e+00, 1e+00]
  Bound  [5e+01, 5e+01]
  RHS    [2e+00, 2e+00]
Presolving model
1 rows, 4 cols, 4 nonzeros  0s
1 rows, 4 cols, 4 nonzeros  0s
Solving MIP model with:
   1 rows
   4 cols (0 binary, 2 integer, 0 implied int., 2 continuous)
   4 nonzeros
(Branch-and-bound progress omitted: the incumbent improved from 0.765578819 to 0.00115357525 at the root node, in 0.0s.)
Solving report
  Status            Optimal
  Primal bound      0.00115357524726
  Dual bound        0.00115357524726
  Gap               0% (tolerance: 0.01%)
  Solution status   feasible
                    0.00115357524726 (objective)
                    0 (bound viol.)  0 (int. viol.)  0 (row viol.)
  Timing            0.01 (total)  0.00 (presolve)  0.00 (postsolve)
  Nodes             1
  LP iterations     98 (total)  1 (strong br.)  6 (separation)  88 (heuristics)
Optimal solution found.
Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 1e-06. The intcon variables are integer within tolerance, options.ConstraintTolerance = 1e-06.
ans = struct with fields:
    Dneg: 0
    Dpos: 0.0012
       p: 39
       q: -16
The solution it finds within the bounds of +/- 50 for both p and q seems pretty good. Note that Dpos and Dneg are pretty close to zero.
2^39*5^-16
ans = 3.6029
and while 3.6028797... seems like nothing special, in fact, it is of the form we want.
R = sym(2)^39*sym(5)^-16
R =
549755813888/152587890625
vpa(R,100)
ans = 
3.6028797018963968
vpa(1/R,100)
ans = 
0.277555756156289135105907917022705078125
Both of those numbers are exact. If I wanted to find a better approximation to 3.6, all I need do is extend the bounds on p and q. And we can use the same solution approach for any floating point number.
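Wrapped up as a function, the same idea might look like this (a sketch of my own; the function name and the bound argument B are not from the original post):
function [x,p,q] = nearestTerminating(r,B)
% Find x = 2^p*5^q, with integers p,q in [-B,B], such that log2(x) is as
% close as possible to log2(r). Both x and 1/x then terminate in decimal.
prob = optimproblem;
pv = optimvar('p',lower = -B,upper = B,type = 'integer');
qv = optimvar('q',lower = -B,upper = B,type = 'integer');
Dpos = optimvar('Dpos',lower = 0);   % slack variables for |delta|
Dneg = optimvar('Dneg',lower = 0);
prob.Constraints = log2(r) + Dpos - Dneg == pv + log2(5)*qv;
prob.Objective = Dpos + Dneg;
sol = solve(prob,solver = 'intlinprog');
p = sol.p;  q = sol.q;  x = 2^p*5^q;
end
For example, nearestTerminating(3.6,50) should reproduce the (p,q) = (39,-16) solution found above.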
Check out how these charts were made with polar axes in the Graphics and App Building blog's latest article "Polar plots with patches and surface".
Nine new Image Processing courses plus one new learning path are now available as part of the Online Training Suite. These courses replace the content covered in the self-paced course Image Processing with MATLAB, which sunsets in 2026.
New courses include:
The new learning path Image Segmentation and Analysis in MATLAB earns users the digital credential Image Segmentation in MATLAB and contains the following courses:
Apparently, the back end here is running R2025b, even though hovering over the Run button and the "Executing In" popup both show R2024a.
ver matlab
-------------------------------------------------------------------------------------------------
MATLAB Version: 25.2.0.2998904 (R2025b)
MATLAB License Number: 40912989
Operating System: Linux 6.8.0-1019-aws #21~22.04.1-Ubuntu SMP Thu Nov 7 17:33:30 UTC 2024 x86_64
Java Version: Java 1.8.0_292-b10 with AdoptOpenJDK OpenJDK 64-Bit Server VM mixed mode
-------------------------------------------------------------------------------------------------
MATLAB Version 25.2 (R2025b)
Registration is now open for MathWorks' annual virtual event, MATLAB EXPO 2025, on November 12–13, 2025!
Register now and start building your customized agenda today!
Explore. Experience. Engage.
Join MATLAB EXPO to connect with MathWorks and industry experts to learn about the latest trends and advancements in engineering and science. You will discover new features and capabilities for MATLAB and Simulink that you can immediately apply to your work.
Mike Croucher
Last activity on 30 Sep 2025 at 9:50

all(logical.empty)
ans = logical
1
Discuss!
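For context, this is the usual vacuous-truth convention: "all" over an empty set is true because there is no counterexample. Some companions to try (my examples, not part of the original post):
any(logical.empty)   % logical 0: no element is true
prod([])             % ans = 1, the empty product
sum([])              % ans = 0, the empty sum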
I just noticed that MATLAB R2025b is available. I am a bit surprised, as I never got notification of the beta test for it.
This topic is for highlights and experiences with R2025b.
Have you ever been enrolled in a course that uses an LMS and there is an assignment that involves posting a question to, or answering a question in, a discussion group? This discussion group is meant to simulate that experience.

The functionality would allow report generation straight from live scripts that could be shared without exposing the code. This could be useful for cases where the recipient of the report only cares about the results and not the code details, or when the methodology is part of a company's know-how, e.g. engineering services companies.

In order for it to be practical, it would also require that variable values could be inserted into the text blocks, e.g. #var_name# would insert the value of the variable "var_name", and possibly allow selecting which code blocks to hide.
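For what it's worth, part of this exists today: the export function can convert a live script programmatically and, if I recall the option correctly, hide the code; the variable-into-text interpolation is the missing piece. A minimal sketch, assuming R2022a or later (check doc export for the exact option names):
% hypothetical file names; HideCode is my recollection of the option name
export("analysis.mlx","report.pdf",HideCode = true);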

“Hello, I am Subha & I’m part of the organizing/mentoring team for NASA Space Apps Challenge Virudhunagar 2025 🚀. We’re looking for collaborators/mentors with ML and MATLAB expertise to help our student teams bring their space solutions to life. Would you be open to guiding us, even briefly? Your support could impact students tackling real NASA challenges. 🌍✨”
In R2024b, a Levenberg–Marquardt solver (TrainingOptionsLM) was introduced. The built-in function trainnet accepts training options via the trainingOptions function (https://www.mathworks.com/help/deeplearning/ref/trainingoptions.html#bu59f0q-2) and supports the LM algorithm. I have been curious how to use it in deep learning, and the official documentation does not yet provide a concrete usage example. Below is a simple example illustrating how to use the LM algorithm to optimize a small number of learnable parameters.
For example, consider the nonlinear function:
y_hat = @(a,t) a(1)*(t/100) + a(2)*(t/100).^2 + a(3)*(t/100).^3 + a(4)*(t/100).^4;
It represents a curve. Given 100 data points (t, y), we want to use least squares to estimate the four parameters a1–a4.
t = (1:100)';
y_hat = @(a,t)a(1)*(t/100) + a(2)*(t/100).^2 + a(3)*(t/100).^3 + a(4)*(t/100).^4;
x_true = [ 20 ; 10 ; 1 ; 50 ];
y_true = y_hat(x_true,t);
plot(t,y_true,'o-')
  • Using the traditional lsqcurvefit-wrapped "Levenberg–Marquardt" algorithm:
x_guess = [ 5 ; 2 ; 0.2 ; -10 ];
options = optimoptions("lsqcurvefit",Algorithm="levenberg-marquardt",MaxFunctionEvaluations=800);
[x,resnorm,residual,exitflag] = lsqcurvefit(y_hat,x_guess,t,y_true,-50*ones(4,1),60*ones(4,1),options);
Local minimum found. Optimization completed because the size of the gradient is less than 1e-4 times the value of the function tolerance.
x,resnorm,exitflag
x = 4×1
   20.0000
   10.0000
    1.0000
   50.0000
resnorm = 9.7325e-20
exitflag = 1
  • Using the deep-learning-wrapped "Levenberg–Marquardt" algorithm:
options = trainingOptions("lm", ...
InitialDampingFactor=0.002, ...
MaxDampingFactor=1e9, ...
DampingIncreaseFactor=12, ...
DampingDecreaseFactor=0.2,...
GradientTolerance=1e-6, ...
StepTolerance=1e-6,...
Plots="training-progress");
numFeatures = 1;
layers = [featureInputLayer(numFeatures,'Name','input')
fitCurveLayer(Name='fitCurve')];
net = dlnetwork(layers);
XData = dlarray(t);
YData = dlarray(y_true);
netTrained = trainnet(XData,YData,net,"mse",options);
    Iteration    TimeElapsed    TrainingLoss    GradientNorm    StepNorm
    _________    ___________    ____________    ____________    ________
            1       00:00:03         0.35754        0.053592      39.649
Warning: Error occurred while executing the listener callback for event LogUpdate defined for class deep.internal.train.SerialMetricManager:
Error using matlab.internal.capability.Capability.require (line 94)
This functionality is not available on remote platforms.
(Remainder of the stack trace omitted: Plots="training-progress" tries to open a uifigure, which is not supported on this remote back end. Training continues regardless.)
            7       00:00:04      5.3382e-10      1.4371e-07     0.43992
Training stopped: Gradient tolerance reached
netTrained.Layers(2)
ans =
  fitCurveLayer with properties:

    Name: 'fitCurve'

   Learnable Parameters
    a1: 20.0007
    a2: 9.9957
    a3: 1.0072
    a4: 49.9962

   State Parameters
    No properties.

Use properties method to see a list of all properties.
classdef fitCurveLayer < nnet.layer.Layer ...
        & nnet.layer.Acceleratable
    % Custom curve-fitting layer with four learnable coefficients a1-a4.

    properties (Learnable)
        % Layer learnable parameters
        a1
        a2
        a3
        a4
    end

    methods
        function layer = fitCurveLayer(args)
            arguments
                args.Name = "lm_fit";
            end
            % Set layer name.
            layer.Name = args.Name;
            % Set layer description.
            layer.Description = "fit curve layer";
        end

        function layer = initialize(layer,~)
            % layer = initialize(layer,layout) initializes the layer
            % learnable parameters using the specified input layout.
            if isempty(layer.a1)
                layer.a1 = rand();
            end
            if isempty(layer.a2)
                layer.a2 = rand();
            end
            if isempty(layer.a3)
                layer.a3 = rand();
            end
            if isempty(layer.a4)
                layer.a4 = rand();
            end
        end

        function Y = predict(layer, X)
            % Y = predict(layer, X) forwards the input data X through the
            % layer and outputs the result Y.
            % Y = layer.a1.*exp(-X./layer.a2) + layer.a3.*X.*exp(-X./layer.a4);
            Y = layer.a1*(X/100) + layer.a2*(X/100).^2 + layer.a3*(X/100).^3 + layer.a4*(X/100).^4;
        end
    end
end
The network is very simple: only the fitCurveLayer defines the learnable parameters a1–a4. I observed that the fitted values are very close to those from lsqcurvefit.
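For a quick side-by-side of the two fits (my own check; if the learnables come back as dlarray, pass them through extractdata first):
lyr = netTrained.Layers(2);
a_fit = [lyr.a1; lyr.a2; lyr.a3; lyr.a4];   % LM-trained coefficients
disp([a_fit(:), x(:)])                      % column 1: trainnet LM, column 2: lsqcurvefit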
David
Last activity on 29 Aug 2025

I’d like to take a moment to highlight the great contributions of one of our community members, @Paul, who is fast approaching an impressive 5,000 reputation points!
Paul has built his reputation the best way possible - by generously sharing his knowledge and helping others. Over the last few years, he’s provided thoughtful and practical answers to hundreds of questions, making life a little easier for learners and experts alike.
Reputation points are more than just numbers here - they represent the trust and appreciation of the community. Paul’s upcoming milestone is a testament to his consistency, expertise, and willingness to support others.
Please join me in recognizing Paul's contributions and impact on the MATLAB Central community.