I'm seeing solution maps shown with low-contrast gray colors instead of the correct symbol colors. I have observed this using both Safari and Chrome. Screenshot:
Here is a screenshot of a Cody problem that I just created. The math rendering is poor. (I have since edited the problem to remove the math formatting.)
David
Last activity on 20 Oct 2025 at 21:26

I just learned you can access MATLAB Online from the following shortcut in your web browser: https://matlab.new
I'm developing a comprehensive MATLAB programming course and seeking passionate co-trainers to collaborate!
Why MATLAB Matters: Many people underestimate MATLAB's significance in:
  • Communication systems
  • Signal processing
  • Mathematical modeling
  • Engineering applications
  • Scientific computing
Course Structure:
  1. Foundation Module: MATLAB basics and fundamentals
  2. Image Processing: Practical applications and techniques
  3. Signal Processing: Analysis and implementation
  4. Machine Learning: ML algorithms using MATLAB
  5. Hands-on Learning: Projects, assignments.
What I'm Looking For:
  • Enthusiastic educators willing to share knowledge
  • Experience in any MATLAB application area
  • Commitment to collaborative teaching
Interested in joining as a co-trainer? Please comment below or reach out directly!
Are there any code restrictions for programming Cody solutions? I could not find anything mentioned at https://www.mathworks.com/matlabcentral/content/cody/about.html, other than toolbox functions not being available.
Hey everyone,
I’m currently working with MATLAB R2025b and using the MQTT blocks from the Industrial Communication Toolbox inside Simulink. I’ve run into an issue that’s driving me a bit crazy, and I’m not sure if it’s a bug or if I’m missing something obvious.
Here’s what’s happening:
  • I open the MQTT Configure block.
  • I fill out all the required fields — Broker address, Port, Client ID, Username, and Password.
  • When I click Test Connection, it says “Connection established successfully.” So far so good.
  • Then I click Apply, close the dialog, set the topic name, and try to run the simulation.
  • At this point, I get the following error: Caused by: Invalid value for 'ClientID', 'Username' or 'Password'.
  • When I reopen the MQTT config block, I notice that the Password field is empty again — even though I definitely entered it before and the connection test worked earlier.
It seems like Simulink is somehow not saving the password after hitting Apply, which leads to the authentication error during simulation.
Has anyone else faced this? Is this a bug in R2025b, or do I need to configure something differently to make the password persist?
Would really appreciate any insights, workarounds, or confirmations from anyone who has used MQTT in Simulink recently.
Thanks in advance!
I'm working on training neural networks without backpropagation / automatic differentiation, using locally derived analytic forms of the update rules. Because this gives a direct formula for each update rule, it removes a lot of the overhead that automatic differentiation otherwise requires.
However, MATLAB's neural network functionality is currently built solely around backpropagation and automatic differentiation, for example the dlgradient function and the requirement that everything be a dlarray during training.
I have two main requests, specifically for functions that perform a single operation within a single layer of a neural network, such as "dlconv", "fullyconnect", "maxpool", "avgpool", "relu", etc:
  • these functions should also allow normal gpuArray data instead of requiring everything to be dlarrays.
  • these functions are currently designed to only perform the forward pass. I request that they also be able to perform the backward pass if the user asks for it. There could be an additional input flag that is either "forward" (default) or "backward", and the function would then take all the inputs needed for that operation (e.g. the "avgpool" forward pass only needs the input data and the pooling parameters, while the "avgpool" backward pass needs the derivative w.r.t. the avgpool output, the pooling parameters, and the original data dimensions; see the sketch after this list). I know there is a maxunpool function that achieves this for maxpool, but it has significant issues when used this way rather than through backpropagation in a dlgradient-type layer, see (https://www.mathworks.com/matlabcentral/answers/2179587-making-a-custom-way-to-train-cnns-and-i-am-noticing-that-avgpool-is-significantly-faster-than-maxpo?s_tid=srchtitle).
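To make the second request concrete, here is a minimal sketch of what such a backward-pass option could compute for 2-D average pooling on plain gpuArray data. The name avgpoolBackwardSketch is my own illustration (not an existing MathWorks function), and it assumes a square, non-overlapping pooling window whose size divides the spatial dimensions exactly:
% Example call on plain gpuArray data (no dlarray needed):
dY = gpuArray(rand(14, 14, 64, 32, 'single'));   % derivative w.r.t. the pooled output
dX = avgpoolBackwardSketch(dY, 2);               % dX is 28-by-28-by-64-by-32

function dX = avgpoolBackwardSketch(dY, poolSize)
% Hypothetical helper (not an existing toolbox function): backward pass of
% 2-D average pooling, assuming a square, non-overlapping window of size
% poolSize that divides the spatial dimensions of the original input exactly.
% Each input element contributed 1/poolSize^2 to its pooled output, so it
% receives that fraction of the corresponding output gradient.
dX = repelem(dY, poolSize, poolSize, 1, 1) / poolSize^2;
end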
I don't know how many people would benefit from this feature, and someone could always spend their time creating these functionalities themselves by matlab scripts, cuDNN mex, etc., but regardless it would be nice for matlab to have this allowable for more customizable neural net training.
Inspired by @xingxingcui's post about old MATLAB versions and @유장's post about an old Easter egg, I thought it might be fun to share some MATLAB-Old-Timer Stories™.
Back in the early 90s, MATLAB had been ported to MacOS, but there were some interesting wrinkles. One that kept me earning my money as a computer lab tutor was that MATLAB required file names to follow Windows standards - no spaces or other special characters. But on a Mac, nothing stopped you from naming your script "hello world - 123.m". The problem came when you tried to run it. MATLAB was essentially doing an eval on the script name, assuming the file name would follow Windows (and MATLAB) naming rules.
So now imagine a lab full of students taking a university course. As is common in many universities, the course was given a numeric code. For whatever historical reason, my school at that time was also using numeric codes for the departments. Despite being told the rules for naming scripts, many students would default to something like "26.165 - 1.1" for problem one on HW1 for the intro applied math course 26.165.
No matter what they did in their script, when they ran it, MATLAB would just say "ans = 25.0650".
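For anyone who hasn't seen this failure mode, here is a minimal illustration of what MATLAB was effectively doing with that file name:
eval('26.165 - 1.1')   % the "file name" is evaluated as arithmetic
% ans = 25.0650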
Nothing brings you more MATLAB-god credibility as a student tutor than walking over to someone's computer, taking one look at their output, saying "rename your file", and walking away like a boss.
I recently published this blog post about resources to help people learn MATLAB https://blogs.mathworks.com/matlab/2025/09/11/learning-matlab-in-2025/
What are your favourite MATLAB learning resources?
It was 2010 when I was a sophomore in university. I chose to learn MATLAB because of a mathematical modeling competition, and the university provided MATLAB 7.0, a very classic release. To get started, I borrowed many MATLAB books from the library and began by learning simple numerical calculations, plotting, and solving equations. Gradually I was drawn in by MATLAB’s powerful capabilities and became interested; I often used it as a big calculator for fun. That version didn’t have MATLAB Live Script; instead it used MATLAB Notebook (M-Book), which allowed MATLAB functions to be used directly within Microsoft Word, and it also had the Symbolic Math Toolbox’s MuPAD interactive environment. These were later gradually replaced by Live Scripts introduced in R2016a. There are many similar examples...
Out of curiosity, I still have screenshots on my computer showing MATLAB 7.0 running in compatibility mode. I’d love to hear your thoughts!
Edit 15 Oct 2025: Removed incorrect code. Replaced symmatrix2sym and symfunmatrix2symfun with sym and symfun, respectively (the latter supported as of R2024b).
The Symbolic Math Toolbox does not have its own dot and cross functions. That's o.k. (maybe) for garden-variety vectors of sym objects, where those operations get shipped off to the base MATLAB functions
x = sym('x',[3,1]); y = sym('y',[3,1]);
which dot(x,y)
/MATLAB/toolbox/matlab/specfun/dot.m
dot(x,y)
ans = 
which cross(x,y)
/MATLAB/toolbox/matlab/specfun/cross.m
cross(x,y)
ans = 
But now we have symmatrix et al., and things don't work as nicely
clearvars
x = symmatrix('x',[3,1]); y = symmatrix('y',[3,1]);
z = symmatrix('z',[1,1]);
sympref('AbbreviateOutput',false);
dot() expands the result, which isn't really desirable for exposition.
eqn = z == dot(x,y)
eqn = 
Also, dot() returns the result in terms of the conjugate of x, which can't be simplified away at the symmatrix level
assumeAlso(sym(x),'real')
class(eqn)
ans = 'symmatrix'
try
eqn = z == simplify(dot(x,y))
catch ME
ME.message
end
ans = 'Undefined function 'simplify' for input arguments of type 'symmatrix'.'
To get rid of the conjugate, we have to resort to sym
eqn = simplify(sym(eqn))
eqn = 
but again we are in expanded form, which defeats the purpose of symmatrix (et al.)
But at least we can do this to get a nice equation
eqn = z == x.'*y
eqn = 
dot errors with symfunmatrix inputs
clearvars
syms t real
x = symfunmatrix('x(t)',t,[3,1]); y = symfunmatrix('y(t)',t,[3,1]);
try
dot(x,y)
catch ME
ME.message
end
ans = 'Invalid argument at position 2. Symbolic function is evaluated at the input arguments and does not accept colon indexing. Instead, use FORMULA on the function and perform colon indexing on the returned output.'
Cross works (accidentally IMO) with symmatrix, but expands the result, which isn't really desirable for exposition
clearvars
x = symmatrix('x',[3,1]); y = symmatrix('y',[3,1]);
z = symmatrix('z',[3,1]);
eqn = z == cross(x,y)
eqn = 
And it doesn't work at all if an input is a symfunmatrix
syms t
w = symfunmatrix('w(t)',t,[3,1]);
try
eqn = z == cross(x,w);
catch ME
ME.message
end
ans = 'A and B must be of length 3 in the dimension in which the cross product is taken.'
In the latter case we can expand with
eqn = z == cross(sym(x),symfun(w)) % x has to be converted
eqn(t) = 
But we can't do the same with dot (as shown above, dot doesn't like symfun inputs)
try
eqn = z == dot(sym(x),symfun(w))
catch ME
ME.message
end
ans = 'Invalid argument at position 2. Symbolic function is evaluated at the input arguments and does not accept colon indexing. Instead, use FORMULA on the function and perform colon indexing on the returned output.'
Looks like the only choice for dot with symfunmatrix is to write it by hand at the matrix level
x.'*w
ans(t) = 
or at the sym/symfun level
sym(x).'*symfun(w) % assuming x is real
ans(t) = 
Ideally, I'd like to see dot and cross implemented for symmatrix and symfunmatrix types where neither function would evaluate, i.e., expand, until both arguments are subs-ed with sym or symfun objects of appropriate dimension.
Also, it would be nice if symmatrix could be assumed to be real. Is there a reason why being able to do so wouldn't make sense?
try
assume(x,'real')
catch ME
ME.message
end
ans = 'Undefined function 'assume' for input arguments of type 'symmatrix'.'
What if you had no isprime utility to rely on in MATLAB? How would you identify a number as prime? An easy answer might be something tricky, like that in simpleIsPrime0.
simpleIsPrime0 = @(N) ismember(N,primes(N));
But I’ll also disallow the use of primes here, as it does not really test to see if a number is prime. As well, it would seem horribly inefficient, generating a possibly huge list of primes, merely to learn something about the last member of the list.
Looking for a more serious test for primality, I’ve already shown how to lighten the load by a bit using roughness, to sometimes identify numbers as composite and therefore not prime.
But to actually learn if some number is prime, we must do a little more. Yes, this is a common homework problem assigned to students, something we have seen many times on Answers. It can be approached in many ways too, so it is worth looking at the problem in some depth.
The definition of a prime number is a natural number greater than 1, which has only two factors, thus 1 and itself. That makes a simple test for primality of the number N easy. We just try dividing the number by every integer greater than 1, and not exceeding N-1. If any of those trial divides leaves a zero remainder, then N cannot be prime. And of course we can use mod or rem instead of an explicit divide, so we need not worry about floating point trash, as long as the numbers being tested are not too large.
simpleIsPrime1 = @(N) all(mod(N,2:N-1) ~= 0);
Of course, simpleIsPrime1 is not a good code, in the sense that it fails to check if N is an integer, or if N is less than or equal to 1. It is not vectorized, and it has no documentation at all. But it does the job well enough for one simple line of code. There is some virtue in simplicity after all, and it is certainly easy to read. But sometimes, I wish a function handle could include some help comments too! A feature request might be in the offing.
simpleIsPrime1(9931)
ans = logical
1
simpleIsPrime1(9932)
ans = logical
0
simpleIsPrime1 works quite nicely, and seems pretty fast. What could be wrong? At some point, the student is given a more difficult problem, to identify if a significantly larger integer is prime. simpleIsPrime1 will then cause a computer to grind to a distressing halt if given a sufficiently large number to test. Or it might even error out, when too large a vector of numbers was generated to test against. For example, I don't think you want to test a number of the order of 2^64 using simpleIsPrime1, as performing on the order of 2^64 divides will be highly time consuming.
uint64(2)^63-25
ans = uint64 9223372036854775783
Is it prime? I’ve not tested it to learn if it is, and simpleIsPrime1 is not the tool to perform that test anyway.
A student might realize the largest possible integer factors of some number N are the numbers N/2 and N itself. But, if N/2 is a factor, then so is 2, and some thought would suggest it is sufficient to test only for factors that do not exceed sqrt(N). This is because if a is a divisor of N, then so is b=N/a. If one of them is larger than sqrt(N), then the other must be smaller. That could lead us to an improved scheme in simpleIsPrime2.
simpleIsPrime2 = @(N) all(mod(N,2:sqrt(N)));
For an integer of the size 2^64, now you only need to perform roughly 2^32 trial divides. Maybe we might consider the subtle improvement found in simpleIsPrime3, which avoids trial divides by the even integers greater than 2.
simpleIsPrime3 = @(N) (N == 2) || (mod(N,2) && all(mod(N,3:2:sqrt(N))));
simpleIsPrime3 needs only an approximate maximum of 2^31 trial divides even for numbers as large as uint64 can represent. While that is large, it is still generally doable on the computers we have today, even if it might be slow.
Sadly, my goals are higher than even the rather lofty limit given by UINT64 numbers. The problem of course is that a trial divide scheme, despite being 100% accurate in its assessment of primality, is a time hog. Even an O(sqrt(N)) scheme is far too slow for numbers with thousands or millions of digits. And even for a number as “small” as 1e100, a direct set of trial divides by all primes less than sqrt(1e100) would still be practically impossible, as there are roughly n/log(n) primes that do not exceed n. For an integer on the order of 1e50,
1e50/log(1e50)
ans = 8.6859e+47
It is practically impossible to perform that many divides on any computer we can make today. Can we do better? Is there some more efficient test for primality? For example, we could write a simple sieve of Eratosthenes to check each prime found not exceeding sqrt(N).
function [TF,SmallPrime] = simpleIsPrime4(N)
% simpleIsPrime4 - Sieve of Eratosthenes to identify if N is prime
% [TF,SmallPrime] = simpleIsPrime4(N)
%
% Returns true if N is prime, as well as the smallest prime factor
% of N when N is composite. If N is prime, then SmallPrime will be N.
Nroot = ceil(sqrt(N)); % ceil caters for floating point issues with the sqrt
TF = true;
SieveList = true(1,Nroot+1); SieveList(1) = false;
SmallPrime = 2;
while TF
    % Find the "next" true element in SieveList
    while (SmallPrime <= Nroot+1) && ~SieveList(SmallPrime)
        SmallPrime = SmallPrime + 1;
    end
    % When we drop out of this loop, we have found the next
    % small prime to check to see if it divides N, OR, we
    % have gone past sqrt(N)
    if SmallPrime > Nroot
        % this is the case where we have now looked at all
        % primes not exceeding sqrt(N), and have found none
        % that divide N. This is where we will drop out to
        % identify N as prime. TF is already true, so we need
        % not set TF.
        SmallPrime = N;
        return
    else
        if mod(N,SmallPrime) == 0
            % SmallPrime does divide N, so we are done
            TF = false;
            return
        end
        % update SieveList
        SieveList(SmallPrime:SmallPrime:Nroot) = false;
    end
end
end
simpleIsPrime4 does indeed work reasonably well, though it is sometimes a little slower than simpleIsPrime3, and everything is hugely faster than simpleIsPrime1.
timeit(@() simpleIsPrime1(111111111))
ans = 0.6447
timeit(@() simpleIsPrime2(111111111))
ans = 1.1932e-04
timeit(@() simpleIsPrime3(111111111))
ans = 6.4815e-05
timeit(@() simpleIsPrime4(111111111))
ans = 7.5757e-06
All of those times will slow to a crawl for much larger numbers of course. And while I might find a way to subtly improve upon these codes, any improvement will be marginal in the end if I try to use any such direct approach to primality. We must look in a different direction completely to find serious gains.
At this point, I want to distinguish between two distinct classes of tests for primality of some large number. One class is what I might call an absolute or infallible test, one that is perfectly reliable. These are tests where, if X is identified as prime/composite, we can trust the result absolutely. The tests I showed in the form of simpleIsPrime1, simpleIsPrime2, simpleIsPrime3 and simpleIsPrime4 were all 100% accurate, thus they fall into the class of infallible tests.
The second general class of test for primality is what I will call an evidentiary test. Such a test provides evidence, possibly quite strong evidence, that the given number is prime, but in some cases, it might be mistaken. I've already offered a basic example of a weak evidentiary test for primality in the form of roughness. All primes are maximally rough. And therefore, if you can identify X as being rough to some extent, this provides evidence that X is also prime, and the depth of the roughness test influences the strength of the evidence for primality. While this is generally a fairly weak test, it is a test nevertheless, and a good exclusionary test, a good way to avoid more sophisticated but time consuming tests.
These evidentiary tests all have the property that if they do identify X as being composite, then they are always correct. In the context of roughness, if X is not sufficiently rough, then X is also not prime. On the other side of the coin, if you can show X is at least (sqrt(X)+1)-rough, then it is positively prime. (I say this to suggest that some evidentiary tests for primality can be turned into truth-telling tests, but that may take more effort than you can afford.) The problem of course is that it is literally impossible to verify that degree of roughness for numbers with many thousands of digits.
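As a concrete illustration of roughness used as a quick exclusionary pre-test, here is a minimal sketch; the helper name isRough and the bound B are my own choices for illustration, not something defined in the earlier post:
% N is "B-rough" here if it has no prime factor smaller than B (intended
% for N well above B). Failing the test proves N is composite; passing it
% is only evidence, not proof, that N is prime.
isRough = @(N,B) all(mod(N, primes(B-1)) ~= 0);
isRough(9931, 100)   % true: no prime below 100 divides 9931 (9931 is in fact prime)
isRough(9932, 100)   % false: 9932 is even, so it is certainly composite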
In my next post, I'll look at the Fermat test for primality, based on Fermat's little theorem.
Baranitharan
Last activity on 8 Oct 2025 at 10:39

Share your experience of getting started with MATLAB and the trouble you ran into. Looking forward to your answers.
Hello all
I write The MATLAB Blog and have covered various enhancements to MATLAB's ODE capabilities over the last couple of years. Here are a few such posts.
Everyone in this community has engaged deeply with these posts and given me lots of ideas for future enhancements, which I've dutifully added to our internal enhancement request database.
Because I've asked for so much in this area, I was recently asked if there's anything else we should consider in the area of ODEs. Since all my best ideas come from all of you, I'm asking here....
So. If you could ask for new and improved functionality for solving ODEs with MATLAB, what would it be and (ideally) why?
Cheers,
Mike
Gregory Vernon
Last activity on 8 Oct 2025 at 13:32

Something that I periodically wonder about is whether an integration with the Rubi integration rules package would improve symbolic integration in MATLAB's Symbolic Math Toolbox. The project is open-source and MIT-licensed, has a Mathematica implementation, and supposedly SymPy is working on an implementation. Much of my intrigue comes from this 2022 report that compared the previous version of Rubi (4.16.1) against various CAS systems, including MATLAB R2021a (MuPAD):
While not really an official metric for Rubi, this does "feel" similar to my experience computing symbolic integrals in Matlab Symbolic Toolbox vs Maple/Mathematica. What do y'all think?
Collin
Last activity on 5 Oct 2025 at 14:04

Yesterday I had an urgent service call for MATLAB tech support. The MathWorks technician on call, Ivy Ngyuen, helped fix the problem. She was very patient and I truly appreciate her efforts, which resolved the issue. Thank you.