Exponentiation Turns Values Into Zeros

After solving an equation, I have good non-zero numerical data. After exponentiation, some of these values go to zero; MATLAB literally sees them as zeros. I tried a few symbolic computations, but some of them don't seem to complete due to memory constraints, or my values end up in symbolic form, which breaks the program output. Does MATLAB have a way to increase the accuracy of the computation, staying as gentle on memory as possible, without converting everything to symbolic form?

1 Comment

BM
BM on 12 Apr 2018
Edited: BM on 12 Apr 2018
Also, is there a way in MATLAB to judge whether a symbolic computation will complete? Sometimes the memory requirements are fine on my computer, yet the program still might not finish. If possible, I would like a better way to evaluate a MATLAB program so I can tell in advance whether a run will complete.


 Accepted Answer

John D'Errico
John D'Errico on 12 Apr 2018
Edited: John D'Errico on 12 Apr 2018
No, you cannot tell MATLAB to increase the accuracy of computation. Numbers are doubles. Or they are symbolic. Or you could use my HPF toolbox, but that is similar in its limitations to symbolic computations.
Double precision numbers will underflow if you push things too far. You cannot change that behavior.
exp([-745 -746])
ans =
4.9407e-324 0
exp([-745 -746]) == 0
ans =
1×2 logical array
0 1
And, no. There is no way to know in advance whether a given arbitrary computation will terminate. For example, suppose your program were something simple, like finding and computing the 51st perfect number? (As of this moment, it seems only 50 are known, because even perfect numbers are directly related to Mersenne primes.)
It might even be a simpler question, like finding the second prime number of the form (n+1)*17^n-1. Even the first prime is pretty large. I won't tell you what value of n that is. ;-) And all I know about the second prime of that form, IF one exists, is that n must be at least as large as 23000 or so.
Some computations are tractable, in the sense that you can predict if and when they will terminate. For example, suppose you will use a scheme to compute pi to 1 million digits? (Not that hard, really.) There are simple schemes that compute essentially a fixed number of digits per iteration. So you can easily predict roughly how many iterations will be required.
But if you just write some general mess of code, there is no way to predict in advance when or if it will terminate. Sorry. I'm fairly sure that before long, having used only about a year's worth of computer time, the 51st Mersenne prime will be found, and therefore the 51st perfect number will be known too. But I have no idea whether a second prime of the form (n+1)*17^n-1 exists at all.
To a large extent, knowing if a code will terminate (and how long it might take) relies on your understanding of the computations being done. Essentially, to use a mathematical tool, it helps if you understand the mathematics behind what will be done.

8 Comments

I have been estimating bits throughout, which worked fine until I tried using symbolic computation. I didn't know if there was perhaps a bit of knowledge on symbolic computation that I didn't know that could help me make better estimates. Guess not.
Is there a way then to use 'vpa' to arrive at a numeric answer, as I know my exponentiated values are not zero?
No knowledge there to be had. No magical formula will tell you when an undecidable computation will terminate, since it is, oh, yeah, undecidable. And some computations are undecidable, or at least practically so, or are so at this point in time based on current knowledge.
Is there a way to use vpa? Of course. Just use vpa. But that means you need to convert the computations to symbolic form. And you said you don't want to do that.
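For what it's worth, if only a handful of values underflow, one sketch of the vpa route (assuming the Symbolic Math Toolbox is installed; the variable names here are illustrative, not from the original code) is to push just those arguments through sym before exponentiating:

```matlab
% Illustrative sketch: evaluate exp at an argument too negative for doubles.
% Requires the Symbolic Math Toolbox.
x = -746;                    % exp(-746) underflows to 0 in double precision
v = vpa(exp(sym(x)), 32)     % nonzero in variable-precision arithmetic
% v is on the order of 1e-324. Note that converting back with double(v)
% underflows again, so any further work must stay symbolic or in log space.
```

The catch is exactly what is said above: the computation has to be converted to symbolic form for this to help.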
A big part of applied mathematics, numerical analysis, etc., is in understanding how to do a computation that might be otherwise intractable. You might decide to transform the problem. For example, working with logs, instead of exponentiating something nasty. It might be as simple as an effective change of variables, or scaling the problem, etc.
So there are lots of ways you MIGHT be able to fix this. But we cannot know how you should fix your specific problem, because we have been told nothing except a general complaint that exponentials of numbers sometimes go to zero.
In many cases, those zeros might be completely unimportant. If those underflows are important, then you need to find a way around them. But there is no magical way to do so, without investing either the mathematical effort to fix the problem, or investing the computational resources to use a tool like syms or HPF.
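As one illustration of the "work with logs" idea: if the quantity needed downstream is a sum of exponentials, the standard log-sum-exp shift keeps everything representable in ordinary doubles. This is a generic sketch, not the poster's actual computation:

```matlab
% Illustrative log-sum-exp: compute log(sum(exp(a))) without underflow,
% even though every individual exp(a(k)) here is a true zero in doubles.
a = [-750 -751 -760];            % exp of each of these underflows to 0
m = max(a);                      % shift by the largest exponent
lse = m + log(sum(exp(a - m)))   % finite: about -749.69
```

The shift makes the largest term exp(0) = 1, so the sum is computed at a representable scale, and only the final answer lives in log space.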
My understanding is that symbolic computation would prevent any possibility of plotting the value via contour plots. It refused such symbolic data on one of my attempts, which added to my reluctance.
As for the specific code, I can't have it published. The calculation works if I enter it in via the command window. In the program, it takes those same operations and sends them to zero. A bit strange. Thanks for your insight.
As long as the array is purely numeric, even if stored in symbolic form, just plot the contours of log(z). These are apparently positive numbers, so their logs will still be fine. Only THEN convert to double. contour will have no problems then.
If it varies by that many orders of magnitude, AND you care about what happens way down there in those depths, then this is what you want to do anyway. A standard contour plot of z(x,y) would be meaningless.
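A minimal sketch of that suggestion, with a made-up grid and function standing in for the poster's data:

```matlab
% Contour a quantity spanning hundreds of orders of magnitude by
% plotting log(Z) directly, never forming Z itself in doubles.
[X, Y] = meshgrid(linspace(-1, 1, 200));
logZ = -400*(X.^2 + Y.^2);   % log(Z); Z would underflow to 0 near the edges
contour(X, Y, logZ)          % contours of log(Z), not of Z
colorbar
```

Here Z = exp(logZ) would be a true zero over much of the grid, yet the contours of log(Z) remain perfectly meaningful.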
As far as turning numbers into zeros, that is probably just a question of display format. Not strange at all.
x = -10:10;
format short
exp(x)
ans =
1.0e+04 *
Columns 1 through 18
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0003 0.0007 0.0020 0.0055 0.0148 0.0403 0.1097
Columns 19 through 21
0.2981 0.8103 2.2026
format short g
exp(x)
ans =
Columns 1 through 14
4.54e-05 0.00012341 0.00033546 0.00091188 0.0024788 0.0067379 0.018316 0.049787 0.13534 0.36788 1 2.7183 7.3891 20.086
Columns 15 through 21
54.598 148.41 403.43 1096.6 2981 8103.1 22026
As you see, they are not zero. Merely too small to display using the default format.
Read the help for format. I usually suggest "format short g" as the best compromise, but sometimes "format long g" is a better choice.
Regardless, a contour plot will be useless unless you do a contour plot of log(z).
Not display; they literally are zeros now. I wrote a small user-defined function to count the number of zeros in an array:
function howmanyzeros(n)
% Count and display the number of exact zeros in the array n.
f = n == 0;
Zero_Sum = sum(f(:))
end
This returns 0 for the array prior to me exponentiating. After exponentiating, it returns a positive integer value of zeros.
Sorry about the delay in accepting your answer, I forgot! Nevertheless, I have found another way to solve the issue.
If they are turning into true zeros, thus underflowing after exponentiation, then the arguments must essentially be less than
log(realmin/2^52)
ans =
-744.44
Yes, they were below -745. This was the issue!


More Answers (0)

Asked: BM on 12 Apr 2018

Commented: BM on 18 Apr 2018
