How can I prevent the calculation of intermediate results during code generation with Embedded Coder?

For example, I have created a Simulink subsystem containing the following MATLAB Function.
function y = fcn(u1,u2)
%#codegen
p1 = 0.13285;
p2 = 999.1;
p3 = 9.806;
p4 = 132.535;
p = p1*p2*p3;
y = p2*p4*u1 - p*u2;
end
After running code generation with "C++ Code --> Build This Subsystem", Embedded Coder creates the following code (extract from Subsystem.cpp).
/* Model step function */
void Subsystem_step(void)
{
/* Outport: '<Root>/Out1' incorporates:
* Inport: '<Root>/In1'
* Inport: '<Root>/In2'
* MATLAB Function: '<S1>/MATLAB Function'
*/
/* MATLAB Function 'Subsystem/MATLAB Function': '<S2>:1' */
/* '<S2>:1:11' */
Subsystem_Y.Out1 = 132415.7185 * Subsystem_U.In1 - 1301.5546456099999 *
Subsystem_U.In2;
}
As you can see, the coder precomputes the intermediate results at code-generation time instead of emitting the original constants in the output equation. For complex systems, the inaccuracy of the calculation produces a divergence that leads to instability. Which changes to the configuration parameters are necessary to prevent this simplification of the calculation during code generation?
Current configuration:
* System target file: ert.tlc (Visual C/C++ Solution File)
* Language: C++
* Compiler optimization level: Optimization off
* Prioritized objectives: safety precaution, debugging
* Code replacement library: C++ (ISO)
* Shared code placement: Auto
* Parentheses level: Maximum

4 Comments

What "inaccuracy in the calculation"? p1*p2 is going to produce the same result whether it is done at compilation time or at run time.
I have the same question as Walter. What you're seeing is an optimization, since the 3 quantities in your code are constants. One way to avoid the optimization might be to pass those quantities in as tunable parameters instead.
It is only an example to show the problem, which appears in a considerably larger controller system.
There is a difference between calculating p1*p2*p3 in every step and using the precomputed value 1301.5546456099999. I want to avoid this optimization without declaring these parameters as tunable (no workspace parameters). Is that possible?
There is no difference between precalculation of p1*p2*p3 and run-time calculation, provided the architecture stays constant.
Are you compiling on one brand of processor (e.g., Intel) but executing on another (e.g., AMD)? If so, then yes, in such a case you could get a one-bit difference for p1*p2*p3 (unless the rounding settings were different for the two systems).
There can be a difference between p1*p2*p3*u2 and u2*p1*p2*p3 due to round-off, as order of operations is important in floating point. MATLAB uses left-to-right evaluation for expressions of the same precedence.
If these kinds of differences are critical you should be considering using the fixed-point toolbox.


 Accepted Answer

I think you have misapprehended the source of your problem. The exact mathematical result p1*p2*p3 is 1301.55464561. This number does not have an exact representation in 64-bit IEEE double precision binary floating point (for that matter, neither do p1, p2, and p3). However, the closest floating point number is, in fact, 1301.5546456099999, since the next larger one is 1301.5546456100003. If you are observing a divergence on account of this optimization, then it would probably have to involve the use of 80-bit extended precision registers or some such. On x86 architectures with C compilers these are notoriously difficult to predict or control.

4 Comments

Or difference in rounding modes. Compiled code is not guaranteed to be executed in the same rounding mode as the compiler is executing in, not unless the compiled code includes commands to set the rounding.
That is a possibility. Another possible source of variation would be from the math functions in the C-runtime library, not in this example but in the more complicated system the customer speaks of. It's common that the least significant bit is different. Beyond that, in matrix operations we often see differences from BLAS implementations. For example, if A*B goes through DGEMM from the Intel MKL in simulation but through a mathematically but not computationally equivalent nested loop implementation in generated code, we can't reasonably expect bit-wise equality.
That could be the point. We use a multiplicity of matrix operations (multiplications, inversions, etc.) in our larger controller system, where the problem appeared.
Regarding the alternative explanation, a difference in rounding modes: how can I adjust the rounding modes during code generation?


More Answers (0)


Asked on 31 May 2013
