A good question. I do not think there is a general answer; it depends on your code. If, e.g., the file transfer is the limiting factor, a faster SSD is more important than the Coder.

If runtime matters, optimizing the Matlab code is important. But it might be hard or even impossible to convert highly optimized Matlab code to C. So a fair comparison between the Coder and pure Matlab code requires optimizing (and testing) two different code versions.

If the optimization (and testing) takes a week and saves 2 minutes of run time, this is interesting from a scientific point of view only.

There are many examples in this forum of optimizing code by exploiting the underlying maths. Accelerating the calculations cannot compete with omitting them. An example:

% If X and Y are built from the vectors x and y, e.g. by [X, Y] = ndgrid(x, y):
Z1 = exp(-X * sin(0.1) - Y * cos(0.1));          % exp() of numel(x)*numel(y) elements

Z2 = exp(-x * sin(0.1)) .* exp(-y.' * cos(0.1)); % exp() of numel(x)+numel(y) elements only
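A runnable sketch of this equivalence, assuming X and Y are produced by ndgrid from column vectors x and y (the sizes here are chosen only for illustration):

```matlab
x = rand(1000, 1);                 % column vector
y = rand(800, 1);                  % column vector
[X, Y] = ndgrid(x, y);             % X(i,j) = x(i), Y(i,j) = y(j)

Z1 = exp(-X * sin(0.1) - Y * cos(0.1));            % exp() on 1000*800 elements
Z2 = exp(-x * sin(0.1)) .* exp(-y.' * cos(0.1));   % exp() on 1000+800 elements,
                                                   % combined by implicit expansion
% Both versions agree up to floating point rounding:
max(abs(Z1(:) - Z2(:)))
```

The second version is faster because exp(a + b) = exp(a) * exp(b): the expensive exp() is evaluated on the two vectors only, and the full matrix is formed by a cheap elementwise product.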

The standard procedure of optimization is:

- Debug the code and test it exhaustively in the initial version. Improving failing code is a complete waste of time. You need a trustworthy version to compare the results against after each step of optimization.
- Use the profiler to identify the bottlenecks.
- Analyse the maths and adjust the model to reduce computations.
- Check if a (partial) vectorization improves the speed.
- Compare this with a version whose bottleneck is compiled by the Coder.
- Parallelize the code if many CPU cores are idle during the processing.
- Buy a faster computer (or, if step 6 worked out, 10 faster computers).
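To illustrate steps 2 and 4 (measuring and vectorizing a bottleneck), one could time a loop against its vectorized equivalent with timeit. The function name loopSumSq is invented for this example:

```matlab
v = rand(1e6, 1);
tLoop = timeit(@() loopSumSq(v));   % measure the suspected bottleneck
tVec  = timeit(@() sum(v .^ 2));    % vectorized equivalent
fprintf('loop: %g s, vectorized: %g s\n', tLoop, tVec);

function s = loopSumSq(v)
% Sum of squares with an explicit loop (the "before" version).
s = 0;
for k = 1:numel(v)
    s = s + v(k)^2;
end
end
```

In practice you would run the profiler first (profile on; yourCode; profile viewer) to find out which part of the code is worth timing and improving at all.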