I'm quite late to the party on this one, but it seems worth a reread and a repost. I think many of us are finding that managing the data we work with is not given enough focus. Specifically, we throw around terms like "Big Data" yet have few tools to intelligently mine, manipulate, or transform that data.
If you consider one of the fundamental uses of MATLAB and the family of similar tools, embarrassingly parallel and/or distributed jobs are often parametric sweeps; I would even assert that the majority of such jobs are. (See the sketch below for the shape of workload I mean.)
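To make that concrete, here is a minimal sketch of such a sweep using parfor from the Parallel Computing Toolbox. The parameter grid and the objective function are invented for illustration; any real sweep would substitute its own model:

```matlab
% Illustrative parameter grid and a stand-in objective function.
alphas    = linspace(0.1, 1.0, 50);   % hypothetical sweep parameters
objective = @(a) sin(a) + a.^2;       % placeholder for a real model

results = zeros(size(alphas));
parfor k = 1:numel(alphas)
    % Each grid point is evaluated independently of the others --
    % this is what makes the sweep embarrassingly parallel.
    results(k) = objective(alphas(k));
end
```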
Some component of the classic parametric sweep is almost certainly static, and it is likely growing exponentially in size as our notion of Big Data continues to evolve. In the financial services space this often borders on the unwieldy. It was trivial to throw a dataset (matrix or otherwise) around when its size was simply N a decade ago, but we now find ourselves at something like N^t, which I posit is no small issue.
I would appeal to those within MathWorks who understand this paradigm to treat it as a high-priority need: give us a mechanism to declare and access, at the node level, a shared memory space of static global constants that persists over the course of a job or jobs. If that means invoking a setup process at the start of a job and a teardown process at its close, say in a distributed job, that overhead seems a trivial price to pay compared with moving large datasets around.
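For what it's worth, parallel.pool.Constant in more recent releases of the Parallel Computing Toolbox approximates the behaviour I'm asking for at the pool level: the static data is shipped to each worker once and then reused across iterations, rather than being re-sent with every task. A minimal sketch, where the 5000x5000 matrix and the sweep are invented stand-ins for a real static dataset and objective:

```matlab
% Sketch, assuming the Parallel Computing Toolbox is installed.
if isempty(gcp('nocreate'))
    parpool;                            % start a worker pool if none exists
end

bigMatrix = rand(5000);                 % static data, built once on the client
C = parallel.pool.Constant(bigMatrix); % copied to each worker exactly once

alphas  = linspace(0.1, 1.0, 50);       % hypothetical sweep parameters
results = zeros(size(alphas));
parfor k = 1:numel(alphas)
    % C.Value reads the worker-local copy of the matrix; it is not
    % re-transferred on every iteration of the sweep.
    results(k) = sum(C.Value(:) .^ alphas(k));
end
```

The one-time cost of constructing the Constant at the onset of the job is exactly the kind of overhead I mean: paid once per worker, not once per task.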