MapReduce cannot find dataset
Hello there,
I was using MapReduce to process a (relatively) large dataset, which is a simple matrix stored in .csv format (~250 MB). I am running this MapReduce job on a homemade cluster with 2 computers.
The cluster is working fine, no problems whatsoever.
As always, I create the datastore from the .csv file:
ds=datastore({'DS.csv'},'ReadVariableNames',false);
I open the pool and set up the MapReduce execution environment:
myCluster=parpool('HomeCluster');
MRE=mapreducer(myCluster);
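For reference, the job is then launched with a mapreduce call along these lines; the mapper and reducer names below are placeholders for illustration, not the original functions:
% Launch the MapReduce job against the cluster-backed execution environment.
% NOTE: countMapper/countReducer are hypothetical function names.
outds = mapreduce(ds, @countMapper, @countReducer, MRE);
result = readall(outds);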
I start the procedure and the following error appears:
Error using matlab.io.datastore.TabularTextDatastore/partition (line 44)
Cannot find files or folders matching: '<path for CSV>'
The weird thing is that the .csv file is inside the current folder, which also contains my scripts/functions.
I also tried attaching that file to the pool (even if it sounds pretty stupid to me; correct me if I'm wrong), but still no luck.
Any help is appreciated. Thanks!
Update: I was able to run this program by creating a folder with the same name and path on both computers and copying the .csv file into both folders. But that's rather clumsy, especially because most of the time it is impossible to create perfectly matching paths (e.g. the user name in the operating system might be different). Is there any smarter way to avoid this? Can the master node be the only one that holds the dataset file?
Accepted Answer
How I actually solved it: attaching the files to the pool didn't work out, so I shared an external hard drive over the network so that both computers see the files under the same path/folder.
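Once the drive is shared, both nodes resolve the file through the same network location, so the datastore can be built directly against that path. A minimal sketch, assuming a Windows share named SharedData exposed by the machine hosting the drive (the host and share names are assumptions, not from the original post):
% Point the datastore at a network path that every node can resolve,
% so workers on both computers see the same file location.
% NOTE: '\\HOST-PC\SharedData\DS.csv' is a hypothetical UNC path.
ds = datastore('\\HOST-PC\SharedData\DS.csv', 'ReadVariableNames', false);
myCluster = parpool('HomeCluster');
MRE = mapreducer(myCluster);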