What is causing an Undefined function or variable 'oldVariableNames' error when using trainNetwork?

I am getting the following error when calling trainNetwork from the script facetrain.m which is shown further down.
>> facetrain
Error using trainNetwork (line 150)
Undefined function or variable 'oldVariableNames'.

Error in facetrain (line 87)
net = trainNetwork(train_patch_ds, lgraph, options);

Caused by:
    Undefined function or variable 'oldVariableNames'.
The crash does not happen immediately: a window with the text 'imageNormalization' is shown for quite some time, and the error appears once that window disappears.
I have attempted, without success, to follow suggestions from other threads recommending the removal of any custom paths via the following steps:

restoredefaultpath   % removes any custom paths
rehash toolboxcache
savepath
The problem started when I switched from passing X and Y variables to trainNetwork to passing imageDatastores. The training and validation datastores used in facetrain.m are each a merge of two imageDatastores via randomPatchExtractionDatastore, and both are augmented.
What could be causing this error to appear?
%=================================
% facetrain.m
%=================================
dataroot = '/hd1/Data/FaceNNData/';
num_epochs = 10000;
batch_size = 30;
img_width = 128;
img_height = 128;
num_channels = 3;
layers = [ imageInputLayer([img_width img_height num_channels],'Name','input_face')
convolution2dLayer(3,128,'Padding','same','Name','conv128_enc')
batchNormalizationLayer('Name','bn128_enc')
reluLayer('Name','relu128_enc')
maxPooling2dLayer(2,'Stride',2,'Name','pool128_enc','HasUnpoolingOutputs',true)
convolution2dLayer(3,64,'Padding','same','Name','conv64_enc')
batchNormalizationLayer('Name','bn64_enc')
reluLayer('Name','relu64_enc')
maxPooling2dLayer(2,'Stride',2,'Name','pool64_enc','HasUnpoolingOutputs',true)
convolution2dLayer(3,32,'Padding','same','Name','conv32')
batchNormalizationLayer('Name','bn32_enc')
reluLayer('Name','relu32_enc')
maxPooling2dLayer(2,'Stride',2,'Name','pool32_enc','HasUnpoolingOutputs',true)
maxUnpooling2dLayer('Name','pool32_dec')
convolution2dLayer(3,64,'Padding','same','Name','conv32_dec')
batchNormalizationLayer('Name','bn32_dec')
reluLayer('Name','relu32_dec')
maxUnpooling2dLayer('Name','pool64_dec')
convolution2dLayer(3,128,'Padding','same','Name','conv64_dec')
batchNormalizationLayer('Name','bn64_dec')
reluLayer('Name','relu64_dec')
maxUnpooling2dLayer('Name','pool128_dec')
convolution2dLayer(1,3,'Padding','same','Name','conv128_dec')
regressionLayer('Name','output_depthmap')
];
lgraph = layerGraph(layers);
lgraph = connectLayers(lgraph,'pool32_enc/indices','pool32_dec/indices');
lgraph = connectLayers(lgraph,'pool32_enc/size','pool32_dec/size');
lgraph = connectLayers(lgraph,'pool64_enc/indices','pool64_dec/indices');
lgraph = connectLayers(lgraph,'pool64_enc/size','pool64_dec/size');
lgraph = connectLayers(lgraph,'pool128_enc/indices','pool128_dec/indices');
lgraph = connectLayers(lgraph,'pool128_enc/size','pool128_dec/size');
train_face_ds = imageDatastore([dataroot '/face/train'],'ReadFcn',@myreadfcn);
train_world_ds = imageDatastore([dataroot '/world/train'],'ReadFcn',@myreadfcn);
valid_face_ds = imageDatastore([dataroot '/face/valid'],'ReadFcn',@myreadfcn);
valid_world_ds = imageDatastore([dataroot '/world/valid'],'ReadFcn',@myreadfcn);
train_augmenter = imageDataAugmenter('RandXReflection', true, ...
    'RandRotation', [-5.0 5.0], ...
    'RandScale', [0.98 1.2], ...
    'RandXTranslation', [-8 8], ...
    'RandYTranslation', [-8 8]);
valid_augmenter = imageDataAugmenter('RandXReflection', true, ...
    'RandRotation', [-5.0 5.0], ...
    'RandScale', [0.98 1.2], ...
    'RandXTranslation', [-8 8], ...
    'RandYTranslation', [-8 8]);
train_patch_ds = randomPatchExtractionDatastore(train_face_ds, train_world_ds, ...
    [img_width img_height], ...
    'DataAugmentation', train_augmenter, ...
    'PatchesPerImage', 32);
valid_patch_ds = randomPatchExtractionDatastore(valid_face_ds, valid_world_ds, ...
    [img_width img_height], ...
    'DataAugmentation', valid_augmenter, ...
    'PatchesPerImage', 32);
options = trainingOptions('adam', ...
    'MiniBatchSize', batch_size, ...
    'MaxEpochs', num_epochs, ...
    'InitialLearnRate', 1e-4, ...
    'Shuffle', 'every-epoch', ...
    'ValidationData', valid_patch_ds, ...
    'ValidationFrequency', 100, ...
    'ValidationPatience', Inf, ...
    'Plots', 'training-progress', ...
    'ExecutionEnvironment', 'gpu', ...
    'Verbose', false);
net = trainNetwork(train_patch_ds, lgraph, options);
function J = myreadfcn(filename)
    I = imread(filename);
    J = imresize(I, [128 128]);
end
Thanks /Mats
  1 Comment
Mats Åhlander on 21 Sep 2018
I found the error. As Vignesh pointed out, the problem in my case was that the color dimensions did not match for all of the image pairs in the randomPatchExtractionDatastore. Once I removed the bad pairs, the problem was solved.
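For anyone hitting the same thing: a quick way to locate bad pairs before building the patch datastore is to compare file metadata across the two imageDatastores. A minimal sketch, assuming the paired files appear in the same order in both datastores:

% Flag pairs whose pixel size or color type differ.
faceFiles  = train_face_ds.Files;
worldFiles = train_world_ds.Files;
for k = 1:numel(faceFiles)
    a = imfinfo(faceFiles{k}); a = a(1);   % first frame, in case of multi-frame files
    b = imfinfo(worldFiles{k}); b = b(1);
    if a.Width ~= b.Width || a.Height ~= b.Height || ~strcmp(a.ColorType, b.ColorType)
        fprintf('Mismatched pair:\n  %s (%s, %dx%d)\n  %s (%s, %dx%d)\n', ...
            faceFiles{k}, a.ColorType, a.Width, a.Height, ...
            worldFiles{k}, b.ColorType, b.Width, b.Height);
    end
end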


Accepted Answer

Vignesh on 21 Sep 2018
This is a bug in randomPatchExtractionDatastore. Admittedly, it occurred while randomPatchExtractionDatastore was trying to throw an error about something it detected in your code, which means one of two things happened: either the images supplied to randomPatchExtractionDatastore through train_face_ds and train_world_ds are not the same size, or the patch size, [img_width img_height], is larger than an input image from train_face_ds or train_world_ds. The same applies to the second call to randomPatchExtractionDatastore.
I don't have access to your data, so I cannot say exactly what happened, but you should be able to set a breakpoint to figure out where it fails in your code. Alternatively, call read() in a loop on the two datastores, train_patch_ds and valid_patch_ds, until you run out of data, to reproduce the issue:
dbstop if error
train_patch_ds = randomPatchExtractionDatastore(train_face_ds, train_world_ds, ...
    [img_width img_height], ...
    'DataAugmentation', train_augmenter, ...
    'PatchesPerImage', 32);
while hasdata(train_patch_ds)
    [data,info] = read(train_patch_ds);
end
Execution will stop at the point where the error occurs. Use the Function Call Stack drop-down in the Editor to select randomPatchExtractionDatastore.readByIndex, then run the following commands:
img = subds.Images(imgIndex,:)
patchSize = [img_width img_height]
The two columns of the cell array img must contain images of the same size, and the patch size must not be greater than the size of the images in img. To find the image pairs that are causing the issue, run:
info.ImageFilenameFirst{imgIndex}
info.ImageFilenameSecond{imgIndex}
Do the same with valid_patch_ds.
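If you prefer not to drop into the debugger, the same loop can be wrapped in try/catch so it reports how far the datastore got before failing (a sketch; batchCount is just an illustrative counter):

reset(train_patch_ds);
batchCount = 0;
try
    while hasdata(train_patch_ds)
        [data,info] = read(train_patch_ds);
        batchCount = batchCount + 1;
    end
catch err
    fprintf('read() failed after %d successful batches:\n%s\n', ...
        batchCount, err.message);
end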
Thank you for reporting this issue. We’ll fix randomPatchExtractionDatastore.
  2 Comments
Zarrin Wu on 28 Feb 2019
Hi Vignesh,
I also encountered this problem with Undefined function or variable 'oldVariableNames'. I figured out that when using the randomPatchExtractionDatastore function, imds1 and imds2 should have the same dimensions. I have tried [40 40 3] paired with [40 40 3], and [40 40] paired with [40 40]; both of these conditions work. But if imds1 has dimensions [40 40 3] and imds2 has dimensions [40 40], the Undefined function or variable 'oldVariableNames' error appears. Do you have any idea about this?
Vignesh on 28 Feb 2019
Edited: Vignesh on 28 Feb 2019
Zarrin,
This is the same issue reported earlier: randomPatchExtractionDatastore is trying to report an error that the input images are not the same size, and while doing so it runs into an exception. We restricted randomPatchExtractionDatastore to not support use cases where the images have different dimensions, with the intention of preventing users from pairing input and response images that are potentially unrelated to each other. However, that means not all use cases are addressed. To work around this issue, comment out one line in randomPatchExtractionDatastore. Here are the steps:
Type "edit randomPatchExtractionDatastore" in MATLAB Commadn window
Go to line 451 in randomPatchExtractionDatastore and comment it out by adding a % sign at the beginning of the line.
Save your changes.
Run your code again and verify that it works.
I will consider this issue as an enhancement request to allow flexibility to support more use-cases such as yours. Thanks for reporting it.
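If you would rather not edit toolbox source, another way around the grayscale-vs-RGB case above is to normalize the channel count in the datastore's ReadFcn so both inputs come out as M-by-N-by-3. A minimal sketch (readAsRGB is a hypothetical helper name, not part of the toolbox):

function J = readAsRGB(filename)
    % Force every image to three channels so paired datastores match.
    I = imread(filename);
    if size(I,3) == 1
        I = repmat(I, [1 1 3]);  % replicate the single gray channel
    end
    J = imresize(I, [40 40]);
end

Pass it to both imageDatastores via 'ReadFcn', @readAsRGB.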



Release

R2018b