Layer Input Expectation Doesn't Match (Neural Network)

I am creating a dlnetwork with three inputs and two outputs. The inputs are x, y, z coordinates in the form of UTMN, UTME, and height; the outputs are pressure and temperature. The data comes from multiple well logs and is organized by well ID, UTMN, UTME, height, temperature, and pressure.
I separated the data into training and validation sets. Training data: train_x_array (3 input features) and train_y_array (2 output features). Validation data: val_x_array and val_y_array.
I keep getting errors such as:
Training stopped: Error occurred
Error using trainnet
Layer 'input': Invalid input data. Invalid size of channel dimension. Layer expects input with channel dimension size 3
but received input with size 189.
Error in final3 (line 25)
netTrained = trainnet(train_x_array_transposed, train_y_array_transposed, net,"mse", opts);
In an effort to fix the issue, I transposed the x and y arrays for both the training and validation sets. However, I am still getting the same error, and I am at a loss for how to fix it. If I try to use the Deep Network Designer, I receive an error stating there are multiple observations. The x and y values are the same for different heights/temps/pressures from the same well. I am assuming this is where the issue arises, but I do not understand why the designer would produce a different error.
Example data (before transposing):
      x        y      z      T     P
4407300   327880   2796  344.3   501
4407300   327880   2746  356.5   521
4407300   327880   2696  357     541
Where train_x would be the first 3 columns, and train_y is the last 2.
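For reference, a minimal sketch of slicing such a table into the shapes trainnet expects (the file and variable names here are assumed, for illustration only). With a featureInputLayer, numeric training data should be N-by-numFeatures, i.e. observations in rows and channels in columns, so no transpose should be needed:

```matlab
% Assumed file/variable names, for illustration only.
% trainnet expects numeric inputs as N-by-numFeatures:
% observations in rows, features (channels) in columns.
data = readmatrix("well_logs.csv");  % columns: x, y, z, T, P
train_x_array = data(:, 1:3);        % N-by-3 inputs  (UTMN, UTME, height)
train_y_array = data(:, 4:5);        % N-by-2 targets (temperature, pressure)
```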
Full Code:
inputSize = 3;        % UTMN, UTME, height
hiddenLayerSize = 20;
outputSize = 2;       % temperature, pressure
layers = [
    featureInputLayer(inputSize)
    fullyConnectedLayer(hiddenLayerSize, 'Name', 'fc1')
    reluLayer('Name', 'relu1')
    fullyConnectedLayer(outputSize, 'Name', 'output')
    ];
net = dlnetwork(layers);
opts = trainingOptions('adam', ...
    'MaxEpochs', 100, ...
    'MiniBatchSize', 64, ...
    'ValidationData', {val_x_array_transposed', val_y_array_transposed'}, ...
    'ValidationFrequency', 10, ...
    'Verbose', true ...
    );
netTrained = trainnet(train_x_array_transposed, train_y_array_transposed, net, "mse", opts);
Y_pred_val = predict(netTrained, val_x_array');

 Accepted Answer

'ValidationData', {val_x_array_transposed', val_y_array_transposed'}
From the naming you have used here, it appears that you have transposed your validation data twice (and hence untransposed them).
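That is, if the variable already holds transposed data, applying ' again restores the original row-per-observation layout, e.g.:

```matlab
val_x_array = rand(81, 3);                     % 81 observations, 3 features
val_x_array_transposed = val_x_array';         % 3-by-81
isequal(val_x_array_transposed', val_x_array)  % true: the second ' undoes the first
```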

5 Comments

When I tried to run the code without transposing, I received errors about trainnet expecting 3 channels and receiving 81. My validation dataset has 81 rows and my test dataset has 189 rows. I thought this was odd because trainnet deals with the 189-row dataset. After I transposed all of the datasets, the error changed to expecting 3 and receiving 189.
Would separating my training data into different arrays and then combining them help? I.e., separating each column of data from the original table into different arrays:
dsTrain_ix = arrayDatastore(Train_ix, IterationDimension=4);
dsTrain_iy = arrayDatastore(Train_iy);
dsTrain_iz = arrayDatastore(Train_iz);
dsTrain_o1 = arrayDatastore(Train_o1);
dsTrain_o2 = arrayDatastore(Train_o2);
dsTrain = combine(dsTrain_ix, dsTrain_iy, dsTrain_iz, dsTrain_o1, dsTrain_o2);
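For what it's worth, a sketch of a two-datastore version (variable names assumed) that keeps all three input features together in one read, matching the 3-channel featureInputLayer, would look like:

```matlab
% Sketch only; keeping the features together preserves the 3-channel layout.
dsX = arrayDatastore(train_x_array);  % each read: one 1-by-3 observation
dsY = arrayDatastore(train_y_array);  % each read: one 1-by-2 target
dsTrain = combine(dsX, dsY);          % trainnet reads {X, Y} pairs
```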
The array sizes you mention are reflected in the test code below, which gives me absolutely no errors. So, I think you just have to check that your arrays are truly the sizes that you say they should be.
inputSize = 3;
hiddenLayerSize = 20;
outputSize = 2;
layers = [
    featureInputLayer(inputSize)
    fullyConnectedLayer(hiddenLayerSize, 'Name', 'fc1')
    reluLayer('Name', 'relu1')
    fullyConnectedLayer(outputSize, 'Name', 'output')
    ];
net = dlnetwork(layers);
opts = trainingOptions('adam', ...
    'MaxEpochs', 100, ...
    'MiniBatchSize', 64, ...
    'ValidationData', {rand(81,3), rand(81,2)}, ...
    'ValidationFrequency', 10, ...
    'Verbose', true ...
    );
netTrained = trainnet(rand(189,3), rand(189,2), net, "mse", opts);
Iteration    Epoch    TimeElapsed    LearnRate    TrainingLoss    ValidationLoss
_________    _____    ___________    _________    ____________    ______________
        0        0       00:00:01        0.001                           0.77353
        1        1       00:00:01        0.001         0.72641
       10        5       00:00:02        0.001         0.79818           0.63803
       20       10       00:00:02        0.001         0.65905           0.52162
       30       15       00:00:02        0.001         0.54222           0.42349
       40       20       00:00:02        0.001         0.44389           0.34247
       50       25       00:00:02        0.001          0.3608           0.27561
       60       30       00:00:02        0.001         0.29201           0.22095
       70       35       00:00:02        0.001          0.2377           0.17871
       80       40       00:00:02        0.001         0.19629           0.14782
       90       45       00:00:02        0.001         0.16601           0.12666
      100       50       00:00:02        0.001         0.14579            0.1128
      110       55       00:00:02        0.001         0.13273           0.10406
      120       60       00:00:02        0.001         0.12485          0.098747
      130       65       00:00:03        0.001         0.11987           0.09504
      140       70       00:00:03        0.001         0.11638          0.092164
      150       75       00:00:03        0.001         0.11361          0.089783
      160       80       00:00:03        0.001         0.11121          0.087748
      170       85       00:00:03        0.001         0.10903          0.085977
      180       90       00:00:03        0.001         0.10699          0.084425
      190       95       00:00:03        0.001         0.10507          0.083059
      200      100       00:00:03        0.001          0.1033          0.081858
Training stopped: Max epochs completed
Y_pred_val = predict(netTrained, rand(81,3));
whos Y_pred_val
Name            Size    Bytes    Class     Attributes
Y_pred_val      81x2      648    single
Thank you Matt for your help. I was able to find that the error was related to the function I used to create the arrays.
You're welcome, but if your question is resolved now, please click Accept on the answer.


More Answers (0)


Release: R2023b
Asked: 2 May 2024
Commented: 3 May 2024
