
Anomaly Detection in Industrial Machinery Using Three-Axis Vibration Data

This example shows how to detect anomalies in vibration data using machine learning and deep learning. The example uses vibration data from an industrial machine. First, you extract features from the raw measurements corresponding to normal operation using the Diagnostic Feature Designer App. You use the selected features to train three different models (one-class SVM, isolation forest, and LSTM autoencoder) for anomaly detection. Then, you use each trained model to identify whether the machine is operating normally.

Data Set

The data set contains three-axis vibration measurements from an industrial machine. The data is collected both immediately before and after a scheduled maintenance. The data collected after scheduled maintenance is assumed to represent normal operating conditions of the machine. The data from before maintenance can represent either normal or anomalous conditions. Data for each axis is stored in a separate column. Download and unzip the data set, and then load the training data.

url = 'https://ssd.mathworks.com/supportfiles/predmaint/anomalyDetection3axisVibration/v1/vibrationData.zip';
websave('vibrationData.zip',url);
unzip('vibrationData.zip');
load("MachineData.mat")
trainData
trainData=40×4 table
          ch1                 ch2                 ch3           label 
    ________________    ________________    ________________    ______

    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
      ⋮

To better understand the data, visualize it before and after maintenance. Plot vibration data for the fourth member of the ensemble and note that the data for the two conditions looks different.

ensMember = 4;
helperPlotVibrationData(trainData, ensMember)

Extract Features with Diagnostic Feature Designer App

Because raw data can be correlated and noisy, using raw data for training machine learning models is not very efficient. The Diagnostic Feature Designer app lets you interactively explore and preprocess your data, extract time and frequency domain features, and then rank the features to determine which are most effective for diagnosing faulty or otherwise anomalous systems. You can then export a function to extract the selected features from your data set programmatically. Open Diagnostic Feature Designer by typing diagnosticFeatureDesigner at the command prompt. For a tutorial on using Diagnostic Feature Designer, see Design Condition Indicators for Predictive Maintenance Algorithms.

Click the New Session button, select trainData as the source, and then set label as Condition Variable. The label variable identifies the condition of the machine for the corresponding data.

You can use Diagnostic Feature Designer to iterate on the features and rank them. The app creates a histogram view for all generated features to visualize the distribution for each label. For example, the following histograms show distributions of various features extracted from ch1. These histograms are derived from a much larger data set than the data set that you use in this example, in order to better illustrate the label-group separation. Because you are using a smaller data set, your results will look different.

Use the top four ranked features for each channel. The sketch after this list illustrates what a few of these statistics measure.

  • ch1 : Crest Factor, Kurtosis, RMS, Std

  • ch2 : Mean, RMS, Skewness, Std

  • ch3 : Crest Factor, SINAD, SNR, THD
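
To make these features concrete, the following sketch computes a few of the ch1 statistics directly from one raw signal. This is an illustration only, assuming Signal Processing Toolbox for rms and Statistics and Machine Learning Toolbox for kurtosis; the exported generateFeatures function encapsulates the full feature set.

% Illustration only: compute a few of the ranked ch1 statistics by hand.
x = trainData.ch1{ensMember};       % one 70000x1 vibration signal
crestFactor = max(abs(x))/rms(x);   % peak level relative to RMS level
kurt = kurtosis(x);                 % tailedness; sensitive to impulsive faults
xRMS = rms(x);                      % overall vibration energy
xStd = std(x);                      % spread about the mean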

Export a function from the Diagnostic Feature Designer app to generate the features and save it with the name generateFeatures. This function extracts the top four ranked features from each channel of the entire data set at the command line.

trainFeatures = generateFeatures(trainData);
head(trainFeatures)
    label     ch1_stats/Col1_CrestFactor    ch1_stats/Col1_Kurtosis    ch1_stats/Col1_RMS    ch1_stats/Col1_Std    ch2_stats/Col1_Mean    ch2_stats/Col1_RMS    ch2_stats/Col1_Skewness    ch2_stats/Col1_Std    ch3_stats/Col1_CrestFactor    ch3_stats/Col1_SINAD    ch3_stats/Col1_SNR    ch3_stats/Col1_THD
    ______    __________________________    _______________________    __________________    __________________    ___________________    __________________    _______________________    __________________    __________________________    ____________________    __________________    __________________

    Before              2.2811                      1.8087                   2.3074                2.3071               -0.032332              0.64962                   4.523                  0.64882                    11.973                    -15.945                -15.886                -2.732      
    Before              2.3276                      1.8379                   2.2613                 2.261                -0.03331              0.59458                   5.548                  0.59365                    10.284                    -15.984                -15.927               -2.7507      
    Before              2.3276                      1.8626                   2.2613                2.2612               -0.012052              0.48248                  4.3638                  0.48233                    8.9125                    -15.858                -15.798               -2.7104      
    Before              2.8781                      2.1986                   1.8288                1.8285               -0.005049              0.34984                  2.3324                  0.34981                    11.795                    -16.191                 -16.14               -3.0683      
    Before              2.8911                        2.06                   1.8205                1.8203              -0.0018988              0.27366                  1.7661                  0.27365                    11.395                    -15.947                -15.893               -3.1126      
    Before              2.8979                      2.1204                   1.8163                1.8162              -0.0044174               0.3674                  2.8969                  0.36737                    11.685                    -15.963                -15.908               -2.9761      
    Before              2.9494                        1.92                   1.7846                1.7844              -0.0067284              0.36262                  4.1308                  0.36256                    12.396                    -15.999                -15.942               -2.8281      
    Before              2.5106                      1.6774                   1.7513                1.7511              -0.0089548              0.32348                  3.7691                  0.32335                    8.8808                     -15.79                -15.732               -2.9532      
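
As an optional command-line cross-check of the app's ranking, you can rank the exported features programmatically. This sketch uses MRMR ranking from Statistics and Machine Learning Toolbox; the app offers several ranking methods, so the ordering need not match exactly.

% Optional cross-check: rank the exported features from the command line.
[idx,~] = fscmrmr(trainFeatures, "label");    % MRMR feature ranking
predictorNames = trainFeatures.Properties.VariableNames(2:end);
predictorNames(idx(1:4))                      % four highest-ranked features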

Prepare Full Data Sets for Training

The data set you have used to this point is only a small subset of a much larger data set, included here to illustrate the process of feature extraction and selection. Training your algorithm on all available data yields the best performance. To this end, load the same 12 features as previously extracted from the larger data set of 17,642 signals.

load("FeatureEntire.mat")
head(featureAll)
    label     ch1_stats/Col1_CrestFactor    ch1_stats/Col1_Kurtosis    ch1_stats/Col1_RMS    ch1_stats/Col1_Std    ch2_stats/Col1_Mean    ch2_stats/Col1_RMS    ch2_stats/Col1_Skewness    ch2_stats/Col1_Std    ch3_stats/Col1_CrestFactor    ch3_stats/Col1_SINAD    ch3_stats/Col1_SNR    ch3_stats/Col1_THD
    ______    __________________________    _______________________    __________________    __________________    ___________________    __________________    _______________________    __________________    __________________________    ____________________    __________________    __________________

    Before              2.3683                       1.927                   2.2225                2.2225               -0.015149              0.62512                  4.2931                  0.62495                    5.6569                    -5.4476                -4.9977               -4.4608      
    Before               2.402                      1.9206                   2.1807                2.1803               -0.018269              0.56773                  3.9985                  0.56744                    8.7481                    -12.532                -12.419               -3.2353      
    Before              2.4157                      1.9523                   2.1789                2.1788              -0.0063652              0.45646                  2.8886                  0.45642                    8.3111                    -12.977                -12.869               -2.9591      
    Before              2.4595                      1.8205                     2.14                2.1401               0.0017307              0.41418                  2.0635                  0.41418                    7.2318                    -13.566                -13.468               -2.7944      
    Before              2.2502                      1.8609                   2.3391                 2.339              -0.0081829               0.3694                  3.3498                  0.36931                    6.8134                     -13.33                -13.225               -2.7182      
    Before              2.4211                      2.2479                   2.1286                2.1285                0.011139              0.36638                  1.8602                  0.36621                    7.4712                    -13.324                -13.226               -3.0313      
    Before              3.3111                      4.0304                   1.5896                1.5896              -0.0080759              0.47218                  2.1132                  0.47211                    8.2412                     -13.85                -13.758               -2.7822      
    Before              2.2655                      2.0656                   2.3233                2.3233              -0.0049447              0.37829                  2.4936                  0.37827                    7.6947                    -13.781                -13.683               -2.5601      

Use cvpartition to partition data into a training set and an independent test set. Use the helperExtractLabeledData helper function to find all features corresponding to the label 'After' in the featureTrain variable.

rng(0) % set for reproducibility
idx = cvpartition(featureAll.label, 'holdout', 0.1);
featureTrain = featureAll(idx.training, :);
featureTest = featureAll(idx.test, :);

For each model, train on only the after maintenance data, which is assumed to be normal. Extract only this data from featureTrain.

trueAnomaliesTest = featureTest.label;
featureNormal = featureTrain(featureTrain.label=='After', :);

Detect Anomalies with One-Class SVM

Support vector machines are powerful classifiers; this example uses the one-class variant, which trains on only the normal data. This model works well for identifying abnormalities that are "far" from the normal data. Train a one-class SVM model using the fitcsvm function and the data for normal conditions.

mdlSVM = fitcsvm(featureNormal, 'label', 'Standardize', true, 'OutlierFraction', 0);

Validate the trained SVM model by using test data, which contains both normal and anomalous data. For this one-class model, a negative score indicates that an observation lies outside the boundary learned from the normal data, so you can flag negative-score observations as anomalies.

featureTestNoLabels = featureTest(:, 2:end);
[~,scoreSVM] = predict(mdlSVM,featureTestNoLabels);
isanomalySVM = scoreSVM<0;
predSVM = categorical(isanomalySVM, [1, 0], ["Anomaly", "Normal"]);
trueAnomaliesTest = renamecats(trueAnomaliesTest,["After","Before"], ["Normal","Anomaly"]);
figure;
confusionchart(trueAnomaliesTest, predSVM, Title="Anomaly Detection with One-class SVM", Normalization="row-normalized");

From the confusion matrix, you can see that the one-class SVM performs well. Only 0.3% of anomalous samples are misclassified as normal and about 0.9% of normal data is misclassified as anomalous.
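
You can also compute these rates directly from the predictions as a quick numeric check of the chart:

% Quick numeric check of the rates shown in the confusion chart.
isTrueAnomaly = trueAnomaliesTest == "Anomaly";
missedAnomalyRate = mean(predSVM(isTrueAnomaly) == "Normal")    % anomalies classified as normal
falseAlarmRate = mean(predSVM(~isTrueAnomaly) == "Anomaly")     % normal classified as anomalous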

Detect Anomalies with Isolation Forest

The decision trees of an isolation forest isolate each observation in a leaf. How many decisions a sample passes through to reach its leaf is a measure of how difficult it is to isolate the sample from the others. The average depth over the trees for a specific sample is used as its anomaly score, which iforest returns.

Train the isolation forest model on normal data only.

[mdlIF,~,scoreTrainIF] = iforest(featureNormal{:,2:13},'ContaminationFraction',0.09);
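
The ContaminationFraction argument determines the score threshold that the model stores in its ScoreThreshold property. A quick check, not part of the original workflow, confirms that roughly 9% of the training scores exceed it:

% About 9% of training observations score above the stored threshold.
mean(scoreTrainIF > mdlIF.ScoreThreshold)   % approximately 0.09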

Validate the trained isolation forest model by using the test data. Visualize the performance of this model by using a confusion chart.

[isanomalyIF,scoreTestIF] = isanomaly(mdlIF,featureTestNoLabels.Variables);
predIF = categorical(isanomalyIF, [1, 0], ["Anomaly", "Normal"]);
figure;
confusionchart(trueAnomaliesTest,predIF,Title="Anomaly Detection with Isolation Forest",Normalization="row-normalized");

On this data, the isolation forest doesn't do as well as the one-class SVM. The reason for this poorer performance is that the training data contains only normal data while the test data contains about 30% anomalous data. Therefore, the isolation forest model is a better choice when the proportion of anomalous data to normal data is similar for both training and test data.

Detect Anomalies with LSTM Autoencoder Network

Autoencoders are a type of neural network that learn a compressed representation of unlabeled data. LSTM autoencoders are a variant of this network that can learn a compressed representation of sequence data. Here, you train an LSTM autoencoder with only normal data and use this trained network to identify when a signal does not look normal.

Start by extracting features from the after maintenance data.

featuresAfter = helperExtractLabeledData(featureTrain, ...
   "After");

Construct the LSTM autoencoder network and set the training options.

featureDimension = 1;

% Define biLSTM network layers
layers = [ sequenceInputLayer(featureDimension, 'Name', 'in')
   bilstmLayer(16, 'Name', 'bilstm1')
   reluLayer('Name', 'relu1')
   bilstmLayer(32, 'Name', 'bilstm2')
   reluLayer('Name', 'relu2')
   bilstmLayer(16, 'Name', 'bilstm3')
   reluLayer('Name', 'relu3')
   fullyConnectedLayer(featureDimension, 'Name', 'fc')
   regressionLayer('Name', 'out') ];

% Set Training Options
options = trainingOptions('adam', ...
   'Plots', 'training-progress', ...
   'MiniBatchSize', 500,...
   'MaxEpochs',200);

The MaxEpochs training option is set to 200. For higher validation accuracy, you can set this parameter to a larger number; however, the network might overfit.
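
If you hold out some of the normal sequences, validation-based stopping is an alternative to a fixed epoch budget. In this sketch, featuresVal is a hypothetical held-out cell array of normal feature sequences:

% Sketch: stop when the validation loss stalls instead of training for a
% fixed number of epochs. featuresVal is a hypothetical held-out set.
optionsVal = trainingOptions('adam', ...
   'MiniBatchSize', 500, ...
   'MaxEpochs', 200, ...
   'ValidationData', {featuresVal, featuresVal}, ...
   'ValidationPatience', 10);   % stop after 10 non-improving validation checks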

Train the model.

net = trainNetwork(featuresAfter, featuresAfter, layers, options);
Training on single GPU.
|========================================================================================|
|  Epoch  |  Iteration  |  Time Elapsed  |  Mini-batch  |  Mini-batch  |  Base Learning  |
|         |             |   (hh:mm:ss)   |     RMSE     |     Loss     |      Rate       |
|========================================================================================|
|       1 |           1 |       00:00:14 |         5.81 |         16.9 |          0.0010 |
|       3 |          50 |       00:00:18 |         5.43 |         14.8 |          0.0010 |
|       5 |         100 |       00:00:20 |         3.99 |          7.9 |          0.0010 |
|       8 |         150 |       00:00:23 |         4.27 |          9.1 |          0.0010 |
|      10 |         200 |       00:00:25 |         3.47 |          6.0 |          0.0010 |
|      13 |         250 |       00:00:28 |         3.97 |          7.9 |          0.0010 |
|      15 |         300 |       00:00:30 |         3.17 |          5.0 |          0.0010 |
|      18 |         350 |       00:00:33 |         3.72 |          6.9 |          0.0010 |
|      20 |         400 |       00:00:35 |         2.89 |          4.2 |          0.0010 |
|      23 |         450 |       00:00:37 |         3.49 |          6.1 |          0.0010 |
|      25 |         500 |       00:00:40 |         2.67 |          3.6 |          0.0010 |
|      28 |         550 |       00:00:42 |         3.31 |          5.5 |          0.0010 |
|      30 |         600 |       00:00:45 |         2.49 |          3.1 |          0.0010 |
|      33 |         650 |       00:00:47 |         3.14 |          4.9 |          0.0010 |
|      35 |         700 |       00:00:50 |         2.29 |          2.6 |          0.0010 |
|      38 |         750 |       00:00:52 |         2.96 |          4.4 |          0.0010 |
|      40 |         800 |       00:00:55 |         2.11 |          2.2 |          0.0010 |
|      43 |         850 |       00:00:57 |         2.82 |          4.0 |          0.0010 |
|      45 |         900 |       00:01:00 |         1.98 |          2.0 |          0.0010 |
|      48 |         950 |       00:01:02 |         2.71 |          3.7 |          0.0010 |
|      50 |        1000 |       00:01:05 |         1.89 |          1.8 |          0.0010 |
|      53 |        1050 |       00:01:07 |         2.63 |          3.5 |          0.0010 |
|      55 |        1100 |       00:01:10 |         1.81 |          1.6 |          0.0010 |
|      58 |        1150 |       00:01:12 |         2.55 |          3.3 |          0.0010 |
|      60 |        1200 |       00:01:15 |         1.74 |          1.5 |          0.0010 |
|      63 |        1250 |       00:01:17 |         2.48 |          3.1 |          0.0010 |
|      65 |        1300 |       00:01:20 |         1.67 |          1.4 |          0.0010 |
|      68 |        1350 |       00:01:22 |         2.40 |          2.9 |          0.0010 |
|      70 |        1400 |       00:01:25 |         1.54 |          1.2 |          0.0010 |
|      73 |        1450 |       00:01:27 |         2.30 |          2.6 |          0.0010 |
|      75 |        1500 |       00:01:29 |         1.45 |          1.1 |          0.0010 |
|      78 |        1550 |       00:01:32 |         2.23 |          2.5 |          0.0010 |
|      80 |        1600 |       00:01:34 |         1.37 |          0.9 |          0.0010 |
|      83 |        1650 |       00:01:37 |         2.16 |          2.3 |          0.0010 |
|      85 |        1700 |       00:01:39 |         1.30 |          0.8 |          0.0010 |
|      88 |        1750 |       00:01:42 |         2.10 |          2.2 |          0.0010 |
|      90 |        1800 |       00:01:44 |         1.23 |          0.8 |          0.0010 |
|      93 |        1850 |       00:01:47 |         2.04 |          2.1 |          0.0010 |
|      95 |        1900 |       00:01:49 |         1.17 |          0.7 |          0.0010 |
|      98 |        1950 |       00:01:52 |         1.99 |          2.0 |          0.0010 |
|     100 |        2000 |       00:01:54 |         1.11 |          0.6 |          0.0010 |
|     103 |        2050 |       00:01:57 |         1.94 |          1.9 |          0.0010 |
|     105 |        2100 |       00:01:59 |         1.06 |          0.6 |          0.0010 |
|     108 |        2150 |       00:02:02 |         1.90 |          1.8 |          0.0010 |
|     110 |        2200 |       00:02:04 |         1.01 |          0.5 |          0.0010 |
|     113 |        2250 |       00:02:06 |         1.86 |          1.7 |          0.0010 |
|     115 |        2300 |       00:02:09 |         0.97 |          0.5 |          0.0010 |
|     118 |        2350 |       00:02:11 |         1.82 |          1.7 |          0.0010 |
|     120 |        2400 |       00:02:14 |         0.93 |          0.4 |          0.0010 |
|     123 |        2450 |       00:02:16 |         1.79 |          1.6 |          0.0010 |
|     125 |        2500 |       00:02:18 |         0.90 |          0.4 |          0.0010 |
|     128 |        2550 |       00:02:21 |         1.76 |          1.6 |          0.0010 |
|     130 |        2600 |       00:02:23 |         0.87 |          0.4 |          0.0010 |
|     133 |        2650 |       00:02:25 |         1.73 |          1.5 |          0.0010 |
|     135 |        2700 |       00:02:28 |         0.84 |          0.4 |          0.0010 |
|     138 |        2750 |       00:02:30 |         1.71 |          1.5 |          0.0010 |
|     140 |        2800 |       00:02:32 |         0.81 |          0.3 |          0.0010 |
|     143 |        2850 |       00:02:35 |         1.68 |          1.4 |          0.0010 |
|     145 |        2900 |       00:02:37 |         0.78 |          0.3 |          0.0010 |
|     148 |        2950 |       00:02:40 |         1.66 |          1.4 |          0.0010 |
|     150 |        3000 |       00:02:42 |         0.76 |          0.3 |          0.0010 |
|     153 |        3050 |       00:02:44 |         1.63 |          1.3 |          0.0010 |
|     155 |        3100 |       00:02:47 |         0.74 |          0.3 |          0.0010 |
|     158 |        3150 |       00:02:49 |         1.61 |          1.3 |          0.0010 |
|     160 |        3200 |       00:02:52 |         0.72 |          0.3 |          0.0010 |
|     163 |        3250 |       00:02:54 |         1.59 |          1.3 |          0.0010 |
|     165 |        3300 |       00:02:56 |         0.69 |          0.2 |          0.0010 |
|     168 |        3350 |       00:02:59 |         1.57 |          1.2 |          0.0010 |
|     170 |        3400 |       00:03:01 |         0.68 |          0.2 |          0.0010 |
|     173 |        3450 |       00:03:03 |         1.55 |          1.2 |          0.0010 |
|     175 |        3500 |       00:03:06 |         0.66 |          0.2 |          0.0010 |
|     178 |        3550 |       00:03:08 |         1.53 |          1.2 |          0.0010 |
|     180 |        3600 |       00:03:10 |         0.64 |          0.2 |          0.0010 |
|     183 |        3650 |       00:03:13 |         1.51 |          1.1 |          0.0010 |
|     185 |        3700 |       00:03:15 |         0.62 |          0.2 |          0.0010 |
|     188 |        3750 |       00:03:18 |         1.49 |          1.1 |          0.0010 |
|     190 |        3800 |       00:03:20 |         0.61 |          0.2 |          0.0010 |
|     193 |        3850 |       00:03:22 |         1.48 |          1.1 |          0.0010 |
|     195 |        3900 |       00:03:25 |         0.59 |          0.2 |          0.0010 |
|     198 |        3950 |       00:03:27 |         1.46 |          1.1 |          0.0010 |
|     200 |        4000 |       00:03:29 |         0.58 |          0.2 |          0.0010 |
|========================================================================================|
Training finished: Max epochs completed.

Visualize Model Behavior and Error on Validation Data

Extract and visualize one sample each from the anomalous and normal conditions. The following plots show the reconstruction errors of the autoencoder model for each of the 12 features (indicated on the X-axis). The reconstructed feature value is referred to as the "Decoded" signal in the plot. In this sample, features 10, 11, and 12 do not reconstruct well for the anomalous input and thus have high errors. You can use reconstruction errors to identify anomalies.

testNormal = {featureTest(1200, 2:end).Variables};
testAnomaly = {featureTest(200, 2:end).Variables};

% Predict decoded signal for both
decodedNormal = predict(net,testNormal);
decodedAnomaly = predict(net,testAnomaly);

% Visualize
helperVisualizeModelBehavior(testNormal, testAnomaly, decodedNormal, decodedAnomaly)

Extract features for all the normal and anomalous data. Use the trained autoencoder model to predict the selected 12 features for both before and after maintenance data. The following plots show the root mean square reconstruction error across the twelve features. The figure shows that the reconstruction error for the anomalous data is much higher than that for the normal data. This result is expected, because the autoencoder is trained on the normal data, so it reconstructs similar signals better.

% Extract data Before maintenance
XTestBefore = helperExtractLabeledData(featureTest, "Before");

% Predict output before maintenance and calculate error
yHatBefore = predict(net, XTestBefore);
errorBefore = helperCalculateError(XTestBefore, yHatBefore);

% Extract data after maintenance
XTestAfter = helperExtractLabeledData(featureTest, "After");

% Predict output after maintenance and calculate error
yHatAfter = predict(net, XTestAfter);
errorAfter = helperCalculateError(XTestAfter, yHatAfter);

helperVisualizeError(errorBefore, errorAfter);

Identify Anomalies

Calculate the reconstruction error on the full validation data.

XTestAll = helperExtractLabeledData(featureTest, "All");

yHatAll = predict(net, XTestAll);
errorAll = helperCalculateError(XTestAll, yHatAll);

Define an anomaly as a point whose reconstruction error is greater than 0.5 times the mean error across all observations. This threshold was determined through previous experimentation, and you can change it as required.

thresh = 0.5;
anomalies = errorAll > thresh*mean(errorAll);

helperVisualizeAnomalies(anomalies, errorAll, featureTest);
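
Because the 0.5 multiplier comes from experimentation, it is worth checking how sensitive the test accuracy is to it. This sweep is an added illustration that reuses variables defined above:

% Sweep the threshold multiplier and track test accuracy.
multipliers = 0.25:0.25:2;
accSweep = zeros(size(multipliers));
for k = 1:numel(multipliers)
   predK = categorical(errorAll > multipliers(k)*mean(errorAll), ...
      [1, 0], ["Anomaly", "Normal"]);
   accSweep(k) = mean(predK == trueAnomaliesTest)*100;
end
figure
plot(multipliers, accSweep, '-o'), grid on
xlabel("Threshold Multiplier"), ylabel("Test Accuracy (%)")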

In this example, you use three different models to detect anomalies. The one-class SVM has the best performance at 99.7% for detecting anomalies in the test data, while the other two models are around 93% accurate. The relative performance of the models can change if you select a different set of features or use different hyperparameters for each model. Use the Diagnostic Feature Designer app to further experiment with feature selection.

Supporting Functions

function E = helperCalculateError(X, Y)
% HELPERCALCULATEERROR calculates the rms error value between the
% inputs X, Y
E = zeros(length(X),1);
for i = 1:length(X)
   E(i,:) = sqrt(sum((Y{i} - X{i}).^2));
end

end

function helperVisualizeError(errorBefore, errorAfter)
% HELPERVISUALIZEERROR creates a plot to visualize the errors on detecting
% before and after conditions
figure("Color", "W")
tiledlayout("flow")

nexttile
plot(1:length(errorBefore), errorBefore, 'LineWidth',1.5), grid on
title(["Before Maintenance", ...
   sprintf("Mean Error: %.2f\n", mean(errorBefore))])
xlabel("Observations")
ylabel("Reconstruction Error")
ylim([0 15])

nexttile
plot(1:length(errorAfter), errorAfter, 'LineWidth',1.5), grid on,
title(["After Maintenance", ...
   sprintf("Mean Error: %.2f\n", mean(errorAfter))])
xlabel("Observations")
ylabel("Reconstruction Error")
ylim([0 15])

end

function helperVisualizeAnomalies(anomalies, errorAll, featureTest)
% HELPERVISUALIZEANOMALIES creates a plot of the detected anomalies
anomalyIdx = find(anomalies);
anomalyErr = errorAll(anomalies);

predAE = categorical(anomalies, [1, 0], ["Anomaly", "Normal"]);
trueAE = renamecats(featureTest.label,["Before","After"],["Anomaly","Normal"]);

acc = numel(find(trueAE == predAE))/numel(predAE)*100;
figure;
t = tiledlayout("flow");
title(t, "Test Accuracy: " + round(mean(acc),2) + "%");
nexttile
hold on
plot(errorAll)
plot(anomalyIdx, anomalyErr, 'x')
hold off
ylabel("Reconstruction Error")
xlabel("Observation")
legend("Error", "Candidate Anomaly")

nexttile
confusionchart(trueAE,predAE)

end

function helperVisualizeModelBehavior(normalData, abnormalData, decodedNorm, decodedAbNorm)
%HELPERVISUALIZEMODELBEHAVIOR Visualize model behavior on sample validation data

figure("Color", "W")
tiledlayout("flow")

nexttile()
hold on
colororder('default')
yyaxis left
plot(normalData{:})
plot(decodedNorm{:},":","LineWidth",1.5)
hold off
title("Normal Input")
grid on
ylabel("Feature Value")
yyaxis right
stem(abs(normalData{:} - decodedNorm{:}))
ylim([0 2])
ylabel("Error")
legend(["Input", "Decoded","Error"],"Location","southwest")

nexttile()
hold on
yyaxis left
plot(abnormalData{:})
plot(decodedAbNorm{:},":","LineWidth",1.5)
hold off
title("Abnormal Input")
grid on
ylabel("Feature Value")
yyaxis right
stem(abs(abnormalData{:} - decodedAbNorm{:}))
ylim([0 2])
ylabel("Error")
legend(["Input", "Decoded","Error"],"Location","southwest")

end

function X = helperExtractLabeledData(featureTable, label)
%HELPEREXTRACTLABELEDDATA Extract data from before or after operating
%conditions and re-format to support input to autoencoder network

% Select data with label After
if label == "All"
   Xtemp = featureTable(:, 2:end).Variables;
else
   tF = featureTable.label == label;
   Xtemp = featureTable(tF, 2:end).Variables;
end

% Arrange data into cells
X = cell(length(Xtemp),1);
for i = 1:length(Xtemp)
   X{i,:} = Xtemp(i,:);
end

end
