
Neural Networks for SOC Estimation | How to Estimate Battery State of Charge Using Deep Learning, Part 3

From the series: How to Estimate Battery State of Charge Using Deep Learning

Carlos Vidal, The McMaster Automotive Research Centre (MARC)

Explore the theory and implementation of the deep neural network used in this study; the motivation and tradeoffs behind the choice of network architecture; and the training, testing, validation, and analysis of network performance.

Learn about the training and testing procedures and the evaluation of prediction accuracy.

Published: 27 May 2021

Thank you, Javier.

In my part of the presentation, I'll walk you through the main aspects of the neural network design process for SOC estimation. The main aspects include an overview of the training, validation, and testing process; the neural network model structure; data preparation; an approach to improve the robustness of the model; and, finally, some SOC estimation results at multiple temperatures, including -10 degrees Celsius. Let's go.

A simplified concept of a feedforward neural network is shown in figure one, and a flow chart of the training process is shown in figure two. First, the initial values of the learnable parameters, the weights and biases, are randomly assigned, in our case drawn from a target distribution. Second, a so-called forward propagation computes the initial output, the SOC candidate, which is then compared to a reference SOC to compute the loss for this iteration.

The loss is then backpropagated, using a series of partial derivatives with respect to each of the neural network weights and biases, updating their values, and then the forward propagation and backpropagation steps repeat. This iterative process continues until it meets a predetermined criterion, such as reaching a particular accuracy or a certain number of iterations.

Another step happens at regular intervals during the training process, where a separate data set, the validation data set, is fed into the model candidate to validate the accuracy of the FNN during training, using the forward propagation step alone, without changing any of the learnable parameters.
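To make the loop concrete, here is a minimal MATLAB sketch of this train-and-validate process using the Deep Learning Toolbox. The random placeholder data and the layer sizes are assumptions for illustration only; the final structure is discussed later in the talk.

```matlab
% Placeholder data standing in for the battery logs: three input features
% (voltage, current, temperature) and a reference SOC response.
XTrain = rand(1000, 3);  YTrain = rand(1000, 1);
XVal   = rand(200, 3);   YVal   = rand(200, 1);

% Illustrative structure; the final sizing is covered later in the talk.
layers = [
    featureInputLayer(3)
    fullyConnectedLayer(32)
    tanhLayer
    fullyConnectedLayer(1)
    regressionLayer];

% The validation set is evaluated at regular intervals with forward passes
% only, so it never updates the learnable parameters.
options = trainingOptions("adam", ...
    MaxEpochs=30, ...
    ValidationData={XVal, YVal}, ...
    ValidationFrequency=50, ...
    Shuffle="every-epoch", ...
    Verbose=false);

% trainNetwork runs the forward propagation, loss, and backpropagation
% iterations until the stopping criteria in the options are met.
net = trainNetwork(XTrain, YTrain, layers, options);
```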

For the training process, data preparation is a very important step. It's necessary to check and clean the data, as well as to normalize its input vectors to improve the training accuracy of the model. Normalization is a very common procedure when training a neural network; a minimal sketch follows below.
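Min-max scaling is one common choice; the exact scheme used in the study isn't specified in the talk, so treat this as an assumption.

```matlab
% Rescale every input feature to [0, 1] using training-set statistics.
xMin = min(XTrain, [], 1);                    % per-feature minimum
xMax = max(XTrain, [], 1);                    % per-feature maximum
XTrain = (XTrain - xMin) ./ (xMax - xMin);
XVal   = (XVal   - xMin) ./ (xMax - xMin);    % reuse the training statistics
```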

The type of neural network we are talking about in this seminar is the feedforward neural network, or just FNN. However, different machine learning approaches can be used for this purpose, for example a recurrent neural network such as an LSTM. The inputs for this FNN are the temperature, voltage, and current from the battery.

One very important aspect that I would like to remark on is the moving average filters. Unlike a recurrent neural network, an FNN doesn't pass information from previous time steps to help make its estimation, so the filtered inputs give it that history. Just for comparison, without the filter, the estimation error is three to five times higher. In the next slide, we'll show how we have selected the number of neurons for the hidden layers.
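Before moving on, here is what those filtered inputs might look like in MATLAB. The window length and the placeholder signals are assumptions for illustration, not values from the study.

```matlab
% Placeholder signals; in practice these come from the battery logs.
n = 5000;
V = rand(n, 1);  I = randn(n, 1);  T = 25 * ones(n, 1);

k = 400;                        % trailing window length in samples (assumed)
Vavg = movmean(V, [k 0]);       % mean over the current and previous k samples
Iavg = movmean(I, [k 0]);

% Five input features per time step: raw V, I, T plus the filtered history.
X = [V, I, T, Vavg, Iavg];
```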

In order to select the number of neurons for our model, we trained hundreds of models, varying the number of neurons in each hidden layer, as shown in figure seven. The training parameters used in this selection are shown here. Based on these results, the models with 55, 82, and 99 neurons presented very similar accuracy, which we'll discuss in the next slide.
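A sweep like this can be sketched as a simple loop, reusing the placeholder data and training options from the earlier sketch; the candidate widths here are illustrative, not the study's full grid.

```matlab
widths  = [25 55 82 99 128];            % candidate hidden-layer sizes (assumed)
rmseVal = zeros(numel(widths), 1);
for i = 1:numel(widths)
    layers = [
        featureInputLayer(size(XTrain, 2))
        fullyConnectedLayer(widths(i))
        tanhLayer
        fullyConnectedLayer(widths(i))
        leakyReluLayer(0.03)
        fullyConnectedLayer(1)
        clippedReluLayer(1)
        regressionLayer];
    % Each run starts from fresh random initial weights and biases, so the
    % same width can land in a different local minimum on a repeat run.
    net = trainNetwork(XTrain, YTrain, layers, options);
    rmseVal(i) = sqrt(mean((predict(net, XVal) - YVal).^2));
end
```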

Figure six shows the results of 2,000 trained models. Despite the significant variation in the number of neurons, many results are very similar. This is because of the random initial seed values assigned to each of the weights and biases every time a training process starts, which can lead to a different local minimum, even for the same number of neurons and the same structure. Therefore, the number of neurons for the neural network structure in this work was chosen to be 55, which is the lowest number of neurons among the three best results we could find. In the next slide, we will show more details about the final FNN structure and some results.

The detailed final FNN structure and the LG HG2 lithium-ion SOC estimation result at 25 degrees Celsius are shown in figure eight. The activation functions used in the structure are the hyperbolic tangent, leaky ReLU at 0.03, and clipped ReLU at one. This model presents very good accuracy on the SOC estimation, close to 0.5% mean absolute error. On the next slide, we'll show how this model can be visualized in the MATLAB Deep Network Designer app.
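Before that, here is the structure written out as a MATLAB layer array. This is a sketch: the talk gives the activations and the 55-neuron width, but the assumption that both hidden layers share that width is ours.

```matlab
layers = [
    featureInputLayer(5)            % V, I, T plus the two filtered inputs
    fullyConnectedLayer(55)
    tanhLayer                       % hyperbolic tangent activation
    fullyConnectedLayer(55)
    leakyReluLayer(0.03)            % leaky ReLU with a 0.03 scale
    fullyConnectedLayer(1)
    clippedReluLayer(1)             % clip so the SOC estimate stays in [0, 1]
    regressionLayer];
```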

One way to visualize the neural network structure that we've been discussing in MATLAB is through the Deep Network Designer app. You can import the model and open it in this app, and then you can see how many activations and learnables it has and what kind of structure it is, and this is how it is shown in MATLAB. This is really useful if you need to quickly visualize the structure you just built in MATLAB, and you can also use the app to build your own structure.
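For reference, opening the layer array sketched above in the app is a one-line command:

```matlab
% Wrap the layer array in a layer graph and open it in Deep Network Designer
% to inspect the activations, learnables, and overall structure, or to edit it.
deepNetworkDesigner(layerGraph(layers))
```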

In the case of SOC estimation for electrified vehicles, the sensors used to capture voltage, current, and temperature have significant error, due to the low-cost sensor models typically used in the automotive industry. If the expected error is known during the model design, the neural network training data set can be augmented to include error on the signals, improving the robustness of the resulting model.
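A minimal sketch of that augmentation idea: corrupt copies of the training inputs with gain and offset errors and train on both. The error values here are assumptions; the actual ones appear in table five below.

```matlab
nFeat     = size(XTrain, 2);
gainErr   = 1 + 0.02 * ones(1, nFeat);    % assumed +2% gain error per feature
offsetErr = -0.05 * ones(1, nFeat);       % assumed small offset error per feature

XNoisy    = XTrain .* gainErr + offsetErr;   % inject sensor-like errors
XTrainAug = [XTrain; XNoisy];                % clean plus corrupted copies
YTrainAug = [YTrain; YTrain];                % the reference SOC is unchanged
```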

Here, table five shows the gain and offset values used to augment the data set for each input feature. Figure nine shows that despite the inclusion of noise in the test data set, following the 14 cases in table four discussed previously, the SOC estimation accuracy was maintained at a similar level throughout all cases. Using the final structure that we have been discussing, our FNN-based model was trained on a data set containing multiple temperatures, including negative 10 degrees Celsius.

As shown in figure 11, the model performed incredibly well at all temperatures. Even though at negative temperatures the SOC becomes considerably more challenging to estimate, the model performs well even under such difficult conditions. Table six shows a summary of the results, where the RMSE was kept close to 1.2% in all temperature cases, with the mean absolute error under 1%.

To quickly summarize this part of the seminar: we have shown an overview of the FNN training process as part of the design process of an FNN for electrified vehicle battery SOC estimation, along with the data preparation for the lithium-ion LG HG2 cell. We have presented an approach to improve SOC estimation robustness, and, finally, we have presented and discussed the results when the model was exposed to multiple temperatures, including negative temperatures. Thank you.