Embedded AI for Body Applications with MATLAB and Simulink - MATLAB

    Embedded AI for Body Applications with MATLAB and Simulink

    Athulya Thazha, Mercedes-Benz Research and Development India Private Limited
    Shubham Kale, Mercedes-Benz Research and Development India Private Limited

    At the outset of our project aimed at advancing smarter next-generation body interiors, we encountered several challenges: acquiring a technological edge without disrupting existing development processes, addressing tool dependencies, and deploying solutions on resource-constrained devices.

    In this session, we describe how we developed an efficient embedded AI workflow, encompassing everything from machine learning concept design to code generation and deployment on the target ECU, all while integrating seamlessly with our existing processes.

    This project was executed in collaboration with MathWorks India, leveraging Statistics and Machine Learning Toolbox™, Deep Learning Toolbox™, and automatic code generation to accelerate the deployment of vehicle software. The use of neural network optimization, quantization, and projection techniques has the potential to significantly minimize memory usage and computational demands, enabling deployment on resource-constrained devices.

    This project establishes a basis for future embedded AI development in body and comfort systems, and we are in the process of integrating this workflow into our current development process.

    Published: 22 Dec 2024

Hello, everyone. A very good afternoon to all of you, and thank you so much for joining us today for our session on Embedded AI for Body Applications using MATLAB and Simulink. I'm Athulya, and with me I have Shubham Kale. We are from MBRDI, the Mercedes-Benz Research and Development India Private Limited, Bangalore office.

    OK, let's do some quick introduction. So once again, I'm Athulya. And in my current role, I work as technical manager for in-house thermal comfort software development. And I've been with MBRDI for close to eight years and mainly into the field of model-based design and development for body and comfort features. So over to you, Shubham.

    Hello. I'm Shubham. And I'm a senior engineer at MBRDI. And we basically work on thermal comfort software development. And it's been four years for me at MBRDI. So, yeah, that's it.

    Thank you, Shubham. So before getting into the details, our agenda for today's presentation is as follows. We'll start off with our current area of work and introduction. And then, we'll also touch upon the motivation behind the AI-powered solutions.

    And then, we'll delve into the details of virtual sensor modeling for online mass flow estimation, an integral part of cabin thermal comfort function, and our journey starting from concept phase to the final target deployment, the challenges that we faced, and how they were addressed. And last, but not least, our collaboration with the MathWorks team to make this whole journey a grand success. OK, so let's get started.

So this slide mainly talks about our current area of work and where we come from, just to connect the dots. We are the architects of the MB.OS platform, the Mercedes-Benz Operating System, in the body and comfort domain. And being the architects, we have full control over the development of vehicle functions, and we are strongly determined to develop an outstanding luxury product.

    And when it comes to body and comfort domain, the prime focus is to offer a fully-personalized cabin comfort experience. And out of the wide-range of body and comfort features that offer an immersive entertainment experience, we come from the climate control software development or feature development.

OK, so having set the context, this slide is mainly about the how part, or the strategic shift that could give us an edge in delivering an outstanding luxury product. As I covered in the previous slide, a few key focus areas include offering a new level of personalization, safety, and comfort, and also integrating smarter next-generation car interiors with state-of-the-art tech.

So, yeah, if you see today, we are already at the forefront of transitioning body and comfort features, like lighting functions, seat functions, climate control, or display features, to the league of AI-enabled in-car features that have proven to be successful, such as predictive maintenance, the Hey Mercedes voice assistant, or user action prediction, and so on.

With the motivation behind an AI-powered solution clear and understood, we went one level deeper into the body and comfort features. Today, most of our body and comfort functions are governed by real-time control algorithms, and hence we channel most of our efforts toward making such control algorithms smarter and more intelligent.

    So as a first step, we went ahead with a benchmarking of the different control system design approaches that we have today, something like the classical physics-based model, which solely relies on the physical laws and equations to predict and control the system behavior. So this approach was also compared with the traditional AI modeling approach, which uses the cloud computing infrastructure and the AI algorithms.

No surprise: without doubt, the AI models fared better on the accuracy metric compared to classical physics-based modeling. But something was not very convincing for us, and we had second thoughts for multiple reasons. One such reason is that the traditional AI modeling approach is highly dependent on cloud connectivity.

And for most of our control algorithms, which require real-time execution, this is a major challenge, right? Additionally, if you look at it from an OEM perspective, due to the recent data regulations, be it data safety, data security, or data transfer, with a cloud-based data transfer approach we may not be complying with the data transfer regulations.

So these were the two major blockers for us, and we started looking for other options. That's when we heard about the embedded AI control system design approach, which can deploy AI algorithms onto resource-constrained or legacy ECUs.

And, in fact, this was a big saver for us because it met all our requirements: on one side, the AI capabilities and the accuracy; on the other side, real-time execution and compliance with the data regulations, because everything is done on board now.

So we finalized this approach because it has the best of both worlds: the AI capabilities, with everything done on board. And once the embedded AI control system design approach was finalized, we identified a use case to demonstrate its value.

So that's when we went ahead with virtual sensor modeling for online mass flow estimation using embedded AI. A typical question you may have in mind is, why this particular use case? The reason is that the more accurately you can estimate the cabin air mass flow rate, the more directly you can enhance cabin comfort.

So that is a major touch point for choosing this particular use case. Then you may ask, what is a virtual sensor, basically? A virtual sensor, in simple words, is a software replacement for a physical hardware sensor, capable of estimating the physical quantity of interest, which in this case is the cabin air mass flow rate.
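To make the idea concrete, here is a minimal Python sketch of what a virtual sensor looks like as software: a function that maps available actuator states to an estimate of the quantity of interest. The function name, inputs, and coefficients below are purely illustrative; in the talk, the actual mapping is a trained neural network.

```python
# Minimal sketch of a virtual sensor: a pure software function that maps
# available actuator states to an estimate of a physical quantity.
# All coefficients here are made up for illustration only.

def estimate_mass_flow(blower_speed, flap_position):
    """Estimate cabin air mass flow (arbitrary units) from actuator states.

    A linear stand-in is used here just to show the interface; the real
    estimator in the talk is a trained neural network.
    """
    BASE = 5.0          # hypothetical offset
    K_BLOWER = 0.08     # hypothetical gain per blower-speed unit
    K_FLAP = 12.0       # hypothetical gain per unit flap opening (0..1)
    return BASE + K_BLOWER * blower_speed + K_FLAP * flap_position

estimate = estimate_mass_flow(100.0, 0.5)   # 5.0 + 8.0 + 6.0 = 19.0
```

The point is only the shape of the interface: no hardware sensor is read; the estimate is computed entirely from signals the ECU already has.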

So then, how is the mass flow rate measured or estimated currently? In a few series, it comes with a physical sensor. And in a few cases, you also have software solutions, like estimation algorithms such as Kalman filter estimation, or classical physics-based modeling, and so on.

And the million dollar question: what are we trying to address with the virtual sensor model? Basically, there are quite a lot of reasons. One is that a physical sensor is not very cost-effective; that is one of the major reasons. Other reasons concern safety and reliability.

Consider sensor placement under different vehicle operating conditions, sensor aging over a period of time, the multiple sensor faults you come across, like mechanical or electrical faults, and the sensor tuning efforts in case you have to go for a sensor replacement. And if it is a software solution, obviously, you have to meet the accuracy requirements and so on.

And most of you would also know that when it comes to EVs, weight reduction plays a crucial role in EV performance and range. So considering all these pressing points, this was the best-fit solution for us, which could address most of the challenges mentioned, and it can also enhance the customer comfort.

OK, so that's when we went ahead with the virtual sensor modeling. A few of the features that we used were the climate control flap stepper positions and the blower speed. In the following slides, Shubham will take you through the workflow that we adopted, the challenges, and how they were addressed. So over to you, Shubham.

    Yeah. Thank you, Athulya. I'm all good, right? OK. So now, we will take a look at the workflow that was developed in collaboration with MathWorks for our target use case, which is the AI-powered virtual sensor. In our case, it is the mass flow estimator.

So the main purpose of developing this kind of workflow was to have a platform where we can develop an AI model, train it, and validate it. The second step, which is the embedded AI development, is basically to optimize the AI model with respect to the constraints that we have for a traditional automotive ECU.

So we will have constraints like limited floating-point operations, and we will have memory constraints. In order to address that, we want a platform that supports optimization of the model with respect to these kinds of constraints. And further, we also want seamless integration into our workflow rather than disrupting the whole workflow.

So it should be able to integrate with the conventional MBD workflow. That was the motivation to develop this workflow. Now we'll go step by step through each stage of the workflow. The first step is basically the input data generation for our experimental setup.

So in order to generate quality data, we need to generate inputs using certain sampling methods. These sampling methods are available in the Statistics and Machine Learning Toolbox from MathWorks. In our case, we had features like blower speed, air distribution steppers, and temperature mixing flap steppers.

We wanted to sample over this feature set. Once we generate quality data from this feature set, we proceed with pre-processing of the data in order to train our model. As we had a small number of features, the only pre-processing needed was normalization.
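As an illustration of the kind of stratified input sampling described above, here is a small pure-Python sketch of Latin hypercube sampling. The MATLAB workflow would use a toolbox function such as `lhsdesign` from Statistics and Machine Learning Toolbox; the feature names and ranges below are hypothetical.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sampling over a dict of feature ranges.

    Each feature's range is split into n_samples equal strata; one point
    is drawn per stratum, and the stratum order is shuffled per feature
    so strata of different features combine randomly.
    """
    rng = random.Random(seed)
    samples = {}
    for name, (low, high) in bounds.items():
        strata = list(range(n_samples))
        rng.shuffle(strata)
        width = (high - low) / n_samples
        samples[name] = [low + (s + rng.random()) * width for s in strata]
    return samples

# Hypothetical feature ranges for the climate-control inputs
design = latin_hypercube(8, {"blower_speed": (0.0, 100.0),
                             "mix_flap": (0.0, 1.0)})
```

Compared with plain random sampling, this guarantees every sub-range of every feature is covered, which is why it tends to produce better training data from fewer experiments.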

Again, normalization is a function from the Statistics and Machine Learning Toolbox, so the same was used here. And, as you know, because of this normalization, we are able to generalize the model better and faster with fewer data points.
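The normalization step can be sketched as a simple z-score transform: fit a mean and standard deviation on the training data, then scale every sample. A rough pure-Python equivalent of what the toolbox's `normalize` function does in MATLAB (the sample values are invented):

```python
def zscore_fit(values):
    """Return (mean, std) of the training values (population std)."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return mean, std

def zscore_apply(values, mean, std):
    """Scale values to zero mean and unit variance."""
    return [(v - mean) / std for v in values]

# Hypothetical blower-speed samples
speeds = [10.0, 20.0, 30.0, 40.0]
m, s = zscore_fit(speeds)
normed = zscore_apply(speeds, m, s)   # zero mean, unit variance
```

The same fitted mean and std must later be applied to any data fed to the deployed model, so these two constants effectively become part of the network.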

Now going to the second process, the training of the neural network. The neural network can be developed, trained, and validated either programmatically or with an app-based approach: there is the Deep Network Designer app which can be used, or else it can be done programmatically.

We preferred the programmatic approach in our case. On the right, you can see the training performance plot, where the loss function decreases over the epochs. The second step, which is the crucial step in the whole workflow, is basically the adaptation to the ECU, the embedded AI part.
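The training behavior described here, a loss that shrinks over epochs, can be illustrated with a tiny gradient-descent loop. This pure-Python stand-in fits a one-neuron linear model rather than the actual network; the data, learning rate, and epoch count are invented for illustration.

```python
# Tiny gradient-descent loop illustrating a loss that decreases over epochs.
def train(xs, ys, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    losses = []
    for _ in range(epochs):
        n = len(xs)
        grad_w = grad_b = loss = 0.0
        for x, y in zip(xs, ys):
            err = (w * x + b) - y       # prediction error for this sample
            loss += err * err
            grad_w += 2.0 * err * x     # d(err^2)/dw
            grad_b += 2.0 * err         # d(err^2)/db
        w -= lr * grad_w / n            # gradient step on the mean loss
        b -= lr * grad_b / n
        losses.append(loss / n)
    return w, b, losses

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]               # underlying relation: y = 2x + 1
w, b, losses = train(xs, ys)            # w -> ~2, b -> ~1, losses shrinking
```

The recorded `losses` list is exactly what the training performance plot visualizes: loss on the vertical axis, epoch on the horizontal.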

Now, we faced a few challenges with traditional automotive ECUs in terms of memory constraints. These memory constraints can be addressed, first of all, by hyperparameter tuning. As you know, AI models have lots of parameters, weights and biases, which will consume a lot of the memory of our ECU.

So we can tune using the Experiment Manager app from the Deep Learning Toolbox from MathWorks, and finalize an optimal set of hyperparameters such that our accuracy is not compromised in the process. Once we have an optimal, reduced set of parameters, we can further compress the network using the neural network projection feature of the Deep Learning Toolbox.
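The memory benefit of projection comes from replacing a full weight matrix with a low-rank factorization, so a layer's parameter count drops from m·n to k·(m+n) for projection rank k. A quick arithmetic sketch with hypothetical layer sizes:

```python
# Parameter-count arithmetic behind neural network projection: an m x n
# weight matrix is replaced by a rank-k pair U (m x k) and V (k x n).
def full_params(m, n):
    return m * n

def projected_params(m, n, k):
    return k * (m + n)

# Hypothetical fully connected layer of 128 x 128 weights, projected to rank 16
full = full_params(128, 128)               # 16384 weights
low_rank = projected_params(128, 128, 16)  # 4096 weights, a 4x reduction
```

The rank k is a tuning knob: smaller k means less memory but a coarser approximation of the original layer, which is why accuracy has to be re-checked after projection.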

Because of this, we can address the memory constraint: we will have fewer parameters, which consume less memory on the ECU. The next step is to quantize the model parameters. As we know, the AI model will have lots of weights and biases, and those will be at a very high precision.

And the challenge with a traditional automotive ECU is that it has limited floating-point capability, which could also increase your execution time. To address that, we can quantize these floating-point parameters to fixed-point data types. To do this conversion, we first need an intermediate step, because up to this point we have only trained the model.

The trained model is a dlnetwork object, and that object needs to be converted to a Simulink model. This can be done using the exportNetworkToSimulink function, which converts a dlnetwork object into a Simulink model with the same representation. Once we have that, we can use the Fixed-Point Designer tool to convert the floating-point data types to fixed-point data types.
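The float-to-fixed conversion can be sketched as symmetric fixed-point quantization: each weight is stored as a scaled 16-bit integer, so the ECU can use integer arithmetic at the cost of a bounded rounding error. A rough Python illustration (the choice of 8 fraction bits is arbitrary; Fixed-Point Designer selects word and fraction lengths from observed ranges):

```python
# Sketch of symmetric fixed-point quantization: floats become scaled
# 16-bit integers; dequantizing recovers the value within half an LSB.
def quantize(values, frac_bits=8):
    scale = 1 << frac_bits                   # 2^frac_bits
    lo, hi = -(1 << 15), (1 << 15) - 1       # int16 range
    return [max(lo, min(hi, round(v * scale))) for v in values]

def dequantize(ints, frac_bits=8):
    scale = 1 << frac_bits
    return [i / scale for i in ints]

weights = [0.5, -1.25, 0.00390625]           # 1/256 is exactly representable
q = quantize(weights)                        # [128, -320, 1]
back = dequantize(q)                         # exact round-trip for these values
```

For values that are not exact multiples of 1/256, the round-trip error is at most 1/512, which is the precision-versus-memory trade the talk refers to.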

Moving on to the third step, code generation. This was a challenge for us as well, because it needs to be easily integrable into our regular workflow, which can be a conventional MBD workflow or whatever each company uses. Here, we can use Embedded Coder directly to generate generic C code, which can be easily plugged into whichever workflow anybody is using.

Once we have the generated code ready, we can deploy it onto the ECU and check the performance metrics. For the results, in terms of accuracy, the mean absolute error was reduced by 50% compared to that of the physics-based model.

    So the first objective that we set out was to improve the accuracy. And we were able to hit that mark by reducing this mean absolute error. And if we improve the accuracy, we are, basically, able to give an improved thermal comfort to the user.
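The accuracy metric quoted here, mean absolute error, can be computed as follows. The sample values below are invented purely to illustrate the comparison; they are not the project's actual measurements.

```python
def mae(actual, predicted):
    """Mean absolute error between measured and estimated values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical mass-flow readings (same units for all three series)
actual  = [10.0, 12.0, 11.0, 13.0]   # experimentally captured ground truth
physics = [11.0, 10.0, 12.5, 11.0]   # invented physics-model output
neural  = [10.5, 11.0, 11.5, 12.5]   # invented NN output

mae_physics = mae(actual, physics)   # 1.625
mae_nn = mae(actual, neural)         # 0.625
```

A lower MAE against the experimentally captured mass flow is what "improved accuracy" means in this slide; the talk reports the NN estimator halving the physics model's MAE.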

In the plot, there is a visualization of the mass flow coming out of each vent. The green solid plot is the actual mass flow, which was experimentally captured, and the white dotted plot is the estimated mass flow from our model. As you can see, it is very accurate and follows the actual mass flow closely.

Now we come to the memory utilization part. In terms of RAM, if the physics-based model consumes 78% of the RAM, the newer NN-based model increases that consumption by only 1%.

Similarly, for ROM usage, if the physics-based model was consuming 53% of the total ROM, the neural network increased this usage by only 5%. As you can see, it is a negligible increase in the overall memory usage, which was the second objective of our use case: not to overload the ECU memory because of the AI model.

    In terms of execution time, you can see that classical physics-based model and neural network execution time. If you see, there is a difference of around 0.54 milliseconds, which is also really good because we have a very large tolerable time limit. And we are well within it.

And this was only possible because we converted the floating-point model to a fixed-point model, and we also reduced the number of parameters of the whole model. So with this, I'll hand over to Athulya for the key takeaways.

Moving to the last part of the presentation, the key takeaways: what went well for us. I'll just read out a few of the key pointers. Overall, with the tooling solutions in the Deep Learning Toolbox, the overall development time has been significantly reduced. If you have to take something from the concept phase to final target deployment, the overall duration is significantly shorter with the tool capabilities.

    And on the performance side, if you see, as mentioned by Shubham, few of the technicalities, which involves quantization, pruning, and projection. Because of all these, the overall performance requirements are also well within the stipulated thresholds. And it is a seamless integration to the existing tool chain because we also had a different set of tools involved in different phases.

And when it comes to interoperability in this AI workflow, that is a big plus. As mentioned in previous slides, there are the real-time processing capabilities, because we don't have to get into the hassles of cloud connectivity or bandwidth requirements; everything is done on board now. And coming to the data regulations on data transfer, we are complying with those as well because of the local processing.

And then, looking forward, we see that the embedded AI workflow demonstrated today can be easily scaled to other diverse applications, like online sensor estimation, predictive maintenance, electrification, or even energy-efficient, smarter, and more intelligent solutions. Also, the overall embedded AI time-to-market is significantly reduced because of the MathWorks partnership on the tooling part.

That's also quite promising for us. And we have also laid a strong foundation now, which can be taken forward for innovative solutions to be implemented on board, be it on the radar side, enhanced safety, or comfort, and so on.

So with all this, in this forum, I would also like to thank the MathWorks team for all their continued support in making this, our pilot use case of virtual sensor modeling for online mass flow estimation, a grand success. Thank you all for your attention, and I wish you all the best for your embedded AI journey going forward. Thank you.

    [APPLAUSE]

We have a lot of questions, and we find a common theme among them. So we thought of taking up one question in the interest of time. We'll have it here.

The question is: does the virtual sensor take help from other sensor values in real time? Or was it trained using real sensor data to create the training data? Or was the data obtained from simulation, or a combination of all these techniques?

So maybe I'll take up that question. In our case, we had taken the ground truth values from an actual sensor during different vehicle operating conditions. We had a sensor placed during that time, so it was an actual sensor placement.

OK. Thanks, Athulya. And we also heard a similar question, I believe, from Tata Motors as well. In the interest of time, we are going forward with just the one question.