Developing Guidance, Navigation, and Control Algorithms with MATLAB and Simulink
Overview
In this focused webinar, we will walk you through the complexities of defining the environment, setting waypoints, and navigating around obstacles. Our focus is to design a robust trajectory using advanced navigation and guidance algorithms, illustrated with practical examples and simulations. Further, we will demonstrate a control system design that can follow the actuation commands and maintain aircraft stability with minimal deviation.
About the Presenters
Richa Singh, Application Engineer, MathWorks
Richa is an application engineer at MathWorks India Private Limited and specializes in physical modeling and control design. She has a strong background in aerospace systems and subsystems and holds master’s and PhD degrees in Aerospace Engineering from IIT Bombay, Mumbai.
Tittu George, Chief Manager - CLAW Group, RWRDC, HAL
Tittu George started his career as a Flight Instrumentation Engineer for Light Combat Aircraft, where he gained extensive knowledge in designing and developing instrumentation systems for aircraft.
He then transitioned to the role of Control System Engineer, where he spent the next 8 years working on the design and development of control systems for helicopters. He presently specializes in model-based design solutions using MATLAB/Simulink for aerospace control systems. He has extensive knowledge of software certification standards such as DO-178B/C and DDPMAS, and is involved in the design and development of airborne systems, with in-depth knowledge of ARP4754 and ARP4761.
Tittu George has an M.Tech degree in Data Science and Engineering from BITS Pilani and has three international papers to his name. He has been working in the aerospace industry for over 15 years and has held various roles during his career.
Recorded: 15 Feb 2024
Hi, everyone. Welcome to the third session of the webinar series on modern aircraft design and engineering. Today's session is focused on developing guidance, navigation, and control algorithms with MATLAB and Simulink. Let's see: what we want to achieve here is for our aircraft to take off, follow a designated path, and land safely, autonomously. Here is an example of Aurora's Centaur aircraft, which used a model-based design approach to develop flight control, mission planning, and real-time testing.
Now, to achieve such an application, there are multiple challenges. First, it requires multi-domain expertise to develop a complex system and an end-to-end workflow that covers algorithm development, code generation, testing, and deployment. Further, the more autonomous we want the system to be, the more complex the algorithms become. And for safety-critical systems such as aircraft and unmanned aerial vehicles, one must ensure technical rigor and system stability per standard guidelines.
All right. With these challenges in mind, we have crafted today's agenda. We'll start with the integrated workflow to develop autonomous applications using the model-based design approach. Then we delve into developing the autonomous algorithms: navigation, perception, planning, control, and so on. Next, we will also give a glimpse of scenario simulation with Unreal Engine.
All right. So let's start with the MBD approach, that is, model-based design. Suppose we have one vehicle and one application, an autonomous vehicle, which we want to develop using the model-based design approach. We start by developing the system architecture. Next, we design the platform, which means developing the model, the dynamics of the particular autonomous vehicle.
The modeling part has already been covered in the second session by Mukesh Prasad, so we will not be delving much into modeling the platform. Instead, we will delve into designing the autonomous algorithms, developing algorithms for perception, planning and decision, and control.
Next, given the safety-critical nature of the system, we have to verify our algorithms against standard certifications, such as DO-178. Then we simulate the algorithms with sensor models and deploy them onto hardware.
Towards the end, we plan the mission with actual hardware. Finally, the onboard flight data we collect can be analyzed to build algorithms such as fault-tolerant control, health monitoring, and condition monitoring.
All right. So what building blocks exist to develop such an application? The complete model-based design approach can be divided into three stages: design and simulate, plan the mission, and validate and deploy. Today, we will focus on design and simulate, where we develop our autonomous algorithms.
Right. So here is a success story from Korean Air. The challenge was to develop and verify the flight control system. By following the model-based design approach, they eliminated 100% of runtime errors by avoiding handwritten code. Further, the overall development time was reduced by 60%, and costly flight tests were minimized by performing hardware simulation.
So let's get into developing the autonomous algorithms. What we have here is our vehicle, an environment, and an endpoint. What we want is a path so that our aircraft travels autonomously along an optimized route and reaches the endpoint.
Now, how do we do that? How exactly do we add autonomy to our aerial vehicle? We start with the autonomous system itself, which means developing the platform. After that, we delve into autonomy: developing the navigation and guidance algorithms, the waypoint follower, avoiding obstacles that fall in the flight path, and, towards the end, performing scenario simulation with Unreal Engine.
Now, throughout this webinar, I'll be taking one case study: a vertical takeoff and landing (VTOL) tiltrotor model.
OK. All right. So here you can see the entire model for a VTOL aircraft. We have different modules, such as the digital twin, the autopilot, and the guidance test bench. We give the reference command through the ground control station and visualize the flight path under some scenario conditions. Let's get inside this model and see the modules that are available.
OK. So inside the digital twin, we have the forces and moments, where we consider all the drag, propulsion, and gravity effects, and we have a 6DOF (six degrees of freedom) block from Aerospace Blockset. Then we move on to the autopilot, which is our control algorithm: a multicopter controller and a fixed-wing controller, plus a scheduler. The multirotor controller is active when our VTOL is in tiltrotor motion; we also have a fixed-wing controller because this VTOL operates in two different modes, tiltrotor and fixed wing.
So here are all the controllers for airspeed, attitude, and so on. Next, we have the guidance test bench, where the UAV path manager provides the mission parameters, and we have different modes of operation, like landing, takeoff, waypoint following, and so on. OK.
Towards the end, there is the visualization, and we can see how the aircraft moves around. Whenever a turn happens, you can see the tiltrotors start to move, and so on. What I wanted to convey here is that this VTOL aircraft works in two different modes: first a tiltrotor mode, and second a fixed-wing mode.
OK. Now, let's get into the navigation part: how we develop inertial navigation for this VTOL example. Inertial navigation is essentially self-awareness, knowing the present state of your aerial vehicle. So inertial-based navigation is what we are going to talk about here.
We have different library modules, such as IMU and GPS, which give you the accelerometer and gyroscope readings, position in latitude, longitude, and altitude, velocity, and so on. We also have filters, such as the AHRS (attitude and heading reference system) filter, which gives the filtered IMU values. You can also utilize automated tuning, which gives you the filtered, tuned position of the aerial vehicle.
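As a rough illustration of these libraries outside Simulink, here is a minimal MATLAB sketch, with made-up stationary motion data rather than the webinar model, that simulates IMU readings and fuses them into an attitude estimate with an AHRS filter. It assumes Sensor Fusion and Tracking Toolbox or Navigation Toolbox is installed:

```matlab
% Minimal sketch (made-up stationary motion, not the webinar model):
% simulate IMU readings and fuse them with an AHRS filter.
Fs = 100;                          % sample rate [Hz]
N  = 200;                          % number of samples

imu = imuSensor('accel-gyro-mag', 'SampleRate', Fs);

% Ground-truth motion: stationary, level attitude
acc  = zeros(N, 3);                % linear acceleration [m/s^2]
angv = zeros(N, 3);                % angular velocity [rad/s]
[accelReadings, gyroReadings, magReadings] = imu(acc, angv);

% AHRS filter fuses accelerometer, gyroscope, and magnetometer readings
fuse = ahrsfilter('SampleRate', Fs);
orientation = fuse(accelReadings, gyroReadings, magReadings);

% Euler angles of the estimated attitude [deg]
eul = eulerd(orientation, 'ZYX', 'frame');
```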
Now, let's see how we have utilized these libraries in our VTOL example. Inside the digital twin, we have a sensor block, where we use the IMU library to get the accelerometer and magnetometer readings. Similarly, we have a GPS module, and among the sensors we have a barometer for pressure measurement.
So this is about navigation. These sensor models feed back into our forces and moments: the sensors read the ground-truth parameters and give you the filtered and tuned output.
All right. Next, moving on to guidance, where we customize the flight mission. In this case study, the vehicle performs actions such as takeoff, waypoint following, orbit, landing, and transition. The vehicle is in tiltrotor mode while taking off, hovering, and landing; however, it transitions to fixed-wing mode while following waypoints and orbiting.
All right. So let's see how we have set up the test bench for the guidance mission using Stateflow logic. As you can see, inside the Guidance Test Bench module we have different modes of operation. The UAV path manager provides the mission parameters, and each mode of operation has its own follower: a path follower, an orbit follower, and so on.
While we run the simulation, it highlights the particular mode our vehicle is currently operating in. For example, currently it is in Waypoint Follower mode, and now it is landing. You can see it highlighting that particular mode of operation. So with this, we have our navigation and guidance algorithms as modules inside one subsystem.
All right. Next, moving on to planning and decision. On the simpler side, you have UAV Toolbox, which provides methods to define UAV missions with waypoint and orbit followers and to customize path-planning algorithms. This is helpful when there is a predefined path the UAV will fly, so you can quickly define the mission with waypoints. You can also utilize an occupancy map, which gives you an idea of the environment, so that with feedback from the occupancy map you can generate a smooth path.
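As a rough sketch of the waypoint idea in code (the waypoints and timing below are invented, not from the shipped example), the waypointTrajectory object turns a list of mission waypoints into a smooth, time-parameterized reference that a guidance loop could follow:

```matlab
% Minimal sketch (assumed waypoints, not the webinar mission): generate a
% smooth reference trajectory through a set of mission waypoints.
waypoints = [0   0   -50;          % [x y z] in NED, meters
             100 0   -50;
             100 100 -60;
             0   100 -50];
toa = [0; 20; 40; 60];             % time of arrival at each waypoint [s]

traj = waypointTrajectory(waypoints, 'TimeOfArrival', toa, ...
                          'SampleRate', 10);

% Step through the trajectory until it ends
positions = [];
while ~isDone(traj)
    [pos, orient, vel] = traj();   % position, orientation, velocity
    positions(end+1, :) = pos;     %#ok<AGROW>
end
plot3(positions(:,1), positions(:,2), positions(:,3)); grid on
```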
For more advanced UAV motion planning, you can use RRT path planners or Hybrid A*, which generate obstacle-free paths. These algorithms account for the vehicle kinematics under various constraints, and you can visualize the resulting path in a 3D occupancy map.
You can design these planning and decision algorithms within the MATLAB and Simulink ecosystem, depending on what type of motion planning is needed. So that covers the planning algorithms, like RRT and Hybrid A*, that you can use for path planning.
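For illustration, here is a minimal Hybrid A* sketch on a toy 2D occupancy map; the map, poses, and turning radius are invented for this example, and it needs Navigation Toolbox:

```matlab
% Minimal sketch (toy map, not the webinar model): plan a kinematically
% feasible, obstacle-free path with Hybrid A* on a 2D occupancy map.
map = binaryOccupancyMap(50, 50, 1);        % 50 m x 50 m, 1 cell/m
obstacles = [20:30; repmat(25, 1, 11)]';    % a wall across the middle
setOccupancy(map, obstacles, 1);

ss = stateSpaceSE2;
ss.StateBounds = [map.XWorldLimits; map.YWorldLimits; [-pi pi]];
sv = validatorOccupancyMap(ss);
sv.Map = map;
sv.ValidationDistance = 0.5;

planner = plannerHybridAStar(sv, 'MinTurningRadius', 4);
start = [5 5 0];                            % [x y theta]
goal  = [45 45 pi/2];
pathObj = plan(planner, start, goal);

show(planner)                               % visualize map, tree, and path
```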
Next, moving on to obstacle avoidance. In this particular example, we get all the sensor data from the UAV's LiDAR. The current position feeds back to our waypoint follower, which sends a lookahead point to the obstacle avoidance block.
This obstacle avoidance library block gives the desired steering command, or desired yaw, which is then merged with the desired command already coming from the waypoint follower to produce the final command to the controller. The controller then sends the actuation signals to our quadcopter.
Now, how exactly does this obstacle avoidance block work? It uses the Vector Field Histogram (VFH) algorithm to calculate an obstacle-free direction and yaw for collision-free flight, updating the lookahead point computed by the waypoint follower. In this particular figure, the original path was a straight line.
However, we have placed some cuboid obstacles in its path. The obstacle avoidance block, using the vector field histogram, is able to provide the right steering command whenever it sees an obstacle, and it takes LiDAR sensor data as input.
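The Obstacle Avoidance block in the UAV example wraps a 3D variant of VFH; the simplest way to see the same idea in code is the 2D controllerVFH object from Navigation Toolbox. A minimal sketch with a synthetic scan:

```matlab
% Minimal sketch (synthetic scan, 2D variant of the idea): the VFH
% controller picks an obstacle-free steering direction given range
% readings and a target direction. Requires Navigation Toolbox.
angles = linspace(-pi/2, pi/2, 181);        % scan angles [rad]
ranges = 10*ones(1, 181);                   % open space at 10 m ...
ranges(80:100) = 1.5;                       % ... except an obstacle ahead

scan = lidarScan(ranges, angles);

vfh = controllerVFH;
vfh.UseLidarScan = true;
vfh.DistanceLimits = [0.2 8];               % ignore returns outside this
vfh.RobotRadius = 0.5;
vfh.SafetyDistance = 0.5;

targetDir = 0;                              % want to fly straight ahead
steerDir  = vfh(scan, targetDir);           % obstacle-free direction [rad]

show(vfh)                                   % histogram and chosen direction
```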
All right. Next, we move on to designing the control algorithm. Stepping into control: model-based design is, of course, a strong approach for developing control systems. Recently, there has been an increase in research on and application of advanced controls, such as model predictive control, reinforcement learning, and trajectory planning and tracking.
These algorithms benefit from the model-based design process by being able to incorporate the vehicle model in the simulations. Such advanced controls are applied for optimization: generating the shortest path or a safe, collision-free trajectory, minimizing energy consumption, and other objectives.
The links for these particular examples are given here. However, let's move on to something simpler, a controller we are all familiar with: the PID controller. I hope the audience is familiar with the PID controller.
Now, let's see how Control System Toolbox helps us tune these gains and achieve a stable flight path. Let's get into this particular example. Here, we have to control the roll, pitch, and yaw of a helicopter.
OK. Currently, as you can see, we are opening the Control System Tuner and selecting the blocks we want to tune: these three PI blocks.
Here, we select the step signals, which we have given as tracking references, as the inputs. Similarly, for the outputs, we select all three outputs. With these inputs and outputs, the system is currently unstable, as you can see from the figure.
Next, we can also define margins, such as phase margin and gain margin, here at the control input, and apply them. We can add a margin-goal constraint on the outputs as well. Then click Tune.
Once we tune it, we get stable roll, pitch, and yaw responses. Earlier, when we ran it, the roll, pitch, and yaw motions were all unstable; now the vehicle is able to follow the given commands.
Now, the same approach can be applied from the MATLAB command line, tuning with systune and expressing the performance objectives through TuningGoal options. Here, we can see the improved tracking response for airspeed and altitude with the tuned gains. Further, with the optimized gains, the vehicle performs the entire mission about 30 seconds faster while maintaining stability.
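For reference, here is a minimal sketch of that command-line workflow; the model, block, and signal names are placeholders rather than the shipped example, and it needs Simulink Control Design:

```matlab
% Minimal sketch (hypothetical model/block/signal names): tune the PI
% blocks of an open Simulink model programmatically with systune.
st0 = slTuner('vtolModel', {'PI_roll', 'PI_pitch', 'PI_yaw'});

% Analysis points for the tracking requirement (signals in the model)
addPoint(st0, {'attitudeCmd', 'attitudeMeas'});

% Goals: track commands with ~2 s response, keep classic stability margins
trackGoal  = TuningGoal.Tracking('attitudeCmd', 'attitudeMeas', 2);
marginGoal = TuningGoal.Margins('attitudeCmd', 6, 45);  % 6 dB, 45 deg

st = systune(st0, [trackGoal, marginGoal]);

% Push the tuned gains back into the Simulink model
writeBlockValue(st);
```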
All right. So with this, we have completed our perception, planning, and control algorithms. Next, we move on to scenario simulation with Unreal Engine. When we talk about photorealistic simulation, what do we need? Three main things. First, a virtual asset, which is our vehicle: it can be a fixed-wing aircraft, a quadcopter, or, in our case, a tiltrotor.
Next, we need a scene, an environment where our vehicle will be moving around. And then the way this virtual asset communicates with the scene is through sensor models, such as camera, LiDAR, inertial navigation sensors, and so on.
We provide Simulink blocks to enable this. Let's look at how. We provide basic virtual assets, such as multicopter and fixed-wing aerial vehicles, and we also provide some scenes.
Here, you can see we have a city block scene, runway scenes, urban air scenes, and so on. We also provide generic sensors, such as LiDAR, camera, and inertial sensors. With these, you can co-simulate with Unreal Engine: you have a visual camera and a depth camera, and you can also perform semantic segmentation while co-simulating your autonomous vehicle with Unreal Engine.
All right. So let me give you a demo of the vertical takeoff and landing example. Here, we don't already have an asset, the asset being the VTOL example. So what you can do in this case is create a mesh file, generate a .FBX file from the mesh, and import it into Unreal Engine with the MATLAB plugins.
Let me show you how that can be done. You can see here that I have used the downloaded support files. If I delve into this particular model, in the scene configuration, we go into the content, open the particular library, and import it.
Here, we select the UAV skeleton from the bottom and import it. Once we have imported it, we get all these virtual assets for the VTOL in our content browser. If we open this, we add the skeleton over here. We can also copy its reference and communicate it to our host model, which sits in MATLAB.
We can also select the scene over here; we are selecting US City Block. With the scene in place, we click Play. Once we start playing, our VTOL, which was not already available in the Unreal Engine libraries, can be simulated.
The flight path goes by a little fast here, but you can see how the complete mission is performed using Unreal Engine even though we didn't have the asset. So what did we do? We created an FBX file, imported that mesh into Unreal Engine, hooked it up with the MATLAB plugin, and performed the simulation.
With this, we have completed the modeling, developed the autonomous algorithms, and done the scenario simulation. Towards the end, you may also want to close the loop by validating with hardware-in-the-loop (HIL) simulation, which comes after developing the algorithms: you can test all your autonomous algorithms in HIL simulation.
HIL testing replaces the plant with a virtual simulation so you can design controls quickly, safely, and economically. In general, if you are developing a navigation system for an aircraft, you cannot directly go and check it on the actual aircraft. So what can you do? You can deploy the plant model onto a Speedgoat real-time target and the controller model onto a target controller, and run an entire hardware-in-the-loop simulation to verify your algorithms.
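As a rough sketch of that deployment step (the target and model names here are hypothetical, and the model must already be configured for the Speedgoat target), the Simulink Real-Time API looks like this:

```matlab
% Minimal sketch (hypothetical model/target names): deploy a plant model
% to a Speedgoat real-time target with Simulink Real-Time and run it.
tg = slrealtime('TargetPC1');      % handle to the named real-time target
connect(tg);

slbuild('plantModel');             % generate and build real-time code
load(tg, 'plantModel');            % download the application to the target

start(tg);                         % run the HIL simulation
% ... exercise the controller against the real-time plant ...
stop(tg);
```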
With this, I would like to conclude the session. What we learned here: we understood the model-based design approach to developing autonomous algorithms; we saw the different applications and functionality that ease the design process; and we saw how to co-simulate autonomous algorithms in a photorealistic scenario. All right.
You can go to these resources: the documentation pages, reference examples, webinars by experts, classroom training courses, and various success stories. You can also look through these web links, which cover the obstacle avoidance example, the VTOL example where you tune the gains and perform the mission, SLAM-based navigation, and so on.
Good morning, good afternoon, and good evening to everybody listening to this webinar. Thanks, Richa. Thanks, Prashant, for that lovely introduction. I'm Tittu George from the Automatic Flight Control group at the Rotary Wing Research and Design Centre, HAL. Without any further delay, I'll jump into the presentation.
I'm here to talk about how we went about the design of an automatic flight control system for helicopters. Helicopters are inherently very unstable machines compared to fixed-wing aircraft. The other major challenge is this: a fixed-wing aircraft has wings for lift, propellers for propulsion, and control surfaces like ailerons, rudder, and elevator. A helicopter has only the rotor for all of these. This makes the helicopter a highly coupled system, and a nightmare for a control engineer.
This results in poor controllability, poor maneuverability, and unacceptable gust response, which increases the pilot's workload considerably, reducing mission effectiveness. That is exactly why we have an automatic flight control system.
Typically, these functions are categorized into three: the stability augmentation system, which imparts stability to the system and also enables gust rejection; the control augmentation system, which improves the controllability and maneuverability of the system; and the autopilots, also called upper modes or extra modes, which help pilots especially on ferry flights, with modes like airspeed hold, altitude hold, et cetera.
From these, we can see the number of functions and subfunctions an automatic flight control system has to perform, and that alone makes it a very complex system. On top of that, these systems are typically DAL A or DAL B designs, which increases the challenges of the project. In the subsequent slides, I will take you through how we went about the design of two such systems: one already flying, and another in the process of development.
This is a typical V diagram; we are all quite conversant with it. I have purposely omitted the ARP portions, ARP4761 and ARP4754, from this slide. We start with the system specification, requirements capture, detailed design, model development, and unit-level testing, which is an iterative process, followed by integration testing.
I'll start with the detailed design right away. We have used MATLAB and Simulink extensively here. We have also used the linearization manager to bring out trim points and the state-space representations. Let me go into slightly more detail on this.
We have used the state-space approach. Thanks to MathWorks, we had enough built-in functions to easily carry out our studies and designs in state space. We have used pole placement techniques extensively. Axis-wise tuning: as you can see in the slide, there are three axes, the pitch axis, the roll axis, and the yaw axis. In Simulink, we were able to enable or disable each axis individually and design each one easily.
Time response analysis, again, was much easier to do with the tools. We had Bode analysis and the Nichols plot. Basically, what I want to bring out here is that a plethora of tools was available within the toolboxes, which helped us design much faster.
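To make that concrete, here is a minimal pole placement sketch with illustrative numbers, not the actual flight control law; it needs Control System Toolbox:

```matlab
% Minimal sketch (illustrative numbers, not the actual control law):
% pole placement and frequency-domain checks on a linearized axis model.
A = [-0.5  1.0;                    % linearized axis dynamics x' = Ax + Bu
     -2.0 -0.3];
B = [0; 1.5];
C = [1 0];
D = 0;

p = [-2 + 2i, -2 - 2i];            % desired closed-loop poles
K = place(A, B, p);                % state-feedback gain

sysCL = ss(A - B*K, B, C, D);      % closed-loop system
step(sysCL)                        % time response analysis

sysOL = ss(A, B, K, 0);            % loop transfer, broken at the input
margin(sysOL)                      % gain/phase margins (Bode)
nichols(sysOL)                     % Nichols chart
```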
Once the detailed design was over, we entered model development. I would distinguish model development into two phases: development and verification. The model is based on Simulink. We had a set of user-defined libraries designed in MATLAB, which we imported using user-defined functions within Simulink. Once that was done, thanks to tools like Model Advisor, we were easily able to check our conformance to the applicable standards.
We also used Simulink Design Verifier extensively. It helped tremendously in figuring out dead logic, divide-by-zero, and integer-overflow situations, so we were able to iron out the model-related issues at the initial phase itself, before going to the testing phase.
These are small snippets of the blocks we designed. The one at the bottom left is our fully integrated model. You can see we could easily demarcate and segregate these models into different sections. Going ahead.
Now, I'll come to unit-level testing. We used Simulink Test here. The advantage of Simulink Test is that it integrates with the Coverage Analyzer, so we could get the MC/DC coverage required as part of the DO-178 standard right away, in the model itself. We could also find out which cases were missing coverage. And the report generator helped produce the reports required by the certification agencies.
This is a small snapshot of how we do it in our environment. The portion I'm highlighting at the top is selected in the model phase; the bottom portion is selected in the coverage settings. So we were able to get the results out of it. We also had baseline criteria enabled, which let us pass in our expected results and get verdicts immediately.
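For reference, a command-line sketch of that unit-test run; the file names are hypothetical, coverage itself is configured inside the test file settings, and it needs Simulink Test and Simulink Coverage:

```matlab
% Minimal sketch (hypothetical file names): run a Simulink Test file from
% the MATLAB command line and export a report. Coverage collection is
% enabled in the test file's coverage settings, as described above.
tf = sltest.testmanager.load('unit_level_tests.mldatx');

results = sltest.testmanager.run;  % run all loaded test files

% Generate the report submitted alongside certification evidence
sltest.testmanager.report(results, 'unit_test_report.pdf');
```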
Moving on to integration testing: how did we do it? Again, we used Simulink Test, and again the automatic report generation feature. Since this was integration testing, we were not much interested in coverage, because in our particular development scenario the plan was to collect the coverage analysis at the unit level.
This is our integration testing environment, where you can see the controller on the right. We developed the whole sensor model in Simulink. The helicopter model was legacy software that we imported into Simulink. And we have the actuator model, again developed in Simulink. The whole setup became our fully integrated model.
This was the test environment through which we tested our integrated model. With the models selected, we could run multiple test cases together in one go and get all the results immediately. This is how our results looked; it was very easy to debug and make any corrections required.
Now, I think this is a strong suit of MathWorks: the code auto-generation feature, which we have explored a lot in our recent projects. In our old projects, we generated code using Simulink Coder, and the generated C code was tested using third-party software. Recently, in our upgraded versions, we have moved this to Simulink Test, so we can test the model as well as the code immediately. That has saved us a lot of time.
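As a rough sketch of that model-versus-code comparison (the model name is hypothetical, top-model SIL needs Embedded Coder, and I am assuming the standard SimulationMode parameter values):

```matlab
% Minimal sketch (hypothetical model name): run the same model in normal
% mode and then in Software-in-the-Loop (SIL) mode, so model results and
% generated-code results can be compared back to back.
mdl = 'controllerModel';
slbuild(mdl);                                   % generate and build code

% Run once in normal (model) mode
outModel = sim(mdl, 'SimulationMode', 'normal');

% Run again executing the generated code (top-model SIL)
set_param(mdl, 'SimulationMode', 'software-in-the-loop (sil)');
outSIL = sim(mdl);

% Compare the logged outputs of the two runs for equivalence
```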
The Code Inspector helped us confirm that traceability with the model was established easily. This reduced our time in code walkthroughs, so it is saving us a lot of time.
Again, the Coverage Analyzer and the Report Generator were used. We also used Polyspace Bug Finder to find any runtime issues. The Code Inspector, Coverage Analyzer, Report Generator, and Simulink Test are qualifiable tools, and using the qualification kit, we qualified those tools in our environment. These results were submitted to the certification agency, and we were able to go ahead.
This is one such environment we made to test the software. I think we all know this, but to reiterate from my personal experience, and how we see it as a team: development in MATLAB proved to be highly reusable, especially since we deal with multiple platforms. The planned designs differ, but we were able to reuse 70% to 80% of the models we had already built for another platform.
In terms of time, as I said, reusability helped reduce development time considerably. Regarding verification time, since we were able to test both the code and the model together, we saved close to 50%. In terms of compliance, it was easy: we had Model Advisor, we could generate the Model Advisor reports, and those reports were again submitted to the certification agency. Regarding ease of adaptation, we have two different teams doing this, and we were easily able to adapt from the old workflow to the new without much difficulty.
Now, where do we see ourselves ahead? As you can see in the V model, we have not used MathWorks toolboxes across the entire V. In the future, we are planning to cover the requirements phase as well. Using Design Verifier, we can do formal, requirements-based testing; we are planning that too. And maybe in the long run, hardware-in-the-loop testing using MATLAB features is what we envisage for ourselves as a control system group. With that, I would like to conclude my talk. I would like to thank the MathWorks team for giving me the opportunity to speak. Thank you.