Design, Analyze and Implement Radar Systems in MATLAB
From the series: Signal Processing and Wireless – Webinar Series
Overview
Radar system design, simulation, and analysis are complex because the design space spans the digital, analog, and RF domains. These domains extend across the complete signal chain, from the antenna array to radar signal processing algorithms, to data processing and control. The resulting system-level complexity drives the need for modeling and simulation at all stages of the development cycle. In this webinar, we illustrate techniques for designing and analyzing radar systems. You’ll see how you can perform radar system design and analysis tasks such as waveform design, target detection, beamforming, and space-time adaptive processing.
Highlights
- Multi-function closed-loop simulation between a Radar model and a scheduler
- Sensor fusion & tracking algorithms and scenario generation
- Space-Time Adaptive Processing, RF Propagation Analysis, and Terrain Visualization
- Ground truth perturbation and sensor configurations to increase testing robustness
- Testing tracking systems with Monte Carlo simulations
- Automatically generate C and HDL code for rapid prototyping on Xilinx Zynq, Intel, and Microsemi SoC FPGAs
About the Presenter
Abhishek Tiwari is a senior application engineer at MathWorks India specializing in the design, analysis, and implementation of signal processing, communications, and data processing applications. He works closely with customers across domains to help them use MATLAB® and Simulink® in their workflows. He has over nine years of industrial experience in the design and development of hardware and software applications in the radar domain. He has been a part of the complete lifecycle of projects pertaining to aerospace and defense applications. Prior to joining MathWorks, he worked for Bharat Electronics Limited (BEL) and Electronics and Radar Development Establishment (LRDE) as a deputy manager. He holds a bachelor’s degree in electronics and telecommunication from National Institute of Technology, Raipur.
Recorded: 21 Nov 2020
My name is Abhishek Tiwari. I work at MathWorks as a senior application engineer, and my core focus is helping customers adopt our solutions in the radar and sensor fusion domains. You can see how multi-function systems have evolved: with the advent of phased array systems, search-and-track scheduling became a priority when those systems were being designed.
Now, with the environment becoming more and more dynamic, and with electronic countermeasures, counter-countermeasures, and communications all entering the domain, you have multifunction search-and-track approaches working together on a unified platform, and the radar must be defined to be agile enough to handle such situations. Going forward, the situations become even more dynamic, with sensors deployed across multiple areas, and you visualize the complete coverage by performing sensor fusion.
Subsequently, the technology is reaching the point where whatever the sensors see is also fed back to the sensors, so a closed-loop feedback mechanism is in place, which is the need of the hour. So we can see how these systems are evolving, how easily they can be simulated and made more dynamic and robust with regard to environmental conditions, and how to build an environment that helps you mitigate such risks.
If you consider radar system design, it spans multiple domains. It has the antenna and RF front end. It takes environmental factors into consideration. Signal processing and data processing are themselves two different verticals, each challenging to manage within the project timeline. And resource management and control adds a further layer of control mechanisms on top.
So, from a systems perspective, such a system demands multi-domain expertise and is considered a very complex system. To start, I would like to highlight one example and then take you through its journey: how the different subsystems are simulated, how the different components are modeled, and how everything is integrated to build a multi-function search-and-track system like this one.
In this example, we define a search grid covering minus 30 to 30 degrees in azimuth and 0 to 20 degrees in elevation, and we transmit beams while assigning different priorities to search and track. We then modulate between the search and track priorities to decide what takes precedence: whether track should be placed above search, or search should be given more coverage so that you acquire fresh new targets.
So what do you need to model in such systems? You have the front-end antenna system. You deal with a lot of propagation challenges. You deal with agility when you define your waveforms, and you take signal processing, data processing, target modeling, and resource management and control into consideration. We will go through each of these, bit by bit, and see how the complete design is put together and how the resource management logic behind it operates.
To start, let me begin with the antenna array and RF characteristics at the front end, and how environmental factors come into the picture to give you a better signal-to-noise ratio or a better coverage analysis. We have an app called the Sensor Array Analyzer, which can be used to define an element and an array geometry, be it linear, planar, or conformal.
You can analyze beam patterns, apply weights, and specify a steering direction. You can analyze the radiation patterns in different views: the 3D pattern, azimuth cuts, elevation cuts, and so on. In the recent release, we have also added an enhancement to incorporate subarray features.
So, with the new release, you can also configure subarrays from a particular array geometry and take those subarrays into account to see their effect on the pattern. One thing to note: the pattern generated here uses the principle of pattern superposition, computing a single element pattern and multiplying it by an array factor. If you want to increase the fidelity and work at the solver level, where the electromagnetics come into the picture, you would do a full surface current and charge analysis.
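As a minimal sketch of the same pattern-superposition workflow outside the app, assuming Phased Array System Toolbox and illustrative parameter values:

```matlab
% Define a 4-by-4 planar array of ideal elements and plot its pattern.
% The pattern is computed by pattern superposition: element pattern
% times array factor, as described above.
fc = 10e9;                                   % operating frequency, 10 GHz
lambda = physconst('LightSpeed')/fc;
array = phased.URA('Size',[4 4],'ElementSpacing',lambda/2);
pattern(array, fc, -180:180, 0, ...          % azimuth cut at 0 deg elevation
        'PropagationSpeed', physconst('LightSpeed'), ...
        'Type', 'directivity');
```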
For that you can use the Antenna Designer and Antenna Array Designer apps with Antenna Toolbox, which use a method-of-moments solver to give you an accurate estimate of the 3D pattern, in both azimuth and elevation, along with the embedded element pattern effects. You can define an array or an element, choosing from a catalog of 80 to 90 plus elements, and we keep adding more, from large parabolic reflectors to antenna structures that are electrically very large.
Those are being incorporated into the product, and this gives you the fidelity to do an analysis with the full electromagnetic solution involved, and to measure antenna performance with all those visualizations in one tiled format. So, depending on the fidelity at which you want to work and analyze how your antenna will behave, you can rely on this app. We are also working toward putting more and more solvers into the loop; there is now a hybrid method-of-moments solver.
So when you have a large structure that can be imported into the product as a CAD file, and you want to model an antenna mounted on the front of it, that is also possible now: the complete geometry of the platform is simulated using physical optics, while the antenna mounted on it is analyzed using the method-of-moments solver. All of that is taken into consideration, and you can generate a pattern that is more realistic, accounting for the metal-body effects of the large aerodynamic structure.
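As a small, hedged illustration of the Antenna Toolbox full-wave workflow (the frequency and element choice here are illustrative):

```matlab
% Design a microstrip patch for 5.8 GHz and inspect its full-wave results.
% Antenna Toolbox solves this with the method of moments, so the ground
% plane and coupling effects are part of the solution.
fc = 5.8e9;
p = design(patchMicrostrip, fc);     % auto-size the patch for fc
figure; pattern(p, fc);              % 3D radiation pattern from the MoM solver
figure; impedance(p, linspace(5.4e9, 6.2e9, 41));  % impedance vs frequency
```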
We added SADEA optimization in our previous release, and now we have added a surrogate optimizer, which is part of Global Optimization Toolbox, integrated inside the app. So if you have an antenna design and you are working toward maximizing gain, maximizing bandwidth, improving front-to-back lobe ratios, and so on, by putting constraints on the various parameters, you can now do that inside the app.
It automatically builds the surrogate models for you and then provides an antenna design optimization that is more likely to achieve those different objectives. This is now integrated inside the app, so you don't have to leave the app to run the optimizations, and you can utilize parallel computing as well.
You can run it on multiple cores and get a faster evaluation of the antenna design. With this, you can design an array, apply tapering and thinning, analyze subarrays, and also synthesize a particular pattern by running the optimization routines.
You can model imperfections and failures in antenna arrays. You can import antenna patterns if you are working in third-party software, drop them into the antenna framework, and run analyses on them. And when you design an antenna with the solvers, the effects of mutual coupling are automatically incorporated.
So these are the considerations when you are dealing with antenna and array characteristics. The best place to start for transmitter and receiver chain modeling is the RF Budget Analyzer app, which comes with RF Toolbox. With this app, you can perform RF budget analysis in terms of gain, noise figure, and IP3 values; it is better than most custom-made spreadsheet approaches, and you can import Touchstone files and look at the cascaded effects when you are modeling the transmitter chain.
Again, with this release we have made a considerable number of improvements to this app: you can design transceiver chains with docked results and figures alongside the cascaded analysis. We have also added S-parameter composite blocks, and there are new element galleries with new transmission lines. You can now also do two-tone analysis for nonlinearities and IP2 effects.
Along with the two-tone analysis of your transceiver models, you can now also compute the Friis link budget: given the transmitter and receiver antenna gains, what will the received power be compared to the transmitted power? This is something you can analyze very quickly, based on the chain you want to model, inside the RF Budget Analyzer app.
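A minimal command-line sketch of the same budget analysis, assuming RF Toolbox and illustrative stage values:

```matlab
% Two-stage receiver front end: LNA followed by a lossy filter stage.
% rfbudget cascades gain, noise figure, and OIP3 down the chain.
lna  = amplifier('Name','LNA','Gain',15,'NF',1.5,'OIP3',26);
filt = rfelement('Name','Filter','Gain',-2,'NF',2);
b = rfbudget([lna filt], ...   % element chain
             10e9, ...         % input frequency, 10 GHz
             -30, ...          % available input power, dBm
             20e6);            % signal bandwidth, Hz
disp(b)                        % cascaded gain, NF, OIP3, SNR per stage
% show(b) opens this chain in the RF Budget Analyzer app
```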
We have also now added, with Antenna Toolbox, the ability to simulate the effects of realistic transmitting and receiving antennas on signals, so you get the interaction with the RF system. There is a block in Simulink called the Antenna block that captures this interaction with the RF system more realistically, provides you realistic returns, and lets you model the receiver chain with the signal processing aspects taken into account.
This is something we added very recently to the product line. Moving on to propagation: you have heard that coverage is a very important criterion when you are mounting any radar in the field at a given geographical location. You want to analyze the coverage, and coverage is not just static in nature.
You analyze it based on how your radar beam behaves and how you steer the antennas, so you can analyze the dynamic coverage based on your antenna characteristics. For that, we have Site Viewer, where you can incorporate open terrain data from the United States Geological Survey, which provides global mapping at up to 90-meter resolution, free to download and incorporate.
So you can see here, the antenna is mounted on top of a hill, and you can analyze the coverage and see the contour plot: depending on the signal intensity you want to receive, those contours change. All of this is now feasible. We have analytical models, like the Longley-Rice model and the Terrain Integrated Rough Earth Model (TIREM), which you choose depending on the operating frequency: Longley-Rice covers 20 megahertz to 20 gigahertz, while TIREM covers 1 megahertz to 40 gigahertz.
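A minimal sketch of this terrain-based coverage workflow; the site location, frequency, and power are illustrative:

```matlab
% Place a transmitter site on terrain and plot coverage contours with
% the Longley-Rice propagation model.
viewer = siteviewer('Terrain','gmted2010');          % built-in global terrain
tx = txsite('Latitude',39.95,'Longitude',-105.2, ... % example hilltop site
            'AntennaHeight',10, ...
            'TransmitterFrequency',3e9, ...
            'TransmitterPower',100);
pm = propagationModel('longley-rice');
coverage(tx, pm, 'SignalStrengths', -120:5:-60);     % contours on the map
```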
TIREM comes from Alion Science, so you need to license it from Alion Science, and we provide an interface where you can incorporate it and do your coverage analysis. Ray tracing was long overdue from our side, and it is here now: you can import STL files for an indoor mapping environment, and if you have multiple transmitter and receiver sites, you can do urban ray tracing as well.
So ray tracing has been added, and it has been continuously evolving over the previous releases, considering reflection, refraction, diffraction, and related parameters, along with the building materials you want to take into account. Moving further into the signal processing aspects, you can model a complete chain, starting from the waveform design, the transmitter, and the receiver, working at the parametric level or adding detailed RF models. You can model the complete array characteristics, or design the array in Antenna Toolbox and bring it in at the realistic pattern level.
Going forward, I will mostly cover the target, scene, and interference properties we can model to give you high-fidelity IQ detections that you can use in the signal processing. For example, if I want to generate dynamic waveforms, such as linear FM, rectangular, or stepped FM waveforms, for validating my radar performance, it is easy to choose from a configurable set using the Radar Waveform Analyzer app and to visualize them in the time and frequency domains.
You can export the waveform and generate code from it. With the recent release, we have also added pulse compression integration to the app, so you also get control over the matched filter coefficients and can take those directly as an output, along with how your waveform will behave. That is a very recent addition.
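The equivalent at the command line, as a minimal sketch with illustrative parameters:

```matlab
% Linear FM pulse and its matched-filter (pulse compression) coefficients.
wav = phased.LinearFMWaveform('SampleRate',1e6, ...
        'PulseWidth',100e-6,'SweepBandwidth',200e3,'PRF',1e3);
x = wav();                        % one PRI of IQ samples
coeff = getMatchedFilter(wav);    % matched-filter coefficients for this pulse
mf = phased.MatchedFilter('Coefficients',coeff);
y = mf(x);                        % compressed pulse, ready for detection
```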
Consider a complex application on an airborne platform, where a radar mounted on a moving platform looks down at the terrain; STAP processing becomes a very practical challenge there. We incorporate techniques such as DPCA and ADPCA, displaced phase center antenna cancellation and its adaptive variant, in the signal processing mechanisms to see how you can recover the target under the effects of interference and clutter.
What you see on the right is a Simulink canvas where we have designed a complete end-to-end system model: the waveform generation, the transmitter, the narrowband transmit and receive modules, and the interference, with propagation taken into consideration. We also add constant-gamma clutter, which you can think of here as a continuous distributed return added to the IQ source, and then it is processed with STAP visualization.
So we have chosen an array and defined the different components: targets, interference, and clutter. This is where we do most of the cancellation in the STAP processing, and constitute the channels, the environment, the models, the targets, and the clutter into a scene definition.
This is where you can add a lot of fidelity to your system, so that the IQ detection data you generate with this chain realistically adds value and can be used as a test vector to validate your STAP processing algorithms. In this example, we are considering a radar at 1,000 meters above the surface, looking at the ground, where a continuous clutter return is injected in the form of ridges; you can see the clutter here. Jammer effects, in the form of white noise, are spread around this angle.
So this is the normalized Doppler frequency versus angle response. I have defined my target at around 1.7 kilometers, and I cannot find it because the response is completely dominated by interference and clutter. By computing ADPCA weights, I place a null in the jammer direction, at the 60-degree angle, and minimize its effect.
And subsequently, I am able to see where my target is. These are some of the mechanisms by which you can make the algorithm adaptive, with reduced weight computation, using high-fidelity IQ data generation in Simulink as a model-based platform. So you have control over the waveforms, and you can generate different types of waveforms; a command-line sketch of the cancellation step follows.
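A hedged sketch of the ADPCA step on a radar data cube; the array geometry, PRF, and cell under test are illustrative, and random noise stands in for the simulated IQ cube:

```matlab
% ADPCA pulse cancelling on a (range gates x elements x pulses) data cube.
array = phased.ULA('NumElements',6,'ElementSpacing',0.05);
canceller = phased.ADPCACanceller('SensorArray',array, ...
    'PropagationSpeed',physconst('LightSpeed'), ...
    'OperatingFrequency',3e9,'PRF',5e3, ...
    'DirectionSource','Property','Direction',[0;0], ...  % look direction az;el
    'DopplerSource','Property','Doppler',0);             % target Doppler, Hz
X = randn(200,6,10) + 1i*randn(200,6,10);  % stand-in for the simulated IQ cube
y = canceller(X, 100);                     % cell under test at range gate 100
```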
Transmitter and receiver models can act as parametric representations, and depending on the fidelity you want, you can choose from the toolbox and block set and add a layer of nonlinearity. Analytic channel models can be used with the geometry you define for propagating your signal, along with the environmental effects. On the signal processing side, we have many algorithms out of the box for beamforming, matched filtering, detection, CFAR, and STAP processing.
Coming to the target representation, the challenge is mostly in how we define the point scatterers. Here you have a man riding a bicycle; I can treat him as multiple point scatterers to generate the return, which is very useful when you are doing micro-Doppler analysis. Or you can rely on 2D and 3D attributed shapes, like a sphere or a cone, or the body of a human walking on a platform. With recent releases, we are also investing a lot in bringing in solvers, like hybrid MoM and EM solvers and the physical optics solver, to give you a realistic assessed return for the target you want to model.
So you can actually see realistic returns depending on the aspect angles at which you illuminate a particular target: what returns you get, and how those returns help you generate a profile image or realistic target returns for processing. We have directly added pedestrian and bicyclist target models, which can be simulated as multiple point scatterers or as attributed shapes, as in the sketch below.
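A minimal sketch of the multi-scatterer pedestrian model; the carrier, incident signal, and angles are illustrative:

```matlab
% Walking-pedestrian backscatter model: 16 body-segment scatterers whose
% motion produces the micro-Doppler signature.
ped = backscatterPedestrian('Height',1.8,'WalkingSpeed',1.4, ...
        'OperatingFrequency',24e9);
dt = 1e-3;
[pos,vel,ax] = move(ped, dt, 0);   % advance the gait model; heading 0 deg
x   = ones(64,16);                 % incident signal at each of the 16 scatterers
ang = repmat([180;0],1,16);        % incident direction (az;el) per scatterer
echo = reflect(ped, x, ang);       % superposed return from the whole body
```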
So you can simulate the returns from these models and then shape your signal processing based on how the signature comes out. And as I was saying, we have an example where you can import structures in STL format and excite them, depending on the radar frequency, with the incident wave and polarization you want, over a bandwidth or a sweep of frequencies, and generate such results.
So, sweeping frequency, azimuth, and elevation, you can generate patterns that can be analyzed to see what RCS values you get for a particular target. This came up in many of our customer interactions, and it is something we are adding a lot of fidelity to, considering which solver can best provide you analytical results.
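A hedged sketch of the STL-based RCS workflow with Antenna Toolbox; the file name is hypothetical:

```matlab
% Monostatic RCS of a CAD geometry imported from an STL file.
% For electrically large bodies, the physical-optics solver is the
% typical choice.
p = platform;
p.FileName = 'aircraft.stl';     % hypothetical CAD file of the target
p.Units = 'm';
rcs(p, 1e9, 'Azimuth', -180:2:180, 'Elevation', 0);  % plots the RCS cut
```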
Moving into the detection, tracking, and classification aspects, our focus has been on this question: if you are a data processing engineer working on algorithms, do you have a testbed and scenario framework where you can use scenario generation and the sensor models we provide to generate detections useful for validating such algorithms? These algorithms also integrate very well with the signal processing side, so if you want accurate IQ-level data to be fed into them, that is feasible too.
Metrics play a very important role when you want to do track-to-truth assignment or score-based logic, and the optimal subpattern assignment (OSPA) and generalized optimal subpattern assignment (GOSPA) metrics, which come in handy here, are now part of the product. So a complete ecosystem is coming into the picture for validating your data processing algorithms, whether you feed them high-fidelity IQ-derived detections, or use a testbed that is very quick in nature, approximating targets statistically and feeding you lower-fidelity detections for validation.
Again, in the past release, with R2020a, we launched the Tracking Scenario Designer app, which is very handy for defining platforms and targets: defining trajectories with waypoints, and setting position, orientation, time of arrival, ground speed, and various other parameters. You can either make modifications in a tabular format, changing values between successive waypoints, or choose the Auto option and the interpolants inside will choose those values for you.
We have also provided a sensor model that is low fidelity in nature. When I say low fidelity, I mean a statistical model that gives you the agility to add a radar that is mechanically rotating, electronically scanning, or doing an electronic raster scan over a particular azimuth and elevation sector. You can do that, generate detections, and export them from this framework, and they can then be used with a tracker module in the MATLAB workflow.
Then you can validate those results with the metrics. This is where a lot of your time is saved when you are generating test data to validate your tracker algorithms. Here is an example: this is the canvas, and I have placed one tower here.
I add a platform, and on this platform I create a trajectory by placing waypoints. You can also work in the x-y and z planes, so you can very easily and quickly assign an altitude to each of the waypoints along the path, and you can see in the tabular form how the changes are incorporated.
On the tower, I choose my radar and mount it on top. You can also control many of the sensor parameter values that define how your sensor behaves in the scene generation. Once you click Run, you can see that, depending on the characteristics you have given the sensor model, the target moves and detections are generated, which can be used as the input for developing tracker algorithms; the equivalent in code is sketched below.
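The same scenario as a minimal programmatic sketch; the positions, rates, and tower height are illustrative:

```matlab
% Scenario: one tower-mounted radar watching one moving target.
scene = trackingScenario('UpdateRate',10);              % 10 Hz simulation
tgt = platform(scene);
tgt.Trajectory = waypointTrajectory( ...
    [0 5e3 -3e3; 6e3 4e3 -3e3], [0 60]);                % NED waypoints, m
tower = platform(scene);
tower.Trajectory = kinematicTrajectory('Position',[0 0 0],'Velocity',[0 0 0]);
tower.Sensors = {monostaticRadarSensor(1,'UpdateRate',10, ...
    'MountingLocation',[0 0 -20], ...                   % 20 m up (z down in NED)
    'HasElevation',true)};
while advance(scene)
    dets = detect(scene);   % cell array of objectDetection objects per step
end
```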
This is a very simple illustration with just one scenario, but it can be scaled to multi-sensor, multi-target, multi-platform scenarios, depending on the use case you are trying to build, whether you are working on a centralized tracker or a track-level fusion architecture. These are some of the sensor models directly available to you for generating such cases.
For inertial sensing, we have altimeter, GPS, IMU, and INS models, so you can synthesize, against the ground truth, how your INS or your IMUs will behave. And there are many sensor models for generating detections to test your sensor fusion algorithms, be it an infrared sensor, radar, lidar, or a passive sensor such as sonar or an ESM sensor. That is where the total workflow lies.
You can stay in the MATLAB environment, generate a scenario, attach sensor models, generate detections in the objectDetection format, and feed them to the different trackers. I will cover in my subsequent slides the various tracker modules we now support, which generate the objectTrack structure. You can play around with the various tracking filters we have directly out of the box, be it the particle filter, the Kalman filter, the extended Kalman filter, or the unscented Kalman filter.
You can directly use the motion model System objects we provide, or use your own custom motion models, which integrate very well with this workflow. One more important thing I want to highlight: not only can simulation data be used in this workflow, you can also use recorded sensor data. If you have time-series data that can be wrapped in the objectDetection format, the kinematics will be maintained inside it.
And that can be used as the input to these trackers. As I mentioned, these are the various point object trackers available directly out of the box: global nearest neighbor, joint probabilistic data association, track-oriented and hypothesis-oriented multiple hypothesis trackers, and the probability hypothesis density tracker; a short usage sketch follows.
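A minimal sketch of driving one of these trackers with objectDetection inputs; the measurement and noise values are illustrative:

```matlab
% Global nearest neighbor tracker fed with one position detection.
tracker = trackerGNN('FilterInitializationFcn',@initcvekf, ...
                     'AssignmentThreshold',50);
det = objectDetection(0, [5e3; 4e3; -3e3], ...   % time, [x;y;z] measurement
                      'MeasurementNoise', 100*eye(3));
tracks = tracker({det}, 0);                      % objectTrack array at t = 0
```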
When you are dealing with a case like an extended marine object observed by a very high-resolution radar, which returns multiple detections per resolution cell, then either you face a clustering challenge, or you rely on a tracker that handles those extended objects directly. Such trackers track not only the kinematics, but also the shape, size, and orientation of the object.
Some such trackers are available, like the gamma Gaussian inverse-Wishart probability hypothesis density (GGIW-PHD) filter, which is used mostly in marine applications for tracking very large extended objects. Here you approximate the object as a rectangular or an ellipsoidal body, and the detections around it are clustered so that you get a unified track ID. These are some of the recent developments in our product line.
You can see here that we now also support earth-centered scenarios, where this type of visualization can be used. The white line shows the actual ground truth, and depending on how I fuse the three radars placed here together with the real ADS-B data, I can see how my accuracy improves. To show this: if I am generating ADS-B detection data every second, and my radars are generating data at their own rates, then against the ground truth defined in dotted blue, with radar only you can see there is a lot of error in estimating the altitude.
So if you have other sensors and want to bring in their value, your sensor fusion output can be improved and brought closer to the ground truth as you define it. These are some of the new features: fusing multiple radars, radar with other sensor data, or radar with ADS-B data is now very well integrated into the workflow. One more thing I want to highlight is applications where you want to synthesize scenarios with various false alarm rates, where otherwise you would not be able to see how many false tracks you get.
If you actually look, this is how you configure your sensor: by providing a high false alarm rate and setting HasFalseAlarms to true in the monostaticRadarSensor System object that generates the scene. It is very easy to generate such scenes, and that becomes an important scenario for testing and validating how the tracker performs.
The tracker used here is the GM-PHD, the Gaussian mixture probability hypothesis density tracker, for such scenes, and it can be validated with the OSPA and GOSPA metrics. The GOSPA metric also takes into account the missed targets and the false tracks across the dataset as your tracker runs. This is something we have recently added; a short sketch follows.
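A hedged sketch of the false-alarm configuration and the GOSPA metric; the rates are illustrative:

```matlab
% Sensor with false alarms enabled, plus a GOSPA metric object.
sensor = monostaticRadarSensor(1, ...
    'HasFalseAlarms', true, ...    % inject false detections into the scene
    'FalseAlarmRate', 1e-5);       % per-resolution-cell rate
gospa = trackGOSPAMetric;          % penalizes localization error,
                                   % missed targets, and false tracks
% per step: lgospa = gospa(tracks, truths); then aggregate over the run
```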
We now also have Monte Carlo simulation support, where multiple input scenarios can be run over various probability-of-false-alarm values to see how your tracker performs under each. With these metrics, you can easily validate either your own recorded datasets or datasets synthesized inside the MATLAB environment. So far I have shown most of the cases with regard to the detection-level fusion architecture.
But you now also have the option of a track-level fusion architecture. If your sensors directly generate track lists, and you want to build a centralized fuser that uses those tracks and provides centralized tracks, which can then feed some of the advanced control logic, you can build such track fuser algorithms now. It has its own benefits.
You get lower data bandwidth between the sensors and the central fuser, reduced processing requirements at the central node, and the sensors can provide their own optimized tracks. But it also comes with challenges: fusing at the detection level versus the track level involves trade-offs in accuracy, correlated noise, and rumor propagation. A minimal sketch of a central fuser follows.
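A minimal sketch of a central track-level fuser, assuming two local track sources; the indices and state values are illustrative:

```matlab
% Fuse track lists from two local trackers into central tracks.
fuser = trackFuser('FuserIndex',3,'MaxNumSources',2, ...
    'SourceConfigurations',{fuserSourceConfiguration(1), ...
                            fuserSourceConfiguration(2)});
localTrack = objectTrack('SourceIndex',1,'UpdateTime',0, ...
    'State',[5e3;10;4e3;-5;-3e3;0], ...   % constant-velocity [x;vx;y;vy;z;vz]
    'StateCovariance',eye(6));
centralTracks = fuser(localTrack, 0);     % central objectTrack list at t = 0
```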
On the classification aspects, we would be happy to share many videos with you. We ran a campaign last week on AI in wireless and radar, and we will be happy to share those videos. This is where artificial intelligence and deep learning come into the picture, and there are many examples of how classification and related aspects can be handled.
If you want to classify your targets based on RCS, we have an example that helps with that. Micro-Doppler signatures can be used to address challenges you face in the field, such as classifying a rotary-wing aircraft, like a helicopter, versus a fixed-wing one. We have also added AI-based techniques for the landing-approach case, where an air traffic control radar wants to visualize or detect anomalies across various landing approaches.
And if you are working on a SAR application, where you want not only to classify a particular image but also do object detection inside it, we have composite examples around these aspects and around how artificial intelligence can be applied to radar applications.
Moving forward to the scheduling and control aspects: you have seen how, depending on how you design your array and RF characteristics, with the antenna and RF propagation taken into consideration, you can generate high-fidelity IQ detections. Those detections can be passed straight to the tracker to generate track lists. But given the dynamic environment changes, and with the advent of phased arrays, you want to schedule each beam toward the particular target of concern, so that you are not throwing away the energy these resources consume.
So you need a job manager, or a job queue, that maintains those tasks for you. Very recently, we have been investing much more in this direction, bringing out scheduling and state events based on time-triggered situations, where you can switch waveforms, PRFs, and waveform IDs and indexes, adding more dynamism to your simulation. But here you can see the example we started with at the beginning: I am running a search dwell most of the time.
For the initial 120 dwells or so, there is a 10-millisecond search dwell duration, per the search beam grid we defined. I constantly put 100 percent, or 80 percent, of my resources into search until a track has been established. Once the track has been established, I place a confirmation beam to confirm the target then and there, and I handle multiple targets in the space depending on the priorities you have defined.
So I am using a priority-based scheduler in this case, but it can be scaled and expanded if you are looking for a deadline-based or time-based mechanism for triggering your waveforms from the resource manager. This gives you resource-management logic where you can work with a job-percentage-versus-time plot: initially, as I said, it starts from 100 percent, and at most 80 percent once tracking begins; all those resources are consumed by the search operation. A toy sketch of this priority-queue idea follows below.
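As a toy sketch of the priority-queue idea in plain MATLAB (all names and numbers here are illustrative, not a toolbox API):

```matlab
% Priority-based dwell scheduler: always serve the highest-priority job;
% search jobs re-queue themselves, and detections push track jobs.
jobq = struct('Type',"Search",'Priority',500,'Beam',[-30;0]);  % seed queue
for dwell = 1:120
    [~,idx] = max([jobq.Priority]);        % pick the highest-priority job
    job = jobq(idx);  jobq(idx) = [];
    % ... transmit job.Beam, collect IQ, and run detection here ...
    detected = rand < 0.05;                % stand-in for a real detection
    if detected                            % track beats search: higher priority
        jobq(end+1) = struct('Type',"Track",'Priority',900,'Beam',job.Beam);
    end
    jobq(end+1) = struct('Type',"Search",'Priority',500, ...
                         'Beam',[mod(job.Beam(1)+40,60)-30; 0]); % next grid cell
end
```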
Once you have been able to generate a track, 20 percent of the resources are allocated to it, and the 80-20 ratio is followed, per the priorities of search and track. Subsequently, once track one has been established and we want to shift priority back toward search, I again put more energy into acquiring new targets.
So that is where I am modulating between the resources, and this is completely flexible to changes in the job-queue architecture: whether you want to put priorities on beam pointing, change the waveforms, or change their PRI and PRF, the job queues can be managed accordingly. Separately, if you are building a tracker algorithm and want to stay inside MATLAB while accelerating performance, you can generate a MATLAB executable (MEX).
That executable can be called inside a MATLAB function itself, and it accelerates your simulation. If you are taking the validated algorithm out to an environment for real-time validation, you can use MATLAB Coder to generate C/C++ code and link it with your real-time IDE, validating those algorithms outside MATLAB. There are now many examples of taking tracker algorithms to a C/C++ platform or generating C code for them; a short sketch follows.
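A hedged sketch of both paths with MATLAB Coder; the entry-point function myTrackingAlgo and its arguments are hypothetical:

```matlab
% MEX for in-MATLAB acceleration of a tracking entry-point function.
codegen myTrackingAlgo -args {zeros(3,1), 0}     % builds myTrackingAlgo_mex
% Standalone C/C++ source for integration with a real-time IDE.
cfg = coder.config('lib');                       % static library target
codegen -config cfg myTrackingAlgo -args {zeros(3,1), 0}
```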
Similarly, we are also adding a lot in the direction of taking phased array beamforming algorithms to HDL platforms. I would encourage everyone to look at these examples in the recent releases. We have added an FPGA-based workflow for the monopulse technique, beamforming has been taken to FPGAs, and we are adding more and more to this list. This can be a very good starting point to see how we handle HDL-based deployment on these platforms.
From the signal acquisition perspective, if you have low-cost hardware, such as Demorad or other evaluation boards, you can stream that data directly into MATLAB radar signal processing algorithms and then take it wherever you want: generate C/C++ code from it, or follow a separate HDL-based workflow to move it into an HDL environment. Everything I have explained so far concerned one radar system design performing a multi-function search-and-track task; where we are heading is bringing complete digital engineering to the project lifecycle, so that it can be scaled from project to project.
We have a product called System Composer, where you can do this kind of architecture-level building. You have radar requirements, starting from whatever spreadsheet or document you have for making the system operational, and you can say these are the characteristics to which you want to design your radar. That scales into an architecture model, bringing the power systems into the picture, bringing in the different subsystem modules and their characteristics, and bringing in the console and command center and how you want them integrated.
Each of these acts as an individual subsystem development for you. So from the requirements, you have built an architecture integrated with all the component subsystems, modeling the power, the comms, and the command center in detail. Then, depending on how you want to dive into each individual subsystem, you translate this into a modeling and simulation environment for the phased array system, all in one continuous flow.
So your requirements can be translated into a multi-domain architecture. Within each domain you can make a detailed design, whether you are designing the comm system, the radar system, or the power control module, translating those into a model-based design framework. And then, as your project progresses, you decide it is time to increase the fidelity: time to add the RF in more detail, or add an RF transmitter and see what changes, depending on your antenna and array characteristics.
You can bring in that model, and as you add the various parameters and nonlinearities into the system, your architectural design becomes more and more robust. So this can be iterative, put into a framework where you think of the radar not just as an entity, but as a system architecture. There are multiple threads in what we have discussed today.
You can do all of these things on a MATLAB and Simulink based platform. Scenario generation is one aspect in which, I would say, we are very definitely investing a lot. You are already aware of the antenna array and system modeling aspects. Target and environment modeling very easily provides you the RF coverage analysis and SNR computations to be incorporated into your system.
One more area we are investing in heavily is AI in radar systems, around deep learning and machine learning. And subsequently, once the algorithms have been validated against your set of test cases and scenarios, you can scale those algorithms into deployment. Here is what we have as resources.
You can take a tutorial, follow the Quick Start guides, download white papers, and go through the resources to understand the radar processing domain. On the phased array aspects, we have Onramp training, and Simulink Onramp training is there too. For training and consulting services, you can always contact us.
We have dedicated training and consulting teams at MathWorks who can help you, sit with you to work out what stage of modeling and simulation or deployment you are at, and then handhold you to achieve your objectives.