The Roadmap for Software-Defined Vehicles and Disruptive Technologies - MATLAB
    The Roadmap for Software-Defined Vehicles and Disruptive Technologies

    Jim Tung, MathWorks

    “Software-defined vehicle” has become a buzzword, and the real question is how it should be addressed. Is it a “revolution” or “evolution”? How can tool vendors work together to help? And what is the role of the cloud and emerging technologies like Generative AI? In this talk, we discuss three technical approaches—software factory, virtualization and simulation, and service-oriented architectures—that are key to answering these questions. Until now, those approaches have often been pursued independently and in limited ways. That leads to inefficiencies as well as friction between systems engineering and software engineering teams.

    In this session, we discuss and demonstrate how model-based approaches can evolve and scale in a coordinated and synergistic fashion to revolutionize the value of software in the vehicle. We show how the cloud has evolved from a general technology to a focused engineering enabler and explore the role of generative AI for engineering, taking a collaborative, multivendor approach.

    Published: 22 Dec 2024

I'm honored and delighted to join you for this year's conference. And so thank you for joining me. He set us the challenge to learn at least one thing today. I think I'll have no problem learning about a dozen things or more, because that is how I am energized in working with you all, understanding your challenges, understanding the work you do. And so I'm looking forward to the discussions during the course of the day.

Today, I want to talk about the roadmap for software-defined vehicles, and some of the disruptive technologies that need to be harnessed in order to make SDVs successful. Let's start with a short definition, or description, of SDV. You'll find many of them. When I think about SDV, I think of it as a situation where the brand-distinctive features and what we perceive as the main value in the vehicle are being delivered through software. It's a very short description, a very simple description.

But if you look at it, what you see is a different set of expectations from customers as they look for a more sustainable and safe mobility system. Safe in terms of physical safety. Safe in terms of cyber safety. A continuity of their digital life from their cell phones to their experience in the car, whether they're the driver or the passenger. For the automotive manufacturers, it means a deep investment in new technologies and in leveraging those technologies, including electrification, connectivity, and autonomy in various degrees.

It also means a new way of developing and delivering that functionality with new business models, new revenue streams, if you're an OEM or a supplier. But also new ways of engaging from a commercial perspective with your vehicles and the capabilities that they provide. And it's closing this virtuous circle as well, because the customer's expectations will, of course, continue to grow and evolve. Put simply, it means more value and more resilience in the vehicles, delivered through software quickly, reliably, and robustly.

What's the challenge? Well, there are many challenges. It's increased complexity of the functionality and of the E/E architectures, as we all know. It also means an increased need for functional safety, especially as cars have more autonomous and adaptive capabilities. It means that the system and the software development platforms, the ecosystem, must continue to evolve. And I'll spend a bit of time talking about that during the talk. But perhaps hidden in all of this is that the processes and the team engagements must evolve.

There needs to be greater alignment in terms of how people perceive the work they do and how they work with others. And I'll talk about that as well. Since MathWorks works across industries and also works with automotive companies across the world, and I work with many of them around the world, I'll also pick out certain patterns of function and dysfunction that I perceive. I will not name names, but I will look at patterns that I perceive. And you may look at your own organizations and situations and see things that you can be proud of and things that you want to work on.

Let's start by looking at the basic context, parsing "software-defined vehicle" into its two components. First of all, of course, vehicles are systems. And many of you in the room will be familiar with that aspect of the SDV. Systems that must be reliable. Systems that, in many cases, will have portions that have functional safety certification requirements. And of course, it's a matter of software interacting with the physical components that are important aspects of the vehicle.

    When we think about software defined, there are many things about it, but I'll bring in three particular aspects of that. One of them is what I'll call modern software practices. Which I'll look at in terms of faster development of the software, more frequent releases of the software, and more use of automation in how that software is developed, tested, and released. A second aspect is more data-driven functionality. Not just physics-based or mathematically-based, but data-driven functionality as well.

And third, the leverage of the cloud for development purposes, and also for operational support. I'll focus today on the development aspects. Now, when you look at these two columns, you may see that they're quite straightforward. But when you bring them together, you start to see where some of the conflicts and apparent contradictions emerge. How do you apply modern software practices when they must engage with the physical components of the vehicle? How do you adopt data-driven functionality while making sure it can be verified well and certified?

    Those are some of the surface issues when you look at SDV. And they require differences in how the teams work, and also how the techniques are applied. Let's start by looking first of all at the systems area, because there's a lot that MathWorks is doing to support customers in the development of the systems. And then we'll look at how those techniques will evolve as we look at more software aspects to it. So looking at the systems area, you may be familiar with something called model-based design.

It's something that we've evangelized and worked with other vendors to support, offering coordinated new capabilities. And we've done this for probably 20, 30 years or so now. This is a short-form version of what that looks like. You have the requirements and architecture. You have algorithm design, and the ability to automatically generate code from it. And you have the use of simulation as well. The output of that being embedded software that's going to go, traditionally, onto a range of different dedicated ECUs.

This has been used by our customers in automotive and other industries for many years. And these are just a few examples from automotive of some of the capabilities and benefits that our customers have achieved here. I'm going to highlight a few of them. If you go to the MathWorks website, you'll see the full write-ups of these stories, and many more as well. But what you'll see is that the improvements will occur in many different places. In some cases, around development and certification time. In other cases, about time to market from concept to production. And very importantly, no recalls.

So those are the kinds of benefits that our customers look for and are able to achieve by using model-based approaches. Now, a comment on this. We all know the V-cycle. I'm not going to show it in these slides. In fact, I have not shown the V-cycle in my presentations for about four years. OK? Now you may ask, why is that? It's not that it's not valid. Of course, it's a nice, useful decomposition of a system into its various subsystems. You can look at them, you design them, you implement them. You integrate, test, integrate, test. You all know what the V-cycle represents.

But when we start talking with software development groups, and when we start talking with Chinese customers, what we see is that when they talk about their engineering work with our tools, they use many of the same words that many of the other companies on the previous slide use. They'll talk about simulation. They'll talk about efficiency. They'll talk about code generation, and many other aspects as well. But when you ask about their perspective on it, they don't think in terms of the V-cycle.

    They talk about it in a very compressed fashion, bringing all the pieces together, all the different groups and viewpoints together in a very fast, compressed, agile manner. Because they understand that if you can compress the period of a development cycle, you can increase the frequency of that development cycle. And so even though they use model-based design approaches, they don't think in terms of the V-cycle. And so that's why I've stopped using the V-cycle.

If you look at this diagram, you'll see all the different aspects of the V-cycle. In here, you'll see the left-hand part of the V, in the upper left, in terms of requirements and architecture definition. You see algorithm design and code generation moving down the left-hand side of the V for software development. Simulation both supports design decisions on the left-hand side and provides virtual integration and test on the right-hand side. And then, of course, you have the implementation of the software onto the silicon.

    Everything's here except for the physical test, I could argue. But this way of thinking about it enables the system engineers and the software engineers to start to think in a more aligned fashion. When I talk to software engineers about the V-cycle, they bring up comments like this. You may be familiar with the seven types of muda, of waste. And when they look at the V-cycle and how it represents a potential workflow and a type of process, they'll say, well, I've got to move stuff from place to place. I've got to wait for information to come from a different group.

I've got stuff that's in motion. And when it's in motion, it cannot be acted on. And as for defects: while you try to trap them on the left-hand side, you're sure to get them on the right-hand side. This is not the intent of the V-cycle. I'm not trying to dismiss the V-cycle at all. But it's often how it's perceived by groups within an engineering organization. And that perception, that mindset, is one of the first issues in how to improve the development process. And so I'll stay with this.

    Part of the reason for this is because when you zoom out from the development process and look at the overall life cycle, what you want is the ability to have the SOP, the start of production of the vehicle platform itself. And then the continuous release cycle of software functionality, that value generation that you want your customers to achieve and to appreciate. Of course, you need to get data back from the system. That's where the data driven functionality comes in, because you want to be able to take data, prospect that data, analyze that data, make decisions and improvements based on that data. And generate new functionality based on the data as well.

And so when we look at this aspect of the development process, you see that need, that value, in compressing things in an ever tighter, more agile, and multi-dimensional fashion. A lot of our capabilities for model-based design are really designed to leverage that. When we look at things like simulation capability, looking at the bottom row of this area, we have capabilities for looking at the full system across different levels of fidelity, as well as components, doing a deeper dive on specific functionality.

With electrification, as Sunil mentioned, we have capabilities, for example, with Simscape Electrical for looking at a full electrical system, including the generation, transmission, and distribution of electricity and power. We also have more detailed capabilities, for example, with Simscape Battery, to look at the particular battery cell behavior, for example, important for a battery management system. We have algorithm libraries as well. And of course, you will also develop and implement your own, which is your innovation, including things like motor control blocks, things that can be tested and analyzed, and then generate code, either to a microcontroller or perhaps to an FPGA, or that sort of thing.

    And so you see again, the value of bringing those aspects together, that content together, those decisions and collaborations together in a tighter and tighter fashion. When we think about the component models, it's not just about MathWorks capabilities. If you've worked with MATLAB and Simulink, you know that we have a very open environment. Many APIs that we leverage, that our customers leverage and our partners leverage to extend how our tools can be used in a variety of interesting areas.

And so those component models can include things like language representations in C or C++, Python, and other types of environments. They can include middlewares such as ROS and DDS, standard modeling representations like FMI, and more and more. One of the interesting things is there's a whole range of traditional simulation and analysis capabilities, for example with magnetics, FEA, and spatial or geometric-based simulations, that can be reduced in order-- into what's called a reduced-order model, or ROM-- and incorporated into that kind of rapid, in some cases real-time, simulation to look at the assessment of the software. And so we work with those partners as well.

    One thing we have not done as much, though, historically with model-based design is to look at the silicon. And it's actually something where we acknowledged it. We said, we implement target-optimized code for the chips. And that's very good. But we always saw the chip sets as the destination point for a process. That's changing. One of the things we realized is that if we can bring the silicon closer to the process, virtualize its characteristics, it enables the systems engineers and the software developers to more quickly develop the functionality, and to test it and to verify it, even if they don't have the actual hardware available.

And so, for example, we work with Infineon to do that with the AURIX microcontroller family. It involves taking the processor models, virtualizing them, bringing them into a form that they can be brought into Simulink and used as part of a system description, and as part of system simulation iteration. In this case, using the Synopsys Virtualizer, or VDK, as the mechanism for capturing the processor. That simulation can iterate and evolve and make decisions and optimizations. And then the code can be optimized. We work, again, with Infineon to use target-optimized libraries that they provide so that you can do that.

I'm shifting left here, but it's not the kind of shift left of the V-cycle. I'm not trying to move content or activities from the right-hand side of a big process to the left-hand side of a big process. What I'm trying to do, and am doing, is to take the processor characteristics and make them part of the iteration process. A very different mindset for how to think about it. When I'm talking about code generation, it's not the endpoint of a development process. It's part of the development process, because you can generate code very quickly, since it's automatically done, and use that as part of the iteration process.

    And so you can start to see how thinking about things in a way that's not the V-cycle can benefit. We do a similar thing with Snapdragon working with Qualcomm, where we work with them on virtual processor models, using their Qualcomm simulator for Hexagon. And also using the target-optimized code. One of the interesting characteristics of the SDV area is the evolution of the E/E architecture. Not just dedicated ECUs, but also larger, more powerful in-vehicle HPCs.

    And so we've done the same thing there as well, working with partners such as NXP and their GoldBox to provide the same sort of support, that same sort of shift left virtualization, that same sort of rapid iteration with optimized code generation so you can more quickly get to the optimal solution on your own with a team of people working closely together. Whether it's scrum-based or whatever it is, they are working in tight iterations without a lot of waste.

    In August this year, we had the MathWorks Automotive Conference in China. And as we talked to-- as we listened to some of our customers talking about that, they took the same mindset. And they said, well, let's take it a step further. Let's use the MathWorks open architecture to take that same kind of capabilities and extend it to our own middleware. So this is ZEEKR's approach, where they're taking their ARK OS, which is their own proprietary middleware, they've incorporated it with MATLAB and Simulink so that you can automatically generate a service-oriented architecture based on the service descriptions, encapsulate their OS middleware module as part of simulations and part of the implementation.

    Generate code that will be fit for purpose onto their target platform, and also leverage both the MathWorks capabilities, partner capabilities, and their own OS software development and verification capabilities to provide the software quality work. And so this is an example of how the APIs are being extended by customers who see their own view of how to accelerate what they do, and how they go about doing it. Now, that example of SOA, service-oriented architectures, and middleware enables me to focus on maybe a different aspect of this diagram, the requirements and the architecture aspects of the diagram.

There, an increasingly important capability that we have is called System Composer. System Composer is a tool for model-based systems engineering that's an add-on to Simulink and serves several different purposes. It enables the early architectural description and interfaces to be defined. It enables the systems to be decomposed and assessed in terms of the component allocations. System analysis and optimization can be done using MATLAB and Python, and other types of languages as well. It also enables a clean, concise way of doing the service-oriented architecture definitions that are then going to be used as part of that componentization, the allocation, and the network assignments.

Another aspect that's important for System Composer is that it provides a digital thread all the way from the textual requirements, as they might be represented in the model, as they're implemented in the design, as they're implemented in software, as they're tested in test cases. And so we found that, in many cases, the MBSE tools using a traditional language like SysML were effective for what they did, but they were marooned and isolated in a certain place early in the development process. They were done, decisions were made, and then they were thrown away.

And that kind of phenomenon does not enable agile iteration. Because how do you bring that system architecture forward? How do you make it part of an agile methodology? That's what customers were challenged with and frustrated with. We also found that the test and verification-- think of that as the upper left-hand part of the V, the upper right-hand part of the V, however you want to think of it-- need to be brought together more concisely as well. And so we enable System Composer to work with other capabilities that we have.

Including Simulink Test for automated test definition, execution, and coverage analysis, and a new capability called Simulink Fault Analyzer, which is used for safety and security analysis as well, doing fault injection, FMEA, and so on and so forth. And so those are some additional capabilities that are important. But System Composer, that world of systems engineering, is changing pretty dramatically. SysML V1 is going away. We don't know when, but it's going away. It has too many shortcomings, too many deficiencies.

    Its successor, SysML V2, addresses many of those deficiencies. It'll take a little while to mature, but it's well on its way to becoming the standard for how systems engineering should be done. And our objective is to make sure that we support it well. It turns out that System Composer-- I won't go into the details of why. It turns out that System Composer already has many of the concepts in terms of modeling, many semantic concepts as well. Where SysML V2 is, System Composer already is.

And so we find customers that are working with System Composer because what they implement with System Composer will stand the test of time. As SysML V2 comes out, things will align, things will work together, and so on. Unlike SysML V1, which is a file-based approach, SysML V2 is a repository-based approach. It's designed to enable interoperability, with a back-end, front-end type of architecture and many APIs that are REST-based. And so we find that we have the ability to take our capabilities with System Composer and provide access to model information through those kinds of APIs, enabling other tools, other clients that support SysML V2, to be consumers and producers of information that we use.

It also means that MATLAB can be used for analyzing, assessing, and optimizing content that's being delivered through our tools into that kind of repository representation, or through other SysML tools as well. And so a key aspect of SysML V2 is interoperability. It's about liveliness of data. It's about enabling agility. And so we see that it's going to provide an ability to take a system architecture, a systems engineering description, and make it agile, make it part of a CI pipeline.

    Enabling you to use automation to say, if a requirement changes, what's the impact? If a design changes, what's the impact? Really enabling much more of that digital thread that we all aim for and talk about, but making it realistic. So I've talked about various aspects of model-based design as it's relevant to systems. There's another aspect that MathWorks is investing in very heavily, which is what we call scenarios. Using a capability called RoadRunner, which is one of our newer platforms, we have the ability to define scenes and scenarios-- driving scenarios in this case. And the integration with environments like the Unreal Engine enables the visualization of those scenarios.
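The kind of automated impact query described above can be pictured as a walk over a traceability graph linking requirements, designs, code, and tests. The sketch below is illustrative Python with hypothetical artifact names, not a MathWorks API:

```python
from collections import defaultdict, deque

# Illustrative traceability links (requirement -> design -> code -> test).
# The artifact names are hypothetical; in practice the digital thread
# would be queried from the modeling tool or a SysML V2 repository.
links = {
    "REQ-12": ["DESIGN-battery-mgmt"],
    "DESIGN-battery-mgmt": ["CODE-bms.c", "TEST-cell-balance"],
    "CODE-bms.c": ["TEST-cell-balance"],
}

def impacted(artifact, links):
    """Return every downstream artifact reachable from `artifact`."""
    graph = defaultdict(list, links)
    seen, queue = set(), deque([artifact])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(impacted("REQ-12", links)))
# prints ['CODE-bms.c', 'DESIGN-battery-mgmt', 'TEST-cell-balance']
```

A CI task built on a query like this could, for example, fail the build when a changed requirement leaves any downstream test unrevised.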

Now, those visualizations are not just eye candy. They're not just for the purpose of consumption. They're, in many cases, going to be inputs to sensor streams. And those inputs then become algorithmic inputs as well. So it's a way of verifying that the sensor stream, the pre-processing, the decision-making are all working with facsimiles of the real situation. With RoadRunner Scenario, a key aspect is that it enables interoperability. First of all, there's a direct interface between RoadRunner and Simulink and MATLAB, so you have the ability to take the RoadRunner scenes and scenarios and, very quickly, use them as part of a larger Simulink simulation.
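Scenario descriptions like these are ultimately plain data, which is what makes the interoperability possible: the ASAM OpenSCENARIO exchange format, for instance, is XML that any tool can inspect. As a sketch, here is how a minimal OpenSCENARIO-style (.xosc) fragment can be read with Python's standard library; the fragment is hand-written for illustration, not a real RoadRunner export:

```python
import xml.etree.ElementTree as ET

# A tiny, hand-written OpenSCENARIO-style fragment for illustration;
# a real exported .xosc file carries far more detail (storyboard,
# road network reference, parameter declarations, and so on).
XOSC = """
<OpenSCENARIO>
  <Entities>
    <ScenarioObject name="Ego"/>
    <ScenarioObject name="LeadVehicle"/>
  </Entities>
</OpenSCENARIO>
"""

def scenario_objects(xosc_text):
    """List the named entities declared in an OpenSCENARIO document."""
    root = ET.fromstring(xosc_text)
    return [obj.get("name") for obj in root.iter("ScenarioObject")]

print(scenario_objects(XOSC))  # prints ['Ego', 'LeadVehicle']
```

A script like this is the kind of glue a test team might use to inventory which actors appear across a large library of exported scenarios.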

    But with support also for OpenSCENARIO file export, we also work with a lot of other simulators as well. Again, it's part of encouraging that ecosystem to come together in a pragmatic, realistic, and appropriate way to enable you to be more successful. So that's a quick glimpse at various aspects that we're investing in, in terms of the system side of things. Now let's look at that other piece, the software piece. OK? What's the impact of taking those pieces of software development practices and mapping them to the systems area?

    Well, first of all, let's look at what we call-- why are we talking about modern software practices at all? It's because we all, and all of our organizations, are hiring more software developers. And they're bringing along with them more groups that understand CI pipelines, DevOps approaches, and the like. And so we see in many organizations that they go through a phase that looks like this. They have, I'll say, a traditional-- I hate to use the term, but a traditional systems engineering approach that might be model-based.

    And then they have software factory approaches, whether it's the team, or the practice, or the methodology that's called software factory. It doesn't really matter, but it's often a term that's used to this thing at the top. And one of the patterns of dysfunction that we see is when an organization creates a pattern like this in terms of organizational approach and workflow approach and say, we're done. Because what they've achieved is only a partial optimization of the overall approach. Things are left lying on the table, unfulfilled, unused, where the value is not consumed.

    And so what we've done is to work with customers, and work with different groups within those customer organizations to bring those together. So that model-based approaches can be done with CI. So that artifactories and other kinds of repositories can incorporate both models and also code and binaries. So that more automation can be done regardless of whether it's model-based or code-based. So there's more consistency in how artifacts are generated. So when you have release processes, they are more unified and consistent. And that's where we see things going in terms of that unification.
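As one concrete shape this unification can take, MathWorks publishes GitHub Actions for running MATLAB and Simulink tests inside a pipeline. The workflow below is a minimal sketch: the folder name is a placeholder for your own project layout, and the action inputs should be checked against the matlab-actions documentation for your version.

```yaml
# .github/workflows/ci.yml -- minimal sketch of model-based CI
name: model-ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: matlab-actions/setup-matlab@v2   # installs MATLAB on the runner
      - uses: matlab-actions/run-tests@v2      # runs MATLAB/Simulink tests
        with:
          source-folder: models                # placeholder project folder
```

The same job can publish generated code and binaries to an artifact repository, so model-based and code-based work products flow through one release process.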

Oh, by the way, I'll also comment that I changed the term simulation in this last block to virtualization. You might say, wait. Jim made a mistake in the PowerPoint. Well, that was actually quite deliberate. When we talk to systems engineers, they're very used to saying simulation. Because what they want to understand is the behavior of the system. When we talk to software engineers, they talk about virtualization. Because their objective is not to understand the system; their objective is to virtualize the system so they can develop software faster. Same technology, different words.

    May seem like a subtle point, but that point can be the reason why groups don't communicate or work well together at all. And so think about the words you use, and think about the underlying technologies behind the words. And say, are you communicating in the best way possible with your colleagues today and colleagues to be? That virtualization capability also has the ability to look at these types of technologies and free them from other aspects of model-based design.

In other words, even if you're not doing automatic code generation, or if your colleagues are not doing automatic code generation, they may use those virtualization technologies as test criteria-- think of that as a CI pipeline task-- even for handwritten code that's being submitted through a pipeline. And so the broader value of the models you may be building can be unlocked when those CI capabilities are invoked. It's not only about technologies; it's also about people, as I started to say in terms of the terminology.

And there are, often, many different groups of people who are involved in an organization. Some you work with closely, some you may not know at all. And again, that's a pattern of dysfunction that we sometimes see in an organization. Because things function when you can take those different personas-- model-based software developers and code-based software developers, systems engineers who understand the system models, and what I'll call platform engineers, who might be called DevOps engineers, people who are responsible for the CI pipelines, the technology stack that actually generates the automation, manages the use of cloud, and so on-- and enable all those different groups to work together.

I find that we particularly need to focus on the group in the middle, the platform engineers. Because they are the catalyst that enables the rest of these people to really get the value and the efficiencies they want to achieve. A little anecdote, a little bit of detail here: when those groups want to stand up, we'll say, our tools in the cloud, they may do it using custom images. We have the ability to generate custom containers of our tools, and so on and so forth. They know how to do that. They know how to build custom containers. And they stand them up on the cloud and then they launch them, whether it's as a Virtual Engineering Workbench type of paradigm for interactivity or as part of a CI task.

And what they experience is this. MATLAB, when you bring it up from a cold start, takes a little while to actually get running and give you a cursor. OK, 12 minutes is not a little while, unfortunately, when you're running a CI pipeline. And it has to do with-- it has to do with a bunch of things, but it has to do with how big software applications need to work in the cloud. We didn't see that as just an issue. We saw it as an opportunity to improve it.

And so what we've done is to create a set of warm-up scripts that we make available on our GitHub repositories, make available as-- I won't tell you the language because you don't care. But make them available so that the platform engineers can incorporate them into their startup scripts for their infrastructure as code, and get startup times that are much more rapid. That's for the first startup. And then every subsequent time, it's even faster. And so that's important for those platform engineers to know.

And so if you're working with those groups, I encourage you to talk to them and have them talk to us. Because we have a lot of tips, tricks, and other tools that we have and can build to make their performance better, which will make your performance better as well. It also turns out that those platform engineers use a lot of other technologies and tooling. Some of which, again, you may recognize; others of which you may not at all. But these are all capabilities that MathWorks has built integrations with.

And so when you want to put together those tech stacks, that larger ecosystem beyond engineering, but also into IT, we also have knowledge in our organization that can help those teams be successful. Some organizations really see that as a value generator and a differentiator. At that Automotive Conference in China that I mentioned earlier, Geely presented that they are very good at accelerating how they build vehicles. We all understand that. Now they're much faster at how they create software. They understand that.

But they really want to improve the quality. And for them, CI and CT, and implementing that, will be the way they leapfrog-- they intend to leapfrog in terms of software quality while maintaining the rapid pace of software development. It's also being done by the more traditional organizations. Conti in Germany gave a presentation at the MathWorks conference where they've been doing this for a long time. And we've worked with them on it, and they're able to achieve great benefits and efficiencies using newer technologies that we have and that they want to leverage.

Data-driven functionality. I'm not going to spend a lot of time on that, but I'm just going to use two letters. So Neal talked about it as well: AI. I'm going to talk about it in three aspects. First of all, the value of being able to design systems with AI in them. We have a lot of capabilities I won't talk about today. But we have, just as importantly, a lot of reference examples that you can access. You can see how we use AI in the development of a system, the verification of a system, and the implementation of systems. Whether it's through a CPU, a GPU, an FPGA, or whatever the target is.

    Another aspect of AI is that we understand there are a lot of different frameworks and ecosystems for AI, so we don't live alone in that regard. You may have AI experts who are using a framework like PyTorch and building PyTorch models. We have the ability to bring those into the MATLAB and Simulink environment by transforming them. But more recently, we also have the ability to bring in those PyTorch models as-is, so that you can place them in a Simulink model, run simulations, and use that as part of a design iteration process. We're not done on that front, but it's a big step forward to respect the AI inference models as they are, without requiring them to be transformed before they can be used.
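    To make the import workflow concrete, here is a minimal sketch on the PyTorch side. The model, its layer sizes, and the file name are illustrative assumptions, not part of the talk; the general pattern, though, is that import tools such as MATLAB's documented `importNetworkFromPyTorch` function expect a *traced* TorchScript model rather than raw Python source, so the key step is `torch.jit.trace`:

    ```python
    # Illustrative sketch only: model architecture and file name are made up.
    # The point is the tracing step, which produces a self-contained
    # TorchScript file that other tools can import.
    import torch
    import torch.nn as nn

    class TinyController(nn.Module):  # hypothetical example network
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

        def forward(self, x):
            return self.net(x)

    model = TinyController().eval()
    example_input = torch.randn(1, 4)  # shape the tracer will record

    # torch.jit.trace records the ops executed on the example input,
    # yielding a graph that no longer depends on the Python class above.
    traced = torch.jit.trace(model, example_input)
    torch.jit.save(traced, "tiny_controller.pt")

    # Sanity check: traced and eager outputs agree on the example input.
    assert torch.allclose(traced(example_input), model(example_input))
    ```

    A traced model only captures the control flow exercised by the example input, so models with data-dependent branches need extra care before export.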

    And then thirdly, GenAI, as Sunil mentioned. We have a variety of activities going on. Two that I'll talk about are currently available. First, we have an enhanced MATLAB GPT, one where we've invested our knowledge to make sure it performs better at providing good information about MATLAB, and we've made it available in the ChatGPT marketplace. Second, since MATLAB can be used not just with the MATLAB IDE as a front end but also with the Visual Studio Code IDE, you have the ability to access the various code copilots through VS Code as well.

    And so, for example, adding a comment in VS Code can then generate MATLAB code, and the result is more performant. Those are just current steps using other environments. There's actually a talk today, as Sunil mentioned, from one of our customers, Tata, who are using our capabilities for text analytics and building their own AI capability for this kind of diagnostics work, which enables their teams to get answers more quickly. But a key thing that we're working on is building copilots.

    In 2025, we plan to deliver three different copilots for the workflows of our key products: MATLAB, Simulink, and Polyspace. I don't have time to go into the details at this point, but stay tuned; there's a lot of exciting stuff happening in 2025 on that front. And my last topic is the cloud. The question I want to bring up is, how can you leverage the cloud to get scale and automation, leverage the capabilities I talked about earlier, and move faster? Let's take a use case.

    Let's say I'm an automotive OEM and I want to create a new, software-only feature, without any changes to the hardware, by leveraging the cloud. So we have an example: I want to implement a Sport+ mode, a new driving mode that improves my acceleration without changing my battery range. And the question is, how do I do that by leveraging the IVI system, the vehicle computer, and the dedicated ECUs for battery management? We put together an example working with three other vendors to do that, and this is what it looks like.

    With MathWorks capabilities able to run parallel simulations in the cloud, you can do design optimization, design space exploration, and so on, interactively or through CI pipelines. You can automatically generate code as well. That generated application code can integrate with the middleware coming from Elektrobit, in this case AUTOSAR Adaptive and AUTOSAR Classic. Because you don't have a processor for the ECUs in the cloud, they connect to Synopsys Silver to provide the emulation of the ECU hardware. And then you have the ability to invoke and interact with the function, in this case using Android Automotive.

    All of this runs on the AWS cloud, with multiple instances as well. And so if you look at that, it's a really tight encapsulation of the overall development process: interactive exploration, design optimization, automation of tests and integration, and then deployment of automatically generated code. But we're not done yet. We said we need to bring things further forward. So let's take SOME/IP messages that are generated by the Elektrobit middleware and bring those into a Simulink environment as well, so that we have a live, compressed interaction where you can take a vehicle with two different modes of behavior, toggle between them in the same environment, interact with them in the closed loop, and see what happens.
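    To give a feel for what toggling between two drive modes in a closed loop means, here is a toy sketch in plain Python. Everything in it, the mass and drag constants, the `MODE_GAIN` table, and the on/off driver model, is an assumption made up for illustration; it is not the actual MathWorks/Elektrobit/Synopsys demo, just a minimal closed-loop longitudinal model where the mode changes the available drive force:

    ```python
    # Toy illustration only: constants and model are assumptions, not the
    # real Sport+ demo. A simple on/off driver chases a target speed, and
    # the drive mode sets the maximum drive force.

    MASS = 1500.0   # vehicle mass, kg (assumed)
    DRAG = 60.0     # lumped linear drag coefficient, N*s/m (assumed)
    MODE_GAIN = {"normal": 3000.0, "sport_plus": 4500.0}  # max drive force, N

    def simulate_to_speed(mode, target_speed=27.8, dt=0.01, t_max=30.0):
        """Time (s) to reach target_speed (m/s, ~100 km/h) in a drive mode,
        using forward-Euler integration; returns t_max if never reached."""
        gain = MODE_GAIN[mode]
        v = t = 0.0
        while t < t_max and v < target_speed:
            pedal = 1.0 if v < target_speed else 0.0   # on/off driver model
            force = gain * pedal - DRAG * v            # drive force minus drag
            v += (force / MASS) * dt                   # Euler integration step
            t += dt
        return t

    t_normal = simulate_to_speed("normal")
    t_sport = simulate_to_speed("sport_plus")
    print(f"0-100 km/h: normal {t_normal:.1f}s, sport+ {t_sport:.1f}s")
    ```

    Toggling the `mode` argument mid-simulation is the trivial analogue of the live demo: the same closed loop runs, and only the actuation limits change, so the difference in behavior is directly attributable to the mode.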

    So you'll see a lot of things happening here: Android middleware, Simulink, Elektrobit middleware, and Synopsys all connected in concert, all in real time, with multiple instances running on the AWS cloud. What I've been doing, in a bit of a flurry of comments, is talking about several things: software-defined vehicles, and the need to align and compress the activities of the systems and software organizations; the changing tool ecosystem, not only the engineering tools but also the back-end IT tools that are needed to make SDV a reality, and making sure those tools can be used efficiently by the different teams involved; and then AI.

    And so in summary, when I look at the integrations, the processes, and the teamwork for SDV, what are my calls to action? Well, especially if you're a domestic OEM, think about your systems teams and your software teams. Are they aligned? Are they aligned to plan? And are the tool chains in the ecosystem aligned to act? If you're part of a multinational organization, work with MathWorks India, because MathWorks is working with your teams around the world as well. We can help you stay synchronized, and you can help us stay synchronized, so that you do change management effectively and efficiently.

    And if you're a service provider, a tech provider, stay current on our latest capabilities so that we can work together and work in partnership to enable these changes to happen. Thank you very much for your interest, and enjoy the conference.

    [APPLAUSE]