Developing and Delivering the New Generation of Software-Defined Vehicles
The development of automotive systems and software is being redefined to deliver perpetually upgradeable software-defined vehicles. The user experience of these vehicles will be largely defined by their software capabilities and value-added services. The solution landscape is complex with no single winning strategy. Delivering this software requires a mindset shift; updated organizational structures; introduction of new processes, methods, and development platforms; and the forging of new partnerships. Some additional challenges include optimizing engineering methods with changing OEM/Tier-1 relationships, leveraging hyperscalers, and using both models and vehicle data for optimization. Model-Based Design, AI, and DevOps have emerged as a set of typical approaches that both challengers and incumbents are adopting. Clean-slate implementation of Model-Based Design and development processes has enabled newcomers to exploit their software development expertise. Established players are looking to turn their technical legacy into their advantage by bringing years of rigorous Model-Based Design and system development experience together with new software and data capabilities to master the business transformation.
In this talk, Jim Tung, chief strategist at MathWorks, contrasts these approaches and presents mindset shifts and strategies needed to enable the systematic use of models and data that is helping to develop a new generation of software-defined vehicles.
Published: 22 May 2023
Goodness, 25 years plus. Actually, I need to update that bio. In August, it will be 35 years, which is an even scarier number. Good morning, everybody.
175 years ago, about 175 years and three months ago, almost to the day, gold was discovered at Sutter's Mill, California. And that triggered the California Gold Rush. Now, today, we are all in a different rush, a rush to software. And you can see it in the press. You can see it in your company's internal pronouncements. And you can see it in your engineering activities as well. It's a profound rush, not to California, although that's part of the story too, but toward a new destination.
Now the talk today is about the software-defined vehicle. I'm not going to spend any time really defining software-defined vehicle. But I think of it as what you and your company are doing, planning to do, and need to do, given the recognition that the brand-distinctive features and the customer-visible value are going to be delivered largely through software.
Now we all know that it's not just about the software that the user, or the driver, or the passenger interacts with. It's all the stuff underneath it as well. But think of that as the delivery vehicle of value to your customers. Now there isn't a single instigating event like there was with the California Gold Rush.
The instigating event or trend is customer perception and expectation: clean and safe mobility, digital life continuity. So I can take my phone, and whatever I'm doing on my phone just continues, so that my vehicle isn't just a cell phone on wheels, but the experience is continuous from one place to the other as I go from place to place.
Now that demand from the customer leads us to think about technology and innovation that we need to deliver to satisfy that need. It includes electrification, autonomy, connectivity, CASE, that other acronym out there. And you'll see a lot in the demo stations about what we're doing to help you deliver on those technologies and innovations.
But it's also about other aspects as well. It's about delivering and capitalizing on the business opportunity that derives from all of those electrification, autonomy, and connectivity elements: the ability to deliver add-on apps, add-on services, add-on value digitally, where the vehicle and the vehicle experience are the platform for delivering that value and monetizing it. And so there's a whole set of other business opportunities that ensue as a result of that.
Now this software-defined vehicle is just the next generation of a digital transformation in automotive. And model-based design has been part of earlier waves of that digital transformation. When we think about the different automotive domains, updated to roughly today, and think about where model-based design is applied, I tend to highlight those particular domains. It doesn't mean that IVI and the others are irrelevant. They certainly are relevant.
But the center focus of model-based design has been in these areas: delivering software that has to interact with the various physical components of the vehicle, software that needs real-time, either hard real-time or soft real-time, deterministic behavior. It requires high assurance. It must meet various maturity models such as Automotive SPICE. And it must deal with the EE architectures that have evolved and are continuing to evolve: AUTOSAR, AUTOSAR Adaptive, and perhaps your own flavor of those adaptive SOA-based architectures. And so that's where model-based design has been.
Since you're coming to the MathWorks Automotive Conference, I'm not going to spend a lot of time talking about what model-based design is, because I think a lot of you probably have a sense for that or a deep understanding of it, perhaps deeper than I have. But you can think of it as taking the customer requirements, as captured in natural text and in models, and then using models to carry through the development process: not just models of the algorithms that are going to go onto the processors, but also models of the rest of the vehicle, the vehicle subsystems, and the scenarios and scenes that the vehicle has to interact with and deal with, from the drive cycles of many years ago to the visual input and sensor streams of today's systems, all with the idea of using automatically generated code to get not only to the test vehicles but to the production systems, the vehicles that are on the road. And so that's been model-based design till now.
A key aspect of it is that those models aren't just describing or expressing the concepts; they're also simulatable. And because you can simulate them, it's not just about implementation but also about verification, verification from the very beginning when you have models. Because if it doesn't work in initial model simulation, it ain't going to work when you get to the code.
And so stop that bad idea. Don't pursue it. Find something that's better, more promising. That idea of shifting left, that insight to make sure you detect software and system flaws before they become defects, makes it a decision-support tool as opposed to a catch-up-on-what-you-did-wrong tool; that's a key aspect of what model-based design has offered.
But things are changing. When you think about what model-based design has to be in the time of the software-defined vehicle, your organizations may talk about it in terms of processes, methods, and tools. They might talk about it in terms of approaches, whatever it is.
But when you think about the established approaches for model-based design, you may recognize it as being associated with hard real-time implementation of software onto resource-constrained ECUs, microcontrollers in many cases, with algorithms being expressed in terms of signal flow and scheduled through an RTOS or some other type of scheduler. That's sort of the way we think of it from an embedded-systems perspective.
But now there is an addition; it's not a substitution, but an addition. It could target not only ECUs but also central controllers, zonal controllers, gateways, compute nodes, HPCs, and perhaps eventually algorithms also running on the cloud in a connected manner. The algorithms themselves aren't simply hard-scheduled with an RTOS; they're also callable in a service-oriented manner.
And so when you look at the demo stations during the course of the day today, you'll see a lot about what MathWorks is doing in terms of targeting Linux, not just embedded ECUs, and using messaging as a way of modeling and implementing systems and algorithms that are going to be service-oriented in how they're delivered and deployed. And so we're doing a lot of work on that front from a technology and tooling perspective.
But it's much more profound than that. Because to the degree that you are a user of our tools, you may recognize that a lot of what we do in terms of our products is for your desktop experience. You are the magic. Your ability to come up with the optimal design through iteration, through interaction with the tools, is a lot of what we have focused on over 30 years. And that is a human-driven type of experience, focused on design, innovation, iteration, and then quality of delivery. And the toolchain that we work on with partners and others is really focused on that design experience.
But when you look forward to that, it's not only about the interactive design, but what's downstream from that is execution. It's throughput. It's automation.
In some cases on cloud, maybe using technologies like containers. It's automating whenever possible, whether it's using CI pipelines or perhaps Kubernetes-orchestrated flows as a way of delivering that automation. The metrics that I'll talk about later are not just focused on whether the design is the best design; they also ask what my throughput looks like. And so those become an additional set of dynamics that we all have to deal with as tool developers and for you as users.
The tool criteria also must measure against the litmus test of not only is it interactively good, but also is it automatable? Is it scalable from a cloud perspective? And then, finally, perhaps for your IT and administrative groups, the way in which the life cycle of tools and of artifacts is managed is different. It's not PLM systems anymore, because PLM systems are really well designed for mechanical and electrical systems, delivering a bill of materials to manufacturing. That's not software.
And so in software, we're talking about repos, trunk-and-branch types of approaches, Git, repositories like Artifactory, and so on, as more agile methods of storing, iterating, and improving your artifacts in a much more rapid, much more granular manner, because you're not going to deliver them in a bill of materials to manufacturing. You're going to be streaming them into the vehicles over the air.
And from an IT perspective, it also means that from our standpoint it's not just a matter of making sure the IT groups can install the software onto a thousand laptops. They also need what's called Infrastructure as Code, or IaC, so they can, in a scriptable way, quickly stand up a full stack to support a new project, a new team, a new whatever, and improve it by improving the scripts. So it's a much more automation-driven mindset to complement the interactive aspects that we all know and dearly love.
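As an illustration only, here is a minimal Python sketch of that Infrastructure-as-Code idea: the tool stack for a project is captured as a declarative, version-controlled description, and a script stands the environment up from it. The project name, directory layout, and CI stages are all hypothetical, not a real MathWorks or IT workflow.

```python
from pathlib import Path
import tempfile

# Hypothetical declarative stack definition for a new team's project.
# Improving the environment means editing this description, not a machine.
STACK = {
    "project": "adas-perception",
    "dirs": ["ci", "models", "licenses"],
    "ci": {"stages": ["build", "simulate", "test", "report"]},
}

def stand_up(stack, root):
    """Create the project environment described by `stack` under `root`."""
    base = Path(root) / stack["project"]
    for d in stack["dirs"]:
        (base / d).mkdir(parents=True, exist_ok=True)
    # Render the CI configuration from the declarative description.
    cfg = base / "ci" / "pipeline.cfg"
    cfg.write_text("stages = " + ",".join(stack["ci"]["stages"]) + "\n")
    return base

base = stand_up(STACK, tempfile.mkdtemp())
print(sorted(p.name for p in base.iterdir()))  # the directories just created
```

Because the whole stack lives in a script, standing up the same environment for a second team is one more call to `stand_up`, and every improvement is a reviewable change in version control.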
And so when we think about that model-based design workflow, it's perhaps a stereotype, but I'll sort of say that it lives in the system engineering mindset. It could be software engineering, software development with a system engineering mindset. But that's just one group of people in an organization today.
And as we all know, there's another mindset or group out there, people who think more in terms of software development. I'll call them code-centric. They bring in practices like agile and DevOps methodologies. They perhaps bring in CI and CD platforms, DevOps platforms, cloud platforms, and the like. In many organizations we work with, these are two different universes, with no interaction between them and no attempts to reconcile and bridge them.
But what we see is the need for a shift in mindsets, not just one mindset but multiple mindsets. From a systems engineering perspective, we see that groups are starting to add, and in many cases have added, their own CI/CD platforms, pipelining, and so on, in many cases separate from those used by the software-oriented groups.
And so we see an evolution where the system engineering teams, that mindset, need to move more toward an automation and DevOps view of things, again, not to replace the interactive view but to ask: if I'm developing a great model, who am I going to hand it off to, and what are they going to do with it? And what should my modeling practices be so that the handoff is as clean and as automatable as possible, because that's the downstream process that I'm feeding into?
Similarly, the software-centric teams, we believe, also need to shift their mindset to think of systems as an important adjunct to their software development work. They're not going to be systems engineers. That's not their mindset. That's not the way they're wired. But understanding how to use systems through simulation is a key aspect of thinking about software-defined vehicles, software and systems all together.
Now the other aspect, a third mindset shift, is taking the people who develop those CI and CD systems, often a DevOps group or whatever, and bringing them together as much as possible, harmonizing the platforms as much as possible, because your organization is not trying to do automation in a domain-specific or application-specific manner. You're trying to do it at the level of vehicle behavior. And the degree to which much of that capability and technology stack can be integrated and harmonized means it's just more beneficial, more scalable, more maintainable for your organizations.
And then supporting that are KPIs, KPIs that are shared across the different ways that software is developed and the way systems are developed, measuring flow and velocity, looking at software reliability and robustness, but also at system rigor as well. And coming up with a strong set of KPIs that really envelops all those different attributes is what we see organizations doing more and more.
Now what this means from a workflow perspective is that the model-based approaches live side by side with the software-oriented approaches, all with a DevOps type of infrastructure as the flywheel, the automation engine for moving all that stuff forward in as automated a way as possible. A key aspect is that when you start to bring the modeling together with the software development practices, the models for things like the vehicle, the scenarios, the scenes, and the like are not just used by the model-based workflows and users. They're also available as a service to the software teams.
And so as you think about your virtual vehicle projects-- I know a lot of you have them, perhaps all of you-- don't just think about what the virtual vehicle technology should be. But are you delivering it within your organization in a manner that all the different groups can benefit from it even if they don't naturally think in a model-based way? It could be behind the scenes as part of a CI pipeline. It could be in some other form that, again, the user doesn't have to interact with. But are those different groups getting the value from the virtual vehicle models that are being built in your organizations?
I talked about KPIs earlier. And some of the KPIs that we see customers starting to use as a sort of North Star include these two: What's the frequency of code deployments? What's the elapsed time from commit to deploy? Now these metrics are two of what are called the DORA metrics. They were identified by Google's DevOps Research and Assessment group probably five or six years ago now, because they looked at different DevOps programs and asked: what are the key things that a DevOps team has to measure and do well at in order for their DevOps activity to be successful?
And so they came up with a total of four metrics. And these are just two of them. They're important, especially if you want to get into the mindset of a DevOps approach. But since we're in automotive, it's not just about frequency, and it's not just about throughput time. It's also about rigor and reliability.
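As a sketch of how those first two DORA metrics might be computed, here is a minimal Python example over hypothetical (commit, deploy) timestamp records; the data and the observation window are made up for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical pipeline records: (commit timestamp, deploy timestamp).
deployments = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 17, 0)),   # 8 h lead time
    (datetime(2023, 5, 3, 10, 0), datetime(2023, 5, 4, 10, 0)),  # 24 h lead time
    (datetime(2023, 5, 8, 8, 0), datetime(2023, 5, 8, 20, 0)),   # 12 h lead time
]

def deployment_frequency(records, window_days):
    """Deployments per week over the observation window."""
    return len(records) / (window_days / 7)

def mean_lead_time(records):
    """Mean elapsed time from commit to deploy."""
    total = sum((deploy - commit for commit, deploy in records), timedelta())
    return total / len(records)

print(deployment_frequency(deployments, window_days=14))  # 1.5 deploys/week
print(mean_lead_time(deployments))                        # mean of 8, 24, 12 hours
```

Real pipelines would pull these records from the CI system's API rather than a hand-written list, but the two computations stay this simple.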
And so we think of other types of KPIs. What's the defect detection rate? In other words, when did I find a defect? Did I find it early, when I was first capturing and reconciling requirements? Did I find it on the testbed, when it was about to go into the test vehicle? Or, heaven forbid, did I find it after it shipped?
So understanding those kinds of metrics is critical as well: code coverage percentages, MC/DC coverage and the like, and also just software reliability. What's the mean time between failures of my software and of my system? And so those are the kinds of KPIs that we're starting to see. We're also seeing that different groups are starting to be involved. And that's a very interesting dynamic. Because in many cases, in many organizations, these groups don't even know each other, don't know how each other thinks, don't know who to talk to, et cetera. You probably have your own stories about this.
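The defect-detection and reliability KPIs above could be sketched like this, with made-up phase names and counts standing in for real project data:

```python
# Hypothetical defect counts by the lifecycle phase in which each was found.
defects_by_phase = {
    "requirements": 12,
    "model_simulation": 30,
    "testbed": 9,
    "in_field": 3,
}

def detection_profile(counts):
    """Fraction of all defects caught at each phase (earlier is better)."""
    total = sum(counts.values())
    return {phase: n / total for phase, n in counts.items()}

def mtbf(operating_hours, failure_count):
    """Mean time between failures, in hours of operation."""
    return operating_hours / failure_count

profile = detection_profile(defects_by_phase)
print(profile["in_field"])                            # share found after shipping
print(mtbf(operating_hours=12_000, failure_count=3))  # 4000.0 hours
```

The point of the profile is the shift-left argument from earlier in the talk: a healthy trend moves mass toward the early phases and drives the `in_field` share toward zero.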
But there's complexity in terms of culture. It's about technology: they use this other software tooling stuff that I don't understand, but they use it and they love it. And they don't understand my stuff. It's a funny kind of non-interaction that happens.
And it's also a set of practices and processes that somehow need to come together so that, to avoid the worst case, they don't block and impede each other. Let's make it a non-negative value statement. Instead, you want to align them and make it a positive value statement, where they don't become self-canceling but, in fact, contribute to each other's approaches.
But if you look at that set of five KPIs that I mentioned, we're still missing a piece of the story. It comes up in the other two of the four DORA metrics that I mentioned. And those two other metrics are these: if I introduce a change into my system, how often does that change or improvement actually create a problem?
That's what's called the change failure rate. How often is my remedy worse than the disease, or at least a different kind of disease, basically? And the second one is, if I have an incident in the field, what is my Mean Time to Recovery, or to Restore, MTTR?
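A sketch of those other two DORA metrics, again on hypothetical change and incident logs rather than any real dataset:

```python
from datetime import datetime, timedelta

# Hypothetical change log: (change id, did it cause a production incident?).
changes = [("c1", False), ("c2", True), ("c3", False), ("c4", False), ("c5", True)]

# Hypothetical incident log: (detected timestamp, restored timestamp).
incidents = [
    (datetime(2023, 5, 2, 6, 0), datetime(2023, 5, 2, 10, 0)),   # 4 h outage
    (datetime(2023, 5, 9, 12, 0), datetime(2023, 5, 9, 14, 0)),  # 2 h outage
]

def change_failure_rate(log):
    """Fraction of changes that caused an incident in the field."""
    return sum(failed for _, failed in log) / len(log)

def mean_time_to_recovery(log):
    """Average detected-to-restored duration across incidents (MTTR)."""
    total = sum((restored - detected for detected, restored in log), timedelta())
    return total / len(log)

print(change_failure_rate(changes))       # 0.4
print(mean_time_to_recovery(incidents))   # 3:00:00
```

Note that MTTR only starts its clock at detection; the over-the-air loop discussed below is what shortens the restore side of that interval.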
And in order to look at that, you kind of need to zoom out a little bit. It's not just the development process anymore. It's actually looking at the entire life cycle. It's saying once I have the development taking place and I have my start of production and I've got my vehicles on the road, that's when I start to find many incidents because not everything is going to be caught during development, and test, and integration.
In many cases, they're beyond the engineering group's control. It could be a security measure: basically somebody found a hack, and you have to close that loophole. They're not necessarily of somebody's making. In some cases, the external world is basically doing things to us.
But in any case, when you look at that entire loop and closing the loop, that is the other time constant that has to be accelerated because that's the time to recover, to identify, to assess, to fix, and to deliver a fix to that incident, or that flaw, or that bug, or whatever it happens to be. In addition, it's a way of providing that added value, that business opportunity that I mentioned earlier, because it's a way in which you can identify potential new capabilities, new value propositions and deliver them in digital form to perhaps one driver if they want to get a subscription or to every vehicle if that's part of the strategy.
From a MathWorks perspective, it's certainly not something we're going to do on our own. We see that the tooling, whether it's cloud environments or other types of environments as shown here, all needs to come together with MATLAB, with Simulink, and the rest of our tooling, so that we can support you in closing that bigger loop in as automated, as rich, and as complete a way as possible.
And at that point it's not just about models. In many cases, look at the words up there in that cloud: it's operational data. So it's models and data, dealing with them in a systematic way across the life cycle. Again, so you're going to identify issues, identify value, and deliver against them.
And so we see this as a broader set of KPIs associated with software-defined vehicle activities. Some of them are DORA metrics. Others are almost like automotive DORA metrics. I'm not going to coin a phrase there; don't attribute that to me. But think of them as the other attributes that are necessary to deliver quality DevOps in an automotive context.
Now MathWorks, because we've worked with the automotive industry and a lot of automotive customers around the world, is doing that, helping those customers to leverage model-based design in new ways in their organizations, basically taking the goodness of model-based design, in some cases stripping off the varnish and rebuilding it with newer technology, with more automation in mind. They include established automotive companies. This is just a sampling, and I'm not meaning to call out particular ones.
But these are organizations that have very strong systems engineering cultures. And they're adding software mastery, software capabilities, and seeing how to leverage the two together, again, so they don't self-cancel but instead can contribute to each other toward where they want to go. And we're dealing with younger companies, the West Coast companies, the tech companies and the startups.
In many cases, these kinds of companies have the benefit of not having had a systems engineering background. They can think about model-based design with a bit of a clean sheet, a bit of I'm-going-to-start-fresh in a going-forward manner. But they don't have that legacy, that heritage of systems engineering to build on either. So as we work with organizations across this slide, we're seeing the different ways that we have to work with different companies to get them where they want to be, given where they are today.
Part of our engagement with customers is to build what we call demonstrators. And there are a lot of great demo stations in the other room. And I encourage you to look at them during the course of the day today.
But one of them, which I want to call out particularly, and which Stefano Marzani is going to talk about in more detail in his talk a little bit later this morning, asks: what does the software-defined vehicle development cycle look like? If you can use AWS, for example, and you can use Elektrobit middleware, and you can do automation, and you want to be able to look at both a virtual ECU and a virtual HPC development target, what are the kinds of tooling, what are the kinds of workflows, that we should build?
So I encourage you to take a look at the demo stations, all of them. But I encourage you to look at this one and challenge us. Are we on target with what you want, with what you see happening? We have a lot of data points, and this reflects that. But we also want a lot more data points because we're all working on this together.
So when I think about the path forward, it really encompasses four strategic clusters of action. It's about process, people, methods, and standards. And you're going to say, oh gosh, not another management meeting.
But from the standpoint of process, it really is what I talked about earlier. It's about aligning the software development and the systems engineering practices and expertise that you have in your organization, enabling them to leverage each other, so you can bring out the highest value from your company and your engineering know-how. It's about people. It's about domain skills. It's not just a matter of being able to build a great Simulink model.
You need to build a great Simulink model that can be automated and run in the cloud, maybe not today, but tomorrow. It's also a matter of collaborating with the other groups that are either in your organization or forming in your organization, and building the alliances, building the dialogues, and building the processes between what you do and what those other groups do, because those have to come together. Those gears have to mesh.
It's about understanding methods, not the buzzwords behind the methods, but agile, DevOps, really what they mean and what they can entail from an engineering systems perspective. It's about virtual development, as Nishan mentioned in his introduction, parallelizing whenever possible, because cloud is many things. It's an absurdly good parallelization and scale-up environment. And can you take advantage of that effectively?
And it's also having a software factory type of mindset. As I mentioned earlier, it was about design interaction before. Now it's about automation and a factory mindset. It's not that one is better or worse than the other. Each of them has their place in the overall life cycle and process.
And then it's about standards. I'm not going to talk as much about that in today's talk, but it's about all sorts of standards. As I mentioned earlier, cybersecurity is important: the ability to deliver fixes on the fly using DevOps processes and approaches. It's things like AUTOSAR compliance, if you're part of a value chain and a supply chain that relies on AUTOSAR and other infrastructure and modes of delivering components, and many, many other things.
So that's what I wanted to go over this morning. I hope you found it interesting. And I look forward to the discussion about what you see as your future with the software-defined vehicle. Thank you very much, and enjoy the conference.
[APPLAUSE]