Testing 1-2-3 | Hosted by Parasoft
MISRA C 2025: What’s New & What AI Has to Do With It
Welcome back to Testing 1, 2, 3 with Joanna Schloss and Arthur Hicken!
In this episode, the duo is joined by Michal Rozenau, a seasoned MISRA C expert and project lead engineer at Parasoft.
Tune in as they unpack the latest MISRA C:2025 update, how it's evolving to accommodate AI-generated code, and what it means for safety-critical embedded systems. The trio also explores the growing interest in Rust, its intersection with MISRA, and the broader push for reliable software standards in the age of automation and machine learning.
Whether you're a compliance pro or just dipping your toes into MISRA, this episode has something for everyone.
🔗 Explore More from Parasoft
Stay connected and dive deeper into the world of automated software testing and AI-driven quality assurance:
- 🌐 Website: www.parasoft.com
- 🎧 Podcast Hub: Testing 1-2-3 on Buzzsprout
- 💼 LinkedIn: Parasoft on LinkedIn
- 📘 Facebook: Parasoft on Facebook
Join our community for the latest insights, episodes, and discussions on software testing, AI integration, and quality assurance best practices.
Joanna Schloss:
Welcome to the Parasoft podcast. Today, we're talking about MISRA and how it affects safety-critical systems and auto-generated code, especially in the world of AI-generated software. I'm joined by Arthur Hicken, Parasoft's Evangelist, and Michal Rozenau, one of our senior product managers. Michal, let’s start with you. What is MISRA and why is it important?
Michal Rozenau:
MISRA stands for Motor Industry Software Reliability Association. It originally started in the automotive industry to promote safety and reliability in software written in C. Over the years, it expanded beyond automotive into any safety-critical domain—medical devices, aerospace, industrial automation—you name it. It gives us a set of guidelines to help make sure our code is predictable, testable, and free from common mistakes that can lead to dangerous failures.
Arthur Hicken:
Yeah, and it’s not just about avoiding bugs. MISRA helps with writing better C code that’s more maintainable and easier to analyze, especially when you're dealing with large, complex systems.
Joanna Schloss:
That makes sense. Now with AI becoming such a big part of software development, especially with tools generating code automatically, how does MISRA play into that?
Michal Rozenau:
Great question. MISRA was originally written for human developers, but now we’re seeing a lot of AI-generated code entering the picture. That raises concerns because while AI can be efficient, it doesn't necessarily follow coding guidelines unless you specifically guide it to. So, now we’re asking: Should AI-generated code follow MISRA? And how do we validate that?
Arthur Hicken:
Exactly. Just because code is generated by an AI doesn't mean it's immune to the same problems. It might even be more prone to certain issues if you’re not watching it closely. That’s why applying MISRA—even to generated code—is so important. You still need to run static analysis and ensure that the generated code meets the same safety and security standards you’d expect from a human developer.
Joanna Schloss:
So Michal, how are teams dealing with this? Are they holding AI to the same standards?
Michal Rozenau:
The leading organizations are. They're treating generated code as part of their overall software supply chain, and they use tools—like Parasoft’s—to automatically check that code against MISRA rules. We’re seeing updates to MISRA guidelines to reflect these modern realities. MISRA C:2023 and even discussions about MISRA C:2025 are all about adapting to these new challenges.
Arthur Hicken:
And there’s a big difference between saying “I use AI” and saying “I use AI safely.” That’s what compliance is all about—being able to prove that your software meets the necessary safety and reliability requirements, regardless of how it was written.
Joanna Schloss:
Absolutely. So what do you think is the future of MISRA in a world where AI plays an even bigger role in development?
Michal Rozenau:
I think we’ll see more guidelines evolve to explicitly include AI considerations. We're already seeing that discussion. The tools will also evolve. But at the end of the day, the goal is the same: Build software that works reliably, especially when people’s lives depend on it.
Arthur Hicken:
Right. And as things like self-driving cars and automated medical devices become more common, there’s zero room for error. Following standards like MISRA—and making sure AI-generated code complies—is going to be essential.
Joanna Schloss:
This is a fascinating discussion. Thank you both for joining. If you're interested in learning more about MISRA, AI, or Parasoft's tools, check out our website and subscribe for future episodes. Until next time!