Hi Smoothy,
It looks like the comments may not be sequenced correctly in your post.
From what I can see (screenshot below), when Kieran Ryan asks: "Can you utilise other hardware yet which utilises less power? Maybe something neuromorphic?"
Piednoel responds: "No, we avoid any non-deterministic solution, as it is what is used to certify the safety side of our hardware + software."
Out of curiosity, I asked ChatGPT what it made of that exchange. In summary, it suggested:
- Neuromorphic architectures are generally considered non-deterministic in behaviour (at least from a safety certification perspective).
- Neuromorphic architectures don't currently meet automotive safety standards such as ISO 26262, whose ASIL classifications prioritise deterministic, repeatable execution paths.
- If the system is part of a safety-certified validation layer, engineers would typically avoid architectures that are difficult to formally verify.
- His mention of a “hardware scheduler” suggests a deterministic co-processor or ASIC approach rather than a neuromorphic one.
ChatGPT reckons this doesn’t necessarily rule neuromorphic out for other parts of the stack in future (5–7 years), but for the safety-certified path he’s describing, it sounds like Piednoel is deliberately avoiding anything that could be perceived as non-deterministic.
I think ChatGPT's 5–7 year estimate for automotive adoption might be a bit conservative: unless I'm mistaken, Mercedes have previously commented that they're looking at a 2030 time-frame.
Happy to hear alternative interpretations, but that’s how it reads to me.
View attachment 95610