BRN Discussion Ongoing

manny100

Regular
I noticed this too in the podcast, yet he cleverly did not distinguish whether it was another partnership, like the many announced this year, or a material engagement where customers are signing on the line as new IP customers. I'm not much of a fan of Sean's interviews; his value is in ecosystem building and networking, not necessarily in communicating to shareholders. It seems there's too much information he's protecting, and we never get more than the general waffle we've already heard.

I liked the latest Investor Podcast with Zach Shelby, where he focused on education within the industry to massage Edge AI development in the right direction. That interview really shone a light on where the industry is at, and where BRN likely is in that process.
Have not had a chance to listen to the Shelby podcast. Will do tonight or tomorrow. I like the idea of massaging industry to our style of AI.
IMO the next big market bull run will be driven, to a fair extent, by a changing of the 'guard'. We will likely see big gains in AI businesses that are not so big right now, as well as in EV-related businesses.
With a little luck BRN will play a decent part in the coming AI at the Edge bull market.
 
Reactions: 18 users

Learning

Learning to the Top 🕵‍♂️
Screenshot_20231112_182730_LinkedIn.jpg

Screenshot_20231112_182748_LinkedIn.jpg

Screenshot_20231112_182808_LinkedIn.jpg


Learning 🪴
 
Reactions: 16 users

Tothemoon24

Top 20
Codasip has been discussed on occasion. Interesting job description:


IMG_7793.jpeg


Exciting News! 🚀 Codasip is Hiring an AI/ML HW Innovation Engineer in Germany! Join our talented global community and be a part of our cutting-edge processor solutions company. 💡

In this role, you'll have the opportunity to work on the Neurokit2e project, developing a RISC-V-based Application Specific Accelerator for Neuromorphic Computing as part of the EU Horizon Europe framework. 🌍

Your responsibilities will include innovating, developing, integrating, and evaluating state-of-the-art (micro-)architectural IP cores for next-gen Neuromorphic accelerators, customizing existing Codasip RISC-V IP cores, and collaborating with internal and external stakeholders. 🖥️

If you have over 5 years of relevant experience, strong digital design and computer architecture background, and expertise in AI/ML and spiking neural networks, this opportunity is perfect for you! 💼

To learn more about the position and apply, visit our blog post: [Link to the Blog Post](https://ift.tt/Z6qguY7)




CHERI was developed at the University of Cambridge, and the technology has been shown to be quite reliable in experimental processors. Now, according to the release, it will for the first time be available in a commercial offering. Codasip said its goal for the CHERI implementation is to offer secure-by-design products that enable preventive security measures without relying on patches from outside vendors.

(Editor’s note: Embedded Computing Design staff will be on-site at the RISC-V Summit, so find us and say hi, and make sure you come back here for live coverage!)

Codasip’s new custom compute product line, or family, reportedly includes application and embedded cores, and is designed to complement the company’s existing cores by offering a different starting point for higher-performance needs. The first core in the family is the A730, a 64-bit RISC-V application core, now available to early-access customers, and Codasip Studio allows users to optimize each baseline core for target use cases, according to the release.
 
Reactions: 9 users

Boab

I wish I could paint like Vincent
An Australian hearing aid Co with a similar name....could it be?
audika.com.au/hearing-aids/types/guide-to-best-hearing-aids
Alas, no. They appear to be supported by a company called Oticon. Oticon BrainHearing
No joy there either.
 
Reactions: 6 users

MDhere

Regular
Evening, fellow BRNs, especially those in 🇩🇪 Germany right now. Can anyone go to the Autotech conference and attend this event at 11.25 am on Wednesday the 15th? 😁
20231112_215151.jpg


Oh, and meet David Edel from Ford and Hans-Peter Fischer from BMW while you're there too :)

It's in my diary: Autotech conference this Wednesday and Thursday :)

It will also take us all back to an article called "Cars that think like you" :) Happy Sunday night 💃

 
Reactions: 11 users
An Australian hearing aid Co with a similar name....could it be?
audika.com.au/hearing-aids/types/guide-to-best-hearing-aids
Alas, no. They appear to be supported by a company called Oticon. Oticon BrainHearing
No joy there either.
It's more than likely to be Cochlear Limited (ASX: COH) in my opinion.

"As of 2022, the company holds 50% of the cochlear implant market"

They are big guns in the world hearing aid market, and thanks to the proliferation of music ear buds pumping music and audio directly into the ear canal, their future profits are also assured.

Technology, to the rescue of technology.


giphy-1.gif
 
Reactions: 16 users

Perhaps

Regular
Valeo will be looking after the certification and compliance for Scala 3, as described below.

I believe @chapman89 commented on this article when it was first posted on LinkedIn and received a response from Dr. Heinrich Gotzig of Valeo confirming they do a lot of work with neuromorphic SNNs.



DVN-L Interview: Benazouz Bradaï, Valeo AD Innovation Chief​



Dr. Benazouz Bradaï is Research & Innovation Director and Master Expert in Autonomous Driving at Valeo. In that role, he’s made major scientific and industrial contributions. He is also a scientific co-director of the ASTRA (Automated systems for Safe TRAnsportation) joint lab with Inria in France.
Bradaï holds a PhD in automatic control from Haute Alsace University in France. His expertise and research interests include sensor fusion, precise localization and mapping, and system architecture. He is a member of IEEE, ADASIS Forum, SENSORIS Consortium, SAE, and SIA (French SAE) as an ADAS/AD expert. He graciously granted DVN-Lidar this interview.

DVN: Will you tell us about your career and work at Valeo?​

Dr. Benazouz Bradaï: In 2003, I started a PhD thesis in multisensor fusion for ADAS in collaboration with Valeo, in the framework of a CIFRE industrial contract (Conventions Industrielles de Formation par la Recherche, industrial agreements on training through research). At that time, I was working on fusion of cameras and maps for lighting automation and for Intelligent Speed Assist. In 2007 I got my PhD and was hired by Valeo as an ADAS Engineer. I was promoted to R&I Project Manager in 2012, then to the Autonomous Driving Innovation Platform Manager position in 2019. Since 2022, I have been R&I Director for ADAS and Autonomous Driving. As a Research and Innovation Engineer, I have always been passionate about research and innovation. I have continued working with academics at Valeo, supervising several PhD theses with Mines ParisTech and the Inria Lab. Since February 2022, I have been the Scientific Co-Director of a joint lab with Inria called ASTRA, “Automated systems for Safe TRAnsportation”. I have also progressed as an expert: from Expert in 2009 to Senior Expert in 2015, and finally Master Expert in 2022.

DVN: What are the key milestones to achieve robust autonomous driving?​

Dr. B.B.: In 2022, Mercedes achieved the roll-out of the first homologated L3 system in the EU, and soon in the US, at up to 60 km/h, in line with UNECE R157 (ALKS). In a few months, Hyundai-Kia will bring another L3 system to various countries including South Korea, some European countries, and North America. To achieve this milestone, safety is a key element that needs to be proven at the homologation stage. Sensor redundancy is important for safety, and lidar is an enabler for higher autonomy. Lidar is also an enabler for corner cases like ‘underdrivable’ objects, reducing false positives and overcoming issues other sensors have—like cameras, which can be blinded by the sun, or radar, which can give false positives at tunnels. It allows increased detection range and high-fidelity 3D environment modelling.
Today, L4 autonomous vehicles are very limited in series production. By L4, we mean a safe system where the driver need not intervene at any moment and the system ensures the fallback in case of failure. Some experiments or limited commercial services of L4 systems—robotaxis—are currently running in China and the US in geofenced, pre-defined areas, but there is no real commercial use.

DVN: How important is lidar, and why?​

Dr. B.B.: Lidar allows an increased range with a high accuracy of detections for highly retroreflective objects, height measurements, and 3D environment modelling capability.
From a safety point of view, lidar brings an essential technology redundancy that is important for perception of the environment. Most approaches for L3 automation use a triple sensor-technology redundancy. Compared to the camera, the lidar cannot be blinded by the sun above the horizon, it increases the detection range, and it has higher accuracy. Compared to the radar, it allows height measurements and has better angular resolution. In addition, lidar allows modelling the 3D environment and is very useful for precise localization and mapping. In urban environments, when the infrastructure can be occluded by other road users or when there are no lane markings, for example, it allows the availability of the function and thus extends the ODD (operational design domain). Valeo developed Drive4U Locate, a precise localization and mapping system based on our Valeo Scala lidar, which reaches 10-cm accuracy. It maps the environment, detects changes, and updates the map by crowdsourcing.
050723_I1.jpg
VALEO DRIVE4U LOCATE: PRECISE LOCALIZATION AND MAPPING BASED ON SCALA LIDAR

DVN: Are there critical use cases which have been solved by Lidar?​

Dr. B.B.: Lidar sensing technology is well suited to manage so-called ‘under-drivable’ objects such as sign gantries, bridges, or tunnels—elements of the infrastructure under which vehicles are supposed to drive freely. Traditional sensors such as radars and cameras usually perform poorly to classify these objects as under-drivable, and lidar brings the additional 3D information to reliably distinguish these objects from other road users such as cars.
One of the critical use-cases that lidar solves is the detection of debris (e.g., lost cargo) on the highway, a challenge that is directly related to vehicle speed. The ALKS regulation was amended and adopted in 2023 to allow higher speeds on divided highways, up to 130 km/h, including automated lane changes. Lost cargo remains one of the challenging use-cases for this extension. With Valeo’s Scala3 lidar, these use-cases can be solved.

DVN: How does Scala3 do with these use cases, compared to the previous Scala2?​

050723_I2.jpg
LOST CARGO (MATTRESS) ON HIGHWAY DURING VALEO CRUISE4U TEST DRIVES
Dr. B.B.: Underdrivable-detection tests have been performed with Valeo Scala3 and confirm a better capability to minimize false positives (where underdrivable objects such as sign gantries or tunnels would cause unwanted braking) at sensor level, which will translate into much better performance at system level, compensating for the limitations of other sensors in such corner cases.
Of course, debris detection range will be greatly increased with Scala3, and our first tests confirm at least a doubled detection distance for objects such as a small tire on the road. This capability will be crucial to bring the top operating speed of autonomous driving functions closer to 130 km/h.
Scala3‘s potential is being evaluated as we speak, with deterministic and statistical campaigns being carried out for our first customers, especially Stellantis.

DVN: Tell us about the V&V (validation and verification) process to launch an autonomous vehicle, will you please?​

Dr. B.B.: To validate an automated vehicle of L3 and beyond, the mileage target of validation is not achievable with a realistic budget, and even if it were, it still is not sufficient. Simulation and virtual validation are key to reduce this budget and cover all the scenarios. But simulation will not be the only tool to validate. Indeed, for homologation, the assessment method combines simulation, physical tests in proving grounds, and real world tests.
The simulation is mainly to assess the system’s capacity to deal with critical situations that are not testable on proving grounds or public roads. Proving-ground tests allow testing challenging scenarios that are not testable on open roads. They can be combined with simulation to test the vehicle behaviour with simulated data, for repeatable scenarios and with fault injection; this is called VIL, for vehicle-in-the-loop. Finally, validation on open roads is performed to assess the ability of the system to manage real-world situations, especially in its interaction with other road users. It is intended to verify that the system has not been overfitted to specific test scenarios.

DVN: How do Valeo sensors do in bad weather conditions?​

Dr. B.B.: As it has been presented by my colleague Ahmed Yousif at the ADAS & Autonomous Vehicle Technology Expo Conference, Valeo develops three types of simulation sensor model:
  • High Fidelity Sensor Model models the various components of the lidar, including the laser pulse and the optical path, in addition to effects such as blooming and noise. Each point is classified with a unique ID, class, and material to ease the development of the stack.
  • Phenomenological Sensor Model is an object-based model to emulate the perception stack performance and metrics.
  • AI Trained Model is an accelerator of the high-fidelity sensor model which is trained on both the simulated and real logs and data.
The high-fidelity sensor model is developed taking into consideration weather conditions such as rain and spray. In addition, we are currently working on severe weather conditions such as fog. The phenomenological sensor model, on the other hand, can emulate the lidar perception-stack performance taking all weather conditions into consideration.

DVN: What are automakers’ expectations regarding lidar sensor models?​

Dr. B.B.: Different OEMs use the sensor models in various applications. Here is a summary of the use cases:
  • High Fidelity Sensor Model: perception stack and functions development; raw data fusion.
  • AI Trained Model: integration to XiL (HiL/ SiL/ Overall HiL); perception stack and functions development, raw data fusion.
  • Phenomenological Sensor Model: object-based fusion and integration to HiL and SiL.
The high-fidelity sensor model is required for virtual validation and for verification with respect to real-world test drives. At Valeo we also validate the targeted autonomous driving functions with a digital twin, where we can test in simulation using the high-fidelity sensor models and with real data, as well as with tests in the real world.
050723_I3.jpg
VALIDATION VS REAL WORLD – DIGITAL TWIN
050723_I4.jpg
EXAMPLE OF THE HIGH FIDELITY SENSOR MODEL USED IN SIMULATION

DVN: What’s next to improve the ODD for L3 and L4 autonomous driving systems?​

Dr. B.B.: Even as ADAS increasingly becomes standard, L2 and L2+ automation will still be dominant, representing more than 50 per cent market share by the end of the decade. These hands-off systems, up to 130 km/h, will come with progressive ODD extension including automated lane changes, intersection support, exit-ramp management between highways, etc.
For L3 and L4 systems, the ODD is also increased progressively. The first L3 systems on the road will be based on ALKS up to 60 km/h, then ALKS up to 130 km/h with automated lane change starting from 2026-2027.
Lidar technology is key to manage the related critical use-cases as the lost cargo for example. There is a trend in China to have lidar from L2+ in order to prepare the next generation of L3 and L4 systems when the regulation is adopted. Regarding safety, at least a second sensor technology is required for managing these critical use-cases. Other sensor technologies will be introduced to extend the ODD, like the thermal camera to manage adverse weather situations and VRU (vulnerable road users). Connectivity deployment will allow more ODD extension. For example, at Valeo we are working on innovation for extending the L4 highway speed to new challenging use-cases like toll booths and work zones, using connectivity combined with the vehicle sensor perception.
050723_I5.jpg
VALEO CRUISE4U HIGHWAY EXTENDED ODD USING CONNECTIVITY: WORK ZONE MANAGEMENT
050723_I6.jpg
VALEO CRUISE4U HIGHWAY EXTENDED ODD USING CONNECTIVITY: TOLL BOOTH MANAGEMENT

DVN: What should be the safety targets for an AV?​

Dr. B.B.: Autonomous vehicle behavior must be safe whatever the potential root causes. Compliance with ISO 21448 SOTIF (Safety Of The Intended Functionality) is one of the major challenges in AV design and architecture. For that, the first difficulty automakers are facing is the definition of the acceptance criteria.
A PFA (French Automotive Platform) position paper from March 2019, “Safety Argument for SAE Automation Level 3 and 4 Automated Vehicles”, suggests using the GAME method (a French acronym meaning 'globally at least equivalent') to define these objectives. This method is also recommended by the ISO 21448 standard and the forthcoming ISO/TS 5083 on the safety demonstration of automated driving systems. The principle is that the residual risk induced by the AD system must be less than or equal to that induced by an average human driver.
There are other methods, like MEM (minimum endogenous mortality), ALARP (as low as reasonably practicable), and positive risk balance. Today there is no worldwide or European state of the art. However, a common approach is to take into account accident statistics for similar use cases and derive the acceptance criteria from them. The target rate of acceptable fatal accidents induced by the autonomous driving system shall be lower than the fatal-accident rate induced by human driving, divided by a safety factor. This safety factor can mitigate all the uncertainties arising from the calculation. It can also take into account that accident rates evolve from one year to another, or that they can differ from one country to another.
Considering the GAME method, this target can be a factor of 10—a number used for decades by the safety community. There is still no consensus, but the common approaches are converging towards this factor. There are currently activities in different working groups and this factor might be updated.
https://www.drivingvisionnews.com/news/2023/07/05/dvn-l-interview-benazouz-bradai-valeo-ad-innovation-chief/
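As a rough numerical illustration of the GAME-style acceptance criterion described in the interview above (all figures here are hypothetical, not from the article):

```python
# Hypothetical illustration of a GAME-style acceptance criterion:
# the residual fatal-accident rate induced by the AD system must be
# at most the human-driving rate divided by a safety factor (~10).

def meets_game_criterion(av_rate, human_rate, safety_factor=10):
    """True if the AV residual risk satisfies the GAME-style target."""
    return av_rate <= human_rate / safety_factor

# Hypothetical figure: fatal accidents per km of human driving.
human_fatal_rate = 5e-9
av_target_rate = human_fatal_rate / 10   # acceptance threshold

print(meets_game_criterion(4e-10, human_fatal_rate))  # within the target
print(meets_game_criterion(6e-10, human_fatal_rate))  # exceeds the target
```

With a safety factor of 10, the AV system here would need roughly a tenfold lower fatal-accident rate than the (hypothetical) human baseline to be accepted.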
Right, Valeo does a lot of work with neuromorphic technology and they don't make a secret out of it.
Valeo was a prominent part of the Tempo project (2018 to January 2023). The basis of the project was SynSense technology. Out of this project a consortium emerged, in which Valeo and Philips take on the applications work. GrAI Matter Labs has since joined the consortium as well. All this shows Brainchip isn't in any way an exclusive partner of Valeo; other activities seem to be much more prominent.

My conclusion is: maybe we will see neuromorphic technology in Scala 4, but there is no guarantee it will be Brainchip.
Don't get me wrong, I really hope a mass product with Akida IP will enter the market soon, but as this is my biggest investment I need to take a fully realistic view; dot-joining and hope aren't the right instruments for me.
After a couple of complicated years of chip-industry delays and general industry problems, I see a real market entrance for Brainchip in 2025/26.
I'll be happy if I'm wrong, but that's my view as of today.


1699792507679.png

 
Reactions: 4 users

Jasonk

Regular
Just to expand on thelittleshort's LinkedIn find.



 
Reactions: 10 users

Vladsblood

Regular
It's more than likely to be Cochlear Limited (ASX COH) in my opinion.

"As of 2022, the company holds 50% of the cochlear implant market"

They are big guns, in the World hearing aid market and thanks to the proliferation of music ear buds, pumping music and audio directly into the ear canal, their future profits are also assured..

Technology, to the rescue of technology..


View attachment 49339
Cochlear, and I second your recommendation, DB. Vlad
 
Reactions: 5 users

Jasonk

Regular
Reactions: 4 users

Diogenese

Top 20
Right, Valeo does a lot of work with neuromorphic technology and they don't make a secret out of it.
Valeo was a prominent part of the Tempo project (2018-January 2023). Base of the project was SynSense technology. Out of this project a consortium derived, where Valeo and Philips take the part of working on applications. In the meantime also GrAIMatterLabs joined the consortium. This all shows Brainchip isn't in any way an exclusive partner of Valeo, other activities seem to be much more prominent.

My conclusion is, maybe we will see neuromorphic technology in Scala 4, but there is no guarantee it will be Brainchip.
Don't get me wrong, I really hope any mass product with Akida IP will enter the market soon. but as this is my biggest investment, I need to take a full realistic view, dot joining and hope aren't the right instruments for myself.
In the meantime after a couple of complicated years with delays in the chip industry and general problems of industries I see a real market entrance of Brainchip in 2025/26.
I'm happy when I'm wrong but that's my view of today.


View attachment 49340

We do need to keep an eye on the competition. I have been following Synsense and GrAI Matter for a few years, and the technologies of Synsense and GrAI Matter don't seem to me to match Akida.

The Tempo project had 19 members; there is an old joke that a camel is a horse designed by a committee.

Synsense, in keeping with the Tempo MemRistor ideology, initially started with analog SNNs.

SynSense has a way to go before it gets ISO automotive certification.

Their video Speck NN chip ended up in a toy because it was insufficiently accurate for automotive and did not come up to Prophesee's expectations:

The First Market for Neuromorphic Processing: Toys - EE Times Podcast


Synsense are in the midst of their Nth tape out of their new Xylo digital NN for audio.

SynSense | Neuromorphic Developers Partner to Integrate Sensor, Processor – EETimes​

2022-08-02
By SynSense
...
“The performance requirements, the accuracy requirements for the applications are modest, right? If your toy robot misses a gesture one time in 20, that’s no problem. Compared with some sort of high-resolution CCTV or autonomous driving application or something like that, it’s a much simpler and easier application. … And it doesn’t mean we can’t also do these other more exciting industrial robotics applications and so on, which is also in the pipeline, that’s… we do the easy stuff first. Right, the low hanging fruit.” (Dylan Muir, SynSense)

Having a different circuit for each sense will not produce cheaper chips.

GrAI Matter proclaim their 16-bit FP MACs as a key benefit while incongruously talking about sparsity:

https://www.graimatterlabs.ai/product
  • Uncompromising accuracy with 16-bit FP MACs

https://www.graimatterlabs.ai/technology

There's only one way to save energy in edge computation in the real world: perform just the essential computations efficiently, and nothing else. This requires a different way of thinking from the standard approach. The key is sparsity.
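The sparsity principle in that quote can be sketched in a few lines: a toy illustration of skipping work for zero activations, not any vendor's actual implementation.

```python
# Toy illustration of activation sparsity: only nonzero inputs trigger
# multiply-accumulate work, so MAC count scales with active "events",
# not with layer size. Not GrAI Matter's or anyone's real implementation.

def sparse_matvec(weights, activations):
    """Compute y = W @ x while skipping zero activations.

    Returns (y, macs_done) so the saved work is visible."""
    n_out = len(weights)
    y = [0.0] * n_out
    macs = 0
    for j, x in enumerate(activations):
        if x == 0:          # the event-driven skip: no event, no work
            continue
        for i in range(n_out):
            y[i] += weights[i][j] * x
            macs += 1
    return y, macs

W = [[1.0, 2.0, 3.0, 4.0],
     [0.5, 0.5, 0.5, 0.5]]
x = [0.0, 1.0, 0.0, 2.0]     # 50% of the inputs are zero

y, macs = sparse_matvec(W, x)
print(y)      # [10.0, 1.5]
print(macs)   # 4 of the 8 possible MACs were performed
```

The dense version would always do 8 MACs here; the event-driven version does work proportional to the number of nonzero activations, which is where the claimed energy saving comes from.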

BrainChip has been working with Valeo since 2020 on a joint development project for lidar with specific (although unspecified to the public) milestones with payments attached. As far as we are aware, this JV is still on foot, and that looks like an exclusive partnership.
 
Reactions: 49 users

Frangipani

Regular
Right, Valeo does a lot of work with neuromorphic technology and they don't make a secret out of it.
Valeo was a prominent part of the Tempo project (2018-January 2023). Base of the project was SynSense technology. Out of this project a consortium derived, where Valeo and Philips take the part of working on applications. In the meantime also GrAIMatterLabs joined the consortium. This all shows Brainchip isn't in any way an exclusive partner of Valeo, other activities seem to be much more prominent.

We do need to keep an eye on the competition. I have been following Synsense and GrAI Matter for a few years, and the technologies of Synsense and GrAI Matter don't seem to me to match Akida. […]

Speaking of GrAI Matter Labs - I just happened to read they were actually acquired by SNAP, the US company behind Snapchat, last month…

43F70729-52C8-444F-912E-FDE0581EC191.jpeg



 
Reactions: 9 users

Perhaps

Regular
We do need to keep an eye on the competition. […]

Their video Speck NN chip ended up in a toy because it was insufficiently accurate for automotive and did not come up to Prophesee's expectations. […]
Funny that the ole Speck chip, which ended up in toys, seems to be good enough for BMW:


1699809818784.png
 
Reactions: 5 users

cosors

👀
The company has addressed the ISO certification issue, saying that that would be up to the customer to organize.
I agree with you, from another angle. I am close to the certification business and have worked through a lot of requirement catalogues. The subject of evaluation, and then of certification, is the product as a whole. Imagine a device like a terminal, for example: the device as a whole is certified, not each individual component, input, circuit board, chip, software, etc.
I reckon it's the same here. Anyone who has the standards at their disposal could check. The auditor also has a certain degree of freedom of interpretation. A chip, a circuit board, a system, a device is full of different IPs. It is up to the company that wants to sell it on a regulated market to have the device evaluated and certified, or to commission this.

I can imagine that at some point a seller of modules for, e.g., the automotive sector who wants its device certified approaches Brainchip with some questions from the catalogue and asks how Brainchip fulfils this or that requirement. The company then takes this information and incorporates it into its own answer catalogue, if it is not already known and supplied to the customer.
Sometimes you can simplify the work, instead of answering such questions every time, by certifying yourself to a certain degree; then only a reference to the binding certificate is necessary. However, this involves a lot of work and cost, and it raises the question of whether it makes sense. Brainchip does not sell any module or device on the respective market, e.g. the US or EU.
That's how I interpret it, and that's probably why he didn't get an answer to his question: someone rolled their eyes.
The certification business is complicated, especially when describing it to someone from the outside.

At this moment, for the first time, I see it as a clear advantage that Brainchip 'only' sells IP: no device, system, or module, and no chip of its own that would need to be certified for the respective sector.

But that's just my guess. As I said, it is complicated.

Valeo has that work.
 
Reactions: 21 users

Frangipani

Regular
Funny that ole Speck chip which ended in toys seems to be good enough for BMW:


View attachment 49343

Hi @Diogenese,

I was going to say something along the same lines as @Perhaps.

FYI, SynSense’s website still lists BMW as a partner under ‘Industry Ecology’.

114CF479-315A-4FA6-AA1D-7678DECEC0FB.jpeg


SynSense has a way to go before it gets ISO automotive certification.

Their video Speck NN chip ended up in a toy because it was insufficiently accurate for automotive and did not come up to Prophesee's expectations:

The First Market for Neuromorphic Processing: Toys - EE Times Podcast


I am somewhat puzzled you are claiming Speck did not meet Prophesee’s expectations, as SynSense developed Speck with its sister start-up iniVation, not with Prophesee. Are you by any chance mixing up Speck and Dynap-CNN? This was the SynSense processor used in the 2021 collaboration with Prophesee.


Prophesee continues to be listed as a partner both under ‘Industry Ecology’ (see above) and under ‘Academia cooperation’, by the way.


3CE6934C-937F-4172-AF63-FDE84C293F29.jpeg


And both SynSense and Brainchip are listed as partners on Prophesee’s website.

F5BE6E94-6814-4CBE-A381-629E9340343C.jpeg



According to this article dated March 30, 2023 ⬇️


… SynSense were still working with BMW at the time.

“Last year, SynSense announced a partnership with BMW to explore the integration of their brain-like chips into smart cockpits. This was SynSense’s first venture into the field of smart cockpits.

The company says Speck enables real-time visual information capturing, object recognition and detection, and other vision-based detection and interaction functions for BMW.”


This was published a week before the podcast you linked (about Speck being used in a toy) was released.

And why would the fact that Speck is used in a toy preclude it from being used in the automotive sector as well? How would that be different from Akida being not only used in future MB cars, but also in smart doorbells, hearing aids, wearables etc one (hopefully not too far away) day?

Mind you, I am not saying that Akida wouldn’t be the better choice, but I have yet to see any evidence that BMW and SynSense have since ended their collaboration.
 
Last edited:
  • Like
  • Fire
Reactions: 9 users

chapman89

Founding Member
Funny that ole Speck chip which ended in toys seems to be good enough for BMW:


View attachment 49343
Exactly!
They’re only now starting to “explore”, and as we know from experience, automotive takes a long time: years!
So SynSense and BMW can “explore” the benefits whilst TATA, Valeo and others drive on and implement Akida IP into products!

Brainchip has been involved with Valeo since late 2019. Also, SynSense don’t do IP, whereas Brainchip does!
Brainchip has also worked on proofs of concept with Valeo!
 
  • Like
  • Fire
  • Love
Reactions: 50 users

chapman89

Founding Member
Hi @Diogenese,

I was going to say something along the same lines as @Perhaps.

FYI, SynSense’s website still lists BMW as a partner under ‘Industry Ecology’.

View attachment 49344



I am somewhat puzzled you are claiming Speck did not meet Prophesee’s expectations, as SynSense developed Speck with its sister start-up iniVation, not with Prophesee. Are you by any chance mixing up Speck and Dynap-CNN? This was the SynSense processor used in the 2021 collaboration with Prophesee.


Prophesee continues to be listed as a partner both under ‘Industry Ecology’ (see above) and under ‘Academia cooperation’, by the way.


View attachment 49346

And both SynSense and Brainchip are listed as partners on Prophesee’s website.



According to this article dated March 30, 2023 ⬇️


… SynSense were still working with BMW at the time.

“Last year, SynSense announced a partnership with BMW to explore the integration of their brain-like chips into smart cockpits. This was SynSense’s first venture into the field of smart cockpits.

The company says Speck enables real-time visual information capturing, object recognition and detection, and other vision-based detection and interaction functions for BMW.”


This was published a week before the podcast you linked (about Speck being used in a toy) was released.

And why would the fact that Speck is used in a toy preclude it from being used in the automotive sector as well? How would that be different from Akida being not only used in future MB cars, but also in smart doorbells, hearing aids, wearables etc one (hopefully not too far away) day?

Mind you, I am not saying that Akida wouldn’t be the better choice, but I have yet to see any evidence that BMW and SynSense have since ended their collaboration.
Because in toys it doesn’t have to be accurate 100% of the time, whereas in automotive it has to be accurate all the time; that is what the SynSense employee said on that podcast @Diogenese shared.
Automotive goes through at least a 12-month safety check by the regulators and the like.
Nandan also said back in July 2023 that iniVation was a partner of theirs!

Prophesee also said in a podcast with Brainchip that it wasn’t until Prophesee came across Brainchip that they were able to tell a full story as they kept on hitting bottlenecks.
 
  • Like
  • Fire
  • Love
Reactions: 51 users
Morning all. Well, last week ended up being a good week, so let’s hope the momentum continues this week.

1699818124346.gif
 
Last edited:
  • Haha
  • Like
Reactions: 10 users

Iseki

Regular
  • Like
Reactions: 1 users

Frangipani

Regular
Because in toys it doesn’t have to be accurate 100% of the time, whereas in automotive it has to be accurate all the time; that is what the SynSense employee said on that podcast @Diogenese shared.
Automotive goes through at least a 12-month safety check by the regulators and the like.
Nandan also said back in July 2023 that iniVation was a partner of theirs!

Prophesee also said in a podcast with Brainchip that it wasn’t until Prophesee came across Brainchip that they were able to tell a full story as they kept on hitting bottlenecks.

That may well be true, but you are missing my point: it is weird to say that Speck did not come up to Prophesee’s expectations, as that statement sounds (to me at least) as if they had been collaborating on it, whereas Speck was in fact developed with Prophesee’s competitor iniVation.

I have to correct myself, though, insofar as I just noticed that the Speck module is a combination of SynSense’s Dynap-CNN processor with an iniVation DVS camera and hence does not involve a newer processor.

As for iniVation being a current partner of Brainchip, too (beyond that 2016 highway monitoring video): I am very well aware of it, as I was actually the one who shared the info here on TSE that Nandan Nayampally let it slip during the CVPR 2023 Workshop on Event-Based Vision … 😉

But what does this have to do with the fact that SynSense and BMW still seem to be collaborating?
 
  • Like
  • Thinking
Reactions: 6 users