BRN Discussion Ongoing

Frangipani

Regular
Here is yet another puzzle piece, primarily aimed at those who doubt that carmakers see huge potential in neuromorphic technology and who believe that Mercedes-Benz’s implementation of Akida in the Vision EQXX project car’s voice control system was just a one-off experiment. (It certainly would have helped if Markus Schäfer had meanwhile followed up with his promised second In The Loop blog entry on neuromorphic computing.)

I happened to come across a weekly podcast series by German national newspaper Frankfurter Allgemeine Zeitung - aka FAZ - called “D:Economy”, featuring current topics relating to digitalisation and technology.

The podcast’s guest in episode 286 (Dec 1, 2023) was Prof. Steven Peters, who - as you may recall - was the founder and Head of AI Research at Mercedes from 2016 to 2022, overseeing the Vision EQXX concept car project, before returning to academia full-time; he is now Professor of Automotive Engineering at TU Darmstadt. The podcast centred on autonomous driving, and Peters was asked for his assessment of the progress made so far.


https://www.faz.net/podcasts/f-a-z-...men-fahren-wirklich-herr-peters-19354415.html


There was one specific passage (from 3:07 min) that I would like to share with you all. After podcast host Alexander Armbruster had introduced his guest, they briefly talked about Steven Peters’ former role as MB’s Manager of AI Research. Peters said that his team had tried to develop and implement ML tools everywhere in the company where they deemed them to add value (he listed aerodynamics, user experience, chassis analysis), and when asked for a specific project, he cited AI-generated routines as an example of AI features his team had “vorgedacht” that have since found their way into the new E-Class. [The verb used here, vordenken, derives from the German noun Vordenker (= forward thinker) and has a very positive pioneering, innovative and visionary connotation.]

Steven Peters then went on to say the following:
(First a verbatim transcription of the German original, followed by a teamwork translation by DeepL & myself):


Steven Peters: “Wir haben außerdem sehr viele Themen begleitet, die jetzt auch gerade ‘nen sehr sehr großen Hype auslösen, sag’ ich mal - das ist alles, was mit Energieeffizienz und KI zu tun hat. Wir haben das in dem Projekt Vision EQXX damals auch demonstrieren dürfen: Da haben wir die Sprachbedienung erstmalig - nach unserer Kenntnis erstmalig - auf einem neuromorphischen Chip umgesetzt, d.h., der läuft extrem energieeffizient - im Prinzip hat er die gleiche, vor Kunde die gleiche [? etwas unverständlich, evtl. meinte er für den Kunden?] Funktion, es ändert sich gar nichts, nur es läuft eben viel energieeffizienter ab. Jetzt ist die Sprachbedienung keine große Energiesenke in dem Auto, aber es war ein Use Case, an dem man mal zeigen konnte, dass es geht, und unser großes Ziel jetzt - auch in meiner wissenschaftlichen Forschung an der TU Darmstadt - ist, für sicherheitsrelevante Themen, wie jetzt z.B. die Perzeption - die Objekterkennung beim automatisierten Fahren - auf solchen Chips, mit solchen neuronalen Netzen auch eben energieeffizienter zu machen. Und dann sind wir wirklich in einer hochsicherheitsrelevanten, offensichtlich hochsicherheitsrelevanten Anwendung, und das ist noch ‘ne, ‘ne harte Nuss.”

Alexander Armbruster: “Ist Mercedes da eher hinten dran oder vorne mit dabei? Es gibt ja amerikanische Konzerne, die, ähm, zugegeben auch viel mehr Marketing machen, auch bei viel kleineren Schritten sehr große Ankündigungen zum Teil machen, damit aber, wenn man es so vergleicht, ist Waymo, ähm, ist Waymo weit vorne oder fährt‘s mit Daimler auf derselben Höhe ungefähr, oder…?”

Steven Peters: “Also, ich würde tatsächlich sagen, dass Waymo weltweit führend ist, und Waymo benimmt sich, ähm, im positiven Sinne wie ‘ne Universität. Also die, die forschen sehr sehr viel, auch in der Grundlagenforschung, veröffentlichen auch einiges, und das ist wirklich beeindruckend. Und mir wäre jetzt kein vergleichbares Unternehmen bekannt, das in dieser Tiefe und in dieser Ernsthaftigkeit und mit so einem langen Atem dieses Thema erforscht und vorantreibt. Von daher glaub’ ich, sind die zu Recht auf Platz 1, muss man glaub’ ich so sagen. Unsere deutschen sind aber allesamt, äh, jetzt nicht irgendwie abgehängt oder weit dahinter - der Anwendungsfall ist nur ‘n anderer. Und wir sehen das ja - Mercedes war der erste jetzt in Deutschland, BMW ist jetzt gefolgt - mit dem sogenannten Level 3-System; das ist nicht ganz so viel, wie die Robotaxis von Waymo, die in San Francisco jetzt seit wenigen Monaten auch kommerziell im Einsatz sind, aber es ist [sic!] auch weltweit hier die ersten, die an private Kunden solche Fahrzeuge ausliefern, wo ich unter ganz engen, definierten Szenarien als Fahrender wirklich die Hände auch vom Lenkrad nehmen darf und auch die Verantwortung ans Fahrzeug übergebe, d.h. ich darf dann wirklich in diesen Szenarien, wenn das Fahrzeug übernommen hat, z.B. die FAZ lesen.“





Steven Peters: “In addition, we were involved in a lot of topics that are currently generating a lot of hype, I'd say - everything that has to do with energy efficiency and AI. We were also able to demonstrate this in the Vision EQXX project: we implemented voice control on a neuromorphic chip for the first time - to our knowledge for the first time - which means it runs extremely energy-efficiently. In principle it has the same, … [? somewhat incomprehensible in the original, perhaps he meant for the customer?] function, nothing changes at all, it just runs much more energy-efficiently. Now, voice control is not a major energy sink in the car, but it was a use case that showed it works, and our big goal now - also in my scientific research at TU Darmstadt - is to make safety-relevant topics, such as perception - object recognition in automated driving - more energy-efficient on such chips, with such neural networks. And then we will really be in a highly safety-relevant, obviously highly safety-relevant application [more freely translated “we’ll be dealing with…”], and that is still a tough nut to crack.”
[Highly safety-relevant is the literal translation of the adjective hochsicherheitsrelevant, which Steven Peters uses in the German original; I'd be inclined to use the English term safety-critical here, but I am not sure whether those two terms would be equivalent in automotive tech speak]


Alexander Armbruster: “Is Mercedes at the back of the pack or in front? There are American companies that, um, admittedly do a lot more marketing, make some very big announcements, even with much smaller steps, but if you compare it like this, is Waymo, um, is Waymo far ahead or is it roughly on the same level with Daimler...?”

Steven Peters:
“Well, I would actually say that Waymo is the global leader, and Waymo behaves, um, like a university in a positive sense. They do a lot of research, including basic research, and they also publish a lot, and that's really impressive. And I'm not aware of any comparable company that is researching and advancing this topic in such depth and with such seriousness and staying power, so I think it is fair to say they are deservedly the number one. But our German companies are all, er, not somehow left behind or far behind - the use case is just a different one. And we can see that - Mercedes was the first in Germany, BMW has now followed - with the so-called Level 3 system, which is not quite as advanced as Waymo's robotaxis, which have now been in commercial use for a few months in San Francisco, but it [sic!] is also the first in the world to deliver such vehicles to private customers, in which I as the driver can really take my hands off the wheel under very narrow, defined scenarios and also hand over legal responsibility [liability?] to the vehicle, i.e. in these scenarios, once the vehicle has taken over, I [as the person in the driver’s seat] am actually [legally] permitted to read the FAZ, for example."


So here is my take on our relationship with Mercedes:

While no specific neuromorphic company gets mentioned in the podcast, we know for a fact that it was Akida he is referring to when he talks about the voice control implementation on a neuromorphic chip in the Vision EQXX concept car.

And we just heard it practically from the horse’s mouth (even though Steven Peters is no longer employed by MB) that voice control will not remain the only application to be optimised by neuromorphic technology. Automotive companies have definitely set their eyes on even highly safety-relevant vision applications (pun intended). However, it may take a little longer than many of us BRN shareholders envisage before neuromorphic chips are implemented ubiquitously in cars (“… that is still a tough nut to crack.”)

Although we cannot be 100% sure that Mercedes is going to utilise Akida in the future (as opposed to other neuromorphic tech) - unless we get an official announcement - chances are they will stick with the commercially available, tried-and-tested option. And the recent reveal by former Brainchip ML intern Vishnu Prateek Kakaraparthi, which @Pom down under discovered, is evidence of continued interest in cooperation between MB and Brainchip until at least August 2023.

ECECEC59-CCF7-4456-88DF-BBFCA9ED03C8.jpeg



Note that the wording is “positioning for potential project collaboration with Mercedes”, so the way I read it, this is no proof of present collaboration, even though other posters have claimed it is. To me, it sounds more like MB wants to compare two or more neuromorphic solutions before making a final decision, although that of course raises the question of who the potential competition could be. Last year, Markus Schäfer mentioned both Brainchip and Intel as leading developers in the field in his first In the Loop blog entry. But how does that align with Loihi not being ready for commercialisation for another few years? 🤔

As for the recent CES Rob Telson interview: Yes, he mentions smart cabin features announced by “companies like Mercedes”, but stops short of claiming that it is indeed Brainchip’s technology enabling Mercedes to do what they are promoting. IMO this is no proof either that Mercedes is still a customer.

So while I am convinced that MB (and other carmakers) will implement neuromorphic tech in their future cars, and while I am optimistic about a continuing collaboration between MB and Brainchip for various reasons (e.g. the Mercedes logo continues to be shown on Brainchip’s website under “You’re in good company”), I wouldn’t say it is 100% certain from what we know so far. And I don’t think it is fair to insult people who question that claim, or who state they believe the lead times are much longer than what some posters here wish for.
 
  • Like
  • Love
  • Fire
Reactions: 83 users

Diogenese

Top 20
In 2022, TATA collectively filed over 60 patent applications involving NNs for a wide variety of applications.

This one is for ultrasonic detection of hand gestures and uses CNN2SNN conversion:

US2023325001A1: Acoustic System and Method Based Gesture Detection Using Spiking Neural Network (2022-04-09)

1706534013856.png
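For readers unfamiliar with what CNN2SNN conversion entails: the common rate-coding approach reuses a trained CNN's weights in integrate-and-fire neurons, whose firing rates then approximate the original ReLU activations. A minimal numpy sketch of that idea (my own illustration, not the patent's method or BrainChip's actual CNN2SNN tool):

```python
# Illustrative rate-coded CNN-to-SNN conversion: reuse trained
# weights in integrate-and-fire neurons and check that firing
# rates approximate the original ReLU activations.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained CNN layer: y = relu(W @ x)
W = rng.normal(size=(4, 8))
x = rng.random(8)
relu_out = np.maximum(W @ x, 0.0)

def spike_rate(W, x, T=2000, threshold=1.0):
    """Drive integrate-and-fire neurons with the constant current
    W @ x for T steps and return each neuron's mean spike output."""
    current = W @ x
    v = np.zeros(W.shape[0])       # membrane potentials
    spikes = np.zeros(W.shape[0])  # spike counts
    for _ in range(T):
        v += current
        fired = np.floor_divide(v, threshold).clip(min=0)
        spikes += fired            # allow >1 spike per step in this toy model
        v -= fired * threshold     # soft reset preserves the rate code
    return spikes * threshold / T

snn_out = spike_rate(W, x)
# snn_out matches relu_out to within threshold / T per neuron
```

Negative net currents never reach threshold, so those neurons stay silent, which is exactly the ReLU cutoff; the soft reset (subtracting the threshold instead of zeroing the potential) is what keeps the long-run rate proportional to the input.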



This one is from 2021, and omits the CNN step:

EP4170382A1: System and Method for Acoustic Based Gesture Tracking and Recognition Using Spiking Neural Network (2021-10-24)

1706534542214.png




This one is for cardio:

EP4292535A1: Identifying Cardiac Abnormalities in Multi-Lead ECGs Using Hybrid Neural Network with Fulcrum Based Data Re-Balancing (2022-06-17)

1706534084740.png
 
  • Like
  • Fire
  • Love
Reactions: 57 users
Interesting you mention that patent, as it sounds just like what Lumenci thought was interesting too :unsure:


In addition to this, we also expect many startups, cross-domain partnerships, IP licensing, mergers, acquisitions, etc., between organizations to utilize the advantages of neuromorphic computing. One such interesting case is the partnership between BrainChip and TCS for intelligent human gesture recognition.


 
  • Like
  • Fire
  • Love
Reactions: 32 users

Diogenese

Top 20
Hi Frangipani,

Thanks for this detailed analysis.

It's good to see Prof Peters is pursuing SNNs at TU Darmstadt.

@IloveLamp posted about a presentation by Prof Peters/Darmstadt back in November:
"A Spike in Efficiency"
 
  • Like
  • Fire
Reactions: 33 users

cosors

👀
Here is yet another puzzle piece primarily aimed at those doubting that carmakers see huge potential in neuromorphic technology and believing that Mercedes Benz’s implementation of Akida in the Vision EQXX project car’s voice control system was just a one-off experiment (it certainly would have helped if Markus Schäfer had meanwhile followed up with his promised second In The Loop blog entry on neuromorphic computing).

I happened to come across a weekly podcast series by German national newspaper Frankfurter Allgemeine Zeitung - aka FAZ - called “D:Economy”, featuring current topics relating to digitalisation and technology.

The podcast’s guest in episode 286 (Dec 1, 2023) was Prof. Steven Peters, who - as you may recall - was the founder and Head of AI Research at Mercedes from 2016-2022, overseeing the Vision EQXX concept car project, before returning to the world of academia full-time; he is now Professor for Automotive Engineering at TU Darmstadt. The podcast centred around autonomous driving, and the interviewee was asked for his assessment of the progress made so far.


https://www.faz.net/podcasts/f-a-z-...men-fahren-wirklich-herr-peters-19354415.html


There was one specific passage (from 3:07 min) that I would like to share with you all. After podcast host Alexander Armbruster had introduced his guest, they briefly talked about Steven Peters’ role as MB’s former Manager of AI Research. The latter said that his team had tried to develop and implement ML tools everywhere in the company where they deemed it to be of added value (he listed aerodynamics, user experience, chassis analysis), and when asked about an example of a specific project, he referred to AI-generated routines as an example of how AI features his team had “vorgedacht” had found their way into the new E-Class. [The verb used here derives from the German noun Vordenker (= forward thinker) and has a very positive pioneering, innovative and visionary connation].

Steven Peters then went on to say the following:
(First a verbatim transcription of the German original, followed by a teamwork translation by DeepL & myself):


Steven Peters: “Wir haben außerdem sehr viele Themen begleitet, die jetzt auch gerade ‘nen sehr sehr großen Hype auslösen, sag’ ich mal - das ist alles, was mit Energieeffizienz und KI zu tun hat. Wir haben das in dem Projekt Vision EQXX damals auch demonstrieren dürfen: Da haben wir die Sprachbedienung erstmalig - nach unserer Kenntnis erstmalig - auf einem neuromorphischen Chip umgesetzt, d.h., der läuft extrem energieeffizient - im Prinzip hat er die gleiche, vor Kunde die gleiche [? etwas unverständlich, evtl. meinte er für den Kunden?] Funktion, es ändert sich gar nichts, nur es läuft eben viel energieeffizienter ab. Jetzt ist die Sprachbedienung keine große Energiesenke in dem Auto, aber es war ein Use Case, an dem man mal zeigen konnte, dass es geht, und unser großes Ziel jetzt - auch in meiner wissenschaftlichen Forschung an der TU Darmstadt - ist, für sicherheitsrelevante Themen, wie jetzt z.B. die Perzeption - die Objekterkennung beim automatisierten Fahren - auf solchen Chips, mit solchen neuronalen Netzen auch eben energieeffizienter zu machen. Und dann sind wir wirklich in einer hochsicherheitsrelevanten, offensichtlich hochsicherheitsrelevanten Anwendung, und das ist noch ‘ne, ‘ne harte Nuss.”

Alexander Armbruster: “Ist Mercedes da eher hinten dran oder vorne mit dabei? Es gibt ja amerikanische Konzerne, die, ähm, zugegeben auch viel mehr Marketing machen, auch bei viel kleineren Schritten sehr große Ankündigungen zum Teil machen, damit aber, wenn man es so vergleicht, ist Waymo, ähm, ist Waymo weit vorne oder fährt‘s mit Daimler auf derselben Höhe ungefähr, oder…?”

Steven Peters: “Also, ich würde tatsächlich sagen, dass Waymo weltweit führend ist, und Waymo benimmt sich, ähm, im positiven Sinne wie ‘ne Universität. Also die, die forschen sehr sehr viel, auch in der Grundlagenforschung, veröffentlichen auch einiges, und das ist wirklich beeindruckend. Und mir wäre jetzt kein vergleichbares Unternehmen bekannt, das in dieser Tiefe und in dieser Ernsthaftigkeit und mit so einem langen Atem dieses Thema erforscht und vorantreibt. Von daher glaub’ ich, sind die zu Recht auf Platz 1, muss man glaub’ ich so sagen. Unsere deutschen sind aber allesamt, äh, jetzt nicht irgendwie abgehängt oder weit dahinter - der Anwendungsfall ist nur ‘n anderer. Und wir sehen das ja - Mercedes war der erste jetzt in Deutschland, BMW ist jetzt gefolgt - mit dem sogenannten Level 3-System; das ist nicht ganz so viel, wie die Robotaxis von Waymo, die in San Francisco jetzt seit wenigen Monaten auch kommerziell im Einsatz sind, aber es ist [sic!] auch weltweit hier die ersten, die an private Kunden solche Fahrzeuge ausliefern, wo ich unter ganz engen, definierten Szenarien als Fahrender wirklich die Hände auch vom Lenkrad nehmen darf und auch die Verantwortung ans Fahrzeug übergebe, d.h. ich darf dann wirklich in diesen Szenarien, wenn das Fahrzeug übernommen hat, z.B. die FAZ lesen.“





Steven Peters: “In addition, we were involved in a lot of topics that are currently generating a lot of hype, I'd say - everything that has to do with energy efficiency and AI. We were also able to demonstrate this in the Vision EQXX project: we implemented voice control on a neuromorphic chip for the first time - to our knowledge for the first time - which means it runs extremely energy-efficiently. In principle it has the same, … [? somewhat incomprehensible in the original, perhaps he meant ‘for the customer?] function, nothing changes at all, it just runs much more energy-efficiently. Now, voice control is not a major energy sink in the car, but it was a use case that showed it works, and our big goal now - also in my scientific research at TU Darmstadt - is to make safety-relevant topics, such as perception - object recognition in automated driving - more energy-efficient on such chips, with such neural networks. And then we will really be in a highly safety-relevant, obviously highly safety-relevant application [more freely translated “we’ll be dealing with…”], and that is still a tough nut to crack.
[Highly safety-relevant is the literal translation of the adjective hochsicherheitsrelevant, which Steven Peters uses in the German original; I‘d be inclined to use the English translation safety-critical here, but I am not sure whether those two terms would be equivalent in automotive tech speak]


Alexander Armbruster: “Is Mercedes at the back of the pack or in front? There are American companies that, um, admittedly do a lot more marketing, make some very big announcements, even with much smaller steps, but if you compare it like this, is Waymo, um, is Waymo far ahead or is it roughly on the same level with Daimler...?”

Steven Peters:
“Well, I would actually say that Waymo is the global leader, and Waymo behaves, um, like a university in a positive sense. They do a lot of research, including basic research, and they also publish a lot, and that's really impressive. And I'm not aware of any comparable company that is researching and advancing this topic in such depth and with such seriousness and staying power, so I think it is fair to say they are deservedly the number one. But our German companies are all, er, not somehow left behind or far behind - the use case is just a different one. And we can see that - Mercedes was the first in Germany, BMW has now followed - with the so-called Level 3 system, which is not quite as advanced as Waymo's robotaxis, which have now been in commercial use for a few months in San Francisco, but it [sic!] is also the first in the world to deliver such vehicles to private customers, in which I as the driver can really take my hands off the wheel under very narrow, defined scenarios and also hand over legal responsibility [liability?] to the vehicle, i.e. in these scenarios, once the vehicle has taken over, I [as the person in the driver’s seat] am actually [legally] permitted to read the FAZ, for example."


So here is my take on our relationship with Mercedes:

While no specific neuromorphic company gets mentioned in the podcast, we know for a fact that it was Akida he is referring to when he talks about the voice control implementation on a neuromorphic chip in the Vision EQXX concept car.

And we just heard it practically from the horse’s mouth (even though Steven Peters is no longer employed by MB) that voice control will not remain the only application to be optimised by neuromorphic technology. Automotive companies have definitely set their eyes on highly safety-relevant vision applications, too (pun intended). However, it may take a little longer than many of us BRN shareholders envisage before neuromorphic chips are implemented ubiquitously in cars (“…that is still a tough nut to crack.”)

Although we cannot be 100% sure that Mercedes is going to utilise Akida in the future (and not other neuromorphic tech) - unless we get an official announcement - chances are they will stick with the commercially available, tried-and-tested option, and the recent reveal by former Brainchip ML intern Vishnu Prateek Kakaraparthi, which @Pom down under had discovered, is evidence of continued interest in cooperation between MB and Brainchip until at least August 2023.

View attachment 55504


Note that the wording is “positioning for potential project collaboration with Mercedes”, so the way I read it this is no proof of present collaboration, even though other posters have claimed so. To me, it sounds more like MB wants to compare two or more neuromorphic solutions before making a final decision, although that of course begs the question of who could be the potential competition. Last year, Markus Schäfer mentioned both Brainchip and Intel as leading developers in the field in his first In the Loop blog entry. But how does that align with Loihi not being ready for commercialisation for another few years? 🤔

As for the recent CES Rob Telson interview: Yes, he mentions smart cabin features announced by “companies like Mercedes”, but stops short of claiming that it is indeed Brainchip’s technology enabling Mercedes to do what they are promoting. IMO this is no proof either that Mercedes is still a customer.

So while I am convinced that MB (and other carmakers) will implement neuromorphic tech into their future cars and I am optimistic about a continuing collaboration between MB and Brainchip for various reasons (e.g. the Mercedes logo continues to be shown on Brainchip’s website under “You’re in good company”), I wouldn’t say it is 100% certain from what we know so far, and I don’t think it is fair to insult people who question the claim made by some that it is certain, or who state they believe the lead times are much longer than what some posters here wish for.
Thank you for your effort and work! I interpret it in the same way as you do.

And after reading that, I wonder how far Loihi will or could be scaled at all, in theory and in the future. It's all about cost, and AKD is perfectly scalable, however far your wallet allows.
Is Loihi scalable? I apologise if this question has already been asked and answered.
 
  • Like
Reactions: 14 users
Heavenly posting on us and our brand reboot but reposted by Stobbs who say they are partners with Heavenly.

Wonder if we do any work with them also :unsure:

Stobbs -

About us​

Stobbs IAM blends IP expertise with a knowledge of business, a passion for brands and a practical approach to problem solving. Our legal service is (1) highest quality and (2) doesn’t break the bank. We thrive in the grey, but speak in black and white. Founded in 2013, within three years, Stobbs was awarded Tier 1 status by The Legal 500. In 2016 and 2018, Stobbs won the Managing Intellectual Property's award for UK Trade Mark Firm of the year. In 2017 Stobbs was awarded Tier 1 status in Managing Intellectual Property's IP Stars 2017, and obtained the rating of "highly recommended" by the World Trademark Review.


View organization page for Stobbs
Stobbs
3,898 followers
4d

A forward thinking rebrand for a forward thinking brand from our partners at Heavenly ...
View organization page for Heavenly
Heavenly
3,035 followers
5d
2024's predicted to be the year that AI will really hit the mainstream, so we're bound to hear a lot more about companies like BrainChip , with their revolutionary neuromorphic AI capabilities, making waves in key industries such as healthcare, automotive, communications and manufacturing. We worked with Brainchip on a brand reboot, helping to establish and cement their position as a global leader in neuromorphic AI. Leading with magic, substantiating with logic. #branding #AI #tech
 
  • Like
  • Thinking
  • Fire
Reactions: 19 users
In 2022, TATA collectively filed over 60 patent applications, including ones using NNs for a wide variety of applications.

This one is for ultrasonic detection of hand gestures and uses CNN2SNN conversion:

US2023325001A1 ACOUSTIC SYSTEM AND METHOD BASED GESTURE DETECTION USING SPIKING NEURAL NETWORK 20220409

View attachment 55506
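For anyone wondering what the “CNN2SNN conversion” mentioned in the patent refers to: the general idea behind such conversion tools (BrainChip’s toolchain includes one of the same name) is that a trained CNN’s ReLU activations map onto the firing rates of integrate-and-fire neurons. A minimal sketch of that principle, with toy weights and inputs of my own choosing (not the patent’s actual network):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def if_neuron_rate(current, T=1000, threshold=1.0):
    """Integrate-and-fire neuron with reset-by-subtraction.
    Returns the firing rate (spikes per timestep) for a constant input current."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += current
        if v >= threshold:
            v -= threshold
            spikes += 1
    return spikes / T

# A toy "trained" weight vector and an input patch
w = np.array([0.4, -0.2, 0.3])
x = np.array([0.5, 1.0, 0.2])

analog = relu(w @ x)             # what the CNN layer computes
spiking = if_neuron_rate(w @ x)  # what the converted SNN neuron computes

print(analog, spiking)  # the spike rate approximates the ReLU output
```

Negative input currents never push the membrane over threshold, so the neuron stays silent, which is exactly the ReLU's behaviour for negative pre-activations.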


This one is from 2021, and omits the CNN step:

EP4170382A1 SYSTEM AND METHOD FOR ACOUSTIC BASED GESTURE TRACKING AND RECOGNITION USING SPIKING NEURAL NETWORK 20211024

View attachment 55508



This one is for cardio:

EP4292535A1 IDENTIFYING CARDIAC ABNORMALITIES IN MULTI-LEAD ECGS USING HYBRID NEURAL NETWORK WITH FULCRUM BASED DATA RE-BALANCING 20220617

View attachment 55507
Looks like TATA have been busy bees…
When do we get our honey?…



bees-tigger-movie.gif
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Frangipani

Regular
Hi Frangipani,

Thanks for this detailed analysis.

It's good to see Prof Peters is pursuing SNNs at Darmstadt TU.

@IloveLamp posted about a presentation by Prof Peters/Darmstadt back in November:
"A Spike in Efficiency"

Hi Diogenese,

yup - in fact, I myself had posted about that same presentation even earlier, back in October… 😉

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-385765

… and by the way, @Pepsin kindly provided a summary of the webinar here, which was great for those of us (including me) who couldn’t make it that day:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-398145


Actually, we’ve known for quite a while that Prof Peters continued researching SNNs after leaving MB for TU Darmstadt - just have a look at his comment on Markus Schäfer’s In the Loop post on neuromorphic computing:


72393DA3-10C2-42A5-851B-71E00475AD51.jpeg


BFC1149A-9978-4AD7-A6AC-7A7707DD70B5.jpeg
 
  • Like
  • Love
Reactions: 22 users

Frangipani

Regular

View attachment 55023

Thanks Supersonic and Chapman!
I love the "new" look Brainchip, way more professional and clean. I'd almost forgotten the old look.
I also like that Brainchip.ai links through to them.

As for this brand reboot by Heavenly - didn’t all this happen in mid-2022 already? Or am I missing something here?

Don’t get me wrong, I love what they did with the Brainchip rebranding, but I am just a bit puzzled as to why they would make old news look current.
Seems to me they are basically using the fact that AI is such a buzzword nowadays for publicity? #AI
 
  • Like
  • Haha
Reactions: 11 users

Frangipani

Regular
Interesting like by our VP of Sales, EMEA, Alf Kuchenbuch:



B721FF1F-4147-4406-A618-706E7C00796A.jpeg




146BFDAC-F3EB-4B1F-9888-CF137AEEBB5E.jpeg

972033C2-6F84-440B-8205-13D0F1F405A0.jpeg



And interestingly even a 👍🏻 for Gregor Lenz’s start-up Neurobus… And one more from another Brainchip employee - Gilles Bézard (remember my post yesterday? Gilles and Gregor are both on the scientific committee for the inaugural SPAICE 2024 conference). Oh, hang on, and Wouter Benoot, CTO of the Belgian SpaceTech company EDGX likes this post as well! (https://brainchip.com/edgx-announce...sruptive-data-processing-solutions-for-space/)
Are they all just a happy European space tech family, liking each other’s posts, or is there more to it after all? 🤔 Just when I thought that Neurobus was no longer a realistic dot join for the time being…


AD9609D2-1734-49CB-BD4C-A529E07E1DF0.jpeg
 

Attachments

  • 759D2F7D-FE27-4ACC-8440-44E812F2CB49.jpeg
    759D2F7D-FE27-4ACC-8440-44E812F2CB49.jpeg
    390 KB · Views: 53
  • Like
  • Fire
  • Love
Reactions: 29 users

IloveLamp

Top 20
Last edited:
  • Like
  • Thinking
Reactions: 17 users

IloveLamp

Top 20
🤔


Explore the basics of voice enablement and NXP’s i.MX RT106V-Based Smart Voice User Interface Solution for IoT devices. This EdgeReady solution for both local and online voice control leverages the i.MX RT106V crossover MCU with integrated Voice Intelligent Technology (VIT) speech recognition offering a voice user interface for touchless applications. This ultra-small form-factor and production ready hardware design comes with fully integrated software running on FreeRTOS for a quick out-of-the-box evaluation. The solution also minimizes time to market, risk and development effort enabling OEMs to easily add voice to their smart home and smart appliance products.
Screenshot_20240130_065127_LinkedIn.jpg
 
  • Like
  • Fire
  • Thinking
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
TDK's new low power smart sensors. Mentions their engineers "built an alternative in which a machine learning algorithm could recognize motion patterns at the sensor level, to determine whether further data processing is required. In that way a device could be ultra-low power."

Seems a bit fiddly and not very versatile IMO.



New Technology Aims at Easy, Low Power IoT for Manufacturing​

BY CLAIRE SWEDBERG

The new BLE-enabled module and software platform from TDK are part of the company’s drive to make sensor-based equipment monitoring easier with AI.

Jan 29, 2024

For those in industrial settings, digital management of machine operation may be easier, with a new solution and sensor from electronic components company TDK Corporation.
The company is offering its Smart Sensing Platform to enable faster deployment for Internet of Things (IoT) devices or other wireless sensing technology.
The company has released its Bluetooth mesh connected I3 device - with AI-on-the-edge capability for the industrial IoT. It captures sensor data and shares it, as well as inference data, wirelessly, yet requires little power thanks to its ability to send only relevant data, when it’s needed. The company demonstrated the new products at CES.

Making Sensors Smarter

TDK’s primary businesses are divided into three main groups: ICT (information and communication technology), automotive, and industrial and energy. Within these groups, one of the technology solutions consists of IoT-based sensors that measure conditions and share that data via wired or wireless connections.

“We've been [primarily] focused on making our sensors smarter,” says Jim Tran, TDK USA’s general manager, and part of that effort is edge computing. “When we say edge, [in this case] we really mean at the very edge — on the sensor itself.”
As a result, the company has built its SmartEdge machine learning algorithm that links to a motion sensor within a device so that it can detect a motion and transmit accordingly. When the device is stationary, it can remain dormant.

Low Power I3 Sensor

Tran notes most wearable technology comes with a form of motion sensing. Traditionally, devices process the motion data on a CPU or other dedicated hardware to identify what that motion means.
TDK engineers built an alternative in which a machine learning algorithm could recognize motion patterns at the sensor level, to determine whether further data processing is required. In that way a device could be ultra-low power.
The company refers to the resulting technology as “sipping currents” or micro droplets of energy that are required to track conditions with the new devices.
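TDK hasn’t published how its sensor-level algorithm works internally, but the idea of deciding at the sensor whether further processing is warranted can be sketched generically. Here a simple variance threshold on a window of accelerometer samples stands in for the (unknown) actual algorithm; the threshold and noise levels are illustrative, not TDK’s:

```python
import numpy as np

def motion_gate(accel_window, var_threshold=0.05):
    """Decide at the sensor whether a window of accelerometer samples
    shows enough motion to justify waking the main processing pipeline."""
    magnitude = np.linalg.norm(accel_window, axis=1)  # per-sample |a|
    return float(np.var(magnitude)) > var_threshold

rng = np.random.default_rng(0)
# stationary device: gravity plus tiny sensor noise
stationary = np.tile([0.0, 0.0, 1.0], (50, 1)) + rng.normal(0, 0.01, (50, 3))
# moving device: large, varying accelerations
moving = np.tile([0.0, 0.0, 1.0], (50, 1)) + rng.normal(0, 0.5, (50, 3))

print(motion_gate(stationary))  # False: device stays dormant
print(motion_gate(moving))      # True: forward data for full processing
```

The point of doing this on the sensor itself is that the CPU, radio and cloud link stay asleep for every window that fails the gate, which is where the “sipping currents” power profile comes from.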

The I3 module, measuring about the size of a quarter, is a resulting product for electronic device developers focused on measuring machine health. It comes with built-in BLE beacon for industrial mesh networks.

SmartEdge Algorithm

The overall solution that enables the latest, low energy, IoT deployments is TDK’s Smart Sensing Platform, featuring sensors and software with edge AI, connectivity and cloud computing. The goal is to make deployments easier and more seamless, with always-on interactive apps and services. The solution leverages the company’s SmartEdge AI algorithms.
The algorithms enable users to run machine learning at the edge, using select sensor features such as vibration profile or temperature requirements to identify what is taking place and when data needs to be forwarded to the server.
Users could apply the TDK I3 or other IoT sensor devices on machinery in a factory or industrial site, which then begin tracking data about the sound, vibration or temperature being emitted by each machine. The devices could then use a Bluetooth mesh network to forward the data to a Wi-Fi access point when necessary.
However, the system is intended to send only relevant data to the cloud. TDK’s smart edge platform infers specific conditions before transmitting that data.

Ease of Integration With Less Engineering

For developers building AI intelligence into a sensor, the process requires several steps, says Tran.
“You need to be able to make that algorithm super small for the tiniest memory footprint, for cost and latency,” he said.
He adds that the next step is having AI engineers available to write an algorithm for each deployment, and in some cases, for each kind of sensor device, or equipment that the device is monitoring.
To that end, TDK recently acquired Carnegie Mellon spinoff Qeexo, which created developer tools that simplify the process.

A Set of Machine Learning Algorithms

In most cases, engineers or developers would need to conduct modeling and coding in C, C++ or Python - leveraging trained domain experts who understand the data so they can label it. AI engineers can bypass several processes with this technology, however. Using the TDK solution, a sensor can employ any of 18 machine-learning algorithms designed for edge sensing.
Developers select the algorithm, convert it to machine code and then download it to a sensor. They then apply the sensor to the machine to begin monitoring the data.

Transforming Manufacturing to Industry 4.0

“We believe that this type of solution is really needed in order to scale for Industry 4.0,” says Tran.
The technology is being adopted by companies such as factories that use smart sensors, as well as by solution providers that license TDK’s tool and create their own edge AI products.
“We're focused on using this tool combined with our devices like the I3 and to really help factories transform themselves into the digital world,” Tran says, adding for him, it’s a way to democratize IoT and AI solutions.
The goal is to make it possible for companies to deploy a solution without hiring outside engineers.

Do-it-Yourself Development

In that way, some companies could develop their IoT solution without needing to hire outside expertise. “They can do it themselves which simplifies everything dramatically,” said David Almoslino, corporate marketing SVP.
Last year, Procter and Gamble announced the use of this tool for their product development, which the company says reduces the development time of their AI algorithms. They have not shared how they use the technology specifically.
The technology helps companies better manage conditions even at customer sites. For instance, firms that sell or lease equipment used at manufacturing sites can identify problems that could arise. In the case of a break-down, they would have access to troubleshooting data even before service personnel are onsite.

Key Takeaways:

  • TDK releases new sensor and software to enable easier, low energy IoT for equipment monitoring.
  • Companies such as Procter and Gamble are using TDK’s solution for AI on the edge functionality.
 
  • Like
  • Fire
  • Thinking
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

yummy-delicious.gif

Smart Sensor Market Set to Soar Past USD 125.30 Billion by 2030

The global Smart Sensor market size is expected to reach USD 125.30 billion by 2030 and exhibit a CAGR of 18.1% in the forecast period (2023−2030), according to Skyquest's latest research report. Connectivity and IoT adoption, Industry 4.0 initiatives, demand for real-time data, automation trends, predictive maintenance needs, environmental monitoring, cost reduction strategies, regulatory compliance requirements, advancements in sensor technology, increasing use in healthcare, growth in the automotive sector, rising consumer interest in smart devices, and expansion of smart cities and infrastructure projects are fueling the market's growth.

January 25, 2024 10:00 ET| Source: SkyQuest Technology Consulting Pvt. Ltd.Follow



Westford, USA, Jan. 25, 2024 (GLOBE NEWSWIRE) -- According to SkyQuest's latest global research of the Smart Sensor market, miniaturization of sensors, edge computing integration, AI and machine learning applications, increased use of wireless communication, sensor fusion for multi-modal data, growth in wearable sensor technology, expansion of the Internet of Things (IoT), emphasis on energy-efficient sensors, adoption of LiDAR and other 3D sensing technologies, enhanced security and privacy measures, and development of sensors for autonomous vehicles and robotics, are the trends that aid in the market's growth.
Browse in-depth TOC on "Smart Sensor Market"
  • Pages - 157
  • Tables - 65
  • Figures - 75
A smart sensor is a sensor that incorporates computation and communication capabilities. It can collect data from its environment, process it, and make decisions based on it, without the need for human intervention. Smart sensors can also communicate with other sensors and devices, to share data and collaborate on tasks.
Get a sample copy of this report:
https://www.skyquestt.com/sample-request/smart-sensor-market
Prominent Players in Smart Sensor Market
  • ABB
  • Analog Devices
  • Bosch
  • Eaton
  • GE
  • Honeywell
  • Infineon
  • Microchip
  • NXP
  • Panasonic
  • Qualcomm
  • Renesas
  • Siemens
  • STMicroelectronics
  • TE Connectivity
  • Texas Instruments
  • Würth Elektronik
  • Amphenol
  • Vishay
  • Murata
  • TDK
  • Omron
  • Sensirion
Temperature Sensors Demand to Grow Substantially in the Forecast Period
Temperature sensors dominated the global market as they find applications in a wide range of industries and use cases, including consumer electronics, automotive, industrial processes, and healthcare. They are vital for monitoring and controlling temperature in various environments and processes.
Browse summary of the report and Complete Table of Contents (ToC):
https://www.skyquestt.com/report/smart-sensor-market
Internet of Things (IoT) Devices is the Leading Application Segment
In terms of application, the Internet of Things (IoT) Devices is the leading segment as they gained widespread adoption across industries, from smart homes to industrial automation and healthcare. IoT devices rely on various sensors to collect data and enable smart decision-making, creating a strong demand for smart sensors.
North America is the leading Market Due to the Technological Advancements
North America is a hub for technological innovation and research and development. Many leading sensor manufacturers and technology companies are based in the United States and Canada. This fosters the development and adoption of cutting-edge smart sensor technologies. The region's advanced infrastructure and strong internet connectivity support the growth of IoT applications, which are major consumers of smart sensors.
A recent report thoroughly analyzes the major players operating within the Smart Sensor market. This comprehensive evaluation has considered several crucial factors, such as collaborations, mergers, innovative business policies, and strategies, providing invaluable insights into the key trends and breakthroughs in the market. Additionally, the report has carefully scrutinized the market share of the top segments and presented a detailed geographic analysis. Finally, the report has highlighted the major players in the industry and their ongoing endeavors to develop innovative solutions that cater to the ever-increasing demand for Smart Sensor.
 
  • Like
  • Fire
  • Thinking
Reactions: 23 users

Doz

Regular
As at 16/01/24, Mr Louis Dinardo still held 4,921,536 shares.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Diogenese

Top 20
Oh dear!

Radial ply tyres on a Model T.

https://www.amd.com/content/dam/amd...-docs/white-papers/amd-cdna-3-white-paper.pdf

AMD has pioneered the evolution of system architecture over the last decade to unify CPU and GPU computing at an unparalleled scale. AMD Instinct™ MI250X, at the heart of the first Exascale system, was enabled by the AMD CDNA™ 2 architecture and advanced packaging, as well as AMD Infinity Fabric™, connecting the Instinct GPUs and AMD EPYC 7453s CPU with cache coherence.
...
Last, the AMD Matrix Core Technology is now capable of efficiently handling sparse data to optimize machine learning workloads using matrix INT8, FP8, FP16, BF16. In many neural networks, a high percentage of the data will be zeroes - this is quite common in attention modules in transformer-based networks like most LLMs and also in convolution-based networks. The Matrix Core can take advantage of situations where at least two values within a group of four input values are zero (50%+ sparsity). The sparse non-zero data is represented in a compact and dense form with additional metadata tracking the locations. The dense representation of the data fits directly into the Matrix core pipeline and enables doubling the computational throughput up to an incredible 8K operations per clock for a CU.
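The 50%+ sparsity scheme described in that last paragraph can be illustrated in a few lines: each group of four input values containing at least two zeros is stored as two values plus metadata recording their positions, so the dot product needs only half the multiply-accumulates. A toy sketch of the principle (not AMD’s actual encoding or data layout):

```python
import numpy as np

def compress_2_4(values):
    """Compact a group of four values with at least two zeros into the
    (at most) two non-zeros plus metadata tracking their positions -
    the dense form the matrix pipeline consumes."""
    idx = np.flatnonzero(values)
    assert len(idx) <= 2, "group must contain at least two zeros (50%+ sparsity)"
    data = np.zeros(2)
    meta = np.zeros(2, dtype=int)
    data[:len(idx)] = values[idx]
    meta[:len(idx)] = idx
    return data, meta

def dot_sparse(data, meta, dense_operand):
    # only 2 multiply-accumulates instead of 4 -> doubled throughput
    return data[0] * dense_operand[meta[0]] + data[1] * dense_operand[meta[1]]

group = np.array([0.0, 1.5, 0.0, -2.0])  # 50% sparse, e.g. post-ReLU activations
other = np.array([1.0, 2.0, 3.0, 4.0])

data, meta = compress_2_4(group)
print(dot_sparse(data, meta, other), group @ other)  # both -5.0
```

The compressed product matches the full dense dot product exactly while touching only half the values, which is where the claimed throughput doubling comes from.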
 
  • Like
  • Haha
  • Fire
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Wonder how many hits we're inadvertently going to get on our website because of this news hot off the press! 😝

Screenshot 2024-01-30 at 10.37.50 am.png
 
  • Haha
  • Like
  • Wow
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

AQTRONICS FORMS STRATEGIC ALLIANCE WITH PROPHESEE TO TRANSFORM VISION TECHNOLOGY​

Through this partnership, Aqtronics will facilitate the distribution of Prophesee’s cutting-edge sensors and evaluation kits in the Indian market

Aqtronics Forms Strategic Alliance With Prophesee To Transform Vision Technology




Outlook Start-Up Desk​

POSTED ON JANUARY 29, 2024 5:52 PM
Aqtronics, a semiconductor distribution platform, has recently announced its strategic partnership with Prophesee, a pioneering company in the event-based vision systems space.
The partnership will leverage Aqtronics' distribution network and expertise in the Indian market to introduce and launch Prophesee's high-tech event-based vision sensors in India.
Through this partnership, Aqtronics will facilitate the distribution of Prophesee’s cutting-edge sensors and evaluation kits in the Indian market, namely the EVK3 with the cutting-edge GenX 320 sensor and the EVK4 HD featuring the Sony IMX636 sensor, the statement added.
The kit represents the pinnacle of vision technology that will eventually transform into an invaluable asset in the domain of IoT, Robotics, Quality Control and Monitoring, Security, and Autonomous systems.
Rangaprasad Magadi of Aqtronics, commented, “This collaboration marks a significant step forward in bringing state-of-the-art event-based vision technology to the Indian market. It is a testament to our commitment to driving innovation and delivering cutting-edge technology solutions to our customers in the industry.”
In the near future, Aqtronics will continue to strengthen this vigorous alliance with Prophesee by entering into partnerships with prominent OEM players in order to package Prophesee’s event-based sensors into camera products for targeting markets like industrial automation, robotics, and security and surveillance, the statement further added.
 
  • Like
  • Love
  • Fire
Reactions: 20 users