BRN Discussion Ongoing

Diogenese

Top 20
Thought I’d found a new discovery here, but upon a Google search for more info it turns out our old mate @Fact Finder beat me to it and made the link back on 19/07/21 on HC. Damn he’s good 😅

View attachment 1348

Anyway that predates my BRN days (unfortunately!) and I’d done the research already so thought I’d continue my post - for two reasons:

1. Share the knowledge with more recent holders who may not have seen on HC
2. Reignite the discussion, as I personally don’t know if it had/has legs. Hopefully FF has something more to add 😃


It’s not a silver bullet but yet another interesting dot joined on our path to Brainchip glory
View attachment 1338

Milind Joshi is the Intellectual Property Officer at Brainchip - since May 2021

Prior to this Milind spent almost 7 years at Samsung in India - at their R&D Institute


View attachment 1340



View attachment 1341
View attachment 1343


9 months ago he asked on LinkedIn for:

recommendations for patent watch tool that sends email notification for each new patent publication or grant (USPTO and EPO at least) of a competing firm(s)?
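As an aside, the core of such a patent-watch tool is just "fetch the latest publication list, diff it against what you've already seen, notify about the new ones". A minimal Python sketch of that diff-and-remember step (the publication numbers are made up, and the fetch and email stages are deliberately left as stubs, since they depend on which patent data service you use):

```python
import json
from pathlib import Path

# Hypothetical feed entries as (publication_number, title) pairs; a real tool
# would fetch these from a USPTO/EPO search feed for the competitor's name.
def check_for_new(entries, state_file="seen_patents.json"):
    """Return entries not seen before, and remember them for next time."""
    path = Path(state_file)
    seen = set(json.loads(path.read_text())) if path.exists() else set()
    new = [(pid, title) for pid, title in entries if pid not in seen]
    seen.update(pid for pid, _ in new)
    path.write_text(json.dumps(sorted(seen)))
    return new  # hand these to an email/notification step

# Example run with made-up publication numbers:
batch = [("US20220100001A1", "Spiking neural processor"),
         ("US20220100002A1", "Event-based sensor fusion")]
print(check_for_new(batch))
```

Run it twice with the same batch and the second call returns nothing new, which is exactly the behaviour a notification tool needs.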

View attachment 1344


More recently he liked this post by Mercedes about the Vision EQXX


View attachment 1345

Cheers
TLS
That's an impressive CV.

Maybe @uiux could apply for the patent watch job.
 
  • Like
  • Love
Reactions: 9 users



cyber

Member
Mercedes were never announced as an EAP but Ford were... I wonder if they were trying to outdo Ford?
Hi Fox,
Well the Mercedes / BRN collaboration started well before the EAP program was actually announced.
I've always had the view that this collaboration with the European auto maker, and their interest in BRN and all things neuromorphic, was strategically so important to us that it probably set them apart from the other EAPs.
It's probably the case that all the EAPs are ranked in terms of their perceived importance to the Co.
Anyway, IMO the sheer importance of this collaboration with the auto maker probably puts Mercedes in their very own VIP EAP category.
The new CEO had the vision to put into words exactly how strategically important this collaboration is for BRN.....
because of our market lead, BRN intends to grab the opportunity through this collaboration, with Akida establishing itself as the de facto standard in automotive AI at the Edge.
Love the sound of that.
Cheers
Cyber
 
  • Like
  • Fire
Reactions: 49 users

Proga

Regular
I agree on the VIP EAP category for 4 reasons

1/ the massive amount of chips an EV requires

2/ Mercedes stating two things in their article about the EQXX on their website: first, that "Mercedes-Benz engineers developed systems based on BrainChip’s Akida hardware and software", and in the next paragraph, that when applied on scale throughout a vehicle, they have the potential to radically reduce the energy needed to run the latest AI technologies.

3/ where Mercedes leads the rest will quickly follow

4/ royalties

Putting on my broken record, this will take time for the engineers to put everything together and get it into production. As Musk has said on numerous occasions, the hard part is setting up the manufacturing plants to produce the new designs at scale.
 
Last edited:
  • Like
Reactions: 30 users

jtardif999

Regular
Just a thought about the logos displayed in Sean's presentation. I know the Mercedes logo was a first timer in a BrainChip presentation, but when was the last time we saw Valeo’s logo displayed like that? AFAICR not ever... maybe when they were announced as an EAP back in 2020, but I can’t remember their logo being grouped with others in a presentation in which the BrainChip CEO referred to them as early adopters. "Early adopters" is a bigger-deal description, I think, than BrainChip having them as an EAP. They have adopted our technology into something they are producing, rather than just trialling its potential for some use case. It’s definitely stronger language to see the Valeo logo under early adopters. Exciting times continue.
 
  • Like
  • Fire
  • Love
Reactions: 38 users

stockduck

Regular
It may already be known to most fellow shareholders, but I would like to share my train of thought with you without being annoying.

The article I came across is from 2017.


If you read the cache text you get the following link:

https://webcache.googleusercontent....mobil-elektronik.html+&cd=3&hl=de&ct=clnk&gl=en

and this translated into English here:


The statement by the CEO of Nvidia in today's context is particularly interesting:


...

Artificial intelligence

For many visitors, the special atmosphere and the familiar character of this industry get-together in Ludwigsburg are an important reason for their participation. However, one guest was a bit irritated by the free exchange of ideas: "We wouldn't be able to have a conference like this, where competitors sit together and talk about new trends," emphasized Nvidia's CEO Jensen Huang right at the beginning of his keynote speech. "We prefer to keep the good ideas to ourselves." But Huang was happy to talk about the current projects at Nvidia. The starting point for his remarks was the development of performance in different computing architectures. While the microprocessor with its sequential data processing is showing signs of saturation in the increase in performance, this does not apply to parallel data processing with graphics processors (GPUs). Here, more transistors continued to mean more power.

[Image caption: Jensen Huang, Nvidia: "Parallel data processing with graphics processors ensures further increasing computing power." Photo: Matthias Baumgartner]
With regard to specific applications in the automotive sector, Huang put forward a thesis that was somewhat surprising in view of the concentrated expertise in the audience: "At the moment there are very few people in the world who really understand the enormous computing power required for autonomous driving." He added that, in addition to performance, particularly high energy efficiency is just as important. While conventional high-performance computers draw a few thousand watts, GPU-based systems such as Nvidia's Drive PX solution make do with considerably less.

Nvidia primarily relies on deep learning technology, in which artificial neural networks are trained with corresponding data sets in order to recognize traffic signs, for example. Such an approach requires massively parallel data processing, for which GPU-based hardware is particularly suited. The big difference between deep learning and conventional programming methods is that the source code is not written manually but is contained in the data with which a deep learning system is trained, according to Huang: "The data is the source code." With deep learning, the computer writes the software itself, so to speak, with the learned experiences being stored in the neural network.

[Image caption: Jensen Huang, Nvidia: "We wouldn't be able to hold a conference like this." Photo: Matthias Baumgartner]
Both the methods of deep learning and the hardware platform developed by Nvidia can be used not only for the development of highly automated driving functions up to autonomous driving, but also for completely different applications. As an example, Huang cited cooperative robotics, in which industrial robots are trained to interact with workers in the production process without endangering them. Nvidia has developed a virtual parallel world in which virtual robots are trained. Analogous to this "Holodeck", training should also take place in the automotive world.

...
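Huang's "the data is the source code" point can be made concrete with a toy example (my own illustration, nothing to do with Nvidia's stack): a tiny threshold unit learns the OR function from example data via the classic perceptron rule, rather than being hand-programmed.

```python
# "The data is the source code": the behaviour lives in learned weights,
# not in hand-written branching logic.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w, b, lr = [0.0, 0.0], 0.0, 0.5

for _ in range(10):                        # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                 # perceptron update rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # learned OR: [0, 1, 1, 1]
```

Nobody wrote an `if` statement encoding OR; the rule was extracted from the examples, which is the whole shift Huang was describing, scaled down by many orders of magnitude.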


In connection with the announcement by Mercedes-Benz that they are using BrainChip technology in their latest concept study,

and another announcement by Mercedes-Benz that Nvidia is granted a high share of the sales of future vehicle technologies,


I suspect that Nvidia has been linked to BrainChip for some time (since 2017?). In my opinion, after such reports it's about time for Nvidia to put on their pants and open the curtain; too much fear of the competition also conveys a lack of self-confidence in their own products... what do you think?

Hope the links work, sorry if not. :unsure:

Thank you all for your great contribution and support.
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 47 users

uiux

Regular
It was actually this reply from a moron on Twitter that got me thinking that there are probably a lot more people that haven’t quite grasped what we’re capable of….. So if it worked on him then maybe our competitors are in the same boat.

View attachment 1377



Sebastian Schmitt isn't a moron:


 
  • Like
Reactions: 11 users


uiux

Regular
Wow.. definitely has been around the block. Wonder why... he said what he said. Hmmmm

probably being sarcastic
 
  • Like
Reactions: 11 users

Townyj

Ermahgerd
probably being sarcastic

I agree 100% with it being sarcastic.

He may know some insider info... kinda doubt we were used for Voice Recognition only.
 
  • Like
Reactions: 11 users

Diogenese

Top 20
Sebastian Schmitt isn't a moron:


Thanks ui,

Sebastian is involved in research into analog neural networks at a research stage rather than an applied stage:

The neuromorphic system discussed in this paper, BSS-2, has been designed as an emulation platform for neuroscientific research and differs from previous implementations in several aspects.

https://www.sciencedirect.com/science/article/pii/S0306452221004218

His work seems to relate to the wafer-scale neuromorphic chip (ie, one chip per wafer) project intended to replicate the capabilities of a full human brain:

Neuromorphic hardware in the loop: Training a deep spiking network on the brainscales wafer-scale system
S Schmitt, J Klähn, G Bellec, A Grübl, M Guettler, A Hartel, S Hartmann, ...
2017 International Joint Conference on Neural Networks (IJCNN), 2227-2234


[I think they use pizza boxes as packaging for the wafer-scale chips.]

A lot of university research is devoted to analog SNNs, possibly because ASNNs are a much closer simulacrum of actual neurons and work with real spikes, whereas Akida uses binary digital bits to represent spikes.

Sebastian's latest paper looks at the decay of the spike signal as it passes to other neurons. Effectively, Akida does not involve such decay.

Earlier papers look at ways of correcting the manufacturing variability which plagues analog neurons.

It is difficult to reliably encode much information in an analog spike due to the manufacturing variability. On the other hand, the 4-bit weights and activations of the production version of Akida 1000 enable 16 levels of information to be included (the "strength" of the "spike") as discussed above.
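To illustrate the 4-bit point (an illustrative sketch only, not BrainChip's actual implementation): a binary spike is all-or-nothing, while a 4-bit event can carry one of 16 graded magnitudes, and being digital it arrives at every downstream neuron exactly as sent, with no analog decay or device-to-device variability.

```python
def quantize_4bit(x):
    """Map a normalized activation in [0.0, 1.0] to one of 16 digital levels."""
    return min(15, max(0, round(x * 15)))

# A binary spike carries 1 bit; a 4-bit event carries one of 16 strengths.
print(quantize_4bit(0.0), quantize_4bit(0.5), quantize_4bit(1.0))  # 0 8 15

# The digital value survives transmission unchanged: no analog decay.
sent = quantize_4bit(0.7)
received = sent  # digital routing reproduces the value exactly
assert received == sent
```

Contrast that with an analog spike, whose amplitude depends on fabrication variation and degrades in transit, which is exactly why packing 16 reliable levels into it is hard.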

As we have discussed previously, many university research groups are apparently ignorant of Akida as there are many research papers which do not mention it, even when they mention True North and Loihi.

So it seems Sebastian is barking up a different tree from the one Akida has climbed.

My thought on his comment is that he considers that there is so much power used with CPU/GPU applications in ADAS vehicles that the power used for voice control would be negligible, whereas MB have applied the many-a-mickle theory to the EQXX. Little does he know ...
 
  • Like
  • Fire
  • Love
Reactions: 39 users

uiux

Regular

"My thought on his comment is that he considers that there is so much power used with CPU/GPU applications in ADAS vehicles that the power used for voice control would be negligible..."


Yep I agree, this is what I thought too
 
  • Like
Reactions: 17 users
Personally (with my limited understanding of the patent world) I wouldn't be surprised if he was planted by Samsung to assist BRN (and possibly to protect Samsung's interests), so they would be confident that BRN's IP is protected if Samsung are implementing it into future products.........Pure speculation of course
Lol. He stopped working for Samsung about 18 months before working for us and had other jobs in between. Love it though.

SC
 
  • Like
Reactions: 9 users
Don't be too harsh on people who don't understand our technology yet. They do not have the benefit of our '1,000 eyes' and daily reading everything we have posted over the past few years.

The world has a lot of catching up to do.
Maybe in their papers under references they should have this.
REFERENCES:
1. 1000 eyes thestockexchange.com.au

LOL

SC
 
  • Like
Reactions: 17 users
Hi 1000 eyes
Slightly unusual request. Over on the Brainchip Nanose thread Bacon Lover has found a 2022 two-page magazine article relating to Nanose and the Sniffphone. Only problem is it appears to be written in Hebrew. The Google translate function is not performing for those of us over there, so if you are a wizard with translate, or better still read Hebrew, your assistance would be greatly appreciated.
FF.
 
  • Like
  • Love
  • Thinking
Reactions: 12 users
Lol. He stopped working for Samsung about 18 months before working for us and had other jobs in between. Love it though.

SC
Would have made such a great chapter for the Brainchip story. Bugger. LOL FF.
 
  • Like
  • Haha
Reactions: 9 users

M_C

Founding Member
@SERA2g This may be of interest.......I too have my suspicions about NXP and this is another Dot.........I think this "like" by Rob Telson means something not nothing.......


Screenshot_20220220-081139_LinkedIn.jpg
 
  • Like
  • Fire
  • Love
Reactions: 43 users
Hi Pepsin,

We have looked at GraiMatter a couple of times in the last couple of years.

In another place, I mentioned that I thought GraiMatter and Hailo were worth keeping an eye on:

31/08/21 20:58 Post #: 55728384

If I were worried about competitors, I'd keep an eye on Hailo and GrAi Matter both of which have silicon NNs, GrAi Matter claiming a SNN.

GrAiMatter: WO2020025680A1 DATA PROCESSING MODULE, DATA PROCESSING SYSTEM AND DATA PROCESSING METHOD

https://worldwide.espacenet.com/patent/search/family/063254640/publication/WO2020025680A1?q=wo2020025680


The GraiMatter NPU:-
https://hotcrapper.com.au/data/attachments/3530/3530672-efc1458ce703c58131eeba14841dd69b.jpg
While GrAiMatter claim to be avoiding the von Neumann bottleneck, I don't think the apple has fallen that far from the tree.

#########################################################################

GraiMatter uses time multiplexing:

WO2020025680A1 DATA PROCESSING MODULE, DATA PROCESSING SYSTEM AND DATA PROCESSING METHOD

The paper you cited points out a problem with time multiplexing:


page 88

On-chip learning requires precious memory and routing resources [21], which hinders scalability. On digital technologies, this problem can be sidestepped by time-multiplexing a dedicated local plasticity processor [6, 12]. The time multiplexing approach however suffers from the same caveats as a Von Neumann computer due to the separation between the SNN and the associated plasticity processor. Other promising alternatives for true local plasticity are emerging devices (Sec. 1) and related architectures (Sec. 2.1), which allow storage and computation for plasticity to occur at the same place.
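The time-multiplexing caveat quoted above can be made concrete with a toy model (my own illustration, not GrAI Matter's design): when one shared plasticity processor must visit every synapse in turn, the time per learning step grows with synapse count, whereas truly local plasticity updates all synapses in parallel in constant time.

```python
# Toy cost model of the caveat: a single shared plasticity processor
# serializes weight updates, so learning time scales with synapse count.
def multiplexed_update_steps(num_synapses, updates_per_cycle=1):
    """Cycles needed when one plasticity processor serves all synapses."""
    return -(-num_synapses // updates_per_cycle)  # ceiling division

def local_update_steps(num_synapses):
    """With per-synapse (local) plasticity, all updates happen in parallel."""
    return 1

for n in (16, 1024):
    print(n, multiplexed_update_steps(n), local_update_steps(n))
```

Scaling the network 64x scales the multiplexed learning step 64x but leaves the local one flat, which is the "same caveats as a Von Neumann computer" point: the bottleneck is the shared processor shuttling between the SNN state and itself.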

Hey @Diogenese

Very happy to be proved wrong on this one, but GrAI Matter Labs (GML) look to also have a chip/similar tech available for trial?

This May 2020 article mentions that both Brainchip and GrAI Matter Labs were working to commercialise chips. Have they also found success?

I must be wrong

@uiux, in recent correspondence on HC you have used an 8-point comparison to Brainchip

I unfortunately do not have your technical abilities but was keen to understand how the GML technology compares

Cheers
TLS
 
Last edited:
  • Like
Reactions: 4 users
Don’t panic if you are a new investor: Quantum computing is not a threat at the edge, and the former CEO, in a moment of candour, spoke about the potential for AKIDA technology to work with the new Quantum technology.
Add to this that Quantum annealing and spiking neural networks made a brief appearance at NASA, and my requests for clarity from Brainchip have gone unanswered in a very polite fashion. As I have never looked good in colours, particularly orange, I have not pressed beyond gentle follow-ups of a by-the-way nature.

In any event Google’s Quantum reveal has been negated to some extent by IBM’s observations about what is still required:


My opinion only DYOR
FF

AKIDA BALLISTA

PS: BarrelSitter gave a short but interesting master class on Quantum annealing.
 
  • Like
Reactions: 18 users