BRN Discussion Ongoing

My personal theory, and that is all it is, is that the first model released with BrainChip involved will be a high-range electric vehicle showcasing a Mercedes-Benz technology leap ahead of the competition.

The target range will be 800 kilometres plus.

It will be the car that thinks like you.

It will be the car that learns and adapts to you.

It will be the car taking full advantage of the JAST learning rules, owned by…?…, that do one-shot, few-shot and incremental learning securely on chip, without a connection.

There is only one chip and one company in the WORLD that offers this technology.

My opinion only DYOR
FF

AKIDA BALLISTA
Not sure why so many sad face emojis?

Crack the champagne team - We have become a market product!!!

Hey Mercedes: very powerful voice assistant
The "Hey Mercedes" voice assistant is highly capable of dialogue and learning by activating online services in the Mercedes me App. Moreover, certain actions can be performed even without the activation keyword "Hey Mercedes". These include taking a telephone call. "Hey Mercedes" also explains vehicle functions, and, for example, can help when asked how to connect a smartphone via Bluetooth or where the first-aid kit can be found.
If compatible home technology and household devices are present, they can also be networked with the vehicle thanks to the smart home function and controlled from the vehicle by voice. "Hey Mercedes" can also detect occupants audibly. Once the individual voice characteristics have been learned, this can be used to access personal data and functions by activating a profile.


This is from the media PDF that has been sent out.
 
  • Like
  • Fire
  • Love
Reactions: 39 users

Dhm

Regular
Sony, together with Prophesee, has produced an Event-Based Vision Sensor, and there is a strong likelihood that Akida is embedded within the product. This is not necessarily new to the 1000 Eyes, but it is good to review and remind ourselves how good it is to be shareholders.

It is, however, over a year old.

Sony says in their press release:
https://www.sony-semicon.com/en/new...+-+IMX636-637+launch&utm_id=IMX636-637+launch

Screen Shot 2022-10-17 at 10.58.34 am.png


Lots of Brainchip style jargon mentioned in this video....

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 30 users

Diogenese

Top 20
A little more background.

This looks like it could be him, based on the profile pic and the Rigpa article pic.



It is apparently an SNN.

He was working with Steve Furber at Manchester at one point. I'm sure I've seen his name around before.

Also had funding from US DTRA.




This new brain-inspired chip is 23 times faster and needs 28 times less energy

It was 2009 when Mike Huang first knew he wanted to become a chip design engineer, as he developed a 16-bit microprocessor from scratch with his classmates in his university lab.

Soon, Huang grew interested in understanding how the mind works — and “as an engineer, to understand and prove how the mind works is to effectively reverse engineer the brain,” he says. He reached out to Professor Steve Furber at the University of Manchester, who was designing a supercomputer with 1 million ARM cores that draws inspiration from the communication architecture of the brain, and started working closely with him.

A decade later, Huang has just developed a brain-inspired microchip that could process large amounts of data faster and with lower power, improving performance and energy efficiency for AI applications.

“The technology itself closely mimics how biological neural networks work compared to conventional AI solutions,” Huang says.

Huang first created the chip as part of his PhD in Neuromorphic Computing at the University of Edinburgh School of Engineering, funded by UK-based radiation detection company Kromek and the US Defense Threat Reduction Agency (DTRA). He then launched a startup, Rigpa, and joined Cohort IV at Conception X to learn how to commercialise his technology.

The problem Huang originally set out to solve was to reduce power consumption and inference times compared to traditional chip architectures. He achieved this by designing a spiking neural network chip to accelerate the next generation of AI — efficient, sustainable and human brain-like.

“GPUs are an old technology — they were originally designed for video games,” Huang says. “The median GPU consumes a huge amount of power. Just think that the latest neural network model GPT-3 generated 552 metric tons of carbon dioxide during training — that’s the CO2 emissions the average American produces over more than 34 years. It’s not a sustainable solution.”

Rigpa’s technology achieves 28 times less power consumption and 23 times faster inference speed than conventional architectures, with key applications in situations that require reliable, real-time computation — think computer vision, drones, smart home appliances, self-driving cars, wearables, high-frequency trading and more.


“National security is a good example of how this technology could be used,” Huang says. “Imagine a police officer working in counterterrorism who’s equipped with a handheld radiation detector connected to their mobile phone, which processes the data from the detector. Rigpa’s new chip can be integrated directly into the detector so that everything happens in there, and the phone is no longer needed.”

At Conception X, Huang learned how to turn blue sky ideas into something tangible. “Conception X has helped to sharpen my mindset. I’m still doing research, but it’s completely different from one year ago,” he says. “Before, I wasn’t sure when or how this technology would be useful. Now, I know in which direction to go and I’m constantly thinking about how a new piece of research I’m working on will feed into my technology.”

Rigpa plans to launch its product on the market in 2024, when demand for neuromorphic technologies is set to take off, and is currently looking to raise.


Great sleuthing Fmf,

This is a paper Huang published disclosing his SNN.


https://arxiv.org/ftp/arxiv/papers/2010/2010.13125.pdf

1665965807563.png

It appears that one reason why it is slower than Akida is that it is clocked rather than asynchronous:

Each neuron in the same layer triggered by the same presynaptic neuron, is processed one-by-one in a Time Division Multiplexing (TDM) manner. The sequence of the operations and data flow are managed by the state machine in the Control Logic block. As shown in Fig. 5, to handle the worst case of back-to-back spikes from the hidden layer, 9 cycles of delay are added after each spike fire event of any neurons in hidden layer to allow sufficient time for the 9 cycles needed by TDM process of 8 neurons in the output layer.
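The cost of that clocked TDM scheme is easy to see in a toy model. The sketch below is hypothetical (the one-cycle-per-neuron cost, the function name and the shapes are my assumptions, not taken from the paper): every output-layer neuron is updated in turn for each presynaptic spike, plus a fixed delay slot, so 8 output neurons cost 9 cycles per spike no matter how many neurons actually fire.

```python
import numpy as np

def tdm_output_layer(weights, spikes, thresholds):
    """Toy model of clocked Time Division Multiplexing: for each input
    spike, every output neuron is updated one-by-one (one cycle each),
    plus one extra delay cycle -> 9 cycles per spike for 8 neurons."""
    n_out = weights.shape[0]
    potentials = np.zeros(n_out)
    total_cycles = 0
    for pre in np.flatnonzero(spikes):      # each presynaptic spike event
        for post in range(n_out):           # sequential TDM update
            potentials[post] += weights[post, pre]
            total_cycles += 1
        total_cycles += 1                   # fixed delay slot after the fire event
    return potentials >= thresholds, total_cycles
```

With 8 output neurons and two input spikes this costs 18 cycles even if no output neuron fires; an asynchronous, event-driven design avoids those fixed delay slots, which is the difference being highlighted here.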
 
  • Like
  • Love
  • Fire
Reactions: 36 users

Quercuskid

Regular
  • Like
  • Love
Reactions: 4 users
Oh my goodness my entire share portfolio is currently down $625.00. That’s right 62,500 cents where will it all end.🤡🤣😂🤡 Boring flat day at this stage. The WANCA’s must be tearing their hair out. 😁
 
Last edited:
  • Haha
  • Like
  • Love
Reactions: 18 users

buena suerte :-)

BOB Bank of Brainchip
Nice to see the BUY side looking a bit healthier! :cool:

654 buyers for 6,459,174 units

390 sellers for 4,490,809 units
 
  • Like
  • Fire
Reactions: 13 users

Quercuskid

Regular
Oh my goodness my entire share portfolio is currently down $625.00. That’s right 62,500 cents where will it all end.🤡🤣😂🤡 Boring flat day at this stage. The WANCA’s must be tearing their hair out. 😁
Mine's up lol thanks to a few Patrys and their amazing announcement today!
 
  • Like
  • Love
  • Fire
Reactions: 9 users
Great sleuthing Fmf,

This is a paper Huang published disclosing his SNN.


https://arxiv.org/ftp/arxiv/papers/2010/2010.13125.pdf

View attachment 19110
It appears that one reason why it is slower than Akida is that it is clocked rather than asynchronous:

Each neuron in the same layer triggered by the same presynaptic neuron, is processed one-by-one in a Time Division Multiplexing (TDM) manner. The sequence of the operations and data flow are managed by the state machine in the Control Logic block. As shown in Fig. 5, to handle the worst case of back-to-back spikes from the hidden layer, 9 cycles of delay are added after each spike fire event of any neurons in hidden layer to allow sufficient time for the 9 cycles needed by TDM process of 8 neurons in the output layer.
Thanks D

I try never to discount potential competition: not necessarily better than us, but a possible direct player in our space if the product is close.

We know a lot of the other pretenders out there are actually different (general AI, etc.) or run a watered-down attempt at an SNN, but this one just seemed a little closer for some reason.

Good to see there is still a key difference that keeps his SNN a bit slower, and I trust our increasing patent moat should ensure he can't encroach too much on our capabilities.

Cheers
 
  • Like
  • Fire
Reactions: 22 users
Mine's up lol thanks to a few Patrys and their amazing announcement today!
Nothing like a few PAB to make the Doctor popular. 😂🤣😂
 
  • Like
  • Haha
Reactions: 5 users

Diogenese

Top 20
Great sleuthing Fmf,

This is a paper Huang published disclosing his SNN.


https://arxiv.org/ftp/arxiv/papers/2010/2010.13125.pdf

View attachment 19110
It appears that one reason why it is slower than Akida is that it is clocked rather than asynchronous:

Each neuron in the same layer triggered by the same presynaptic neuron, is processed one-by-one in a Time Division Multiplexing (TDM) manner. The sequence of the operations and data flow are managed by the state machine in the Control Logic block. As shown in Fig. 5, to handle the worst case of back-to-back spikes from the hidden layer, 9 cycles of delay are added after each spike fire event of any neurons in hidden layer to allow sufficient time for the 9 cycles needed by TDM process of 8 neurons in the output layer.
More lead in the Rigpa saddlebags:

B. ANN-to-SNN Conversion

The process of conversion from the trained ANN to a SNN model running on SpiNNaker was made up of two steps: quantisation and network conversion. In the quantisation step the ANN parameters were fine-tuned using quantisation-aware training available in the TensorFlow Model Optimization Toolkit [9] and the weights were quantised to 8-bit signed integers. Figure 9 (right) shows that the losses due to weight quantisation were negligible. This suggests that the model has a high number of redundant connections and that reduction of the model complexity through further reduction of the precision of the weights or through pruning of connections could give further energy savings in the hardware implementation. Lower precisions were not investigated here due to the lack of software support for precisions lower than 8-bit integer.
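For anyone unfamiliar with the quantisation step described in that excerpt, here is a minimal sketch of mapping trained float weights to signed 8-bit integers. It is illustrative only: the paper used quantisation-aware training via the TensorFlow Model Optimization Toolkit, whereas this shows plain post-hoc symmetric quantisation, and the function name is my own.

```python
import numpy as np

def quantise_weights_int8(w):
    """Symmetric quantisation of trained weights to signed 8-bit integers.
    The largest weight magnitude is mapped to +/-127; every other weight
    is rounded to the nearest multiple of the resulting step size."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

# Round-trip error is at most half a quantisation step:
w = np.random.default_rng(0).normal(size=(8, 16))
q, scale = quantise_weights_int8(w)
err = np.max(np.abs(w - q.astype(np.float64) * scale))
```

The reconstruction error is bounded by half a step, which is consistent with the paper's observation that the losses from 8-bit weights were negligible.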

To misquote Mick Dundee:

"Those aren't spikes ...

THIS is a spike!"
 
  • Like
  • Haha
  • Fire
Reactions: 24 users

JK200SX

Regular
This has to be confirmation that AKIDA has been incorporated in the NEW EQE!

1665968558289.png



Click on the link above, then click on "5 documents" and select the document in English, or whatever other language you can read! (screenshot above is from page 20)
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 46 users

Mn2019

Regular
Thanks JK200SX. Now I will have to go and buy a MB electric car....
 
  • Like
  • Haha
Reactions: 9 users

JK200SX

Regular
This has to be confirmation that AKIDA has been incorporated in the NEW EQE!

View attachment 19116


Click on the link above, then click on "5 documents" and select the document in English, or whatever other language you can read! (screenshot above is from page 20)


1665969086713.png
 
  • Like
  • Fire
Reactions: 10 users

alwaysgreen

Top 20
  • Like
  • Love
Reactions: 4 users
So are we saying that to have Akida in this latest MB release they would have gone via ARM for the IP?

If they were using direct from BRN as per the EQXX reveal then would expect a signed agreement of some level with BRN which would necessitate an Ann.

I'm just not sure as yet personally, as we know the time cycle to design, develop and produce a vehicle, and these would have been in the pipeline for quite some time.

Unless they developed the production Akida integration for this release parallel to the EQXX concept :unsure:
 
  • Like
Reactions: 10 users

Mn2019

Regular
So are we saying that to have Akida in this latest MB release they would have gone via ARM for the IP?

If they were using direct from BRN as per the EQXX reveal then would expect a signed agreement of some level with BRN which would necessitate an Ann.

I'm just not sure as yet personally, as we know the time cycle to design, develop and produce a vehicle, and these would have been in the pipeline for quite some time.

Unless they developed the production Akida integration for this release parallel to the EQXX concept :unsure:
Marcus Schaefer did say to "stay tuned"..........
 
  • Like
  • Haha
Reactions: 15 users

equanimous

Norse clairvoyant shapeshifter goddess
TSE customer requirements: does this have Akida in it?

Sales team: Not at this present time

TSE customer: No sale. Hangs up the phone
 
  • Haha
  • Like
Reactions: 12 users
Marcus Schaefer did say to "stay tuned"..........
Ecstatic like everyone if it is Akida; however, if so, it would need to be via ARM or us, I would think?

OK, if via ARM, they can probably sit behind that, and we would need official confirmation from MB themselves, or an uplift in revenue in due course. If direct with BRN, then we should expect an announcement, you would think, now that the MB vehicles are released.
 
  • Like
Reactions: 8 users