BRN Discussion Ongoing

Stockbob

Regular
Good afternoon, can anybody who attended the AGM comment on the LLM sentence-generation and de-noising demos given by Tony Lewis? Were they live demos or pre-recorded demos? Were they impressive?
It was a very simple and short demo; it got the point across, nothing too fancy. The LLM demo showed Akida generating more words per minute at lower power compared to GPT-2, and the audio denoising demo was just that.
 
  • Like
  • Fire
Reactions: 9 users

7für7

Top 20
Me after watching the whole AGM, more confident than before, and reading stupid statements from the well-known downrampers in the well-known forums.
 
  • Like
  • Haha
  • Fire
Reactions: 21 users

Diogenese

Top 20
I see a lot of blanket statements based on the above, but what I heard Antonio say is a "5-year architecture license"; I could be wrong. My interpretation of what he said is that not all licenses are equal; there are multiple tiers of IP license, each with its own use cases and accompanying royalty schemes.
Well, if the speculation about Scala 3 is correct, there could be software licences.
 
  • Like
  • Fire
  • Love
Reactions: 32 users

cosors

👀
"Building Temporal Kernels with Orthogonal Polynomials
20 May 2024


Yan Ru Pei
Brainchip Inc.
Laguna Hills, CA 92653
ypei@brainchip.com

Olivier Coenen
Brainchip Inc.
Laguna Hills, CA 92653
ocoenen@brainchip.com

Abstract
We introduce a class of models named PLEIADES (PoLynomial Expansion In Adaptive Distributed Event-based Systems), which contains temporal convolution kernels generated from orthogonal polynomial basis functions. We focus on interfacing these networks with event-based data to perform online spatiotemporal classification and detection with low latency. By virtue of using structured temporal kernels and event-based data, we have the freedom to vary the sample rate of the data along with the discretization step-size of the network without additional finetuning. We experimented with three event-based benchmarks and obtained state-of-the-art results on all three by large margins with significantly smaller memory and compute costs. We achieved: 1) 99.59% accuracy with 192K parameters on the DVS128 hand gesture recognition dataset and 100% with a small additional output filter; 2) 99.58% test accuracy with 277K parameters on the AIS 2024 eye tracking challenge; and 3) 0.556 mAP with 576K parameters on the PROPHESEE 1 Megapixel Automotive Detection Dataset.

1 Introduction
...

In Section 5, we run three event-based benchmarks:
1) the IBM DVS128 hand gesture recognition dataset,
2) the CVPR 2024 AIS event-based eye tracking challenge,
3) and the PROPHESEE 1 megapixel automotive detection dataset (Prophesee GEN4 Dataset).
We achieved SOTA results on all three benchmarks.

The code for building the structured temporal kernels, along with a pre-trained PLEIADES network for evaluation on the DVS128 dataset, is available here: https://github.com/PeaBrane/Pleiades

...
5 Experiments
...
Table 1: The raw 10-class test accuracy of several networks on the DVS128 dataset. With the exception of models marked with an asterisk, no output filtering is performed on the networks. PLEIADES is evaluated on output predictions where all temporal layers process nonzero valid frames, which incurs a natural warm-up latency of 0.44 seconds (see Section 5.1). Additionally, a majority filter of window 0.15 seconds is applied to the raw PLEIADES predictions.


...


7 Conclusion
We introduced a spatiotemporal network with temporal kernels built from orthogonal polynomials. The network achieved state-of-the-art results on all the event-based benchmarks we tested, and its performance is shown to be stable under temporal resampling without additional fine-tuning. Currently, the network is configured as a standard neural network, which by itself is already ultra-light in memory and computational costs. To truly leverage the full advantage of event-based processing, we can consider using intermediate loss functions to promote activation sparsity [24]. Another direction is to adapt/convert this architecture into a spiking system via Lebesgue sampling [2] of the structured temporal kernels, to make efficient computations/predictions of future spike timings at each temporal layer, for even further edge-compatibility.

8 Acknowledgement
We would like to acknowledge Nolan Ardolino, Kristofor Carlson, M. Anthony Lewis, and Anup Varase (listed in alphabetical order) for discussing ideas and offering insights for this project. We would also like to thank Daniel Endraws for performing quantization studies on the PLEIADES network, and Sasskia Brüers for help with producing the figures.
..."
full PDF:
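For anyone curious about what "temporal kernels from orthogonal polynomials" means in practice, here is a minimal sketch of the core idea. This is my own illustration, assuming a Legendre basis, not the paper's actual implementation (that's in the repo linked above): a kernel is a weighted sum of polynomial basis functions, so it's a continuous function that can be sampled at any resolution without retraining the weights.

```python
import numpy as np
from numpy.polynomial import legendre

def temporal_kernel(coeffs, num_taps):
    """Evaluate a kernel defined by Legendre-basis coefficients on a
    discrete time grid. The kernel is a continuous function of time,
    so changing num_taps resamples it without touching coeffs."""
    t = np.linspace(-1.0, 1.0, num_taps)   # normalized time axis
    return legendre.legval(t, coeffs)      # sum_i coeffs[i] * P_i(t)

# Four coefficients (the trainable parameters) define one temporal kernel.
coeffs = np.array([0.5, -0.3, 0.8, 0.1])
k_coarse = temporal_kernel(coeffs, 16)  # 16-tap discretization
k_fine = temporal_kernel(coeffs, 64)    # same kernel, finer step size

# Both grids sample the same underlying function, so the endpoints agree.
assert np.isclose(k_coarse[0], k_fine[0])
assert np.isclose(k_coarse[-1], k_fine[-1])
```

In the paper the coefficients are what gets trained, and the resulting kernels are convolved over event-based frames; this sketch only shows why the sample rate and step size can change without additional fine-tuning, which is the property the abstract highlights.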
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 45 users

Wags

Regular
Good afternoon, can anybody who attended the AGM comment on the LLM sentence-generation and de-noising demos given by Tony Lewis? Were they live demos or pre-recorded demos? Were they impressive?
Pre-recorded. I'd like to hear them again, as my old ears didn't initially notice a huge difference. Not sure what the others thought; it wasn't really discussed.
 
  • Like
Reactions: 7 users

Xray1

Regular
Also evident. One deal in isolation can be worth a fortune. Many deals? Boom.
One only needs to look at the Mercedes-Benz mention of using Akida, with the share price hitting some ~$2.30.
 
  • Like
Reactions: 10 users

MDhere

Regular
Good afternoon, can anybody who attended the AGM comment on the LLM sentence-generation and de-noising demos given by Tony Lewis? Were they live demos or pre-recorded demos? Were they impressive?
The denoise was super impressive!!
 
  • Like
  • Fire
Reactions: 11 users

MDhere

Regular
Pre recorded, I'd like to hear them again as my old ears didn't initially notice a huge difference. Not sure what the others thought, wasn't really discussed.
I noticed a big difference, as I wear hearing aids, Wags. The first speech with background noise was bearable, but with the denoise I heard EVERY WORD in the sentence clear as day. 👍
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 21 users

Xray1

Regular
Impressive, yes!
I attended the AGM... The de-noising demo was pre-recorded... but it was an absolutely fantastic result... I see it being implemented in many audio technologies... imo it could be the new standard in listening to clear audio signals.
 
  • Like
  • Fire
Reactions: 17 users

Wags

Regular
I noticed a big difference, as I wear hearing aids, Wags. The first speech with background noise was bearable, but with the denoise I heard EVERY WORD in the sentence clear as day. 👍
I admit that I heard the second set of words better, though I still heard the background noise (obviously less of it). But I felt I heard them better partly because it was the second time hearing the same words within the space of 30 seconds; that's sort of what I meant. It didn't really jump out at me because of this.

Maybe I need to clear my ears out better, lol
 
  • Like
  • Haha
  • Love
Reactions: 6 users

FJ-215

Regular
Jesus christ who are these people standing up on the floor of the AGM in Sydney asking questions.
The AGM isn't about you. Ask your question succinctly then sit down, stfu and listen.
Another AGM question time wasted on shareholders speaking for half of the time.
So fucking frustrating.

Dimitri wasting 5 minutes on questions that have already been answered and now seeking financial advice LOL. Fuck me dead.
By shareholders... you mean the people who own the company... and who are now asking questions of the people they put in charge of managing it. I generally like your stuff, SERA2g, but this is the only time "the owners of the company" get to confront their managers.

Yep, not pleasant but it is what it is. (and yes, some of it cringe worthy).

I've walked many miles in these people's shoes. I get it!!
 
  • Like
  • Love
Reactions: 13 users

Wags

Regular
Jesus christ who are these people standing up on the floor of the AGM in Sydney asking questions.
The AGM isn't about you. Ask your question succinctly then sit down, stfu and listen.
Another AGM question time wasted on shareholders speaking for half of the time.
So fucking frustrating.

Dimitri wasting 5 minutes on questions that have already been answered and now seeking financial advice LOL. Fuck me dead.
What other questions do you refer to, SERA2g? I was there; I asked a couple of questions that I felt were relevant to me and possibly other shareholders. Apologies if they weren't up to scratch.
Maybe you should provide a script in future if you're not available to attend personally.

I left the AGM and the get together after, as enthused as ever, with the potential of this company and my investment decisions.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 33 users

wilzy123

Founding Member
Funny how some don’t know how to read the results in the HC forum 😂 I bet they voted NO on the spill
Who honestly cares what HC thinks except for you champ?
 
  • Like
Reactions: 7 users

wilzy123

Founding Member
Good afternoon, can anybody who attended the AGM comment on the LLM sentence-generation and de-noising demos given by Tony Lewis? Were they live demos or pre-recorded demos? Were they impressive?

Impressive will be subject to your own understanding of the tech, knowledge in the space, and personal opinion... as will be true of anyone you're soliciting feedback from on this matter. Without this context, the opinions you receive will most likely be as useful as @Cardpro's contributions here.

However, if you wish to judge for yourself: https://brainchip.com/2024-agm/
 
  • Sad
  • Fire
Reactions: 2 users

7für7

Top 20
I attended the AGM .... The de noising demo was pre recorded ..... but it was an absolutely fantastic result ..... I see it being implemented in many audio technologies ...... imo it could be the new standard in listening to clear audio signals.
I also watched it live! And I mean it when I say it was impressive, because you couldn't hear any of the cancelled noise! Btw, I bet they used it on their main microphones, because you couldn't hear the clapping of the visitors. But when they changed Sean's microphone, suddenly you could even hear people talking in the background... but that's only my opinion.
 
  • Like
Reactions: 1 users

Wags

Regular
  • Like
Reactions: 1 users

Kachoo

Regular
It was good to see that the company is open to producing other hardware "demonstrators" as well as the Edge Box, so when can the company get its hands on an Akida 2 SoC with TENNs?
When Intel or Arm build it, lol. BRN does not want to compete with the customer!
 
  • Like
Reactions: 1 users

FJ-215

Regular
Ok,

Had not been happy with our progress leading into the AGM. How do you send the message that you're dissatisfied with the performance of the Board, to the Board, without smashing your own company to bits??? Vote, not vote, abstain, or just don't bother since it won't matter??? Had given it up as a bad joke until @Stable Genius pointed out how we could all vote AFTER listening to our managers present their case, and not before, as indicated by some agents!!

Bugger me.....

I managed to watch most of the AGM remotely this morning before listening to the back half in the car.
 
  • Like
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Our Chief Technology Officer, Dr Tony Lewis, was formerly Senior Director of Technology at Qualcomm and the creator of Qualcomm's Zeroth neural processing unit and its software API, which is so AMAZING since the cognitive computing abilities developed through the Zeroth program were subsequently incorporated into the SNAPDRAGON processor.

That being said, wouldn't it be fantastic to get Tony to do a podcast or two, to hear his detailed thoughts on Zeroth (Snapdragon) and AKIDA and all the complementary aspects of each technology that, when combined, are guaranteed to blow everyone's socks off?

While we wait for these podcasts to be produced, we can entertain ourselves by watching this video from Tony when he was with Qualcomm.




Pretty sure Tony was referring to Qualcomm when he spoke today about having previously worked for a major cell phone company, and about how he's never been more excited by the prospect of TENNs and how AKIDA will be central to the next revolution in AI. I mean, who else would it be, looking back at his previous employment history?

IMO. DYOR and DYO laundry and lawn mowing.
 
  • Like
  • Fire
  • Love
Reactions: 45 users

wilzy123

Founding Member
cheers rocket. good job

it's ok buddy... just make better life choices.
 