BRN Discussion Ongoing

IloveLamp

Top 20
Good Morning Chippers,

RENESAS locking in wafer material.


Slooooooowly but surely.

Regards,
Esq.



Screenshot_20230706_085639_LinkedIn.jpg
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Learning

Learning to the Top 🕵‍♂️
Hopefully some of these projects involve Akida. 100,000 in a year.



Learning 🏖
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Dozzaman1977

Regular
I'm expecting a correction on today's notification regarding unquoted securities. The RSU balance was just over 30 million after yesterday's notification of 7 million being exercised, yet today's announcement about 130,000 options expiring has the RSU total back at 37-odd million, as if yesterday's exercise hadn't been deducted.............
I guess it's hard to keep up with all the options/RSUs being handed out. Maybe they could hire an RSU/Option Manager, as this seems to be the area that's pretty busy at the moment.
I would phone up Tony, but I'll save my oxygen for the rest of the day.
Cheers
 
  • Like
  • Haha
  • Thinking
Reactions: 10 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Just cut open my piece of morning fruit & look what I found.

😃 .

Impatient shareholder, YES.

* Artist's impression; any conclusions drawn from the imagery ( 🍎 ) are purely those of the viewer's vivid imagination and are NOT expressed or implied, even in the slightest, by BrainChip management or those engaged by them.

Bring on 2nd Generation AKIDA & some revenue.

Regards,
Esq.
 

Attachments

  • 20230706_091900.jpg
    1.8 MB · Views: 195
  • Haha
  • Like
  • Love
Reactions: 49 users

Diogenese

Top 20


View attachment 39324
This is AMD's just published in-memory compute CNN:

US2023206395A1 HARDWARE SUPPORT FOR CONVOLUTION OPERATIONS


1688601804813.png



A technique for performing convolution operations is disclosed. The technique includes performing a first convolution operation based on a first convolutional layer input image to generate at least a portion of a first convolutional layer output image; while performing the first convolution operation, performing a second convolution operation based on a second convolutional layer input image to generate at least a portion of a second convolutional layer output image, wherein the second convolutional layer input image is based on the first convolutional layer output image; storing the portion of the first convolutional layer output image in a first memory dedicated to storing image data for convolution operations; and storing the portion of the second convolutional layer output image in a second memory dedicated to storing image data for convolution operations.


Not a spike to be found ...
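
(For anyone curious what the claim text boils down to, here is a small NumPy toy of the idea: the second convolution starts consuming the first layer's output rows while the first convolution is still producing them, each layer writing into its own dedicated buffer. This is my own illustrative sketch with made-up names, not AMD's implementation, and it is obviously serial Python rather than parallel hardware.)

```python
import numpy as np

def conv_row(img, kernel, row):
    """Compute one output row of a 'valid' 2D convolution (correlation,
    really, which is what CNN hardware actually does)."""
    kh, kw = kernel.shape
    out_w = img.shape[1] - kw + 1
    out = np.empty(out_w)
    for j in range(out_w):
        out[j] = np.sum(img[row:row + kh, j:j + kw] * kernel)
    return out

H = W = 16
image = np.random.rand(H, W)
k1 = np.random.rand(3, 3)              # first convolutional layer's kernel
k2 = np.random.rand(3, 3)              # second convolutional layer's kernel

l1_rows = H - 3 + 1                    # rows in layer 1's output image
l2_rows = l1_rows - 3 + 1              # rows in layer 2's output image

layer1_mem = np.zeros((l1_rows, W - 2))   # "first dedicated memory"
layer2_mem = np.zeros((l2_rows, W - 4))   # "second dedicated memory"

for i in range(l1_rows):
    # Layer 1 produces one more row of its output image ...
    layer1_mem[i] = conv_row(image, k1, i)
    # ... and as soon as enough rows exist, layer 2 starts consuming them,
    # i.e. the two convolutions overlap instead of running back-to-back.
    if i >= 2:
        layer2_mem[i - 2] = conv_row(layer1_mem, k2, i - 2)
```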
 
  • Like
  • Love
  • Sad
Reactions: 26 users

buena suerte :-)

BOB Bank of Brainchip
Good Morning Chippers,

Just cut open my piece of morning fruit & look what I found.

😃 .

Impatient shareholder, YES.

* Artist's impression; any conclusions drawn from the imagery ( 🍎 ) are purely those of the viewer's vivid imagination and are NOT expressed or implied, even in the slightest, by BrainChip management or those engaged by them.

Bring on 2nd Generation AKIDA & some revenue.

Regards,
Esq.
Love ya nicely handcrafted Akida 2nd Gen chip Esqy 👏👏... And how good would that be 🍎🙏🙏 (all imaginary of course) ;)
 
  • Haha
  • Like
Reactions: 9 users

Taproot

Regular


View attachment 39328
FYI
Our very own Duy-Loan Le is on the board of Wolfspeed.
 
  • Like
  • Fire
  • Thinking
Reactions: 49 users

Learning

Learning to the Top 🕵‍♂️
Good Morning Chippers,

Just cut open my piece of morning fruit & look what I found.

😃 .

Impatient shareholder, YES.

* Artist's impression; any conclusions drawn from the imagery ( 🍎 ) are purely those of the viewer's vivid imagination and are NOT expressed or implied, even in the slightest, by BrainChip management or those engaged by them.

Bring on 2nd Generation AKIDA & some revenue.

Regards,
Esq.
Hopefully in the near future,

you can use a few BRN shares to redo AKD 2.0 in gold bullion. 😊
Screenshot_20230706_103555_Chrome.jpg

Learning 🏖
 
  • Like
  • Love
  • Haha
Reactions: 17 users

Tony Coles

Regular
Good Morning Chippers,

Just cut open my piece of morning fruit & look what I found.

😃 .

Impatient shareholder, YES.

* Artist's impression; any conclusions drawn from the imagery ( 🍎 ) are purely those of the viewer's vivid imagination and are NOT expressed or implied, even in the slightest, by BrainChip management or those engaged by them.

Bring on 2nd Generation AKIDA & some revenue.

Regards,
Esq.
WOW! Where do you get your apples from, mate? Just got in shit for cutting up all the apples from the fridge and not eating them.
 
  • Haha
  • Like
  • Sad
Reactions: 26 users

TheFunkMachine

seeds have the potential to become trees.

This made me think of BrainChip's demo from way back in 2017/18-ish, of a car learning to drive a course through trial and error and adapting to its surroundings, compared to traditional machine learning where you have to retrain the model again and again.

It made me wonder how much this robot would improve with the help of Akida. Man, I would like to see a demo like this with Akida!

I can't find that original demo, so if anyone who knows where it is could post a link, that would be great! :)
 
  • Like
  • Fire
  • Love
Reactions: 14 users

buena suerte :-)

BOB Bank of Brainchip

This made me think of BrainChip's demo from way back in 2017/18-ish, of a car learning to drive a course through trial and error and adapting to its surroundings, compared to traditional machine learning where you have to retrain the model again and again.

It made me wonder how much this robot would improve with the help of Akida. Man, I would like to see a demo like this with Akida!

I can't find that original demo, so if anyone who knows where it is could post a link, that would be great! :)

 
  • Like
  • Love
  • Fire
Reactions: 23 users

Diogenese

Top 20
This is AMD's just published in-memory compute CNN:

US2023206395A1 HARDWARE SUPPORT FOR CONVOLUTION OPERATIONS


View attachment 39329


A technique for performing convolution operations is disclosed. The technique includes performing a first convolution operation based on a first convolutional layer input image to generate at least a portion of a first convolutional layer output image; while performing the first convolution operation, performing a second convolution operation based on a second convolutional layer input image to generate at least a portion of a second convolutional layer output image, wherein the second convolutional layer input image is based on the first convolutional layer output image; storing the portion of the first convolutional layer output image in a first memory dedicated to storing image data for convolution operations; and storing the portion of the second convolutional layer output image in a second memory dedicated to storing image data for convolution operations.


Not a spike to be found ...
Hi overpup,

We don't want to spread ourselves too thinly.
 
  • Haha
  • Like
Reactions: 5 users

Terroni2105

Founding Member
Hey @TopCat, I reckon I’ve got exciting news to share! Keep your paws crossed that our sleuthing skills will soon be validated by an official announcement - I can already hear you happily purring away!

I only just found the time to watch the recorded CVPR 2023 Workshop presentations by Kynan Eng (CEO of iniVation) and Nandan Nayampally - they both have such wonderfully soothing voices, by the way.

The mosaic of conjecture we’ve both been creating over the past couple of weeks is almost complete; here are two more tiles to lay, one of them a shiny golden one.

While the videos’ tech content is way above my pay grade, both presentations were quite illuminating, nevertheless! Let me share with you what piqued my interest:

I first watched Kynan Eng’s talk on iniVation’s new Aeveon sensor:



The wallaby featured on the iniVation website must have hopped away and is now presumably roaming freely somewhere around Lake Zurich, as it didn’t make an appearance during the presentation. Instead, it had been substituted by a beautiful, iridescent hummingbird, legendary for its rapid wing beat, which is notoriously difficult to capture on regular camera without blurring.

Watch from 8 min onwards, where Kynan Eng starts to talk about the new, soon-to-be released Aeveon sensor:
„What's important to note here is, this is mainly digital. We've moved quite a distance away from the analog circuit in current DVS pixels…"

And from 10:45 min onwards: „So the chip doesn’t exist, yet. We are getting very close to the first tape-out of it now, but because it is in digital, we are able to create an emulator for this, for the chip.”


Well, I immediately took a liking to the sound of “digital”, especially since Synsense, iniVation’s sister start-up and partner for their Speck SoC, is into analog SNNs, but it got even better, when I then listened to Nandan Nayampally’s virtual presentation (which explains why he was nowhere to be seen in the CVPR 2023 Workshop group photo - I had actually hoped to spot him next to the likes of Kynan Eng, Tobi Delbrück or André van Schaik, which could have hinted at the fact that he was in conversation with one of them just before the group photo was taken. Clever idea of mine, eh?!)





Just over a minute into the presentation, I couldn’t believe my ears: “Let me start with the key technology changes that we are making to support our partners like Prophesee, iniVation and other folks that are building not only event-based solutions…”

WHAT???!!! My real-time auditory sensor processing DID get that right, didn’t it?! Listen for yourself and correct me if I’m wrong…

Did he really just let slip that iniVation is indeed a Brainchip partner?!

They obviously didn’t get listed under “partners” on the “Brainchip at a glance” presentation slide, as there hasn’t been any official announcement so far, but IMO this (unintentionally) revealing statement strongly supports our hypothesis.

Admittedly, it could have been a lapsus linguae, a disclosure of our Brainchip CMO’s secretly harboured wish of teaming up with iniVation, but honestly, how probable is that?! Much more likely it was an inadvertent divulgence in front of a small audience of computer vision researchers, plus the Aeveon sensor screams Akida, doesn’t it?
I suppose we’ll find out soon, as “we are getting very close to the first tape-out” of the new wonder chip.

My opinion only, DYOR.

Sensational :D Thanks for all your sleuthing and contributions, Frangipani
 
  • Like
  • Love
  • Fire
Reactions: 17 users
Always some interesting bits when doing an EDGAR search for a few minutes.

These guys' report, filed at the end of June, shows they were flicking out stock to short.

PACE SELECT ADVISORS TRUST

Only a small amount, but they all add up, and I haven't bothered going through all the lodgements for all the US-listed funds etc.

Screenshot_2023-07-06-13-23-07-87_4641ebc0df1485bf6b47ebd018b5ee76.jpg



On the flip side, and again a small amount... a back-of-the-couch lunch money purchase... IBM grabbed close to 600,000 shares at some point, probably through State Street or someone.
 
  • Like
  • Thinking
  • Wow
Reactions: 18 users

MDhere

Regular
Good Morning Chippers,

Just cut open my piece of morning fruit & look what I found.

😃 .

Impatient shareholder, YES.

* Artist's impression; any conclusions drawn from the imagery ( 🍎 ) are purely those of the viewer's vivid imagination and are NOT expressed or implied, even in the slightest, by BrainChip management or those engaged by them.

Bring on 2nd Generation AKIDA & some revenue.

Regards,
Esq.
Pretty sure I read that they are going straight to IP licensing with the 2nd gen, no need for chip development. I'd need to dig that info up, but I'm pretty sure I read it. In any case, nice chip. Will give you a beer next time if you can chisel one of those for me 🤣
 
  • Like
  • Fire
  • Haha
Reactions: 7 users

buena suerte :-)

BOB Bank of Brainchip
Always some interesting bits when doing an EDGAR search for a few minutes.

These guys' report, filed at the end of June, shows they were flicking out stock to short.

PACE SELECT ADVISORS TRUST

Only a small amount, but they all add up, and I haven't bothered going through all the lodgements for all the US-listed funds etc.

View attachment 39337


On the flip side, and again a small amount... a back-of-the-couch lunch money purchase... IBM grabbed close to 600,000 shares at some point, probably through State Street or someone.
Let's hope they don't stop there and sift out all the other illegal crap that's going on in the ASX!!! 😡😡

 
  • Like
  • Love
Reactions: 14 users

cosors

👀

This made me think of BrainChip's demo from way back in 2017/18-ish, of a car learning to drive a course through trial and error and adapting to its surroundings, compared to traditional machine learning where you have to retrain the model again and again.

It made me wonder how much this robot would improve with the help of Akida. Man, I would like to see a demo like this with Akida!

I can't find that original demo, so if anyone who knows where it is could post a link, that would be great! :)

Thanks for the very amusing and very interesting video! I would also like to see the Brainchip team get in touch with AI Warehouse.

"In every “AI learns to walk” video I’ve seen, the AI either learns to walk in a weird, non-human way, or they use motion capture of a real person walking and simply train the AI to imitate that. I thought it was weird that nobody tried to train it to walk properly from scratch (without any external data), so I wanted to give it a shot! That’s what I said 4 months ago. It’s been really difficult, but I’ve finally managed to do it, so please watch the whole video! The final result ended up being awesome :)

NOTE: From the last video, you guys made it clear you didn’t like that Albert had his brain reset, so from now his brain is here to stay (hopefully)! The next video I make with Albert will start with the brain we trained in this video, so with every video Albert will become more and more capable until he eventually learns to break out of my computer and take over the world. You also can only see one Albert, but there are actually 200 copies of Albert and the room he’s in training behind the camera to speed up the training.

If you want to learn more about how Albert actually works, you can read the rest of this very long comment I wrote explaining exactly how I trained him! (and please let the video play in the background while reading so YouTube will show Albert to more people) I created everything using Unity and ML-Agents. Albert is controlled entirely by an artificial brain (neural network) which has 5 layers, the first layer consists of the inputs (the information Albert is given before taking action, like his limb positions and velocities), the last layer tells him what actions to take and the middle 3 layers, called hidden layers, are where the calculations are performed to convert the inputs into actions. His brain was trained using the standard algorithm in reinforcement learning; proximal policy optimization (PPO).

For each of Albert’s limbs I’ve given him (as an input) the position, velocity, angular velocity, contacts (if it’s touching the ground, wall or obstacle) and the strength applied to it. I’ve also given him the distance from each foot to the ground, direction of the closest target, the direction his body’s moving, his body’s velocity, the distance from his chest to his feet and the amount of time one foot has been in front of the other. As for his actions, we allow Albert to control each body part’s rotation and strength (with some limitations so his arm can’t bend backwards, for example).

Just like the last videos, Albert was trained using reinforcement learning. For each of Albert's attempts, we calculate a score for how 'good' it was and make small, calculated adjustments to his brain to try to encourage the behaviors that led to a higher score and avoid those that led to a lower score. You can think of increasing Albert’s score as rewarding him and decreasing his score as punishing him, or you can think about it like natural selection where the best performing Alberts are most likely to reproduce. For this video there are 13 different types of rewards (ways to calculate Albert's score), we start off with only a couple and with each new room add more, always in an attempt to get him to walk.

Room 1: We start off very simple in the first room, we reward him based on how much he moved to the target and we punish him for moving in the wrong direction. This led to Albert doing the worm towards the target, since he figured out that was the easiest way for him to move the quickest/get the highest score. It would have been possible to get Albert to walk in a janky way by just rewarding him for moving towards the target and also punish him for falling as a team at Google (DeepMind) showed in 2017, but I thought it would make for a more enjoyable video if he starts off with the worm and over time learns to use his legs, rather than immediately being able to partially walk.

Room 2: In the second room we start checking if his limbs hit the ground. If the limb that hits the ground is a foot we reward him (but only if it's in front of his other foot, more on that later), if it isn’t, we punish him. I also made it so Albert wasn’t rewarded at all unless his chest was high enough to force it to at least be partially standing. As seen in the video, this encourages him to not fall over and encourages him to use his feet to do it. We also introduced a new reward designed to encourage smoother movement; if he approaches the maximum strength allowed on a limb he's punished, and he's rewarded if he uses a strength of almost 0. This encourages him to opt for the more human-like movement of using a bit of strength from many limbs as opposed to a lot of strength from one limb.

Room 3: This is where we start to polish Albert’s gait that developed in room 2 and teach him to turn. From here on we start using the chest height calculation as another direct reward where the higher his chest is the more he’s rewarded in an attempt to get him to stand up as straight as possible. These rewards so far give Albert a decent gait, however he’s still not using both of his feet (which was by far the hardest part of this project), so room 4 is designed to do exactly that.

Room 4: We get Albert to take more steps from a few additional rewards. To start, we introduce a 2 second timer that resets when one foot goes in front of the other. We reward Albert whenever this timer is above 0 (the front foot has been in front for < 2 seconds), and we punish him whenever the timer goes below 0 (the front foot has been in front > 2 seconds). We add another reward proportional to the distance of his steps to encourage him to take larger steps. To smooth out the movement, we also add a punishment every frame proportional to the difference in his body’s velocity from the previous frame to the current frame, so if he’s moving at a perfectly consistent velocity he isn’t punished at all, and if he makes very quick erratic movements he’s punished a lot.

Room 5: For the final room the only change I made to the reward function was to go back to an earlier version of a reward. Throughout the other rooms I had been tinkering with how I should reward Albert’s feet being grounded, my initial thought was to only reward the front foot for being grounded to try to get him to put more weight on his front foot when taking steps, but somewhere along the way I changed it to just rewarding Albert for any foot being grounded, and that was the version Albert trained with in rooms 3 and 4. For this final room I switched back to the old front foot grounded reward which resulted in a much nicer looking walk. Also, the video makes it seem like I never reset Albert’s brain, that isn't entirely true, I had to occasionally reset it because of something called decaying plasticity.

Decaying plasticity was a big issue. Basically, Albert’s brain specializes a lot from training in one room, then training in the next room on top of that brain is difficult because he first needs to unlearn that specialization from the first room. The best way to solve the issue is by resetting a random neuron every once in a while so over time he “forgets” the specialization of the network without it ever being noticeable, the problem is I don’t know how to do that through ML-Agents. My solution was to keep training on top of the same brain, but if Albert’s movement doesn’t converge as needed I record another attempt trained from scratch, then stitch the videos together when their movements are similar. If you know how to reset a single neuron in ML-Agents please let me know! The outcome from both methods is exactly the same, but it would be a smoother experience having the neurons reset over time instead of all at once.

For rooms 1 to 4 I only allowed Albert to make a decision every 5 game ticks, but for the final room I removed that constraint and let him make decisions every frame. I found if Albert makes a decision every game tick it’s too difficult for him to commit to any proper movements, he ends up just making very small movements like slightly pushing his front foot forward when he should be taking a full step. The 5 game tick decision time forces him to commit to his decision for at least 5 game ticks so he ends up being more careful when moving a limb. When I recorded him beating the final room I removed this limitation because he’s already learned to commit to his actions so allowing him to make a decision every tick just results in a smoother motion.

If you’re still reading this thank you for being so interested in the project! I’d like to upload much more often than once every few months, and to do that I need some help. I have 2 part time positions open, one for a Unity AI Developer and one for a Unity Scene Designer. It would start off as part time (paid per project) but I’d love to get someone full time provided they’re skilled enough:) If you think you’d be able to help, please apply here for the AI Developer position: forms.gle/rExRJCKcxNmxnBRu5 and here for the Scene Designer position: forms.gle/VafZTMZ8QMruSBiRA I’ve hidden these job postings in this long pinned comment to make sure anybody who applies is interested enough in the videos to actually read the whole comment, so thank you for reading all the way through!:D Also if you have any ideas for how to improve the AI (or solve the issue of decaying plasticity with ML-Agents), include the text "Technical idea" in your comment so I can find it easier! Thank you so much for watching, this video took me 4 months to make, so please, if you enjoyed it or learned something from it, share it with someone you think will also enjoy it! :)"
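
(For anyone who wants a feel for how that kind of reward shaping looks in code, here is a very rough Python sketch of the "room 2" style logic described above: gate everything on chest height, reward a grounded leading foot, punish any other limb touching the ground, and nudge the agent toward gentle limb strengths. The names, structure and thresholds are my own invention for illustration, not AI Warehouse's actual Unity/ML-Agents code.)

```python
from dataclasses import dataclass

@dataclass
class LimbState:
    is_foot: bool
    touching_ground: bool
    strength: float        # 0.0 .. 1.0 fraction of the allowed maximum torque

def step_reward(limbs, chest_height, front_foot_leads, min_chest_height=0.8):
    """Toy version of the 'room 2' reward shaping from the video notes."""
    # No reward at all unless the chest is high enough (forces partial standing).
    if chest_height < min_chest_height:
        return -0.1

    reward = 0.0
    for limb in limbs:
        if limb.touching_ground:
            if limb.is_foot and front_foot_leads:
                reward += 1.0          # good: weight on the leading foot
            elif not limb.is_foot:
                reward -= 1.0          # bad: hands/knees/chest on the ground
        # Encourage smooth, low-effort movement: near-zero strength is rewarded,
        # near-maximum strength is punished.
        if limb.strength > 0.9:
            reward -= 0.2
        elif limb.strength < 0.1:
            reward += 0.05
    return reward

# Example: both feet involved, front foot leading, relaxed limbs.
limbs = [LimbState(True, True, 0.05), LimbState(True, False, 0.3),
         LimbState(False, False, 0.2), LimbState(False, False, 0.2)]
print(step_reward(limbs, chest_height=1.0, front_foot_leads=True))
```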
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users

Jannemann

Member
Hey @TopCat, I reckon I’ve got exciting news to share! Keep your paws crossed that our sleuthing skills will soon be validated by an official announcement - I can already hear you happily purring away!

I only just found the time to watch the recorded CVPR 2023 Workshop presentations by Kynan Eng (CEO of iniVation) and Nandan Nayampally - they both have such wonderfully soothing voices, by the way.

The mosaic of conjecture we’ve both been creating over the past couple of weeks is almost complete; here are two more tiles to lay, one of them a shiny golden one.

While the videos’ tech content is way above my pay grade, both presentations were quite illuminating, nevertheless! Let me share with you what piqued my interest:

I first watched Kynan Eng’s talk on iniVation’s new Aeveon sensor:



The wallaby featured on the iniVation website must have hopped away and is now presumably roaming freely somewhere around Lake Zurich, as it didn’t make an appearance during the presentation. Instead, it had been substituted by a beautiful, iridescent hummingbird, legendary for its rapid wing beat, which is notoriously difficult to capture on regular camera without blurring.

Watch from 8 min onwards, where Kynan Eng starts to talk about the new, soon-to-be released Aeveon sensor:
„What's important to note here is, this is mainly digital. We've moved quite a distance away from the analog circuit in current DVS pixels…"

And from 10:45 min onwards: „So the chip doesn’t exist, yet. We are getting very close to the first tape-out of it now, but because it is in digital, we are able to create an emulator for this, for the chip.”


Well, I immediately took a liking to the sound of “digital”, especially since Synsense, iniVation’s sister start-up and partner for their Speck SoC, is into analog SNNs, but it got even better, when I then listened to Nandan Nayampally’s virtual presentation (which explains why he was nowhere to be seen in the CVPR 2023 Workshop group photo - I had actually hoped to spot him next to the likes of Kynan Eng, Tobi Delbrück or André van Schaik, which could have hinted at the fact that he was in conversation with one of them just before the group photo was taken. Clever idea of mine, eh?!)





Just over a minute into the presentation, I couldn’t believe my ears: “Let me start with the key technology changes that we are making to support our partners like Prophesee, iniVation and other folks that are building not only event-based solutions…”

WHAT???!!! My real-time auditory sensor processing DID get that right, didn’t it?! Listen for yourself and correct me if I’m wrong…

Did he really just let slip that iniVation is indeed a Brainchip partner?!

They obviously didn’t get listed under “partners” on the “Brainchip at a glance” presentation slide, as there hasn’t been any official announcement so far, but IMO this (unintentionally) revealing statement strongly supports our hypothesis.

Admittedly, it could have been a lapsus linguae, a disclosure of our Brainchip CMO’s secretly harboured wish of teaming up with iniVation, but honestly, how probable is that?! Much more likely it was an inadvertent divulgence in front of a small audience of computer vision researchers, plus the Aeveon sensor screams Akida, doesn’t it?
I suppose we’ll find out soon, as “we are getting very close to the first tape-out” of the new wonder chip.

My opinion only, DYOR.


Hi,

I wasn't online for a while, so I don't know if you already have this information.

Mitre > Inivation

The company Mitre (USA)
https://www.mitre.org/
has some interesting connections to the world of neuromorphic computing. They have been testing neuromorphic hardware from IBM, Intel and others since 2015, so they know this space. Mitre is a big US company in cybersecurity, health, the ISS space station and more, and they work closely with the government.

I attached a file where you can read about a vision camera built with iniVation, which "WAS" probably working with SynSense at that time. For Mitre...

"Extreme Machine Vision
High-performance neuromorphic vision systems for demanding real-time applications"


With that new information, it could now be BRN > iniVation > Mitre...

This is on iniVation's homepage, and they acknowledge working with BrainChip.


It's an old video, but remember the news from Edge Impulse.

https://docs.edgeimpulse.com/expert...g-projects/brainchip-akida-traffic-monitoring

Same use case with traffic.

So the bottom line is that Mitre knows BRN, maybe already more... Could be interesting to keep an eye on Mitre.

I hope this is new information.
Have a nice day.
 

Attachments

  • Doucette.pdf
    1.5 MB · Views: 171
  • Like
  • Fire
  • Love
Reactions: 42 users
Question: Does anyone know why BrainChip has two LinkedIn profiles? I once thought it could be region-related, or that the one with fewer followers was an older profile, but neither of those two scenarios makes sense when you look at the employees on each of the two profiles.

The positive is that there are now 92 employees across both profiles.

(I hope the answer is that one profile won't be big enough for us in the near future :) )
 
  • Like
  • Haha
  • Fire
Reactions: 16 users
Could someone tell me, are we still partners with Socionext?

BrainChip and Socionext Provide a New Low-Power Artificial Intelligence Platform for AI Edge Applications.

It's just that Socionext was down nearly 23% on the Tokyo Stock Exchange today, and I was wondering why.

View attachment 39360
Here is an article explaining why:


Haven't looked into why three of the top shareholders have sold, but it looks like it was planned and doesn't seem to be a real issue for the long term.
 
  • Like
  • Thinking
  • Sad
Reactions: 8 users