BRN Discussion Ongoing

Good luck :-)

BOB Bank of Brainchip
If I were the "Bank of Brainchip," I definitely would make sure every one of us (of you) got a little payout! 🎉 FREE SHARES FOR EVERYONE!!!! But until then, we’ll just have to keep dreaming and keep an eye on the current developments. 😉
Yep and now a very depleted..... 'BoB' ... 😭😭
 
  • Haha
  • Like
Reactions: 3 users

7für7

Top 20
  • Fire
  • Like
  • Sad
Reactions: 3 users

7für7

Top 20
  • Like
  • Fire
  • Sad
Reactions: 4 users

Esq.111

Fascinatingly Intuitive.


😲🤔
Afternoon 7für7,

Interesting... from the site you have provided above, under products...

Might be one for Diogenese to clarify whether we may or may not be incorporated in this partnership between Samsung & MediaTek.

ps://bit.ly/3xZboWV

Samsung Completes Validation of Industry’s Fastest LPDDR5X for Use With MediaTek’s Flagship Mobile Platform

Korea on July 16, 2024

Samsung’s 10.7Gbps LPDDR5X was validated on MediaTek’s next-generation Dimensity platform

With over 25% improvement in power consumption and performance, new DRAM enables longer battery life and more powerful on-device AI features for mobile



Samsung Electronics, the world leader in advanced memory technology, today announced it has successfully completed verification of the industry’s fastest 10.7 gigabit-per-second (Gbps) Low Power Double Data Rate 5X (LPDDR5X) DRAM for use on MediaTek’s next-generation Dimensity platform.

The 10.7Gbps operation speed verification was carried out using Samsung’s LPDDR5X 16-gigabyte (GB) package on MediaTek’s upcoming flagship Dimensity 9400 System on Chip (SoC), scheduled to be released in the second half of this year. The two companies have closely collaborated to complete the verification within just three months.

Samsung’s 10.7Gbps LPDDR5X delivers more than 25% improved power consumption and performance compared to the previous generation. This allows longer battery life for mobile devices and enhanced on-device AI performance, boosting the speed of AI features, such as voice-text generation, without requiring server or cloud access.
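As a rough sanity check on what a 10.7Gbps per-pin speed means in practice, here is a back-of-envelope peak-bandwidth calculation. The 64-bit bus width is my assumption for a typical flagship phone SoC, not a figure from the press release.

```python
# Back-of-envelope peak bandwidth for LPDDR5X at 10.7 Gbps per pin.
# The 64-bit bus width is an assumption (common for flagship phone SoCs),
# not a number stated by Samsung or MediaTek.
per_pin_gbps = 10.7
bus_width_bits = 64

peak_gbps = per_pin_gbps * bus_width_bits  # total gigabits per second
peak_gb_per_s = peak_gbps / 8              # convert bits to bytes

print(f"Peak bandwidth: {peak_gb_per_s:.1f} GB/s")  # 85.6 GB/s
```

On those assumptions the package tops out around 85.6 GB/s, which gives a feel for why the press release ties the part to on-device AI workloads.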

“Through our strategic cooperation with MediaTek, Samsung has verified the industry’s fastest LPDDR5X DRAM that is poised to lead the AI smartphone market,” said YongCheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. “Samsung will continue to innovate through active collaboration with customers and provide optimum solutions for the on-device AI era.”

“Working together with Samsung Electronics has made it possible for MediaTek’s next-generation Dimensity chipset to become the world’s first to be validated at LPDDR5X operating speeds up to 10.7Gbps, enabling upcoming devices to deliver AI functionality and mobile performance at a level we’ve never seen before,” said JC Hsu, Corporate Senior Vice President at MediaTek. “This updated architecture will make it easier for developers and users to leverage more AI capabilities and take advantage of more features with less impact on battery life.”

Amid the expansion of the on-device AI market, especially for AI smartphones, energy-efficient, high-performance LPDDR DRAM solutions are becoming increasingly important. Through the validation with MediaTek, Samsung is solidifying its technological leadership in the low-power, high-performance DRAM market and is expected to expand the application beyond mobile to servers, PCs and automotive devices.



Regards,
Esq.
 
  • Like
Reactions: 9 users

7für7

Top 20
  • Like
  • Fire
  • Sad
Reactions: 9 users

Csharmo

Regular
Was just over on the other site for a squiz.

Interesting comment by @Csharmo, who I don't think is over here (?), who feels we are working with Wabtec.

Never looked at them myself but did a very quick google and this Dir popped up. Not looked too deeply yet.

Now... is he just a SH, or does he maybe find some interest in a recent BRN post from 3 weeks ago re the Neurobus, Frontgrade, Airbus consortium :unsure:




View attachment 67747 View attachment 67748
Hello! I do come visit here, mainly just to read. I try to stay on HC and balance out the downrampers
 
  • Like
  • Love
  • Haha
Reactions: 23 users

Fenris78

Regular
  • Love
  • Like
  • Thinking
Reactions: 3 users

IloveLamp

Top 20
1000017634.jpg
 
  • Like
  • Fire
  • Wow
Reactions: 9 users

FJ-215

Regular
  • Like
  • Fire
Reactions: 3 users

7für7

Top 20
Cockroach time ahead 🙄

All good… they tried it… but failed for today
 
Last edited:
  • Like
  • Sad
Reactions: 2 users
Guess you are right @DingoBorat, and 300k shares is a better number than 200k. Only 75k to go between my personal and super totals, and I'd better hurry up while the downrampers on here and HC are working overtime. Plus I’ve gotten an extra 4,000-plus shares buying them on market and not on that fab offer the company were giving us SH 😂


IMG_0862.png
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 11 users

7für7

Top 20
Soon top 50?
 
  • Like
  • Sad
Reactions: 2 users
Soon top 50?
Depends how long the shorters carry on playing the game, and also on whether we get any news in the next 4-6 months, as I’m completely out of money; I just spent my airfare to the UK on some more today 😂 Plus I’m going for a takeover instead, as top 50 sounds boring.
 
Last edited:
  • Haha
  • Wow
Reactions: 6 users

Esq.111

Fascinatingly Intuitive.
Evening Chippers,

Listening to ABC radio ... new technology using AI to take photos of the placenta once a child is born ... looks for anything which may be amiss.

Company called PlacentaVision.

Also of note , University of Pennsylvania helping on the technical side.


* whilst getting lost in all of this ... also came upon this site.

* for the technically minded only, got a light nose bleed on quick perusal.

* this info may have been shared by others , those in the know will know if it's new or not.



Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 15 users
Weird, as most people have their photos taken with the baby.
 
  • Haha
Reactions: 10 users

Esq.111

Fascinatingly Intuitive.
  • Haha
Reactions: 2 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 10 users
Nice :)

Paper HERE


[Submitted on 21 Jul 2024]

Few-Shot Transfer Learning for Individualized Braking Intent Detection on Neuromorphic Hardware

Nathan Lutes, Venkata Sriram Siddhardh Nadendla, K. Krishnamurthy

Objective: This work explores the use of a few-shot transfer learning method to train and implement a convolutional spiking neural network (CSNN) on a BrainChip Akida AKD1000 neuromorphic system-on-chip for developing individual-level, instead of traditionally used group-level, models using electroencephalographic data. The efficacy of the method is studied on an advanced driver assist system (ADAS) related task of predicting braking intention.

Main Results: Efficacy of the above methodology to develop individual-specific braking intention predictive models by rapidly adapting the group-level model in as few as three training epochs while achieving at least 90% accuracy, true positive rate and true negative rate is presented. Further, results show an energy reduction of over 97% with only a 1.3x increase in latency when using the Akida AKD1000 processor for network inference compared to an Intel Xeon CPU. Similar results were obtained in a subsequent ablation study using a subset of five out of 19 channels.

Significance: Especially relevant to real-time applications, this work presents an energy-efficient, few-shot transfer learning method that is implemented on a neuromorphic processor capable of training a CSNN as new data becomes available, operating conditions change, or to customize group-level models to yield personalized models unique to each individual.
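For anyone curious what "adapting a group-level model in a few epochs" looks like mechanically, here is a minimal sketch in plain Python. It illustrates the general idea of few-shot adaptation of a classifier head with a toy logistic model; it is not the authors' CSNN or any Akida/MetaTF code, and all of the numbers are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Group-level" head weights (pretend these came from training on many drivers).
w, b = 0.5, 0.0

# A few labelled samples from one individual: (feature, braking-intent label).
# The scalar features stand in for a frozen backbone's EEG embeddings.
samples = [(2.0, 1), (1.5, 1), (-1.8, 0), (-2.2, 0), (1.0, 1), (-1.0, 0)]

# Few-shot adaptation: three epochs of gradient descent on the head only,
# mirroring the paper's "as few as three training epochs" recipe.
lr = 0.5
for epoch in range(3):
    for x, y in samples:
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x  # logistic-loss gradient w.r.t. the weight
        b -= lr * (p - y)      # logistic-loss gradient w.r.t. the bias

accuracy = sum((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in samples) / len(samples)
print(f"post-adaptation accuracy: {accuracy:.0%}")
```

The design point the paper makes is the same one the sketch relies on: most of the network (the backbone) stays fixed, so only a small number of parameters need updating from a handful of individual samples.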

5. Conclusion
The results show that the methodology presented was effective to develop individual-level models deployed on a state-of-the-art neuromorphic processor with predictive abilities for ADAS relevant tasks, specifically braking intent detection.

This study explored a novel application of deep SNNs to the field of ADAS using a neuromorphic processor by creating and validating individual-level braking intent classification models with data from three experiments involving pseudo-realistic conditions. These conditions included cognitive atrophy through physical fatigue and real-time distraction, and providing braking imperatives via the commonly encountered visual stimulus of traffic lights. The method presented demonstrates that individual-level models could be quickly created with a small amount of data, achieving greater than 90% scores across all three classification performance metrics in a few shots (three epochs) on average for both the ACS and FCAS. This demonstrated the efficacy of the method for different participants operating under non-ideal conditions and using realistic driving cues, and further suggests that a reduced data acquisition scheme might be feasible in the field.

Furthermore, the applicability to energy-constrained systems was demonstrated through comparison of the inference energy consumed with a very powerful CPU, in which the Akida processor offered energy savings of 97% or greater. The Akida processor was also shown to be competitive in inference latency compared to the CPU. Future work could include implementation of the method presented on a larger number of participants, other neuromorphic hardware, different driving scenarios, and in real-world scenarios where individual-level models are created by refining previously developed group-level models in real time.
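One way to read the two headline figures together: a 97% energy reduction at 1.3x the latency implies an even lower average power draw during inference. The absolute joule and second values below are placeholders I made up; only the two ratios come from the paper.

```python
# Combine the paper's two headline ratios: >=97% less inference energy,
# 1.3x higher latency on Akida vs. an Intel Xeon CPU.
# Absolute values are placeholders; only the ratios matter here.
cpu_energy_j = 1.0   # hypothetical energy per inference on the CPU
cpu_latency_s = 1.0  # hypothetical latency per inference on the CPU

akida_energy_j = cpu_energy_j * (1 - 0.97)  # 97% energy reduction
akida_latency_s = cpu_latency_s * 1.3       # 1.3x slower

# Average power = energy / time, so the power ratio shrinks further still.
power_ratio = (akida_energy_j / akida_latency_s) / (cpu_energy_j / cpu_latency_s)
print(f"Akida draws ~{power_ratio:.1%} of the CPU's average inference power")
```

On those ratios, Akida's average power works out to roughly 2.3% of the CPU's, which is why the authors emphasise energy-constrained, real-time applications.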
 
  • Like
  • Love
  • Fire
Reactions: 63 users
Top Bottom