BRN Discussion Ongoing

Boab

I wish I could paint like Vincent

Getupthere

Regular

From Inspiration to Innovation​

BrainChip will be making waves at CES® this year – will you be part of the excitement?
Experience ultra-efficient Edge AI processing across multiple sensor modalities, including the three Vs: Vision, Voice, and Vibration. Explore the possibilities for next-generation solutions through live demos that showcase the Akida™ neuromorphic processor in action, integrated with numerous partner platforms. The applications range from strengthening security through anti-spoofing and on-chip learning to industrial control through anomaly detection, with numerous others to be unveiled over the coming weeks.
We will also be hosting a special CES edition of the BrainChip Podcast, where we’ll be inviting thought-leaders to share their perspectives on “All things AI”.
Don’t miss out on this exciting opportunity to see this leading-edge technology firsthand, and engage with the BrainChip executives, technical experts, and solutions architects. Stay tuned for more details.
Want to learn more about how BrainChip’s Akida technology expands the boundaries of AI on-chip compute? Contact us today at sales@brainchip.com to schedule a meeting in the BrainChip Suite at CES or virtually for those who cannot be in attendance this year.
Copyright © 2023 BrainChip, Inc. All rights reserved.

23041 Avenida De La Carlota, Suite 250
Laguna Hills CA 92653

 
Let’s hope that BRN is allowed to reveal something at CES. Or even better that someone else says “Akida inside” 🤞
 

Home101

Regular
Let’s hope that BRN is allowed to reveal something at CES. Or even better that someone else says “Akida inside” 🤞
The language of the email is very upbeat; it seems like they will show demos with partners.
 

Esq.111

Fascinatingly Intuitive.
So what I gather from this is: if I take $1 to the casino each day and try to guess the colour on roulette 25 times in a row, I can turn $1 into $33M.


The odds for the first 20 spins are below. Not great odds: 1 in 1,813,778 to turn $1 into $1.048M!

Number of Spins    Ratio             Percentage
1                  1.06 to 1         48.6%
2                  3.23 to 1         23.7%
3                  7.69 to 1         11.5%
4                  16.9 to 1         5.60%
5                  35.7 to 1         2.73%
6                  74.4 to 1         1.33%
7                  154 to 1          0.65%
8                  318 to 1          0.31%
9                  654 to 1          0.15%
10                 1,346 to 1        0.074%
15                 49,423 to 1       0.0020%
20                 1,813,778 to 1    0.000055%
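The table's figures follow from compounding the single-spin win probability of an even-money bet on a European wheel (18 of 37 pockets) while letting the winnings ride so the stake doubles each spin. A quick sketch to reproduce them (the function names are mine, not from the post):

```python
# Sketch reproducing the roulette table above. Assumes a European
# wheel (18 winning pockets of 37) and an even-money bet, with
# winnings left on the table so the stake doubles after every win.
P_WIN = 18 / 37

def odds_against(n):
    """Odds-against ('X to 1') of winning n spins in a row."""
    p = P_WIN ** n
    return (1 - p) / p

def payout(n, stake=1):
    """Dollar value of the stake after n consecutive wins."""
    return stake * 2 ** n

for n in (1, 5, 10, 20, 25):
    print(f"{n:>2} spins: {odds_against(n):>14,.2f} to 1, pays ${payout(n):,}")
```

Twenty straight wins pays 2**20 = $1,048,576 and twenty-five pays 2**25 = $33,554,432, matching the $1.048M and $33M figures quoted.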
Evening SERA2g ,

Ahhhhh .

The Monte Carlo.

Lost a couple of years looking at all sorts of gambling / staking scenarios.

Amazing field of random numbers, and how to cut and splice.

Can be done, yet one will quickly be removed from platforms / bookies once winnings become too frequent / consistent.

They simply cut you off once one gets into reasonably large numbers.

Regards,
Esq.
 

IloveLamp

Top 20

(attached: LinkedIn screenshot, 12 Dec 2023)
 
Something to do hey, while Brainchip isn't presenting during CES 2024 in Las Vegas 😂
 

Moonshot

Regular
(image attachment)
 

Diogenese

Top 20

DeepMind has developed GNoME (Graph Networks for Materials Exploration), a graph neural network that predicts material stability. GNoME has identified 2.2 million new materials, with 380 thousand deemed stable for application in developing computer chips, batteries, and solar panels.

Before GNoME, only 48 thousand stable inorganic crystals were known; the model increased this number almost ninefold. DeepMind claims the model's output is equivalent to 800 years of researchers' work.



This is the kind of thing that concerns me: the absolutely incredible rate of development now becoming possible through "unintelligent" A.I.

In the very near future, technologies will be made obsolete almost the second they are created (known).
Just like yesterday's LSTM.
 
Probably worthless info, and likely mentioned already at some stage by longer-term holders than myself.

Her contribution at BrainChip sounds significant, but not sure this adds to a potential Qualcomm-BrainChip relationship. Timing might not be quite right either 🤷🏼‍♂️

Appears this ex-BRN employee is on a similar path to us now :unsure:

They say they are running some type of neural network cores / IP and RISC-V.

Haven't searched any patents yet to see what they're using... maybe @Diogenese could take a look if he has time?


A Synergistic Future: How Always-On Edge AI Complements Generative AI in Shaping the AI Ecosystem​


Mouna ELKHATIB

Co-Founder, CEO & CTO at AONDevices, Inc. | …

Published Nov 24, 2023

The AI landscape is undergoing a remarkable transformation, led by Generative AI and always-on edge AI. Together, these technologies are redefining our interaction with devices, making them more intuitive, responsive, and aligned with our everyday needs.

Complementary Functions:
Generative AI: Specializing in creating complex content like text, images, and music, these AI forms thrive in cloud-based environments requiring significant computational power.
Always-On Edge AI: Embedded in devices such as smartphones and smartwatches, this technology operates with minimal power, processing data locally and in real-time, with a focus on efficiency and privacy.
Devices powered by the AON1100™ chip exemplify this technology, as they enable voice command recognition, acoustic event detection, acoustic scene classification, speaker identification and sensor fusion, enhancing user interaction without compromising battery life.

Integration for Enhanced Applications in Personal Devices:
Data Processing and Responsiveness: Always-on edge AI in devices like smart wearables can track health metrics or user behavior. This real-time data, processed through chips like the AON1100™, can enhance the capabilities of Generative AI, leading to highly personalized content and interactions.
Expanding Generative AI in Personal Technology:
Dynamic Content Creation: A smart device using edge AI can understand user preferences and context. With the AON1100™'s ability to detect acoustic scenes and user's activity, this information could feed into a Gen AI system to create custom visual content, personalized news feeds, or even unique music playlists tailored to the user's current activity or mood.
Enhanced Interactivity and Personalization: Generative AI, combined with user data processed by edge AI, can lead to innovative applications. For instance, a smart device with the AON1100™ chip could provide real-time health monitoring info like coughing and snoring data. Generative AI could suggest personalized workouts or dietary advice based on this health data.
In devices like hearing aids, the AON2000™ denoiser algorithm can adaptively filter and enhance specific sounds, and when combined with Gen AI, these devices can offer real-time, context-aware auditory assistance.

The integration of always-on edge AI with Generative AI is set to revolutionize personal devices. This combination significantly elevates the level of personalization in technology. The AON1100™ and AON2000™ are at the forefront of this transformation, providing the capabilities necessary for your devices to be more intuitive and responsive to your needs.

If you share my passion for these topics, I welcome the opportunity for a follow-up discussion. Please don't hesitate to reach out to me. Additionally, you can book a meeting with me at CES 2024 through this link
 

Guzzi62

Regular

Samsung new Galaxy Buds could one-up the Pixel Buds with on-device AI​

The new earbuds are expected to power live audio and video call translations — all processed offline.

That sounds very interesting, but no idea if Akida is involved; one can hope.

 

Diogenese

Top 20
Hi FMF,

This lady has a background in acoustic circuits.

She filed 6 patents in her own name between 2008 and 2013, mostly for audio processing circuits.

She filed 2 patents for Qualcomm again for audio.

She was at BRN for 6 months and got her name on 3 patent applications. She was involved in making the FPGA for the Akida proof-of-concept chip. Anil would have been very hands-on in developing this chip.

US10157629B2 Low power neuromorphic voice activation system and method 20160205
The present invention provides a system and method for controlling a device by recognizing voice commands through a spiking neural network. The system comprises a spiking neural adaptive processor receiving an input stream that is being forwarded from a microphone, a decimation filter and then an artificial cochlea. The spiking neural adaptive processor further comprises a first spiking neural network and a second spiking neural network. The first spiking neural network checks for voice activities in output spikes received from artificial cochlea. If any voice activity is detected, it activates the second spiking neural network and passes the output spike of the artificial cochlea to the second spiking neural network that is further configured to recognize spike patterns indicative of specific voice commands. If the first spiking neural network does not detect any voice activity, it halts the second spiking neural network


US11157798B2 Intelligent autonomous feature extraction system using two hardware spiking neutral networks with spike timing dependent plasticity 20160212
Embodiments of the present invention provide an artificial neural network system for feature pattern extraction and output labeling. The system comprises a first spiking neural network and a second spiking neural network. The first spiking neural network is configured to autonomously learn complex, temporally overlapping features arising in an input pattern stream. Competitive learning is implemented as spike timing dependent plasticity with lateral inhibition in the first spiking neural network. The second spiking neural network is connected with the first spiking neural network through dynamic synapses, and is trained to interpret and label the output data of the first spiking neural network. Additionally, the labeled output of the second spiking neural network is transmitted to a computing device, such as a central processing unit for post processing

US2017236027A1 INTELLIGENT BIOMORPHIC SYSTEM FOR PATTERN RECOGNITION WITH AUTONOMOUS VISUAL FEATURE EXTRACTION 20160216
Embodiments of the present invention provide a hierarchical arrangement of one or more artificial neural networks for recognizing visual feature pattern extraction and output labeling. The system comprises a first spiking neural network and a second spiking neural network. The first spiking neural network is configured to autonomously learn complex, temporally overlapping visual features arising in an input pattern stream. Competitive learning is implemented as spike time dependent plasticity with lateral inhibition in the first spiking neural network. The second spiking neural network is connected by means of dynamic synapses with the first spiking neural network, and is trained for interpreting and labeling output data of the first spiking neural network. Additionally, the output of the second spiking neural network is transmitted to a computing device, such as a CPU for post processing.

I doubt that she would have the in-depth understanding of NNs that Kristofer Carlsson would have.

She is named as inventor on several AON patents.

The AONDevices patents use NNs, basically treating them as a COTS product, but there is no detail of the circuit architecture at NPU level, so I don't think they are designing NNs so much as using NNs.

AON employs 5 ML developers/engineers.
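The two-network gating in the first patent quoted above (US10157629B2) is easy to picture: a tiny always-on detector watches for voice activity and only wakes the larger command recognizer when something is heard, which is where the power saving comes from. A rough sketch, with stand-in detector and recognizer logic of my own (not BrainChip's actual spike processing):

```python
# Sketch of the two-stage gating in US10157629B2: a small always-on
# network checks for voice activity; only if activity is detected is
# the larger command-recognition network run at all.
# Thresholds here are illustrative stand-ins, not from the patent.

def voice_activity(spikes):
    """First-stage check: any spikes at all in the cochlea output?"""
    return sum(spikes) > 0

def recognize_command(spikes):
    """Second-stage recognizer; a real system matches spike patterns."""
    return "wake" if sum(spikes) >= 3 else None

def process_frame(spikes):
    # Gate: the second network stays halted (drawing no power)
    # unless the first network reports voice activity.
    if not voice_activity(spikes):
        return None
    return recognize_command(spikes)
```

For example, a silent frame like `[0, 0, 0, 0]` never reaches the recognizer, while a dense spike burst does.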
 
Thanks D

Knew you'd explain it better.

So, if they aren't designing their own NNs, could one presume they are getting them COTS then?

Not necessarily Akida, but sourcing outside their org potentially, which wouldn't rule Akida out entirely?

I was just looking at their patents as well and noticed that in one they spoke of utilising or choosing various CNN, RNN, LSTM etc., but no mention of SNN, unfortunately.
 
Borrowed from Socionext's FB page, posted 6 days ago.

Not holding my breath... yet... but the highlighted section re power and non-reliance on a CPU does make me wonder. Or is that hope, or dream, or wish, or... :LOL:


(screenshots attached)
 

goodvibes

Regular

cosors

👀
Good evening Chippers ,

Couple of notes from the past brought to the fore,

Absolute monza ready to bear fruit ,
The executive suite have been fully engorging themselves in between times; actual shareholders about to as well, as they fricken well deserve.

Several notes & general ponderances to follow.

CRANK THE TUNES.



Regards,
Esq.

You like smoking pipe?
 

Esq.111

Fascinatingly Intuitive.
Morning Cosors,

From my commercial fishing days.

Could not roll a smoke with wet hands, so went to a pipe.

It's a filthy habit which one day I wish to shake.

Regards,
Esq.
 

MDhere

Regular
Could this be us involved???

Edge Impulse Object Detection on RA8D1​

RA8D1
480 MHz Arm® Cortex®-M85 Based Graphics Microcontroller with Helium and TrustZone®
Can't open the attachment, GV?
 