BRN Discussion Ongoing

Slade

Top 20
It's been 2 years, 5 months and 22 days since the agreement. That seems like a long time to some, but Rome wasn't built in a day, and each day that passes brings us closer to being unveiled imo.
BrainChip is trusted by Valeo. It says so on our website. They gotta be trusting us with something.
 
  • Like
  • Love
Reactions: 20 users
BrainChip is trusted by Valeo. It says so on our website. They gotta be trusting us with something.
And one day millions of people will trust us with their safety.
 
  • Like
  • Fire
  • Love
Reactions: 15 users

White Horse

Regular

How's this for Bull Shit !!!


Course of Sales​

Course of sales table

Time         Price $  Volume  Value $  Market
12:59:50 PM  0.735    1,222   898.170  ASX
12:58:33 PM  0.735    1       0.735    ASX
12:58:32 PM  0.735    1       0.735    ASX
12:58:30 PM  0.735    1       0.735    ASX
12:58:29 PM  0.735    1       0.735    ASX
12:58:26 PM  0.735    2       1.470    ASX
12:58:24 PM  0.735    2       1.470    ASX
12:58:22 PM  0.735    1       0.735    ASX
12:58:21 PM  0.735    1       0.735    ASX
12:58:20 PM  0.735    1       0.735    ASX
12:58:16 PM  0.735    4       2.940    ASX
12:58:12 PM  0.735    1       0.735    ASX
12:58:10 PM  0.735    1       0.735    ASX
12:58:09 PM  0.735    1       0.735    ASX
12:58:08 PM  0.735    1       0.735    ASX
12:58:05 PM  0.735    7       5.145    ASX
12:57:57 PM  0.735    1       0.735    ASX
12:57:56 PM  0.735    2       1.470    ASX
12:57:54 PM  0.735    2       1.470    ASX
12:57:52 PM  0.735    2       1.470    ASX
12:57:48 PM  0.735    3       2.205    ASX
12:57:45 PM  0.735    1       0.735    ASX
12:57:41 PM  0.735    11      8.085    ASX
12:57:25 PM  0.735    4       2.940    ASX
12:57:18 PM  0.735    12      8.820    ASX
12:57:00 PM  0.735    13      9.555    ASX
 
  • Sad
  • Like
  • Wow
Reactions: 35 users

Slade

Top 20
And one day millions of people will trust us with their safety.
One day I am going to talk to Akida and tell her about the early days and how she was loved by thousands. That we all had her back.
 
  • Haha
  • Like
Reactions: 15 users

Esq.111

Fascinatingly Intuitive.

White Horse said:
How's this for Bull Shit !!!
(Course of Sales table quoted above)
Afternoon White Horse,

Must be a big fish, Berkshire Hathaway, executing an iceberg order.

Regards,
Esq.
 
  • Like
  • Haha
Reactions: 18 users
One day I am going to talk to Akida and tell her about the early days and how she was loved by thousands. That we all had her back.
I envision a lot of people one day talking to Akida 😛

ADIKA🤔
 
  • Haha
  • Like
Reactions: 9 users

Diogenese

Top 20
  • Like
  • Haha
Reactions: 2 users

Learning

Learning to the Top 🕵‍♂️
Here's a Valeo Lidar video refresher.



Learning
 
  • Like
  • Fire
Reactions: 17 users
Just a reminder I'm giving away a 1 month subscription to TSE for one lucky member.
 
  • Like
Reactions: 6 users

Lex555

Regular
28nm next generation Neuralink with integrated Arm m-core!!!!!
 
  • Like
  • Thinking
  • Wow
Reactions: 13 users

Just a short extract from the article in case others are having trouble getting it to open:​

Neuromorphic TinyML​

TinyML (tiny machine learning) is all about executing ML and NNs on tightly memory- and processor-constrained devices such as microcontrollers (MCUs). As a result, it's a natural step to incorporate a neuromorphic core for TinyML use cases, due to several distinct advantages.
Neuromorphic devices are event-based processors that operate only on non-zero events. Event-based convolutions and dot products are significantly less computationally expensive because zeroes are never processed.
Event-based convolution performance improves further as the number of zeroes in the filter channels or kernels grows. This, together with activation functions such as ReLU being centered around zero, gives event-based processors their inherent activation sparsity, reducing the effective MAC requirements.
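The zero-skipping idea described here can be sketched in a few lines of NumPy. This is purely illustrative arithmetic, not Akida's actual datapath: it just shows that skipping zero activations gives the same answer with fewer multiply-accumulates.

```python
import numpy as np

def event_based_dot(activations, weights):
    """Dot product that only processes non-zero 'events'.

    Returns the result and the number of multiply-accumulates
    actually performed, to show the savings from sparsity.
    """
    nonzero = np.flatnonzero(activations)       # indices of events
    result = activations[nonzero] @ weights[nonzero]
    return result, len(nonzero)

rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=1024), 0.0)   # ReLU output: roughly half zeros
w = rng.normal(size=1024)

dense_macs = acts.size                          # conventional cost
res, event_macs = event_based_dot(acts, w)

assert np.isclose(res, acts @ w)                # identical numerical answer
print(f"dense MACs: {dense_macs}, event MACs: {event_macs}")
```

With ReLU pushing about half the activations to zero, the event-based path does roughly half the MACs; the sparser the feature maps, the larger the saving.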
Furthermore, because neuromorphic devices process spikes, more aggressive quantization can be used, such as 1-, 2-, and 4-bit, versus the conventional 8-bit quantization of ANNs. Moreover, because the SNN is implemented directly in hardware, neuromorphic devices (such as Akida from BrainChip) have the unique capability of on-edge learning.
That's not possible with conventional devices, which only simulate a neural network on a Von Neumann architecture, making on-edge learning computationally expensive with large memory overheads in a TinyML system's budget. In addition, when training an NN model, integers don't provide enough range to train accurately, so training with 8 bits isn't currently feasible on traditional architectures.
For traditional architectures, a few on-edge learning implementations with machine-learning algorithms (autoencoders, decision trees) have reached production for simple real-time analytics use cases, whereas NNs are still under research.
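To get a feel for what low-bit quantization buys in parameter memory, here is a toy uniform symmetric quantizer. This is my own illustration under stated assumptions — the article does not specify Akida's actual quantization scheme.

```python
import numpy as np

def quantize_symmetric(weights, bits):
    """Uniform symmetric quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for 4-bit
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=4096).astype(np.float32)

q4, s4 = quantize_symmetric(w, bits=4)
w_hat = q4.astype(np.float32) * s4              # dequantize for comparison

err = np.mean(np.abs(w - w_hat))
print(f"4-bit storage: {w.size * 4 // 8} bytes vs "
      f"8-bit: {w.size} bytes; mean abs error: {err:.4f}")
```

Halving the bit width halves parameter memory (4-bit packs two weights per byte), at the cost of a small, bounded reconstruction error per weight.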
To summarize, the advantages of using neuromorphic devices and SNNs at the endpoint include:
  • Ultra-low power consumption (millijoule to microjoule per inference)
  • Lower MAC requirements as compared to conventional NNs
  • Lower parameter memory usage as compared to conventional NNs
  • On-edge learning capabilities

Neuromorphic TinyML Use Cases​

Microcontrollers with neuromorphic cores can excel in use cases throughout the industry (Fig. 3) thanks to their distinct characteristics of on-edge learning, such as:
  • In anomaly-detection applications for existing industrial equipment, using the cloud to train a model is inefficient. Adding an endpoint AI device on the motor and training on the edge would allow for ease of scalability, as equipment aging tends to differ from machine to machine even if they’re the same model.
  • In robotics, the joints of robotic arms wear down over time, becoming untuned and no longer operating as needed. Re-tuning the controller on the edge, without human intervention, avoids calling in a professional, reducing downtime and saving time and money.
  • In face-recognition applications, a user would have to add their face to the dataset and retrain the model on the cloud. With a few snaps of a person’s face, the neuromorphic device can identify the end-user via on-edge learning. Thus, users’ data can be secured on the device, and there’s a more seamless experience. This can be employed in cars, where different users have different preferences on seat position, climate control, etc.
  • In keyword-spotting applications, extra words can be added on the edge for the device to recognize. This can be used in biometric applications, where a person adds a "secret word" they want kept secure on the device.
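To see why on-edge learning of a new face or keyword can be so cheap, here is a deliberately simplified nearest-prototype learner. This is my own illustrative stand-in, not Akida's mechanism: the point is that enrolling a new class is one averaging step over a few feature vectors, with no cloud retraining.

```python
import numpy as np

class PrototypeClassifier:
    """Toy on-device learner: each class is the mean of its example
    feature vectors; inference picks the nearest prototype by cosine
    similarity. Adding a class needs no gradient training at all."""

    def __init__(self):
        self.prototypes = {}

    def learn(self, label, examples):
        # "a few snaps": average a handful of feature vectors
        self.prototypes[label] = np.mean(examples, axis=0)

    def predict(self, x):
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(self.prototypes, key=lambda k: cos(self.prototypes[k], x))

rng = np.random.default_rng(2)
clf = PrototypeClassifier()
alice = rng.normal(loc=1.0, size=(3, 64))   # 3 "snaps" of user Alice
bob = rng.normal(loc=-1.0, size=(3, 64))    # 3 "snaps" of user Bob
clf.learn("alice", alice)
clf.learn("bob", bob)
print(clf.predict(rng.normal(loc=1.0, size=64)))   # nearest prototype: "alice"
```

In a real pipeline the feature vectors would come from a trained backbone network; only the final prototype layer changes on the device, which keeps the user's data local.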

Renesas Electronics

3. These represent some edge-computing learning use cases for neuromorphic devices.

The balance of ultra-low power and strong performance makes neuromorphic endpoint devices suitable for prolonged battery-powered applications, executing algorithms that aren't possible on other low-power devices because those are computationally constrained (Fig. 4). Alternatively, they can replace higher-end devices that offer similar processing power but are too power-hungry. Use cases include:
  • Smartwatches that monitor and process the data at the endpoint, sending only relevant information to the cloud.
  • Smart camera sensors for people detection to execute a logical command. For instance, automated door opening when a person approaches, whereas current technology is based on proximity sensors.
  • Areas with no connectivity or charging capability, such as forests for smart animal tracking, or monitoring undersea pipes for potential cracks using real-time vibration, vision, and sound data.
  • For infrastructure monitoring use cases, where a neuromorphic MCU can be used to continuously monitor movements, vibrations, and structural changes in bridges (via images) to identify potential failures.

Renesas Electronics

4. These use cases can be implemented using ultra-low-power solutions with high performance using SNNs.

On this front, Renesas has acknowledged the vast potential of neuromorphic devices and SNNs. The company licensed a neuromorphic core from BrainChip,3,4 the world's first commercial producer of neuromorphic IP.
References
1. "Neuromorphic computing market – industry analysis, size, share, growth, trends, and forecast, 2020-2028," sheeranalyticsandinsights.com. https://www.sheeranalyticsandinsights.com/market-report-research/neuromorphic-computing-market-21/.
2. "Neuromorphic Chip Market Growth, Forecast (2022-27)," Industry Trends.
3. "BrainChip's Akida set for spaceflight via NASA as Renesas Electronics America signs first IP agreement."
4. "ARM battles RISC-V at Renesas," eeNews Europe.

 
  • Like
  • Fire
  • Love
Reactions: 43 users

Deadpool

hyper-efficient Ai
Got to love Renesas, they love us! And they are not afraid to praise BrainChip. I would say an ANN is imminently imminent. Renesas love us, ARM love us, Prophesee love us, Mercedes love us, Edge Impulse love us, NASA love us, MegaChips love us.
Who loves ya baby
 
  • Like
  • Haha
  • Love
Reactions: 12 users
Why is this market manipulating itself? The US closed very green, the ASX opened very green, and now everything is back down, in the red, or up only a small pip.
 

alwaysgreen

Top 20
Why is this market manipulating itself? The US closed very green, the ASX opened very green, and now everything is back down, in the red, or up only a small pip.
If every stock always followed the US markets, every trader would be loaded.
 
  • Like
  • Love
Reactions: 3 users
I have been thinking about this and it occurred to me that Renesas is marketing AKIDA to the low end of the market with two nodes.

Whereas Edge Impulse is following the course set out by Sean Hehir “
• Refining, expanding, and accelerating our roadmap and see opportunities beyond the edge.”

This is why they are comparing AKIDA to GPUs.

Remember ARM is combining AKIDA with their mass market Cortex M33 processor. This fills in between Renesas and Edge Impulse.

Saturating the market, being essential and ubiquitous seems to be the strategy.

My opinion only DYOR
FF

AKIDA BALLISTA
On reflection, I am still puzzled as to why Edge Impulse has Akida at the high end only, when we know it can operate at ultra-low power. I would have thought the arrow for Brainchip should cover the whole spectrum.
 
  • Like
Reactions: 7 users

mrgds

Regular
28nm next generation Neuralink with integrated Arm m-core!!!!!
Thnx for posting @Lex555 ..............(y)
Many might remember the period when RT was always mentioning "BENEFICIAL AI". Apart from integration with the fast-evolving world of AI, Neuralink aims to restore the abilities of those with mental/physical disabilities.
Should I, like the 1000 eyes, be a little bit excited hearing Neuralink is using a 28nm SoC with an integrated Arm Cortex-M23 core? :eek:
The emphasis on low power consumption and thermal control while using CNN/SNN neuromorphic compute certainly got me "a little bit chubbed up" .......:cool:

AKIDA ( off to the hot tub ) BALLISTA
 
  • Like
  • Haha
  • Thinking
Reactions: 20 users
Thnx for posting @Lex555 ..............(y)
Many might remember the period when RT was always mentioning "BENEFICIAL AI". Apart from integration with the fast-evolving world of AI, Neuralink aims to restore the abilities of those with mental/physical disabilities.
Should I, like the 1000 eyes, be a little bit excited hearing Neuralink is using a 28nm SoC with an integrated Arm Cortex-M23 core? :eek:
The emphasis on low power consumption and thermal control while using CNN/SNN neuromorphic compute certainly got me "a little bit chubbed up" .......:cool:

AKIDA ( off to the hot tub ) BALLISTA
"while using CNN/SNN neuromorphic compute"
Can you point to where you saw that regarding Neuralink?
 
  • Like
Reactions: 4 users

jk6199

Regular
  • Like
  • Fire
Reactions: 10 users

Jefwilto

Regular

Attachments

  • 94CAD8F8-C71E-49BB-BE75-726167F799AA.png
  • Like
Reactions: 6 users
On reflection, I am still puzzled as to why Edge Impulse has Akida at the high end only, when we know it can operate at ultra-low power. I would have thought the arrow for Brainchip should cover the whole spectrum.
I would be careful not to lose sight of the fact that Edge Impulse is a separate corporate entity with existing commercial relationships, which could tie them to promoting another company's product in a particular market sector.

We also do not know the full commercial terms of the other relationships between Brainchip and its partners, where some may have been given certain rights regarding the promotion of AKIDA use cases.

As a corporate entity, MegaChips is promoting AKIDA across all use cases at the Edge and has said they will probably move to producing an 'AKIDA'-powered chip, so they have an off-the-shelf solution for less technologically able customers who do not want to produce their own chips with AKIDA or anyone else's IP.

I am not sure what the actual concern could be. Edge Impulse is describing what AKIDA can do as the stuff of science fiction and comparing AKIDA favourably to GPUs. They clearly have a target market that they are selling AKIDA into, and as we know from that same presentation, they have already achieved at least one commercial engagement following their approach.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 21 users