BRN Discussion Ongoing

Moonshot

Regular

One of the creators of SpikeGPT thinks it can run on neuromorphic chips… early days, but exciting.
 

Attachments

  • 21BF147B-7FE9-46FA-AB08-5C74EC68D9AC.jpeg
  • Like
  • Fire
  • Love
Reactions: 15 users

White Horse

Regular
Instos increasing holdings. Vanguard, to be precise. Up 0.05%.

Institutional Ownership: 7.79%
Top 10 Institutions: 6.73%
Mutual Fund Ownership: 7.05%
Float: 83.08%
Institution Name | Shares Held (% Change) | % Outstanding
Vanguard Investments Australia Ltd. | 23,171,293 (+0.06%) | 1.34
The Vanguard Group, Inc. | 22,963,058 (+0.01%) | 1.33
BlackRock Institutional Trust Company, N.A. | 18,881,892 (+0.04%) | 1.09
BlackRock Advisors (UK) Limited | 12,901,595 (-0.01%) | 0.75
LDA Capital Limited | 10,000,000 (-0.07%) | 0.58
Irish Life Investment Managers Ltd. | 9,141,627 (-0.00%) | 0.53
FV Frankfurter Vermögen AG | 7,500,000 (+0.01%) | 0.43
BetaShares Capital Ltd. | 5,308,642 (+0.00%) | 0.31
BlackRock Investment Management (Australia) Ltd. | 3,157,867 (+0.00%) | 0.18
State Street Global Advisors Australia Ltd. | 3,133,230 (+0.00%) | 0.18
State Street Global Advisors (US) | 2,641,218 (+0.01%) | 0.15
First Trust Advisors L.P. | 2,016,088 (-0.00%) | 0.12
Nuveen LLC | 1,773,407 (+0.00%) | 0.10
Charles Schwab Investment Management, Inc. | 1,543,302 (-0.00%) | 0.09
California State Teachers Retirement System | 1,479,448 (+0.01%) | 0.09



https://www.msn.com/en-au/money/wat...dbe88e447bdea7c96&duration=1D&l3=L3_Ownership
Make of this what you will. Tonight's MSN details.
When I last looked, last week, LDA were still on 10 million.

https://www.msn.com/en-au/money/wat...a9&duration=1D&l3=L3_Ownership&investorId=all

Institutional Ownership: 9.18%
Top 10 Institutions: 7.85%
Mutual Fund Ownership: 7.59%
Float: 83.04%
Institution Name | Shares Held (% Change) | % Outstanding
LDA Capital Limited | 24,295,141 (+0.81%) | 1.37
Vanguard Investments Australia Ltd. | 23,216,588 (+0.06%) | 1.31
The Vanguard Group, Inc. | 22,963,058 (0.00%) | 1.30
BlackRock Institutional Trust Company, N.A. | 19,253,360 (+0.01%) | 1.09
BlackRock Advisors (UK) Limited | 12,635,810 (-0.01%) | 0.72
Norges Bank Investment Management (NBIM) | 9,985,674 (+0.57%) | 0.57
Irish Life Investment Managers Ltd. | 9,222,212 (+0.00%) | 0.52
FV Frankfurter Vermögen AG | 7,500,000 (+0.01%) | 0.42
BetaShares Capital Ltd. | 5,298,194 (-0.00%) | 0.30
State Street Global Advisors (US) | 4,389,683 (+0.01%) | 0.25
BlackRock Investment Management (Australia) Ltd. | 3,242,489 (+0.00%) | 0.18
State Street Global Advisors Australia Ltd. | 3,071,142 (0.00%) | 0.17
First Trust Advisors L.P. | 2,772,166 (+0.01%) | 0.16
Charles Schwab Investment Management, Inc. | 1,572,242 (+0.00%) | 0.09
Nuveen LLC | 1,484,937 (-0.02%) | 0.08
 
  • Like
  • Thinking
Reactions: 13 users

Slade

Top 20
Part 2 is out and it’s a ripper of a read.

 
  • Like
  • Fire
  • Love
Reactions: 72 users
Would anyone here with decent English writing skills like to help me out by writing a short message for me to send to someone we would all like to see have access to an Akida dev board?
I have reached first base already.
My written word is like a serial killer writing his manifesto.
 
  • Haha
Reactions: 7 users

Diogenese

Top 20
Part 2 is out and it’s a ripper of a read.

Thanks Slade,

This chart is a useful reference for the capabilities of E, S, and P:

1678374368125.png
 
  • Like
  • Fire
  • Love
Reactions: 43 users
For all those frustrated with the share price not being green of late, I think Kermit sums it up pretty well.
 
  • Haha
  • Love
  • Like
Reactions: 7 users
 

Sirod69

bavarian girl ;-)
PROPHESEE • 1 hr


Our own Luca Verre weighs in with some thoughts on how #AI and #Machinelearning are enabling more effective vision sensing, even in resource constrained applications.

Thanks Vision Systems Design for including our views. With the explosion of data generated by ubiquitous video, combined with the increasing use of vision at the Edge, more efficient methods are required to capture and process scenes with mobile, wearable and IoT devices.

Neuromorphic enabled #eventbased vision can address the performance, power and challenging lighting and motion conditions many of these use cases operate in.

👉 https://lnkd.in/gjWmnhjF

#AR #neuromorphic #Eventcamera #EdgeAI
1678388374124.png
 
  • Like
  • Love
Reactions: 16 users

Quiltman

Regular
Part 2 is out and it’s a ripper of a read.


What a wonderful read, although my comprehension of all the new capabilities is somewhat lacking.
However, I think most of us understand real world comparisons and benchmarking.
Hence, I think this extract from the article is particularly powerful:
......

And so we come to the old proverb that states, “The proof of the pudding is in the eating.” Just how well does the Akida perform with industry-standard, real-world benchmarks?

Well, the lads and lasses at Prophesee.ai are working on some of the world’s most advanced neuromorphic vision systems. From their website we read: “Inspired by human vision, Prophesee’s technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what was invisible until now using standard frame-based technology.”

According to the paper Learning to Detect Objects with a 1 Megapixel Event Camera, Gray.Retinanet is the latest state-of-the-art in event-camera based object detection. When working with the Prophesee Event Camera Road Scene Object Detection Dataset at a resolution of 1280×720, the Akida achieved 30% better precision while using 50X fewer parameters (0.576M compared to 32.8M with Gray.Retinanet) and 30X fewer operations (94B MACs/sec versus 2432B MACs/sec with Gray.Retinanet). The result was improved performance (including better learning and object detection) with a substantially smaller model (requiring less memory and less load on the system) and much greater efficiency (a lot less time and energy to compute).

As another example, if we move to a frame-based camera with a resolution of 1352×512 using the KITTI 2D Dataset, then ResNet-50 is kind of a standard benchmark today. In this case, Akida returns equivalent precision using 50X fewer parameters (0.57M vs. 26M) and 5X fewer operations (18B MACs/sec vs. 82B MACs/sec) while providing much greater efficiency (75mW at 30 frames per second in a 16nm device). This is the sort of efficiency and performance that could be supported by untethered or battery-operated cameras.
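The 50X, 30X and 5X factors above are rounded; dividing the raw figures quoted in the article gives roughly the same picture. A quick sanity check in Python, using only the numbers cited above (nothing here is independently measured):

# Reduction factors implied by the article's quoted figures (Akida vs. reference model)
benchmarks = {
    "Prophesee 1280x720 vs Gray.Retinanet": {
        "parameters": (0.576e6, 32.8e6),
        "MACs/sec": (94e9, 2432e9),
    },
    "KITTI 2D 1352x512 vs ResNet-50": {
        "parameters": (0.57e6, 26e6),
        "MACs/sec": (18e9, 82e9),
    },
}

for name, figures in benchmarks.items():
    for metric, (akida, reference) in figures.items():
        print(f"{name}: {metric} reduced ~{reference / akida:.0f}x")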

Another very interesting application area involves networks that are targeted at 1D data. One example would be processing raw audio data without the need for all the traditional signal conditioning and hardware filtering.

Consider today’s generic solution as depicted on the left-hand side of the image below. This solution is based on the combination of Mel-frequency cepstral coefficients (MFCCs) and a depth-wise separable CNN (DSCNN). In addition to hardware filtering, transforms, and encoding, this memory-intensive solution involves a heavy software load.
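For anyone curious what that conventional pipeline looks like in code, here is a minimal sketch in TensorFlow: an MFCC front end feeding a small depthwise-separable CNN classifier. The layer sizes, frame lengths, and keyword count are illustrative placeholders, not BrainChip's or the article's reference model:

import tensorflow as tf

def mfcc_features(waveform, sample_rate=16000, num_mfcc=10):
    # Classic front end: STFT -> mel filterbank -> log -> MFCC coefficients
    stft = tf.signal.stft(waveform, frame_length=640, frame_step=320)
    spectrogram = tf.abs(stft)
    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=40, num_spectrogram_bins=spectrogram.shape[-1],
        sample_rate=sample_rate, lower_edge_hertz=20.0, upper_edge_hertz=7600.0)
    log_mel = tf.math.log(tf.tensordot(spectrogram, mel_matrix, 1) + 1e-6)
    return tf.signal.mfccs_from_log_mel_spectrograms(log_mel)[..., :num_mfcc]

def build_dscnn(input_shape=(49, 10, 1), num_keywords=12):
    # Small depthwise-separable CNN classifier over the MFCC "image"
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(64, (10, 4), strides=(2, 2), padding="same", activation="relu"),
        tf.keras.layers.SeparableConv2D(64, (3, 3), padding="same", activation="relu"),
        tf.keras.layers.SeparableConv2D(64, (3, 3), padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_keywords, activation="softmax"),
    ])

waveform = tf.random.normal([16000])                  # one second of 16 kHz audio
mfcc = mfcc_features(waveform)                        # shape (49, 10)
scores = build_dscnn()(mfcc[tf.newaxis, ..., tf.newaxis])

Every step the article lists (transforms, filterbank, encoding, then the CNN itself) appears explicitly here, which is exactly the preprocessing load the Akida TENN approach on the right-hand side is claimed to remove.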

max-0216-05-simplifying-raw-audio.png


Raw audio processing: Traditional solution (left) vs. Akida solution (right)
(Source: BrainChip)


By comparison, as we see on the right-hand side of the image, the raw audio signal can be fed directly into an Akida TENN with no additional filtering or DSP hardware. The result is to increase the accuracy from 92% to 97%, lower the memory (26kB vs. 93kB), and use 16X fewer operations (19M MACs/sec vs. 320M MACs/sec). All of this comes down to a single inference consuming about two microjoules of energy. Looking at this another way, assuming 15 inferences per second, we’re talking less than 100µW for always-on keyword detection.
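That always-on power claim follows directly from the article's own figures: two microjoules per inference at fifteen inferences per second works out to roughly 30 µW of compute, comfortably inside the "less than 100 µW" budget, with the remainder left for the rest of the always-on path.

energy_per_inference_j = 2e-6      # 2 microjoules per inference, as quoted
inferences_per_second = 15         # the always-on rate assumed in the article
average_power_w = energy_per_inference_j * inferences_per_second
print(f"~{average_power_w * 1e6:.0f} uW average compute power")   # ~30 uW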

Similar 1D data is found in the medical arena for tasks like vital signs prediction based on a patient’s heart rate or respiratory rate. Preprocessing techniques don’t work well with this kind of data, which means we must work with raw signals. Akida’s TENNs do really well with raw data of this type.

In this case, comparisons are made between Akida and the state-of-the-art S4 (SOTA) algorithm (where S4 stands for structured state space sequence model) with respect to vital signs prediction based on heart rate or respiratory rate using the Beth Israel Deaconess Medical Center Dataset. In the case of respiration, Akida achieves ~SOTA accuracy with 2.5X fewer parameters (128k vs. 300k) and 80X fewer operations (0.142B MACs/sec vs. 11.2B MACs/sec). Meanwhile, in the case of heart rate, Akida achieves ~SOTA accuracy with 5X fewer parameters (63k vs. 600k) and 500X fewer operations (0.02B MACs/sec vs. 11.2B MACs/sec).
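For readers wondering what "working with raw signals" looks like in practice, here is a generic, hypothetical sketch of a tiny causal temporal-convolution regressor that maps a raw 1D window straight to a vital-sign estimate. It is not BrainChip's TENN architecture, just an illustration of the raw-signal-in, estimate-out idea with no filtering or feature extraction in front:

import tensorflow as tf

def build_raw_signal_regressor(window_len=1024, channels=1):
    # Dilated causal 1D convolutions grow the receptive field over the raw window
    return tf.keras.Sequential([
        tf.keras.Input(shape=(window_len, channels)),
        tf.keras.layers.Conv1D(16, 9, padding="causal", dilation_rate=1, activation="relu"),
        tf.keras.layers.Conv1D(16, 9, padding="causal", dilation_rate=2, activation="relu"),
        tf.keras.layers.Conv1D(16, 9, padding="causal", dilation_rate=4, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1),  # e.g. breaths per minute or beats per minute
    ])

model = build_raw_signal_regressor()
estimate = model(tf.random.normal([1, 1024, 1]))  # one raw window in, one estimate out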

It’s impossible to list all the applications for which Akida could be used. In the case of industrial, obvious apps are robotics, predictive maintenance, and manufacturing management. When it comes to automotive, there’s real-time sensing and the in-cabin experience. In the case of health and wellness, we have vital signs monitoring and prediction; also, sensory augmentation. There are also smart home and smart city applications like security, surveillance, personalization, and proactive maintenance. And all of these are just scratching the surface of what is possible.
 
  • Like
  • Fire
  • Love
Reactions: 51 users

RobjHunt

Regular
Market/shorts have decided to keep the price where it is. 2nd half of the year I expect some more action or it might be time for me to reconsider my investment
Especially recently, I too have been continually reconsidering my investment but unfortunately, of late, I don’t have available funds to top up.

Pantene Peeps!
 
  • Haha
  • Like
  • Fire
Reactions: 14 users

JDelekto

Regular
I have to say that this is my favorite article to date. It was written by the industry for its target audience, as opposed to being written by investors who do not seem to understand the technology.

One of my favorite sentences in this article: "The bottom line is that vision transformers can do a much better job of vision analysis by treating a picture like a sentence or a paragraph, as it were."

That puts a whole new spin on a picture worth a thousand words!
 
  • Like
  • Love
  • Fire
Reactions: 41 users

chapman89

Founding Member
Show me another company that’s going to change the world in every industry?

This is how I see the Brainchip story playing out.
I believe later in the year, from Q3 onwards, we will see 1-2 material contracts signed, one being Prophesee, although Prophesee, as @Diogenese sees it, is more of a co-development partnership. I see multiple licensing agreements/payments in 2024.

Now, as we know factually, Renesas will have finished taping out MCUs containing Akida IP that will be available for the market.

I see more partners being revealed, and I see updates on MegaChips and MegaChips taping out, possibly with the AKD1500.

I see Socionext taping out designs containing Akida IP.

AI is the hottest topic right now, and Brainchip is set to be the dominant Edge AI player.

Now, what comes with this? Well, yes, a much, much higher share price. But personally, I see Renesas hitting the market with MCUs containing Akida IP as being huge, because we will no longer be just a company talking about neuromorphic; we will be a company with commercial applications containing neuromorphic technology, and the wider market will see this. Those who are skeptical will no longer be skeptics, their interest will rise, and I believe 2024 will be our hockey-stick growth.

Big funds like Cathie Wood's ARK Invest, who has called neural networks a bigger opportunity than the internet, will be scrambling to get a piece of Brainchip shares.



Those who are patient and have done their own research and have the foresight imo will reap the rewards.



You only have to look at what those in the industry are saying about Brainchip and how amazing, differentiated, and unique it is.



Truly exciting times.

My opinion only.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 107 users
Interesting article from Design & Reuse. The Renesas spokesperson just gave AKIDA 2nd gen a rap, and Plumeria... well, some will remember the name, and the connections to Brainchip and ARM should be immediately recognised, but where did the Ai come from? If you believe the article, it came from Renesas ecosystem partners:

https://www.design-reuse.com/redir3/35974/352793/8IDCcehr87FC7QuZSKvfeNO7vLEwt

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 43 users

TechGirl

Founding Member
Part 2 is out and it’s a ripper of a read.


Thanks Slade ❤️

Wow what a read 💥

This is my new favourite, Mind Boggling.

But I must admit I do love science fiction oh so much too, so how am I supposed to choose now?

I know, I’ve slapped them together, so from here on in, peeps, BrainChip’s Akida shall be known as

BRAINCHIP’s AKIDA - MIND BOGGLING SCIENCE FICTION - as described by industry experts and respected press

Gotta love the Chip 💪
 
  • Like
  • Love
  • Fire
Reactions: 57 users

chapman89

Founding Member


“At embedded world in 2022, Renesas became the first company to demonstrate working silicon based on the Arm Cortex-M85 processor. This year, Renesas is extending its leadership by showcasing the features of the new processor in demanding AI use cases. The first demonstration showcases a people detection application developed in collaboration with Plumerai, a leader in Vision AI, that identifies and tracks persons in the camera frame in varying lighting and environmental conditions. The compact and efficient TinyML models used in this application lead to low-cost and lower power AI solutions for a wide range of IoT implementations. The second demo showcases a motor control predictive maintenance use case with an AI-based unbalanced load detection application using Tensorflow Lite for Microcontrollers with CMSIS-NN.

Delivering over 6 CoreMark/MHz, Cortex-M85 enables demanding IoT use cases that require the highest compute performance and DSP or ML capability, realized on a single, simple-to-program Cortex-M processor. The Arm Cortex-M85 processor features Helium technology, Arm’s M-Profile Vector Extension, available as part of the Armv8.1M architecture. It delivers a significant performance uplift for machine learning (ML) and digital signal processing (DSP) applications, accelerating compute-intensive applications such as endpoint AI. Both demos will showcase the performance uplift made possible by the application of this technology in AI use cases. Cortex-M hallmarks such as deterministic operation, short interrupt response time, and state-of-the-art low-power support are uncompromised on Cortex-M85.

“We’re proud to again lead the industry in implementing the powerful new Arm Cortex-M85 processor with Helium technology,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “By showcasing the performance of AI on the new processor, we are highlighting technical advantages of the new platform and at the same time demonstrating Renesas’ strengths in providing solutions for emerging applications with our innovative ecosystem partners.”
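The toolchain named for that second demo (TensorFlow Lite for Microcontrollers with CMSIS-NN kernels) implies a fairly standard export flow. Here is a minimal, hypothetical sketch of the int8 conversion step; the tiny model and random calibration data are placeholders, not Renesas' or Plumerai's actual models:

import numpy as np
import tensorflow as tf

# Placeholder stand-in for an unbalanced-load detector over windows of 3-axis vibration data
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 3)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # balanced vs. unbalanced
])

def representative_data():
    # Calibration samples for int8 quantization; a real project would use recorded sensor windows
    for _ in range(100):
        yield [np.random.randn(1, 128, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("unbalanced_load_int8.tflite", "wb") as f:
    f.write(converter.convert())

# The resulting flatbuffer is what gets compiled into MCU firmware, where the TFLite Micro
# runtime can dispatch supported kernels to CMSIS-NN on Arm Cortex-M cores.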
 
  • Like
  • Love
  • Fire
Reactions: 47 users
“We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge,” says Roger Wendelken, the senior vice president of Renesas’ IoT and Infrastructure Business Unit.

Despite this, a WANCA said today, while recommending Retail Food Group as the pick for 2023, “What’s a neuromorphic chip anyway?”

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 40 users