BRN Discussion Ongoing

Doesn’t look like neuromorphic hardware is involved; looks like their AI software running on the sim.
Hard to say really, but I thought BrainChip tech was now "a part" of their A.I. emotion detection models (which are also available at a software level), and the mention of "Edge processing" for "privacy" at least hints at us?..
 

manny100

Regular
Chartists, is that a pennant breakout on the daily and, pending tomorrow's price action, on the weekly as well?
From a beginner-level chartist it looks possible (a rough way to test for the converging trendlines is sketched below the charts).
BRN 23 JAN 25.png
BRN WK 23RD JAN 25.png
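For the non-chartists: a pennant is just converging support and resistance lines after a sharp move, so you can sanity-check one programmatically by fitting straight lines to recent swing highs and lows and asking whether the slopes converge. A minimal sketch, assuming a pandas DataFrame of daily bars with "high" and "low" columns (the column names and data file are illustrative, not from the post):

```python
import numpy as np
import pandas as pd

def pennant_check(df: pd.DataFrame, lookback: int = 30) -> bool:
    """Crude pennant test: fit straight lines to the recent highs and
    lows and ask whether they converge (falling resistance meeting
    rising support)."""
    recent = df.tail(lookback)
    x = np.arange(len(recent))
    upper_slope = np.polyfit(x, recent["high"].to_numpy(), 1)[0]
    lower_slope = np.polyfit(x, recent["low"].to_numpy(), 1)[0]
    return upper_slope < 0 < lower_slope

# Usage (hypothetical data file):
# df = pd.read_csv("brn_daily.csv")
# print(pennant_check(df))
```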
 

Mt09

Regular
Hard to say really, but I thought BrainChip tech was now "a part" of their A.I. emotion detection models (which are also available at a software level), and the mention of "Edge processing" for "privacy" at least hints at us?..
True, one of the links embedded in the LinkedIn post does mention edge processing; in with a chance, perhaps..

They can run their software on a customer's GPU or other device, but it will be less efficient, etc.


1737622489222.jpeg
 

manny100

Regular
Definitely one of those pennant breakout thingys, in my professional opinion.
I guess that confirms it. :ROFLMAO:
 

Frangipani

Top 20
Researchers at UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL), led by Jeffrey Krichmar, have been experimenting with AKD1000:

View attachment 67694


View attachment 67690


View attachment 67691

View attachment 67692


View attachment 67693



View attachment 67695

View attachment 67696

This is the paper I linked in my previous post, co-authored by Lars Niedermeier, a Zurich-based IT consultant, and the above-mentioned Jeff Krichmar from UC Irvine.


View attachment 67703

The two of them co-authored three papers in recent years, including one in 2022 with another UC Irvine professor and member of the CARL team, Nikil Dutt (https://ics.uci.edu/~dutt/) as well as Anup Das from Drexel University, whose endorsement of Akida is quoted on the BrainChip website:

View attachment 67702


View attachment 67700

View attachment 67701

Lars Niedermeier’s and Jeff Krichmar’s April 2024 publication on CARLsim++ (which does not mention Akida) ends with the conclusion and acknowledgements shown below: their work was supported by the Air Force Office of Scientific Research (the funding has been going on at least since 2022) and by a UCI Beall Applied Innovation Proof of Product Award (https://innovation.uci.edu/pop/), and they also thank the regional NSF I-Corps (= Innovation Corps) for valuable insights.

View attachment 67699



View attachment 67704


Their use of an E-Puck robot (https://en.m.wikipedia.org/wiki/E-puck_mobile_robot) for their work reminded me of our CTO’s address at the AGM in May, during which he envisioned the following object (from 22:44 min):

“Imagine a compact device similar in size to a hockey puck that combines speech recognition, LLMs and an intelligent agent capable of controlling your home’s lighting, assisting with home repairs and much more. All without needing constant connectivity or having to worry about privacy and security concerns, a major barrier to adaptation, particularly in industrial settings.”

Possibly something in the works here?

The version the two authors were envisioning in their April 2024 paper is, however, conceptualised as being available as a cloud service:

“We plan a hybrid approach to large language models available as cloud service for processing of voice and text to speech.”


The authors gave a tutorial on CARLsim++ at NICE 2024, where our CTO Tony Lewis was also presenting. Maybe they had a fruitful discussion at that conference in La Jolla, which resulted in UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL) team experimenting with AKD1000, as evidenced in the video uploaded a couple of hours ago that I shared in my previous post?





View attachment 67705



View attachment 67716

About six months ago, I posted a video which showed that researchers at UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL), led by Jeff Krichmar, had been experimenting with AKD1000 mounted on an E-Puck2 robot.

The April 2024 paper I linked to at the time (“An Integrated Toolbox for Creating Neuromorphic Edge Applications”), co-authored by Lars Niedermeier (Niedermeier Consulting, Zurich) and Jeff Krichmar (UC Irvine), did not yet contain a reference to Akida, but has recently been updated to a newer version (Accepted Manuscript online 22 January 2025). It now has heaps of references to AKD1000 and describes how it was used for visual object detection and classification.

Nikil Dutt, one of Jeff Krichmar’s colleagues at UC Irvine and also member of the CARL team, contributed to this Accepted Manuscript version as an additional co-author.



What caught my eye was that the researchers, who had used an AKD1000 PCIe Board (with an engineering sample chip) as part of their hardware stack, had already gotten their hands on an Akida M.2 form factor as well, even though BrainChip’s latest offering wasn’t officially revealed until January 8th at CES 2025:

“For productive deployments, the Raspberry Pi 5 Compute Module and Akida M.2 form factor were used.” (page 9)
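For anyone curious what "using" the PCIe or M.2 card actually looks like in code: here is a minimal sketch with BrainChip's MetaTF akida Python package, assuming its documented entry points (akida.devices(), Model.map(), Model.forward()); the model file name and input shape are hypothetical:

```python
import numpy as np
import akida  # BrainChip's MetaTF runtime package

# Load a pre-converted Akida model (file name is hypothetical).
model = akida.Model("object_detection.fbz")

# Map the model onto the first detected Akida device,
# e.g. the AKD1000 on a PCIe or M.2 card.
devices = akida.devices()
model.map(devices[0])

# Run inference on a batch of uint8 inputs (placeholder frame).
frames = np.zeros((1, 224, 224, 3), dtype=np.uint8)
outputs = model.forward(frames)
print(outputs.shape)
```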


Maybe thanks to Kristofor Carlson?

Kristofor Carlson was a postdoc at Jeff Krichmar‘s Cognitive Robotics Lab a decade ago and co-authored a number of research papers with both Jeff Krichmar and Nikil Dutt over the years, the last one published in 2019:

View attachment 67717

View attachment 67718


Here are some pages from the Accepted Manuscript version:


DD62965A-C876-4048-9163-79D0B2745044.jpeg



76973013-2D69-4EF3-8A47-B061F3F20C8F.jpeg




D973E46F-D416-466F-A2B3-885344B9BBD6.jpeg



CAB240B9-E8E0-451F-ADD8-0E7238E2DE51.jpeg



FCAF924D-B99B-42DA-A04F-3BD48AD956F7.jpeg

72A73673-F8D9-4C56-B4C3-7E6755DC2F4A.jpeg



We already knew from the April 2024 version of that paper that their work was supported by the Air Force Office of Scientific Research (the funding has been going on at least since 2022) and by a UCI Beall Applied Innovation Proof of Product Award (https://innovation.uci.edu/pop/), and that they also thank the regional NSF I-Corps (= Innovation Corps) for valuable insights.

View attachment 67699



View attachment 67704


And finally, here’s a close-up of the photo on page 9:

5735DD4E-B9B7-4348-8328-B160FABAC4E1.jpeg
 
About six months ago, I posted a video which showed that researchers at UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL), led by Jeff Krichmar, had been experimenting with AKD1000 mounted on an E-Puck2 robot. […]

And finally, here’s a close-up of the photo on page 9:

View attachment 76555
Upside down Miss Jane.

SC
 

Esq.111

Fascinatingly Intuitive.
Pardon my ignorance, but does that mean there was a reduction in the number of shares held by shorters?
Evening Boab,

Sorry for the delayed reply. Basically this is the GROSS for the day.
The pheasant phuketrs could have shorted X stock... returned some (closed out) of their positions, or carried forward... or anything in between. BULLSHITE SPORT FOR THE DEPRAVED.

* Note: apparently these numbers may vary, depending on whether they wish / remember to lodge their position for the day, week or month.

The ASX is f%<^ing useless, as was displayed to all when their management was grilled not long ago before a government enquiry.

Accenture PLC & Tata Consultancy Services, both partners of BrainChip's, have the contract to redo the ASX platform. As we have all witnessed, they are useless at keeping up with the times, or simply complicit; I leave it to all to get a bearing on that.

Back to the question at hand: this figure is the gross shorts for the day. The pheasants may have sold then bought back... etc., etc.

It is the gross for the day, not the NET (the outstanding, open position... bent over position, as it were), which isn't stated.
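To put numbers on the gross-versus-net distinction (my own illustration, not actual ASX figures): if 5 million shares are shorted during the day and 3 million of those are bought back before the close, the gross shorts reported for the day are 5 million, while the net open short position is only 2 million.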

The more confusion they can insert into the system, the smoother it all runs, apparently.


Hope this helps.


Regards,
Esq
 


Boab

I wish I could paint like Vincent
Evening Boab,

Sorry for the delayed reply. Basically this is the GROSS for the day. […]

Regards,
Esq
Many thanks Esq. Business as usual then eh.😩😩
 

manny100

Regular
About six months ago, I posted a video which showed that researchers at UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL), led by Jeff Krichmar, had been experimenting with AKD1000 mounted on an E-Puck2 robot. […]
Wow, that is an awesome find Frangipani.
 

Diogenese

Top 20
Anil liked this.
Hi Boab,

'sfunny - that first bit about exciting times was my reaction when I stumbled across BRN 7 years ago.

From Walter Goodwin, I traced a patent application to a company called Neu Edge, presumably Fractile's predecessor. Then I found this patent application in Neu Edge's name:

GB2625821A Analog neural network 20221229

1737636341495.png

An analogue neural network comprises layers connected to form an electrical circuit having an input and an output. The input receives an electrical signal corresponding to an input example, and the output corresponds to an output of the neural network. Each layer comprises at least one programmable electronic element 1, e.g. a memristor, representing a weight of the neuron. At least one non-linear element 4, e.g. a diode, implements a non-linear transfer function. At least one amplifier block 2 amplifies an output signal to prevent signal diminishment. An error element 3, e.g. a resistor or capacitor, allows measurement of the effect the whole electrical circuit has on the element. Each layer also comprises one measurement element for measuring an electrical signal across the error element, and a second measurement element for measuring a weight input of the programmable electronic element. The neural network may be trained by clamping an input signal to a signal corresponding to an input example, clamping an output to an electrical value representing a ground truth label of the input example, and once an equilibrium state is reached, using the values from the measurement elements to update the weight.
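That last sentence describes a contrastive, equilibrium-style training scheme: clamp the inputs and let the circuit settle, then also clamp the output to the label and settle again, and update each weight from the two sets of local measurements. A toy software analogue of that general idea (my sketch of the technique, not the patent's actual circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))  # symmetric "circuit" weights
W = (W + W.T) / 2

def settle(s, W, clamp_idx, clamp_val, steps=50, dt=0.1):
    """Relax the state toward equilibrium while holding the
    clamped units fixed (like clamping circuit nodes)."""
    s = s.copy()
    for _ in range(steps):
        s += dt * (np.tanh(W @ s) - s)  # relaxation dynamics
        s[clamp_idx] = clamp_val        # re-apply the clamps
    return s

x_idx, y_idx = [0, 1], [3]              # input and output units
x, y = np.array([1.0, -1.0]), np.array([1.0])

s0 = np.zeros(4)
free = settle(s0, W, x_idx, x)                   # inputs clamped only
nudged = settle(free, W, x_idx + y_idx,
                np.concatenate([x, y]))          # inputs + label clamped

# Contrastive, Hebbian-like update from the two equilibria.
lr = 0.05
W += lr * (np.outer(nudged, nudged) - np.outer(free, free))
```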

I'm going to suggest Gelsinger have a saver on BRCHF.
 

uiux

Regular

This is the paper I linked in my previous post, co-authored by Lars Niedermeier, a Zurich-based IT consultant, and the above-mentioned Jeff Krichmar from UC Irvine. […]



I remember researching this guy nearly 8 years ago:


https://patents.google.com/patent/GB2556314A/en

Financial data encoder for spiking neural networks

A computer implemented method for structuring and converting financial data into spike streams for a spiking neural network (SNN) 7 comprises structuring a continuous financial data stream 4 so that it scales over a data provider, securities, their components and derived meta data; generating multidimensional spike streams utilizing spiking neurons according to the structure of the financial data feed; configuring artificial neurons so they support the specific properties of the financial data; and synchronizing real time and historical financial data streams with the internal time of the SNN by supporting custom time lines such as real time, slow motion, and time lapse. The method is executed on a hybrid many core computing architecture such as von-Neumann combined with a GP-GPU or field programming gate array, or directly by a custom build microchip. The method may identify irregularities in a financial market data feed, and in response may send a control signal 8 to a cash vault 13 of an ATM to lock the vault.
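The heart of that abstract is converting a continuous price feed into spike streams. One generic way to do that is delta/threshold encoding (my illustration, not necessarily the patent's scheme): emit an up-spike or down-spike whenever the price has moved more than a threshold since the last spike:

```python
import numpy as np

def delta_encode(prices: np.ndarray, threshold: float = 0.05):
    """Delta modulation: emit +1 when the price has risen by
    `threshold` since the last spike, -1 when it has fallen
    by that much, 0 otherwise."""
    spikes = np.zeros(len(prices), dtype=int)
    ref = prices[0]
    for t, p in enumerate(prices[1:], start=1):
        if p - ref >= threshold:
            spikes[t], ref = +1, p
        elif ref - p >= threshold:
            spikes[t], ref = -1, p
    return spikes

# Toy tick stream: a noisy random walk standing in for a price feed.
rng = np.random.default_rng(1)
prices = 1.0 + np.cumsum(rng.normal(0, 0.02, size=200))
print(delta_encode(prices)[:30])
```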


1737639017965.png
 

Kozikan

Regular

View attachment 76527 View attachment 76528 View attachment 76529 View attachment 76530
IloveLamp, WOW, that seems like an excellent find.
I'm definitely not technically qualified to judge the "Centaurus layer"... but I'm surprised at the lack of acclaim from others who may be technically adept.
Can anyone plain-English this?
Though Rudy does seem to say enough, if you reread his final paragraph.

Cheers 👍
 

Attachments

  • IMG_1699.jpeg (187.8 KB)

BrainShit

Regular
1000067879.jpg




I'm not quite sure if the 0.49% (9,705,329) shorts from yesterday will vanish in a mysterious way...
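(If those two figures line up, they imply roughly 9,705,329 / 0.0049 ≈ 1.98 billion shares on issue.)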
 
My thoughts on today's SP

1737655450734.gif
 

Tothemoon24

Top 20
IMG_0522.jpeg


At CES 2025, Vivek Bhan, Senior Vice President and General Manager of High-Performance Computing at Renesas, appeared onstage with Stephen Frey from Honda during a press conference to announce that we have entered into a joint development agreement with Honda to create a high-performance system on chip (SoC) tailored for software-defined vehicles (#SDVs).

By 2030, AI performance requirements for SDVs will be 500x higher than today, demanding enormous data processing capabilities with low power consumption. That’s why we’re excited to collaborate with Honda to integrate our recently announced automotive R-Car 5th generation SoC and an AI accelerator customized for Honda’s AI software with chiplet technology. Together, we aim to provide the world’s top-class 2000 TOPS level AI processing performance with 20 TOPS/W power efficiency—enabling the future of mobility! Our solution will power Honda’s upcoming EV models in the “Honda 0 Series,” launching in the late 2020s.
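(For scale, taking those two headline numbers together: 2000 TOPS at 20 TOPS/W works out to roughly 2000 / 20 = 100 W for the AI processing alone.)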

 

manny100

Regular
About six months ago, I posted a video which showed that researchers at UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL), led by Jeff Krichmar, had been experimenting with AKD1000 mounted on an E-Puck2 robot. […]

What caught my eye was that the researchers, who had used an AKD1000 PCIe Board (with an engineering sample chip) as part of their hardware stack, had already gotten their hands on an Akida M.2 form factor as well, even though BrainChip’s latest offering wasn’t officially revealed until January 8th at CES 2025. […]
The really interesting part of your find is confirmation that the M.2 card has been with developers for some time.
We knew from the QV Cyber Security news the other day that they must have had access to the M.2 some time ago.
It's quite possible that we might see further end product news as a result of developers' work in the near future.
Tata for example have committed to drive AKIDA into health and industrial end products.
There will likely be others working with the M.2 which we have not heard about.
 

MDhere

Top 20
At CES 2025, Vivek Bhan, Senior Vice President and General Manager of High-Performance Computing at Renesas, appeared onstage with Stephen Frey from Honda during a press conference to announce that we have entered into a joint development agreement with Honda to create a high-performance system on chip (SoC) tailored for software-defined vehicles (#SDVs). […]


Thanks Tothemoon, I love the video all the way to the end. I especially like this: "Renesas’ R-Car solutions provide enhanced AI performance through the utilization of multi-die chiplet technology and the integration of AI accelerators into its SoC." Now spinning back to 2023 (courtesy of WaihikJoe)
-
Renesas manufacture the Akida IP on its R-Car V3H system-on-a-chip (SoC) platform. The Akida IP is a neuromorphic processor that is designed to accelerate artificial intelligence (AI) applications. It is based on Brainchip's Akida neuromorphic processor architecture, which is inspired by the human brain. The Akida IP is capable of running AI applications at much lower power consumption than traditional processors. This makes it ideal for a wide range of applications, including edge AI, automotive, and industrial automation.

The Renesas R-Car V3H SoC is a powerful and versatile platform that is well-suited for the Akida IP. It features a quad-core ARM Cortex-A72 processor, a quad-core ARM Cortex-A53 processor, and a Neural Network Engine (NNE). The NNE is a dedicated accelerator for neural network processing. It is based on Renesas's Synergy architecture and is designed to accelerate AI applications.

The collaboration between Renesas and Brainchip is a significant development for the AI industry. It brings together two leading companies with complementary technologies. Renesas has a strong track record in manufacturing and delivering high-performance SoCs. Brainchip has developed a leading-edge neuromorphic processor architecture. Together, they are well-positioned to bring the Akida IP to market and accelerate the adoption of AI.

Here are some of the benefits of using the Renesas-manufactured Brainchip Akida IP SoC:

  • Low power consumption: The Akida IP is designed to run at very low power consumption, making it ideal for edge AI applications.
  • High performance: The Akida IP is capable of running AI applications at high performance, making it suitable for a wide range of applications.
  • Flexibility: The Akida IP can be used in a variety of applications, including edge AI, automotive, and industrial automation.
  • Scalability: The Akida IP can be scaled to meet the needs of different applications.
And back to the present, I found this under R-Car on the Renesas site -

R-Car Consortium Partners

Hmmm, Elxsi... interesting :)

Happy Friday fellow brners 🥳
 