Morning, fellow BRNers, happy Friday!
We should be receiving the top 20 shareholders notice today? I checked back on previous notices and quite a number of the top 20 increased their shareholdings, so it will be interesting to compare when this one comes out. Oh, to be a top 20...
Whatever our individual investments are, we are part of this growing company, and soon, imo, BrainChip will be thriving just like Arm, Nvidia, Amazon and the rest of them that grew into giants.
Feeling positive on the last day of January. Looking forward to seeing the Top 20 chart. Bring it on, boomidty boom boom
About six months ago, I posted a video which showed that researchers at UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL), led by Jeff Krichmar, had been experimenting with AKD1000 mounted on an E-Puck2 robot.
The April 2024 paper I linked to at the time (“An Integrated Toolbox for Creating Neuromorphic Edge Applications”), co-authored by Lars Niedermeier (Niedermeier Consulting, Zurich) and Jeff Krichmar (UC Irvine), did not yet contain a reference to Akida, but has recently been updated to a newer version (Accepted Manuscript online 22 January 2025). It now has heaps of references to AKD1000 and describes how it was used for visual object detection and classification.
Nikil Dutt, one of Jeff Krichmar’s colleagues at UC Irvine and also a member of the CARL team, contributed to this Accepted Manuscript version as an additional co-author.
What caught my eye was that the researchers, who had used an AKD1000 PCIe Board (with an engineering sample chip) as part of their hardware stack, had already gotten their hands on an Akida M.2 form factor as well, even though BrainChip’s latest offering wasn’t officially revealed until January 8th at CES 2025:
“For productive deployments, the Raspberry Pi 5 Compute Module and Akida M.2 form factor were used.” (page 9)
Maybe thanks to Kristofor Carlson?
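For anyone curious what running a model on that kind of hardware typically looks like in practice, here's a minimal sketch using BrainChip's MetaTF akida Python package. This is my own illustration, not code from the paper, and the model file name and input shape are hypothetical placeholders:

```python
# Minimal sketch (my own, not from the paper): loading a pre-converted model
# and mapping it onto an Akida device (PCIe board or M.2 module) using the
# MetaTF "akida" Python package. File name and input shape are hypothetical.
import numpy as np
import akida

# Enumerate Akida devices visible to the host.
devices = akida.devices()
print("Available Akida devices:", devices)

# Load a model previously converted to Akida's .fbz format.
model = akida.Model("object_detection.fbz")  # hypothetical file name

# Map the model onto the first detected device so inference runs in hardware.
model.map(devices[0])

# Run a dummy inference with a uint8 input matching the model's expected shape.
dummy_input = np.zeros((1, 224, 224, 3), dtype=np.uint8)  # hypothetical shape
outputs = model.predict(dummy_input)
print("Output shape:", outputs.shape)
```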
Here are some pages from the Accepted Manuscript version:
View attachment 76552
View attachment 76553
View attachment 76554
View attachment 76558
View attachment 76556
View attachment 76557
We already knew from the April 2024 version of that paper that…
And finally, here’s a close-up of the photo on page 9:
View attachment 76555
Just an afterthought…
Academic research utilising Akida shouldn’t generally be underestimated or dismissed as mere playtime in an ivory tower.
Some of these researchers have excellent connections to big players in the industry and/or to government agencies and sometimes even prior work experience in relevant sectors themselves - hence their recommendations would likely be given quite a bit of weight.
Take Jeff Krichmar, for example, whose 27-page (!) CV can be found on his LinkedIn profile.
Krichmar’s first job after graduating with a Bachelor’s in Computer Science (and before going to grad school to pursue his Master’s) was as a software engineer at Raytheon Corporation (now RTX), working on the PATRIOT surface-to-air missile system - a position which also saw him become a consultant to the Japanese Self-Defense Forces from 1988-1989, while deployed to Mitsubishi Heavy Industries in Nagoya (which to this day manufactures PATRIOT missiles for domestic use under license from RTX and Lockheed Martin).
View attachment 76748
Over the years, he has received quite a bit of funding from the defence-related sector, mostly from the US government, but also from Northrop Grumman.
View attachment 76751
In 2015 he gave an invited talk at Northrop Grumman…
View attachment 76752
… and he was co-author of a paper published in November 2016, whose first author, his then graduate student Tiffany Hwu, was a Basic Research Systems Engineer Intern with Northrop Grumman at the time. (“This work was supported by the National Science Foundation Award number 1302125 and Northrop Grumman Aerospace Systems.”)
The neuromorphic hardware used for the self-driving robot was unsurprisingly IBM’s TrueNorth, as this was then the only neuromorphic chip around - Loihi wasn’t announced until September 2017.
View attachment 76756
One of the paper’s other co-authors was a former postdoctoral student of Krichmar’s, Nicolas Oros, who had started working for BrainChip in December 2014 - on his LinkedIn profile it says he was in fact our company’s first employee! He is also listed as co-inventor of the Low power neuromorphic voice activation system and method patent alongside Peter van der Made and Mouna Elkhatib.
Nicolas Oros left BrainChip in February 2021 and is presently a Senior Product Manager at Aicadium, “leading the development of a computer vision SaaS product for visual inspection”. I don’t think we’ve ever looked into them?
View attachment 76754
View attachment 76755
By the time of said paper’s publication, Jeff Krichmar had become a member of BrainChip’s Scientific Advisory Board - see this slide from an April 2016 BRN presentation, courtesy of @uiux:
View attachment 76753
As mentioned before, Kristofor Carlson is another of Jeff Krichmar’s former postdoctoral students (from 2011-2015), who co-authored a number of research papers with Jeff Krichmar and Nikil Dutt (both UC Irvine) over the years - the last one published in 2019.
In September, Kris Carlson gave a presentation on TENNs at UC Irvine, as an invited speaker at SAB 2024: From Animals to Animats - 17th International Conference on the Simulation of Adaptive Behavior.
View attachment 76815
Kris Carlson’s September 2024 conference talk on TENNs at UC Irvine, the CARL lab’s recent video and paper featuring an E-Puck2 robot with an Akida PCIe Board mounted on top, and the additional info in the 22 January 2025 paper that CARL researchers had already experimented with the brand-new AKD1000 M.2 form factor are ample evidence of continued interest in what BrainChip is doing from Jeff Krichmar’s side.
Academic researchers like him could very well be door openers to people in charge of other entities’ research that will result in meaningful revenue one day…
The above paper on CARLsim++ (which the researchers implemented on Akida hardware in their experiments) just got a lot more exposure thanks to LinkedIn posts by two of its co-authors…
Jeff Krichmar on LinkedIn: “Our new version of the CARLsim spiking neural network framework puts neuromorphic computing closer to the edge. See the paper that just came out in…” (www.linkedin.com)
View attachment 77005
5 to 10 years ... 2030..2035 maybe

A fellow forum user who in recent months repeatedly referred to his brief LinkedIn exchange with Mercedes-Benz Chief Software Officer Magnus Östberg (and thereby willingly revealed his identity to all of us here on TSE, which in turn means I’m not guilty of spilling a secret with this post that should have been kept private) asked Mercedes-Benz a question in the comment section underneath the company’s latest LinkedIn post on neuromorphic computing. This time, however, he decided not to share the carmaker’s reply with all of us here on TSE. You gotta wonder why.
Could it possibly have to do with the fact that MB’s reply refutes the hypothesis he had been advancing for months, namely that Mercedes-Benz, who have been heavily promoting their future SDV (software-defined vehicle) approach that gives them the option of OTA (over-the-air) updates, would “more than likely” have used Akida 2.0/TENNs simulation software in the upcoming MB.OS release as an interim solution during ongoing development, until the not-yet-existing Akida 2.0 silicon became available at a later stage? The underlying reason being competitive pressure to be first to market…
The way I see it, the January 29 reply by MB clearly puts this speculation to bed:
Mercedes-Benz AG on LinkedIn: Neuromorphic Computing | 21 comments - “The intelligent vehicle functionalities of the future call for pioneering new algorithms and hardware. That’s why Mercedes-Benz is researching artificial…” (www.linkedin.com)
View attachment 77012
Does that sound as if an MB.OS “Akida inside” reveal at the upcoming world premiere of the CLA were on the cards?
Setting aside the questions
a) about any requirements for testing and certification of car parts making up the infotainment system (being used to German/EU bureaucracy, I find it hard to believe there wouldn’t be any at all - maybe someone who is knowledgeable about automotive regulations within Germany and the EU could comment on this) and
b) whether any new MB model containing our tech could roll off the production line despite no prior IP license deal having been signed (or at least an Akida 1.0 chip sales deal; there has never been a joint development announcement either, which could possibly somehow circumvent the necessity of an upfront payment showing up in our financials)…
… various MB statements in recent months (cf. Dominik Blum’s presentation at HKA Karlsruhe I shared in October: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-439352 - the university of applied sciences they have since confirmed to be cooperating with on neuromorphic camera research - journalists quoting MB engineers after visiting their Future Technologies Lab, as well as relevant posts and comments on LinkedIn) have diminished the likelihood of neuromorphic tech making its debut in any soon-to-be-released Mercedes-Benz models.
If NC were to enhance voice control and infotainment functions in their production vehicles much sooner than safety-critical ones (ADAS), MB would surely have clarified this in their reply to the above question posed to them on LinkedIn, which specifically referred to the soon-to-be-released CLA - the first model to come with the next-generation MB.OS, which also boasts the new AI-powered MBUX Virtual Assistant (developed in collaboration with Google).
Instead, they literally wrote:
“(…) To fully leverage the potential of neuromorphic processes, specialised hardware architectures that efficiently mimic biologically inspired systems are required (…) we’re currently looking into Neuromorphic Computing as part of a research project. Depending on the further development progress, integration could become possible within a timeframe of 5 to 10 years.”
They are evidently exploring full scale integration to maximise the benefits of energy efficiency, latency and privacy. The voice control implementation of Akida in the Vision EQXX was their initial proof-of-concept to demonstrate feasibility of NC in general (cf. the podcast with Steven Peters, MB’s former Head of AI Research from 2016-2022: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-407798). Whether they’ll eventually partner with us or a competitor (provided they are happy with their research project’s results) remains to be seen.
So I certainly do not expect the soon-to-be revealed CLA 2025 with the all-new MB.OS to have “Akida inside”, although I’d be more than happy to be proven wrong, as we’d all love to see the BRN share price soar on this news…
Time - and the financials - will ultimately tell.
Hi HG. Has the list come out today?
Couldn't find this conference paper from 2024 posted, but maybe my search didn't capture it.
Anyway, it was a positive presentation at an IEEE conference. Below are a couple of snips, and as they advise: "This is an accepted manuscript version of a paper before final publisher editing and formatting. Archived with thanks to IEEE."
HERE
As the authors acknowledge:
"This work has been partially supported by King Abdullah University of Science and Technology CRG program under grant number: URF/1/4704-01-01.
We would like also to thank Edge Impulse and Brainchip companies for providing us with the software tools and hardware platform used during this work.*
One of the authors, M. E. Fouda, caught my eye given his/her relationship with affiliation "3" below, maybe an employer... hmmmm.
D. A. Silva¹, A. Shymyrbay¹, K. Smagulova¹, A. Elsheikh², M. E. Fouda³,† and A. M. Eltawil¹
¹ Department of ECE, CEMSE Division, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia
² Department of Mathematics and Engineering Physics, Faculty of Engineering, Cairo University, Giza 12613, Egypt
³ Rain Neuromorphics, Inc., San Francisco, CA, 94110, USA
† Email: foudam@uci.edu
End-to-End Edge Neuromorphic Object Detection System
Abstract—Neuromorphic accelerators are emerging as a potential solution to the growing power demands of Artificial Intelligence (AI) applications. Spiking Neural Networks (SNNs), which are bio-inspired architectures, are being considered as a way to address this issue. Neuromorphic cameras, which operate on a similar principle, have also been developed, offering low power consumption, microsecond latency, and robustness in various lighting conditions. This work presents a full neuromorphic system for Computer Vision, from the camera to the processing hardware, with a focus on object detection. The system was evaluated on a compiled real-world dataset and a new synthetic dataset generated from existing videos, and it demonstrated good performance in both cases. The system was able to make accurate predictions while consuming 66 mW, with a sparsity of 83%, and a time response of 138 ms.
View attachment 77018
VI. CONCLUSION AND FUTURE WORK
This work showed a low-power, real-time-latency full spiking neuromorphic system for object detection based on iniVation’s DVXplorer Lite event-based camera and BrainChip’s Akida AKD1000 spiking platform. The system was evaluated on three different datasets, comprising real-world and synthetic samples. The final mapped model achieved an mAP of 28.58 for the GEN1 dataset, equivalent to 54% of a more complex state-of-the-art model, and 89% of the detection performance of the best-reported result for the single-class dataset PEDRo, with 17x fewer parameters. A power consumption of 66 mW and a latency of 138.88 ms were reported, being suitable for real-time edge applications.
For future work, different models are expected to be adapted to the Akida platform, from which more recent releases of the YOLO family can be implemented. Moreover, those models are expected to be evaluated in real-world scenarios instead of recordings, along with the acquisition of more data to evaluate this setup under different challenging situations.
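As a quick sanity check of those figures (my own back-of-the-envelope arithmetic, not from the paper), the reported power draw and latency translate to roughly 9 mJ per detection at around 7 inferences per second:

```python
# Back-of-the-envelope numbers derived from the paper's reported figures:
# 66 mW average power draw and 138.88 ms latency per inference.
power_w = 0.066      # 66 mW
latency_s = 0.13888  # 138.88 ms

energy_per_inference_mj = power_w * latency_s * 1e3  # in millijoules
throughput_fps = 1.0 / latency_s                     # inferences per second

print(f"~{energy_per_inference_mj:.1f} mJ per detection")  # ≈ 9.2 mJ
print(f"~{throughput_fps:.1f} inferences per second")      # ≈ 7.2
```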
Wondering if BrainChip management are still claiming a three year lead over its competitors.

Interesting read on the concerns over DeepSeek:
Effectiveness of US AI restrictions questioned after DeepSeek upset - “US officials are now investigating whether DeepSeek obtained Nvidia chips through intermediaries in Singapore to avoid AI restrictions.” (www.investmentmonitor.ai)
"US officials are now investigating whether DeepSeek purchased NVIDIA chips through intermediaries in Singapore, effectively circumventing the AI restrictions the government had employed, Bloomberg reported."
"Nvidia’s chips, which they bought tons of, and they found their ways around it, drive their DeepSeek model […] It has got to end. If they are going to compete with us, let them compete, but stop using our tools to compete with us. So I am going to be very strong on that,” Lutnick said. If confirmed as Commerce Secretary, he would be at the helm of enforcing semiconductor restrictions."
"Typically, AI development has been understood to be very expensive and resource-intensive. Investors have expressed worry about these high costs given the sector’s slow returns. The arrival of DeepSeek and R1 has put this framework into question. After the model’s release on Monday, Nvidia’s market value decreased by nearly $600bn, dropping a staggering 17%. It was the biggest single-day loss in the history of the US stock market."
Wondering if BrainChip management are still claiming a three year lead over its competitors.

Don't know about that, but Brainchip and its cloistered management are making me feel 19 again.
Wondering if BrainChip management are still claiming a three year lead over its competitors.

Tony Lewis posted that DeepSeek is a boon for us.