BRN Discussion Ongoing

When I saw the pictures at the start of this article, it reminded me of an older BrainChip video they posted a couple of years ago. Could something like this contain our IP?


[attached screenshot]
 
  • Like
  • Fire
  • Love
Reactions: 19 users

BrainShit

Regular
I think I have found a new patent, published 6 days ago!

METHODS AND SYSTEM FOR IMPROVED PROCESSING OF SEQUENTIAL DATA IN A NEURAL NETWORK

Abstract

Disclosed is a system that includes a processor configured to process data in a neural network and a memory associated with a primary flow path and at least one secondary flow path within the neural network. The primary flow path comprises one or more primary operators to process the data and the at least one secondary flow path is configured to pass the data to a combining operator by skipping the processing of the data over the primary flow path. The processor is configured to provide the primary flow path and the at least one secondary flow path with a primary sequence of data and a secondary sequence of data respectively such that the secondary sequence of data being time offset from the processed primary sequence of data.
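Reading the abstract, this sounds like a skip connection whose shortcut carries a time-offset copy of the sequence. Below is a minimal sketch of that reading in Python (NumPy), purely illustrative and not taken from the patent itself; the primary operators and the combining operator here are stand-in placeholders.

import numpy as np

def primary_operators(x):
    # Stand-in for the "primary flow path" (e.g. convolutional or recurrent layers).
    return np.tanh(0.5 * x)

def combining_operator(processed, skipped):
    # Stand-in for the "combining operator"; here a simple element-wise sum.
    return processed + skipped

def process_sequence(sequence, offset=1):
    # The secondary path skips the primary operators entirely and feeds the
    # combining operator a copy of the data that is time-offset by `offset` steps.
    outputs = []
    for t, frame in enumerate(sequence):
        primary_out = primary_operators(frame)
        skip_in = sequence[t - offset] if t >= offset else np.zeros_like(frame)
        outputs.append(combining_operator(primary_out, skip_in))
    return outputs

sequence = [np.random.randn(4) for _ in range(6)]
print(process_sequence(sequence)[3])

In this toy version the combining operator simply adds the processed step to the raw data from one step earlier; the operators in the actual claims could of course be anything.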

View attachment 73027

This patent is from 12-May-2023 ... pending in the US and in AU.

[attached screenshot: BC_Patent_Pending.png]
 
  • Like
  • Fire
Reactions: 12 users

Frangipani

Regular
EDGX are currently exhibiting their EDGX-1 edge processor at SpaceTech Bremen:

[attached images]
 
  • Like
  • Fire
  • Love
Reactions: 29 users

Frangipani

Regular
Interesting draft proposal by Brian Anderson, who left Intel Labs just last week and kicked off what he code-named Project Phasor:

[attached screenshots]



BrainChip gets mentioned, too (although the author didn’t research very thoroughly, apparently being under the impression that Akida 2.0 was introduced “recently” and is already available in silicon).

While a lot in this open proposal is far too technical for me, it clearly shows how optimistic and psyched its writers are about the future, given NC’s disruptive potential. Plus, there are some intriguing insights into the topic from someone who is not just another external analyst vaguely familiar with what NC is all about:

[attached excerpts from the proposal]
 
  • Like
  • Fire
  • Love
Reactions: 37 users

IloveLamp

Top 20
If you’re short on time, listen from the 22-minute mark.


[attached screenshot and reaction GIF]
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 22 users

Frangipani

Regular
We also get a mention in this September 2024 tech brief on Neuromorphic Computing, co-authored by Céline Nauer (Project Advisor at the Global Innovation Hub, Friedrich-Naumann-Stiftung für die Freiheit / Friedrich Naumann Foundation for Freedom, Taipei Office) and Nik Dennler (dual PhD student in Neuromorphic Computing and Sensing at Western Sydney University’s International Centre for Neuromorphic Systems and at the University of Hertfordshire). The text would, however, have benefited greatly from proper proofreading before publication, both layout- and content-wise - for example, the misleading reference to a “new” Mercedes concept car, which, as we all know, refers to the Vision EQXX revealed almost three years ago, one year prior to the Concept CLA Class. Nevertheless, it’s good exposure!




[attached pages from the tech brief]
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Frangipani

Regular
Another reminder that medical imaging will greatly benefit from neuromorphic applications:

Jason Eshraghian (UC Santa Cruz) - one of the three members of our Scientific Advisory Board - just co-authored a paper titled NEUROMORPHIC IMAGING CYTOMETRY ON HUMAN BLOOD CELLS with researchers from The University of Sydney and University of Technology Sydney.

“A future endeavour to implement this architecture in neuromorphic hardware can lead to significant acceleration in latency and power gain.”



[attached excerpts from the paper]
 
  • Like
  • Fire
  • Love
Reactions: 19 users

GStocks123

Regular
Nikunj was quite the star ⭐

 

Attachments: IMG_9718.png
  • Like
  • Love
Reactions: 8 users

Frangipani

Regular

Publication Name: ETedge-insights.com
Date: November 18, 2024

How on-device intelligence is redefining industry standards and efficiency


In today’s increasingly connected world, Edge AI started as a way to address the challenges of data processing and transmission. With the growing relevance of the semiconductor industry, the need for efficient data management has become more prominent. Traditionally, this data would be sent to centralised cloud servers for processing, which is transactionally heavy on communication channels and not cost-effective. Edge AI mitigates these issues by enabling data processing at or near the source, on the devices themselves. By processing data locally, Edge AI minimises reliance on cloud infrastructure and significantly reduces computing costs and energy consumption, making the technology both important for today’s data-centric landscape and more environmentally sustainable.

A key component of this mechanism is collective intelligence, which is facilitated through federated learning and meta-learning. Federated learning allows multiple devices to collaboratively improve a shared AI model without exchanging raw data, thereby preserving privacy while enhancing the model’s accuracy. Meta-learning, on the other hand, can enhance this framework by enabling devices to adapt their learning strategies based on collective experiences, effectively “learning to learn”. This dual-layered approach, combining immediate learning on edge devices with meta-learning through federated collaboration, creates a robust ecosystem that adds value to edge computing. As we move towards ‘Sovereign AI’, where data is preferred to stay local and compute engines exchange only the meta-information essential for learning, Edge AI becomes even more relevant to bringing about this transformation.
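To make the federated-learning idea concrete, here is a minimal sketch of federated averaging for a toy linear model in Python (NumPy). Everything in it is illustrative (the one-step local update, the three simulated devices, the synthetic data); real deployments add client sampling, secure aggregation and communication handling, which are omitted here.

import numpy as np

def local_update(w, X, y, lr=0.1):
    # One toy gradient step on a device's private data (linear model y ~ X @ w).
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_average(device_weights):
    # The server only ever sees model parameters, never the raw data.
    return np.mean(device_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
devices = []
for _ in range(3):                       # three edge devices with private data
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

global_w = np.zeros(3)
for _ in range(50):                      # communication rounds
    local_ws = [local_update(global_w.copy(), X, y) for X, y in devices]
    global_w = federated_average(local_ws)
print(global_w)                          # approaches true_w without sharing raw data

Even in this toy version the point of the article holds: only weight vectors travel to the server, never the devices’ raw data.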

Challenges in Edge AI
One significant hurdle is the predominance of supervised learning models, which require extensive training before deployment, making immediate inference difficult. While unsupervised learning approaches exist, they often lack the accuracy needed for reliable decision-making.

Secondly, many AI models, including neural networks and deep learning architectures, tend to be very large and computationally intensive, often requiring several megabytes or even gigabytes of memory—far exceeding the capabilities of typical edge devices. Additionally, existing frameworks and tools for AI model generation are often not optimised for embedded platforms, as they rely on tensor data structures and GPU-based processing that are ill-suited for edge computing environments.
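A quick back-of-the-envelope calculation (all figures illustrative) shows why this memory gap matters for microcontroller-class devices:

params = 10_000_000                    # a modest CNN by cloud standards
fp32_bytes = params * 4                # 32-bit floating-point weights
int8_bytes = params * 1                # 8-bit integer weights
mcu_sram = 1 * 1024 * 1024             # a typical high-end microcontroller's SRAM
print(f"fp32 model: {fp32_bytes / 1e6:.0f} MB, int8 model: {int8_bytes / 1e6:.0f} MB")
print(f"Even quantised, ~{int8_bytes / mcu_sram:.0f}x larger than 1 MB of on-chip SRAM")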

In response to these challenges, developments are underway for novel lightweight frameworks that mimic the essential properties of traditional models while employing simpler mathematical constructs to deliver adequate performance with reduced accuracy. As technology has evolved, advancements such as co-processors have enabled more complex processing capabilities at the edge, allowing for the deployment of even small language models (SLM) and other sophisticated applications. However, significant challenges remain, particularly when it comes to running AI models on microcontrollers with limited memory resources. Despite progress in techniques such as vectorisation, quantisation, and advanced hyperparameter search and tuning to optimise model sizes, there is still a need to fully leverage the benefits of Edge AI.

Key Breakthroughs
Since the advent of Edge AI, one significant advance has been the adoption of Apache TVM (Tensor Virtual Machine), an open-source deep learning compiler that facilitates the optimisation and interoperability of AI models across various hardware architectures. By allowing models generated through its framework to run on multiple processor families, TVM enhances flexibility and performance, making it easier for developers to deploy their applications efficiently.
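For readers unfamiliar with TVM, the classic Relay flow looks roughly like the sketch below: import a trained model, compile it for a target, and run it with the graph executor. The file name, input name and target triple are placeholders, and exact module paths and signatures vary between TVM releases, so treat this as an outline rather than copy-paste code.

import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a trained model (placeholder file name) and convert it to Relay IR.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

# Compile for an Arm-based edge board; the target string is just an example.
target = "llvm -mtriple=aarch64-linux-gnu"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# On the device, load the compiled library and run inference.
dev = tvm.cpu()
runtime = graph_executor.GraphModule(lib["default"](dev))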

Additionally, the emergence of new chipset companies focused on embedded platforms has been instrumental in addressing the challenges of fitting complex AI models into constrained hardware. Notably, NVIDIA’s tools and platforms, such as NVIDIA EGX, TensorRT, the Jetson Nano and the industrial-grade IGX Orin, enable comprehensive robotic frameworks to operate effectively at the edge, showcasing the potential for running sophisticated applications like motion planning and object recognition without relying heavily on cloud resources.

Furthermore, advancements in model quantisation techniques have paved the way for integer models, which optimise performance while reducing resource consumption. More advanced methods such as QLoRA and GPTQ have become popular for quantising LLMs so they can run on the edge, as LLM-based applications are also finding their way onto edge devices. These innovations collectively represent a significant leap forward in Edge AI, enabling more complex processing capabilities directly on edge devices, and together they define the breakthroughs in the edge computing process.
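The basic idea behind the integer models mentioned above can be shown in a few lines: store int8 weights plus a scale factor and dequantise on the fly. This is the simplest symmetric per-tensor scheme, not what QLoRA or GPTQ actually do (they add low-rank adapters on a 4-bit base and error-compensating layer-wise quantisation, respectively), but it illustrates the 4x memory saving.

import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantisation: one float scale plus int8 weights.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).mean()
print(f"{w.nbytes} bytes -> {q.nbytes} bytes, mean abs error {error:.5f}")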

An Industry Perspective
As companies navigate the complexities of implementing data policies, they are focusing on faster pathways from data to insights, which is reshaping their standard operating procedures (SOPs). The rise of AI-centric decision-making processes, including human-in-the-loop systems, is enhancing operational efficiency and productivity, particularly in areas like quality inspections, process control, and automation. Moreover, advancements in neuromorphic computing are enabling ultra-low power processing at the edge, allowing for rapid decision-making in scenarios that require immediate feedback, such as sorting fruits or packaged goods.

As the technology matures and costs decrease, the potential for Edge AI to penetrate sectors like agriculture and fast-moving consumer goods (FMCG) will expand significantly. In agriculture, drones equipped with Edge AI can identify ripe fruit in real time, optimising the harvesting process by ensuring that only the best-quality produce is picked. This capability not only boosts productivity but also reduces waste by minimising the chances of overripe fruit being harvested. Similarly, in FMCG environments such as bottling plants, Edge AI systems can inspect packaging at high speed, scanning up to 200 bottles per minute, ensuring quality control while maintaining rapid output. Such systems can achieve sub-millisecond response times when paired with specialised neuromorphic cameras, although these devices remain costly.
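The bottling numbers are easy to sanity-check: 200 bottles per minute leaves a 300 ms budget per bottle, so a sub-millisecond inference step is a tiny fraction of it. The capture and actuation figures below are illustrative assumptions, not from the article.

bottles_per_minute = 200
budget_ms = 60_000 / bottles_per_minute          # time available per bottle
inference_ms = 0.8                               # assumed sub-millisecond inference
capture_and_io_ms = 5.0                          # assumed camera capture + transfer
print(f"Per-bottle budget: {budget_ms:.0f} ms")  # 300 ms
print(f"Headroom left for mechanics: {budget_ms - inference_ms - capture_and_io_ms:.1f} ms")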

Achieving these efficiencies requires a holistic approach that optimises the entire data collection and processing pipeline. This includes selecting appropriate capture devices, ensuring optimal lighting conditions, and mitigating environmental factors that could degrade data quality, such as dust or corrosive gases in manufacturing settings. Edge AI implementations require consistency across the engineering development cycle, ensuring that investments yield a rapid return while minimising operational disruptions.

To maximise success, companies should target applications where AI can achieve accuracy rates above 95%, ensuring that they address high-impact problems. Identifying these “low-hanging fruits” is crucial for realising quick returns on investment. Implementing Edge AI necessitates a comprehensive understanding of the entire process flow—from data collection to model inference—while also considering environmental factors that could affect sensor performance. As organisations navigate this complex landscape, maintaining an open mindset and exercising patience will be essential for fully harnessing the transformative potential of Edge AI, ultimately leading to improved productivity and operational excellence.

Author:
Biswajit Biswas, Chief Data Scientist, Tata Elxsi
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
If you’re short on time, listen from the 22-minute mark.


View attachment 73083 View attachment 73088


[attached reaction GIF]
 
  • Haha
  • Like
  • Love
Reactions: 17 users

manny100

Regular
Have a read of the Wikipedia description of NVIDIA, especially their history.
They did it as hard as, if not harder than, we have.
For many years the unofficial company motto was "our company is only 30 days from going out of business". Huang routinely opened staff presentations with those words.
Just like NVIDIA, when we get our first deal it's game on. We are getting closer.
 
  • Like
  • Fire
  • Love
Reactions: 28 users

db1969oz

Regular
The share price today directly reflects the fact that I bought more yesterday at 26c!! Mutha trucka!!
 
  • Haha
  • Like
  • Sad
Reactions: 14 users
The share price today directly reflects the fact that I bought more yesterday at 26c!! Mutha trucka!!
It's always nice to get immediate confirmation that you made the right decision.

But assuming you didn't buy with the intention of selling at 26.5 cents...

"Time in the market" is always worse than trying to "time" the market.
 
  • Like
Reactions: 11 users

Frangipani

Regular
The BeEmotion.ai (formerly NVISO) Japan team will be demonstrating Interior Monitoring Solutions for Smart Vehicles in Yokohama this week, highlighting the enhanced capabilities of their algorithms utilising Akida IP.

[attached images]
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 44 users
The share price today directly reflects the fact that I bought more yesterday at 26c!! Mutha trucka!!
Let’s hope it’s just a tree shake, but I’ll be happy if it’s another pump and dump, as I’ll have more $$$ in a week or so 😂. I might even sell some of my XRP after it finally breaks a 7-year cycle. Fuk the SEC, and we all love Donald Trump when it comes to crypto ❤️
 
  • Like
Reactions: 4 users

Frangipani

Regular
Last edited:
  • Like
  • Fire
  • Love
Reactions: 15 users

7für7

Top 20
The share price today directly reflects the fact that I bought more yesterday at 26c!! Mutha trucka!!
I am 100% sure they are always watching all of us very closely… something like, “Wait… wait a little longer… I have the feeling he’s thinking about buying more…” And then, when one of us thinks, “Ah, screw it… it probably won’t drop if I buy now since it looks so stable,” and ends up buying, the command center says, “DUMP IT… NOW NOW NOW!” And the opposite when we sell… “PUMP IIIIIIT”

[attached reaction GIF]
 
  • Haha
  • Like
  • Fire
Reactions: 10 users

TECH

Regular
Nikunj was quite the star ⭐


Good evening from New Zealand,

The question I am going to be asking is: why did Nikunj decide to leave BrainChip? He was obviously spearheading many different groundbreaking areas within our company, including the universities program. I would like clarity, or closure so to speak. Did he feel he was being underpaid? Did he feel his growth or creative mind was being constrained under Sean's leadership? Why would an extremely intelligent staff member move on?

He had to be unhappy. We are a company at the forefront of Edge AI... something doesn't quite sit right, in my uninformed opinion.

We have a brilliant team, don't misread what I'm trying to say, but unless I hear directly from Anil or maybe Peter as to why we would let a superior staff member walk, I'll always be wondering, like some of you maybe, whether a personality clash of some sort was at play.

As many long-termers already know, staff trained and specialised in neuromorphic technology are still extremely hard to secure... keep valuable staff at all costs, I say!

Purely my thoughts... nothing more.

Regards... Tech.
 
  • Like
  • Thinking
  • Fire
Reactions: 27 users