BRN Discussion Ongoing

Hi Rach,

The patent is an improvement on a known NN technique called skip processing. If we consider an input image to a multi-layer NN to be made up of a number of distinct visual elements (eg, bounding boxes (BBs)), and input signals representing the whole image are fed to the NN, then different BBs may be classified at different layers of the NN. The invention is an improvement which prevents the duplication of features (hallucinations) in the classification/inference result by stopping the processing of the input signals relating to an element (BB) once that element has been classified at an intermediate layer of the NN. The identified element is supplied to the NN output directly from the intermediate layer, bypassing any subsequent NN layers. This stops further processing of the BB data so it cannot become confused with adjacent portions of the image data. As well as avoiding hallucinations, this technique reduces the number of downstream classification operations, and hence the power usage at all the downstream layers, as there are fewer "events/spikes" to be classified.
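
For the technically inclined, here is a minimal sketch of that early-exit idea in generic PyTorch. It is my own toy illustration, not BrainChip's implementation; the confidence threshold, layer sizes and per-layer classifier heads are all assumptions:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy multi-layer classifier with per-layer exits: an element (BB) that
    is classified confidently at an intermediate layer leaves the network
    there, bypassing all subsequent layers."""
    def __init__(self, dim=64, n_classes=10, n_layers=4, threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_layers)
        )
        # a lightweight classifier head at every layer
        self.heads = nn.ModuleList(nn.Linear(dim, n_classes) for _ in range(n_layers))
        self.threshold = threshold

    def forward(self, x):  # x: (dim,) features for one bounding box
        for depth, (layer, head) in enumerate(zip(self.layers, self.heads)):
            x = layer(x)
            probs = head(x).softmax(dim=-1)
            conf, label = probs.max(dim=-1)
            if conf.item() >= self.threshold:
                return int(label), depth   # early exit: skip remaining layers
        return int(label), depth           # fell through every layer
```

The key point is the early return: once a BB is confidently classified, no downstream layer ever sees it, so no downstream events/spikes are generated for it.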

This patent relates to the synchronization of the skipped elements with the elements which passed through all the NN layers so the full image can be reassembled at the output.*

There are already techniques for skip processing in NNs, so this is an improvement on existing processes.

In isolation, I would probably rank this invention at 3 if Akida1 is 10, because it is an improvement on existing techniques. In combination with the full BRN patent portfolio, it is a significant improvement, as it improves both accuracy and power consumption. I'm guessing that it could be applied to other NNs in addition to Akida, which would further increase its potential licensing value.

I would rank Akida2, which includes TENNs, at 17+ on the 1 to 10 scale.

Pico also has high value in low power/battery/remote applications. Its value increases when used in conjunction with Akida2/TENNs.

TENNs on its own also ranks above Akida1, as it can be used as software (a new income-generating product line) and brings the temporal element to both software and hardware. I think TENNs is the basis of our newish algorithm product line.

*Synchronization is vital for video - recall the many-headed dog video?
Just trying to understand which types of applications would benefit most from it. Do you think it could play a role in the combination of event-based and classical image sensors?
 
  • Like
Reactions: 3 users

Diogenese

Top 20
Just trying to understand which types of applications would benefit most from it. Do you think it could play a role in the combination of event-based and classical image sensors?
Hi CMF,

Skip is used to stop "overfitting", ie, reprocessing data which has already been classified in an earlier layer of the multi-layer NN.

The classified bounding box (BB) is passed to the output, bypassing the subsequent NN layers. This means it arrives at the output before the other data captured at the same time. This is what leads to the hallucinations where segments from different times are combined. Thus it is necessary to delay the arrival of the early-classified BB until its companion input data has passed through the whole NN.

With several layers, different BBs can be classified at different layers, so there can be several different arrival times, which means that different delays have to be applied to different BBs depending on the layer at which they were classified.
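
Here is a toy sketch of how such per-layer delays could be applied (the buffer and all names are my invention, not the patent's mechanism):

```python
# Toy alignment buffer for early-exited bounding boxes. A BB classified at
# layer k of an n-layer net is held for (n - k) steps so it reaches the
# output together with data that traversed every layer.
class ExitAligner:
    def __init__(self, n_layers=4):
        self.n_layers = n_layers
        self.pending = []  # list of (release_step, bb_result)
        self.step = 0

    def add_exit(self, bb_result, exit_layer):
        delay = self.n_layers - exit_layer  # more layers skipped => longer hold
        self.pending.append((self.step + delay, bb_result))

    def tick(self):
        """Advance one layer-time step; emit every BB whose delay has elapsed."""
        self.step += 1
        due = [bb for t, bb in self.pending if t <= self.step]
        self.pending = [(t, bb) for t, bb in self.pending if t > self.step]
        return due
```

So a BB that exited at layer 1 of 4 is held three steps, while one that exited at layer 3 is held one step, and everything captured at the same instant emerges at the output together.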

With video images, this would be particularly important to get all the contemporaneous input bits arriving at the output at the same time.

So I would think that the primary application would be for video.

However, as you suggest, the combined conventional movie/video frame camera and DVS or lidar applications would also benefit from this technique. Lidar has quite a slow frame rate whereas DVS has a very high "equivalent" frame rate.
 
  • Like
  • Fire
  • Love
Reactions: 33 users
Nice these guys have chosen to single out BRN Akida "as not strictly a sensor"....but hey let's include it anyway ;)

They have a pretty extensive partner ecosystem too.


About Altium

Our software tools empower and connect PCB designers, part suppliers and manufacturers to develop and manufacture electronics products faster and more efficiently than ever before.

With our new cloud platform Altium 365 and its productivity apps, and Octopart, our component search engine, Altium’s industry-leading electronics design solutions are accelerating innovation by enabling seamless collaboration across the entire electronics creation process.



10 Sensor Technologies Making Waves in 2025

Adam J. Fleischer | Created: November 18, 2024

The sensor revolution isn't just knocking on our door – it's already picked the lock and made itself at home. IoT devices are multiplying like rabbits, AI is getting smarter by the minute, and the push for sustainability is changing how we approach electronic design. These forces are converging to create a massive wave of sensor innovation.
Gone are the days when sensors were just simple input devices. Today, they're our increasingly connected world's eyes, ears, and nervous system. As an electronic engineer or designer, you're standing at the forefront of a sensor revolution that promises to unleash the next generation of electronic innovation.

Sensing the Future

We're living in a world where cars can see better than humans, your watch knows you're getting sick before you do, and factories can predict and prevent breakdowns before they happen. From autonomous vehicles to personalized healthcare, sensors are powering innovation across sectors. Staying ahead of the curve in sensor technology is essential for those looking to succeed in our rapidly changing industry.
With that in mind, let's take a look at ten types of sensors that will be making waves in 2025:


Excerpt:

3. Neuromorphic Sensors: Teaching Old Sensors New Tricks

Neuromorphic sensors are the brainiacs of the sensor world. Designed to mimic the structure and function of biological neural networks, these sensors process information in ways that are eerily similar to the human brain. The result? Sensors that can learn, adapt, and make decisions on the fly.

Neuromorphic sensors are expected to play an increasingly important role in advanced AI systems, potentially enabling more efficient and intelligent data processing at the edge. While not strictly a sensor, BrainChip's Akida neural network processor chip can be integrated with various sensors to enable neuromorphic processing of sensor data.
 
  • Like
  • Fire
  • Love
Reactions: 52 users

Esq.111

Fascinatingly Intuitive.
Nice these guys have chosen to single out BRN Akida "as not strictly a sensor"....but hey let's include it anyway ;)

They have a pretty extensive partner ecosystem too.

Afternoon Fullmoonfever,



Interesting.

Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 15 users
Hey ESQ

Thanks, didn't know that.

Wonder if they know of us through Renesas then or just through the industry in general.

I noticed that Renesas have been using Autobrains for some of their AI work (like on the R-Car V3H) but haven't found anything on Akida as yet.

Be good still if Renesas offered Akida through Altium in some way too.
 
  • Like
  • Fire
Reactions: 12 users
Another write-up that includes BRN...

 
  • Like
  • Fire
  • Love
Reactions: 26 users
When I saw the pictures at the start of this article it reminded me of an older Brainchip video they posted a couple of years ago. Could something like this contain our IP?


 
  • Like
Reactions: 9 users

BrainShit

Regular
I think I have found a new patent, published 6 days ago!

METHODS AND SYSTEM FOR IMPROVED PROCESSING OF SEQUENTIAL DATA IN A NEURAL NETWORK

Abstract

Disclosed is a system that includes a processor configured to process data in a neural network and a memory associated with a primary flow path and at least one secondary flow path within the neural network. The primary flow path comprises one or more primary operators to process the data and the at least one secondary flow path is configured to pass the data to a combining operator by skipping the processing of the data over the primary flow path. The processor is configured to provide the primary flow path and the at least one secondary flow path with a primary sequence of data and a secondary sequence of data respectively such that the secondary sequence of data being time offset from the processed primary sequence of data.
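
Reading the abstract literally, the claimed structure might look something like this minimal sketch (my interpretation only; the convolutional primary operator, the sum as combining operator and the roll-based offset are all assumptions):

```python
import torch
import torch.nn as nn

class TimeOffsetSkip(nn.Module):
    """Sketch of the abstract's structure: a primary flow path of operators
    processes the sequence, a secondary flow path skips that processing, and
    the skipped copy is time-offset before the two are combined."""
    def __init__(self, channels=16, offset=1):
        super().__init__()
        self.primary = nn.Sequential(  # the "one or more primary operators"
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.offset = offset  # time offset applied to the secondary sequence

    def forward(self, x):  # x: (batch, channels, time)
        primary_out = self.primary(x)
        # secondary flow path: raw data shifted by `offset` time steps
        # (a real design would pad rather than wrap around as roll does)
        secondary = torch.roll(x, shifts=self.offset, dims=-1)
        return primary_out + secondary  # combining operator (sum, one possibility)
```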


This patent is from 12-May-2023 ... pending in the US and in AU.

 
  • Like
  • Fire
Reactions: 12 users

Mazewolf

Regular
  • Like
Reactions: 3 users

Frangipani

Top 20
EDGX are currently exhibiting their EDGX-1 edge processor at SpaceTech Bremen:

[photos of the EDGX-1 booth at SpaceTech Bremen attached]
 
  • Like
  • Fire
  • Love
Reactions: 29 users

Frangipani

Top 20
Interesting draft proposal by Brian Anderson, who left Intel Labs just last week and kicked off what he code-named Project Phasor:

[screenshots of the proposal attached]



BrainChip gets mentioned, too (although the author didn’t research very thoroughly, apparently being under the impression that Akida 2.0 was introduced “recently” and is available in silicon).

While a lot in this open proposal is far too technical for me, it clearly shows how optimistic and psyched the writer/s of this proposal is/are about the future given NC’s disruptive potential. Plus, there are some intriguing insights into the topic from someone who is not just another external analyst vaguely familiar with what NC is all about:

[further screenshots of the proposal attached]
 
  • Like
  • Fire
  • Love
Reactions: 38 users

IloveLamp

Top 20
If you’re short on time listen from 22min


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 23 users

Frangipani

Top 20
We also get a mention in this September 2024 tech brief on Neuromorphic Computing, co-authored by Céline Nauer (Project Advisor @ Global Innovation Hub at Friedrich-Naumann-Stiftung für die Freiheit / Friedrich Naumann Foundation for Freedom, Taipei Office) and Nik Dennler (dual PhD student in Neuromorphic Computing and Sensing at Western Sydney University’s International Centre for Neuromorphic Systems and at the University of Hertfordshire). The text would, however, have benefited greatly from some proper proofreading before publication, both layout- and content-wise, such as correcting the misleading reference to a “new” Mercedes concept car, which - as we all know - refers to the Vision EQXX that was revealed almost three years ago, one year prior to the Concept CLA Class. Nevertheless, it’s good exposure!




[pages of the tech brief attached]
 
  • Like
  • Fire
  • Love
Reactions: 35 users

Frangipani

Top 20
Another reminder that medical imaging will greatly benefit from neuromorphic applications:

Jason Eshraghian (UC Santa Cruz) - one of the three members of our Scientific Advisory Board - just co-authored a paper titled “Neuromorphic Imaging Cytometry on Human Blood Cells” with researchers from The University of Sydney and University of Technology Sydney.

“A future endeavour to implement this architecture in neuromorphic hardware can lead to significant acceleration in latency and power gain.”



[excerpts from the paper attached]
 
  • Like
  • Fire
  • Love
Reactions: 20 users

GStocks123

Regular

  • Like
  • Love
Reactions: 8 users

Frangipani

Top 20

Publication Name: ETedge-insights.com
Date: November 18, 2024

How on-device intelligence is redefining industry standards and efficiency


In today’s increasingly connected world, Edge AI began as a way to address the challenges of data processing and transmission. With the growing relevance of the semiconductor industry, the need for efficient data management has become more prominent. Traditionally, data would be sent to centralised cloud servers for processing, which is transactionally heavy on communication channels and not cost-effective. Edge AI mitigates these issues by enabling data processing at or near the source, on the devices themselves. By processing data locally, Edge AI minimises reliance on cloud infrastructure, significantly reducing computing costs and energy consumption, which makes the technology both important for today’s data-centric landscape and more environmentally sustainable.

A key component of this mechanism is collective intelligence, facilitated through federated learning and meta-learning. Federated learning allows multiple devices to collaboratively improve a shared AI model without exchanging raw data, thereby preserving privacy while enhancing the model’s accuracy. Meta-learning, on the other hand, enhances this framework by enabling devices to adapt their learning strategies based on collective experiences, effectively “learning to learn”. This dual-layered approach, immediate learning on edge devices combined with meta-learning through federated collaboration, creates a robust ecosystem that adds value to edge computing as a process. As we move towards ‘Sovereign AI’, where data is preferred to remain local and compute engines exchange only the meta-information essential for learning, Edge AI is ever more relevant to this transformation.
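
To make the federated-learning half of that concrete, here is a minimal weighted-averaging (FedAvg-style) sketch; the client arrays and dataset sizes are invented, and real deployments add secure aggregation, client sampling and much more:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Minimal FedAvg sketch: each device trains locally and shares only
    model weights; the server averages them, weighted by local data size,
    so raw data never leaves the device."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# e.g. three edge devices, each holding a locally trained weight vector
clients = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
sizes = [100, 300, 600]
global_weights = federated_average(clients, sizes)
```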

Challenges in Edge AI
One significant hurdle is the predominance of supervised learning models, which require extensive training before deployment, making immediate inference difficult. While unsupervised learning approaches exist, they often lack the accuracy needed for reliable decision-making.

Secondly, many AI models, including neural networks and deep learning architectures, tend to be very large and computationally intensive, often requiring several megabytes or even gigabytes of memory—far exceeding the capabilities of typical edge devices. Additionally, existing frameworks and tools for AI model generation are often not optimised for embedded platforms, as they rely on tensor data structures and GPU-based processing that are ill-suited for edge computing environments.

In response to these challenges, novel lightweight frameworks are under development that mimic the essential properties of traditional models while employing simpler mathematical constructs, trading some accuracy for adequate performance. As the technology has evolved, advances such as co-processors have enabled more complex processing at the edge, allowing the deployment of even small language models (SLMs) and other sophisticated applications. However, significant challenges remain, particularly when running AI models on microcontrollers with limited memory. Despite progress in techniques such as vectorisation, quantisation, and advanced hyperparameter search and tuning to optimise model sizes, the benefits of Edge AI are yet to be fully realised.
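
As a rough illustration of why quantisation matters on memory-constrained devices, here is a toy post-training int8 quantisation sketch (the affine scheme and numbers are illustrative only; production tooling is far more sophisticated):

```python
import numpy as np

def quantise_int8(w):
    """Toy post-training affine quantisation: map float32 weights to int8,
    cutting memory 4x (a sketch of the idea, not a production scheme)."""
    rng = float(w.max() - w.min()) or 1.0
    scale = rng / 255.0
    zero_point = int(round(-float(w.min()) / scale)) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantise(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(1000).astype(np.float32)  # ~4 KB as float32
q, s, z = quantise_int8(w)                    # ~1 KB as int8
```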

Key Breakthroughs
Since the advent of Edge AI, one significant advance has been the adoption of Apache TVM (Tensor Virtual Machine), an open-source deep learning compiler that facilitates the optimisation and interoperability of AI models across various hardware architectures. By allowing models generated through its framework to run on multiple processor families, TVM enhances flexibility and performance, making it easier for developers to deploy their applications efficiently.
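
A typical TVM (Relay) flow looks roughly like the sketch below: compile a trained ONNX model into a deployable library for an Arm edge board. The file names, input shape and target triple are placeholders, and the exact API varies across TVM versions:

```python
# Sketch of a typical Apache TVM (Relay) flow: compile one trained model
# for an embedded Arm target.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")  # placeholder model file
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

target = "llvm -mtriple=aarch64-linux-gnu"  # e.g. a 64-bit Arm edge board
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

lib.export_library("model_arm.so")  # deployable shared library for the device
```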

Additionally, the emergence of new chipset companies focused on embedded platforms has been instrumental in addressing the challenges of fitting complex AI models into constrained hardware. Notably, NVIDIA’s tools, such as NVIDIA EGX, TensorRT, the Jetson Nano and the industrial-grade IGX Orin, enable comprehensive robotic frameworks to operate effectively at the edge, showcasing the potential for running sophisticated applications like motion planning and object recognition without relying heavily on cloud resources.
Furthermore, advancements in model quantisation techniques have paved the way for integer models, which optimise performance while reducing resource consumption. More advanced methods such as QLoRA and GPTQ are now popular for quantising LLMs to make them suitable for the edge, as LLM-based applications are also finding their way onto edge devices. These innovations collectively represent a significant leap forward in Edge AI, enabling more complex processing capabilities directly on edge devices, and together they define the breakthroughs in the edge computing process.

An Industry Perspective
As companies navigate the complexities of implementing data policies, they are focusing on faster pathways from data to insights, which is reshaping their standard operating procedures (SOPs). The rise of AI-centric decision-making processes, including human-in-the-loop systems, is enhancing operational efficiency and productivity, particularly in areas like quality inspections, process control, and automation. Moreover, advancements in neuromorphic computing are enabling ultra-low power processing at the edge, allowing for rapid decision-making in scenarios that require immediate feedback, such as sorting fruits or packaged goods.

As the technology matures and costs decrease, the potential for Edge AI to penetrate sectors like agriculture and fast-moving consumer goods (FMCG) will expand significantly. In agriculture, drones equipped with Edge AI technology can identify ripe fruits in real time, optimising the harvesting process by ensuring that only the best quality produce is picked. This capability not only boosts productivity but also reduces waste by minimising the chances of overripe fruit being harvested. Similarly, in FMCG environments, such as bottling plants, Edge AI systems can inspect packaging at high speed, scanning up to 200 bottles per minute, ensuring quality control while maintaining rapid output. This technology can achieve sub-millisecond response times when paired with specialised neuromorphic cameras, although these devices remain costly.

Achieving these efficiencies requires a holistic approach that optimises the entire data collection and processing pipeline. This includes selecting appropriate capture devices, ensuring optimal lighting conditions, and mitigating environmental factors that could degrade data quality, such as dust or corrosive gases in manufacturing settings. Edge AI implementations require consistency across the engineering development cycle, ensuring that investments yield a rapid return while minimising operational disruptions.

To maximise success, companies should target applications where AI can achieve accuracy rates above 95%, ensuring that they address high-impact problems. Identifying these “low-hanging fruits” is crucial for realising quick returns on investment. Implementing Edge AI necessitates a comprehensive understanding of the entire process flow—from data collection to model inference—while also considering environmental factors that could affect sensor performance. As organisations navigate this complex landscape, maintaining an open mindset and exercising patience will be essential for fully harnessing the transformative potential of Edge AI, ultimately leading to improved productivity and operational excellence.

Author:
Biswajit Biswas, Chief Data Scientist, Tata Elxsi
 
  • Like
  • Love
  • Fire
Reactions: 21 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
If you’re short on time listen from 22min




 
  • Haha
  • Like
  • Love
Reactions: 19 users

manny100

Regular
Have a read of Wikipedia's description of NVIDIA, especially their history.
They did it as hard, if not harder, than we have.
For many years the unofficial company motto was "our company is only 30 days from going out of business". Huang routinely opened staff presentations with those words.
Just like NVIDIA, when we get our 1st deal it's game on. We are getting closer.
 
  • Like
  • Fire
  • Love
Reactions: 29 users

db1969oz

Regular
The share price today directly reflects the fact that I bought more yesterday at 26c!! Mutha trucka!!
 
  • Haha
  • Like
  • Sad
Reactions: 14 users