BRN Discussion Ongoing

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers ,

Potential sugar rush imminent...


Child Eating Candy Images – Browse 95,699 Stock Photos, Vectors, and Video  | Adobe Stock


XTX Index (S&P/ASX All Technology Index) UP +117.40 points, or 3.09%.

SHE'S PUMPING

Regards ,
Esq.
 
Last edited:
  • Like
  • Love
  • Wow
Reactions: 10 users

7für7

Top 20
Afternoon Chippers ,

Potential sugar rush imminent...


Child Eating Candy Images – Browse 95,699 Stock Photos, Vectors, and Video  | Adobe Stock
Thank god I stopped eating sugar years ago… do you have something made of xylitol? Stevia? Erythritol?
 
  • Haha
Reactions: 1 users

miaeffect

Oat latte lover
Afternoon Chippers ,

Potential sugar rush imminent...


Child Eating Candy Images – Browse 95,699 Stock Photos, Vectors, and Video  | Adobe Stock


XTX Index (S&P/ASX All Technology Index) UP +117.40 points, or 3.09%.

SHE'S PUMPING

Regards ,
Esq.
Do I get to see "green baby" + "beer" + "rocket" soon?!
 
  • Haha
  • Like
  • Fire
Reactions: 7 users

Diogenese

Top 20
I think I have found a new patent from 6 days ago!

METHODS AND SYSTEM FOR IMPROVED PROCESSING OF SEQUENTIAL DATA IN A NEURAL NETWORK

Abstract

Disclosed is a system that includes a processor configured to process data in a neural network and a memory associated with a primary flow path and at least one secondary flow path within the neural network. The primary flow path comprises one or more primary operators to process the data and the at least one secondary flow path is configured to pass the data to a combining operator by skipping the processing of the data over the primary flow path. The processor is configured to provide the primary flow path and the at least one secondary flow path with a primary sequence of data and a secondary sequence of data respectively such that the secondary sequence of data being time offset from the processed primary sequence of data.

View attachment 73027
Hi Aaron,

Good pick up. The ink is not yet dry.

The invention provides a parallel path to the output in a multi-layer NN for elements which have been classified in an earlier layer so they are not further processed (overfitting) in subsequent layers. Overfitting can cause NN hallucinations.

[0002] Convolutional neural networks (CNNs) are generally used for various tasks, such as segmentation and classification. Skip connections may be implemented within the CNNs in order to solve problems, such as overfitting and vanishing gradients and improve performance of the CNNs. Skip connections neural networks (skipNN) generally involve providing a pathway for some neural responses to bypass one or more convolution layers within the skipNN.
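For anyone curious what a skip connection actually looks like in code, here is a minimal sketch (my own illustration, not from the patent; a simple linear map stands in for a real convolution, and all names and shapes are made up):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def conv_layer(x, w):
    # Stand-in for a convolution layer: linear map + nonlinearity.
    return relu(x @ w)

def skip_block(x, w1, w2):
    # Primary flow path: the data is processed by two layers.
    primary = conv_layer(conv_layer(x, w1), w2)
    # Secondary flow path: the input bypasses both layers and is
    # recombined with the processed data at a combining operator.
    return primary + x

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8))
w2 = rng.standard_normal((8, 8))
out = skip_block(x, w1, w2)
print(out.shape)  # (1, 8)
```

The `primary + x` line is the whole trick: gradients and responses have a path around the convolution layers, which is what helps with vanishing gradients and overfitting.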
 
  • Like
  • Fire
  • Love
Reactions: 51 users

7für7

Top 20
Do I get to see "green baby" + "beer" + "rocket" soon?!
And also it’s kind of discriminatory, man… why is SHE pumping? I guess when it’s red he will write “he’s dumping” 🙄 Always the men are responsible when something goes wrong… thank you
 
  • Haha
Reactions: 4 users

toasty

Regular
Hi Aaron,

Good pick up. The ink is not yet dry.

The invention provides a parallel path to the output in a multi-layer NN for elements which have been classified in an earlier layer so they are not further processed (overfitting) in subsequent layers. Overfitting can cause NN hallucinations.

[0002] Convolutional neural networks (CNNs) are generally used for various tasks, such as segmentation and classification. Skip connections may be implemented within the CNNs in order to solve problems, such as overfitting and vanishing gradients and improve performance of the CNNs. Skip connections neural networks (skipNN) generally involve providing a pathway for some neural responses to bypass one or more convolution layers within the skipNN.
Yes, but what application might this be useful for??
 
  • Thinking
  • Like
Reactions: 2 users

Rach2512

Regular
Hi Aaron,

Good pick up. The ink is not yet dry.

The invention provides a parallel path to the output in a multi-layer NN for elements which have been classified in an earlier layer so they are not further processed (overfitting) in subsequent layers. Overfitting can cause NN hallucinations.

[0002] Convolutional neural networks (CNNs) are generally used for various tasks, such as segmentation and classification. Skip connections may be implemented within the CNNs in order to solve problems, such as overfitting and vanishing gradients and improve performance of the CNNs. Skip connections neural networks (skipNN) generally involve providing a pathway for some neural responses to bypass one or more convolution layers within the skipNN.


So on a scale of 1 to 10, 10 being the final nail in the coffin for any potential competitors, how good is it? Cos me no understand, I got lost on the first path. Thank you for your time, please and thank you.
 
  • Like
Reactions: 8 users

rgupta

Regular
Good morning,

For anyone to be second guessing their investment in our company, well, stop worrying...we are positioned better than many
of our so-called competitors, we have established a very strong patent perimeter, have actual 100% proven in silicon, have moved
way beyond our first generation NSoC, have been engaged with many leading (already named) companies for a number of years,
and most importantly, we have established a very trusting relationship within our eco-system partners, for keeping our mouth shut.

"More data is heading to the edge — with Gartner predicting more than 50 percent of enterprise managed data will be processed outside the data center or cloud by 2025. Instead, workloads and data will be run across several edge systems and locations."

Texas Instruments is a very interesting prospect, I wonder if Duy-Loan Le has opened any doors for us over the last 2 years ?

Have a positive day ahead.....regards......Tech :coffee:(y)
Hi Tech
We all hope the same way.
But the fact of the matter is that we have zero commercial revenue to the end of '24, and processing 50% of enterprise data at the edge is a huge change, not only for BrainChip but for the entire industry. Anyway, let us wait and watch, though it looks impossible to process 50% of enterprise data on the edge within a year. Up to now, no big player is talking about BrainChip at all.
Let us wait and watch.
DYOR
 
  • Like
  • Thinking
Reactions: 7 users

Diogenese

Top 20
So on a scale of 1 to 10, 10 being the final nail in the coffin for any potential competitors, how good is it? Cos me no understand, I got lost on the first path. Thank you for your time, please and thank you.
Hi Rach,

The patent is an improvement on a known NN technique called skip processing. If we consider an input image to a multi-layer NN to be made up of a number of distinct visual elements (e.g., bounding boxes (BB)), and input signals representing the whole image are fed to a NN, then different BBs may be classified at different layers of the NN. The invention is an improvement which prevents the duplication of features (hallucinations) in the classification/inference result by stopping the processing of input signals which relate to an element (BB) of the input signals when the element they represent has been classified at an intermediate layer of the NN. The identified element is supplied to the NN output directly from the intermediate layer, bypassing any subsequent NN layers. This stops the further processing of the BB data so it cannot become confused by adjacent portions of the image data. As well as avoiding hallucinations, this technique also reduces the amount of downstream classification operations and hence reduces the power usage at all the downstream layers as there are fewer "events/spikes" to be classified.

This patent relates to the synchronization of the skipped elements with the elements which passed through all the NN layers so the full image can be reassembled at the output.*

There are already techniques for skip processing in NNs, so this is an improvement on existing processes.

In isolation, I would probably rank this invention at 3 if Akida1 is 10 because it is an improvement on existing techniques. In combination with the full BRN patent portfolio, it is a significant improvement as it does improve both accuracy and power consumption. I'm guessing that it could be applied to other NNs in addition to Akida, which would further increase its licensing potential value.

I would rank Akida2, which includes TENNs, at 17+ on the 1 to 10 scale.

Pico also has high value in low power/battery/remote applications. Its value increases when used in conjunction with Akida2/TENNs.

TENNs on its own also ranks above Akida1, as it is able to be used as software (a new income generating product line) and brings the temporal element to both software and hardware. I think TeNNs is the basis of our newish algorithm product line.

*Synchronization is vital for video - recall the many-headed dog video?
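To make the early-exit part concrete, here is a toy sketch (my own illustration, not the patented method; the element names, scores, and threshold are all made up): each element runs through the layers only until it is classified, and we record the layer at which it left so it can later be delayed and recombined.

```python
def forward_with_early_exit(element_scores, threshold=0.9, num_layers=4):
    """element_scores maps element name -> list of per-layer confidence
    scores. An element exits at the first layer whose score clears the
    threshold and skips all subsequent layers; otherwise it runs
    through every layer."""
    exits = {}
    for name, scores in element_scores.items():
        exit_layer = num_layers - 1
        for k, s in enumerate(scores):
            if s >= threshold:
                exit_layer = k
                break
        exits[name] = exit_layer  # layer at which this element left the NN
    return exits

exits = forward_with_early_exit({
    "dog_head": [0.5, 0.95, 0.0, 0.0],    # classified at layer 1, skips the rest
    "background": [0.2, 0.4, 0.6, 0.93],  # needs all four layers
})
print(exits)  # {'dog_head': 1, 'background': 3}
```

The saved work is everything after the exit layer: the "dog_head" element here never touches layers 2 and 3, which is where the power saving comes from.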
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 76 users

Tothemoon24

Top 20
Hi Rach,

The patent is an improvement on a known NN technique called skip processing. If we consider an input image to a multi-layer NN to be made up of a number of distinct visual elements (e.g., bounding boxes (BB)), and input signals representing the whole image are fed to a NN, then different BBs may be classified at different layers of the NN. The invention is an improvement which prevents the duplication of features (hallucinations) in the classification/inference result by stopping the processing of input signals which relate to an element (BB) of the input signals when the element they represent has been classified at an intermediate layer of the NN. The identified element is supplied to the NN output directly from the intermediate layer, bypassing any subsequent NN layers. This stops the further processing of the BB data so it cannot become confused by adjacent portions of the image data. As well as avoiding hallucinations, this technique also reduces the amount of downstream classification operations and hence reduces the power usage at all the downstream layers as there are fewer "events/spikes" to be classified.

This patent relates to the synchronization of the skipped elements with the elements which passed through all the NN layers so the full image can be reassembled at the output.

There are already techniques for skip processing in NNs, so this is an improvement on existing processes.

In isolation, I would probably rank this invention at 3 if Akida1 is 10 because it is an improvement on existing techniques. In combination with the full BRN patent portfolio, it is a significant improvement as it does improve both accuracy and power consumption. I'm guessing that it could be applied to other NNs in addition to Akida, which would further increase its licensing potential value.

I would rank Akida2, which includes TENNs, at 17+ on the 1 to 10 scale.

Pico also has high value in low power/battery/remote applications. Its value increases when used in conjunction with Akida2/TENNs.

TENNs on its own also ranks above Akida1, as it is able to be used as software (a new income generating product line) and brings the temporal element to both software and hardware. I think TeNNs is the basis of our newish algorithm product line.
Dio, you're a TENNs out of TENNs
 
  • Like
  • Haha
  • Love
Reactions: 53 users

Rach2512

Regular
Hi Rach,

The patent is an improvement on a known NN technique called skip processing. If we consider an input image to a multi-layer NN to be made up of a number of distinct visual elements (e.g., bounding boxes (BB)), and input signals representing the whole image are fed to a NN, then different BBs may be classified at different layers of the NN. The invention is an improvement which prevents the duplication of features (hallucinations) in the classification/inference result by stopping the processing of input signals which relate to an element (BB) of the input signals when the element they represent has been classified at an intermediate layer of the NN. The identified element is supplied to the NN output directly from the intermediate layer, bypassing any subsequent NN layers. This stops the further processing of the BB data so it cannot become confused by adjacent portions of the image data. As well as avoiding hallucinations, this technique also reduces the amount of downstream classification operations and hence reduces the power usage at all the downstream layers as there are fewer "events/spikes" to be classified.

This patent relates to the synchronization of the skipped elements with the elements which passed through all the NN layers so the full image can be reassembled at the output.

There are already techniques for skip processing in NNs, so this is an improvement on existing processes.

In isolation, I would probably rank this invention at 3 if Akida1 is 10 because it is an improvement on existing techniques. In combination with the full BRN patent portfolio, it is a significant improvement as it does improve both accuracy and power consumption. I'm guessing that it could be applied to other NNs in addition to Akida, which would further increase its licensing potential value.

I would rank Akida2, which includes TENNs, at 17+ on the 1 to 10 scale.

Pico also has high value in low power/battery/remote applications. Its value increases when used in conjunction with Akida2/TENNs.

TENNs on its own also ranks above Akida1, as it is able to be used as software (a new income generating product line) and brings the temporal element to both software and hardware. I think TeNNs is the basis of our newish algorithm product line.


The 17+ sounds good to me, the rest of it I still don't understand I'm ashamed to say, but many thanks for taking the time, I'm sure you've helped many more here. 🙏
 
  • Like
Reactions: 19 users

HopalongPetrovski

I'm Spartacus!
Hi Rach,

The patent is an improvement on a known NN technique called skip processing. If we consider an input image to a multi-layer NN to be made up of a number of distinct visual elements (e.g., bounding boxes (BB)), and input signals representing the whole image are fed to a NN, then different BBs may be classified at different layers of the NN. The invention is an improvement which prevents the duplication of features (hallucinations) in the classification/inference result by stopping the processing of input signals which relate to an element (BB) of the input signals when the element they represent has been classified at an intermediate layer of the NN. The identified element is supplied to the NN output directly from the intermediate layer, bypassing any subsequent NN layers. This stops the further processing of the BB data so it cannot become confused by adjacent portions of the image data. As well as avoiding hallucinations, this technique also reduces the amount of downstream classification operations and hence reduces the power usage at all the downstream layers as there are fewer "events/spikes" to be classified.

This patent relates to the synchronization of the skipped elements with the elements which passed through all the NN layers so the full image can be reassembled at the output.*

There are already techniques for skip processing in NNs, so this is an improvement on existing processes.

In isolation, I would probably rank this invention at 3 if Akida1 is 10 because it is an improvement on existing techniques. In combination with the full BRN patent portfolio, it is a significant improvement as it does improve both accuracy and power consumption. I'm guessing that it could be applied to other NNs in addition to Akida, which would further increase its licensing potential value.

I would rank Akida2, which includes TENNs, at 17+ on the 1 to 10 scale.

Pico also has high value in low power/battery/remote applications. Its value increases when used in conjunction with Akida2/TENNs.

TENNs on its own also ranks above Akida1, as it is able to be used as software (a new income generating product line) and brings the temporal element to both software and hardware. I think TeNNs is the basis of our newish algorithm product line.

*Synchronization is vital for video - recall the many-headed dog video?


Our Mr Diogenese is the best boy around. 🤣🤣🤣
 
  • Like
  • Haha
  • Love
Reactions: 31 users

MrNick

Regular
Hi Aaron,

Good pick up. The ink is not yet dry.

The invention provides a parallel path to the output in a multi-layer NN for elements which have been classified in an earlier layer so they are not further processed (overfitting) in subsequent layers. Overfitting can cause NN hallucinations.

[0002] Convolutional neural networks (CNNs) are generally used for various tasks, such as segmentation and classification. Skip connections may be implemented within the CNNs in order to solve problems, such as overfitting and vanishing gradients and improve performance of the CNNs. Skip connections neural networks (skipNN) generally involve providing a pathway for some neural responses to bypass one or more convolution layers within the skipNN.
 
  • Haha
Reactions: 1 users
Afternoon Chippers ,

Potential sugar rush imminent...


Child Eating Candy Images – Browse 95,699 Stock Photos, Vectors, and Video  | Adobe Stock


XTX Index (S&P/ASX All Technology Index) UP +117.40 points, or 3.09%.

SHE'S PUMPING

Regards ,
Esq.
 
  • Haha
Reactions: 2 users
Hi Rach,

The patent is an improvement on a known NN technique called skip processing. If we consider an input image to a multi-layer NN to be made up of a number of distinct visual elements (e.g., bounding boxes (BB)), and input signals representing the whole image are fed to a NN, then different BBs may be classified at different layers of the NN. The invention is an improvement which prevents the duplication of features (hallucinations) in the classification/inference result by stopping the processing of input signals which relate to an element (BB) of the input signals when the element they represent has been classified at an intermediate layer of the NN. The identified element is supplied to the NN output directly from the intermediate layer, bypassing any subsequent NN layers. This stops the further processing of the BB data so it cannot become confused by adjacent portions of the image data. As well as avoiding hallucinations, this technique also reduces the amount of downstream classification operations and hence reduces the power usage at all the downstream layers as there are fewer "events/spikes" to be classified.

This patent relates to the synchronization of the skipped elements with the elements which passed through all the NN layers so the full image can be reassembled at the output.*

There are already techniques for skip processing in NNs, so this is an improvement on existing processes.

In isolation, I would probably rank this invention at 3 if Akida1 is 10 because it is an improvement on existing techniques. In combination with the full BRN patent portfolio, it is a significant improvement as it does improve both accuracy and power consumption. I'm guessing that it could be applied to other NNs in addition to Akida, which would further increase its licensing potential value.

I would rank Akida2, which includes TENNs, at 17+ on the 1 to 10 scale.

Pico also has high value in low power/battery/remote applications. Its value increases when used in conjunction with Akida2/TENNs.

TENNs on its own also ranks above Akida1, as it is able to be used as software (a new income generating product line) and brings the temporal element to both software and hardware. I think TeNNs is the basis of our newish algorithm product line.

*Synchronization is vital for video - recall the many-headed dog video?
Just trying to understand which types of applications would benefit most from it. Do you think it could play a role in combining event-based and classical image sensors?
 
  • Like
Reactions: 3 users

Diogenese

Top 20
Just trying to understand which types of applications would benefit most from it. Do you think it could play a role in combining event-based and classical image sensors?
Hi CMF,

Skip is used to stop "overfitting", ie, reprocessing data which has already been classified in an earlier layer of the multi-layer NN.

The classified bounding box (BB) is passed to the output, bypassing the subsequent NN layers. This means it arrives at the output before the other data captured at the same time. This is what leads to the hallucinations where segments from different times are combined. Thus it is necessary to delay the arrival of the early classified BB until its companion input data has passed through the whole NN.

With several layers, different BBs can be classified at different layers, so there can be several different arrival times, which means that different delays have to be applied to different BBs depending on the layer at which they were classified.

With video images, this would be particularly important to get all the contemporaneous input bits arriving at the output at the same time.

So I would think that the primary application would be for video.

However, as you suggest, the combined conventional movie/video frame camera and DVS or lidar applications would also benefit from this technique. Lidar has quite a slow frame rate whereas DVS has a very high "equivalent" frame rate.
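The delay bookkeeping can be sketched in a few lines (my own toy illustration, not the patent's mechanism; layer count and element names are made up): an element that exits at layer k must be held back for exactly the number of layers it skipped so everything from one input frame reaches the combining operator together.

```python
NUM_LAYERS = 4

def required_delay(exit_layer, num_layers=NUM_LAYERS):
    # An element that exits at layer k has skipped the remaining
    # num_layers - 1 - k layers, so it must wait that many steps
    # for its companion elements to catch up.
    return num_layers - 1 - exit_layer

def reassemble(exits, num_layers=NUM_LAYERS):
    """exits maps element name -> layer index at which it was
    classified. Returns the per-element delay needed so all
    elements of one input arrive at the output simultaneously."""
    return {name: required_delay(k, num_layers) for name, k in exits.items()}

delays = reassemble({"dog_head": 1, "background": 3})
print(delays)  # {'dog_head': 2, 'background': 0}
```

With several skip points, each element gets its own delay; elements that ran the full network need none, which is what keeps contemporaneous video data aligned at the output.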
 
  • Like
  • Fire
  • Love
Reactions: 31 users
Nice, these guys have chosen to single out BRN Akida as "not strictly a sensor"... but hey, let's include it anyway ;)

They have a pretty extensive partner ecosystem too.


About

Altium

Our software tools empower and connect PCB designers, part suppliers and manufacturers to develop and manufacture electronics products faster and more efficiently than ever before.

With our new cloud platform Altium 365 and its productivity apps, and Octopart, our component search engine, Altium’s industry-leading electronics design solutions are accelerating innovation by enabling seamless collaboration across the entire electronics creation process.



10 Sensor Technologies Making Waves in 2025

Adam J. Fleischer | Created: November 18, 2024
Sensor Technologies

The sensor revolution isn't just knocking on our door – it's already picked the lock and made itself at home. IoT devices are multiplying like rabbits, AI is getting smarter by the minute, and the push for sustainability is changing how we approach electronic design. These forces are converging to create a massive wave of sensor innovation.
Gone are the days when sensors were just simple input devices. Today, they're our increasingly connected world's eyes, ears, and nervous system. As an electronic engineer or designer, you're standing at the forefront of a sensor revolution that promises to unleash the next generation of electronic innovation.

Sensing the Future

We're living in a world where cars can see better than humans, your watch knows you're getting sick before you do, and factories can predict and prevent breakdowns before they happen. From autonomous vehicles to personalized healthcare, sensors are powering innovation across sectors. Staying ahead of the curve in sensor technology is essential for those looking to succeed in our rapidly changing industry.
With that in mind, let's take a look at ten types of sensors that will be making waves in 2025:


Excerpt:

3. Neuromorphic Sensors: Teaching Old Sensors New Tricks

Neuromorphic sensors are the brainiacs of the sensor world. Designed to mimic the structure and function of biological neural networks, these sensors process information in ways that are eerily similar to the human brain. The result? Sensors that can learn, adapt, and make decisions on the fly.

Neuromorphic sensors are expected to play an increasingly important role in advanced AI systems, potentially enabling more efficient and intelligent data processing at the edge. While not strictly a sensor, BrainChip's Akida neural network processor chip can be integrated with various sensors to enable neuromorphic processing of sensor data.
 
  • Like
  • Fire
  • Love
Reactions: 50 users

Esq.111

Fascinatingly Intuitive.
Nice, these guys have chosen to single out BRN Akida as "not strictly a sensor"... but hey, let's include it anyway ;)

They have a pretty extensive partner ecosystem too.


About

Altium

Our software tools empower and connect PCB designers, part suppliers and manufacturers to develop and manufacture electronics products faster and more efficiently than ever before.

With our new cloud platform Altium 365 and its productivity apps, and Octopart, our component search engine, Altium’s industry-leading electronics design solutions are accelerating innovation by enabling seamless collaboration across the entire electronics creation process.



10 Sensor Technologies Making Waves in 2025

Adam J. Fleischer | Created: November 18, 2024
Sensor Technologies

The sensor revolution isn't just knocking on our door – it's already picked the lock and made itself at home. IoT devices are multiplying like rabbits, AI is getting smarter by the minute, and the push for sustainability is changing how we approach electronic design. These forces are converging to create a massive wave of sensor innovation.
Gone are the days when sensors were just simple input devices. Today, they're our increasingly connected world's eyes, ears, and nervous system. As an electronic engineer or designer, you're standing at the forefront of a sensor revolution that promises to unleash the next generation of electronic innovation.

Sensing the Future

We're living in a world where cars can see better than humans, your watch knows you're getting sick before you do, and factories can predict and prevent breakdowns before they happen. From autonomous vehicles to personalized healthcare, sensors are powering innovation across sectors. Staying ahead of the curve in sensor technology is essential for those looking to succeed in our rapidly changing industry.
With that in mind, let's take a look at ten types of sensors that will be making waves in 2025:


Excerpt:

3. Neuromorphic Sensors: Teaching Old Sensors New Tricks

Neuromorphic sensors are the brainiacs of the sensor world. Designed to mimic the structure and function of biological neural networks, these sensors process information in ways that are eerily similar to the human brain. The result? Sensors that can learn, adapt, and make decisions on the fly.

Neuromorphic sensors are expected to play an increasingly important role in advanced AI systems, potentially enabling more efficient and intelligent data processing at the edge. While not strictly a sensor, BrainChip's Akida neural network processor chip can be integrated with various sensors to enable neuromorphic processing of sensor data.

Afternoon Fullmoonfever ,



Interesting.

Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 14 users
Hey ESQ

Thanks, didn't know that.

Wonder if they know of us through Renesas then or just through the industry in general.

I noticed that Renesas have been using Autobrains for some of their AI work (like on the R-Car V3H) but haven't found anything on Akida as yet.

Be good still if Renesas offered Akida through Altium in some way too.
 
  • Like
  • Fire
Reactions: 11 users
Another write up that includes BRN...

 
  • Like
  • Fire
  • Love
Reactions: 25 users