BRN Discussion Ongoing

Diogenese

Top 20
From what I can tell based on my 5 minutes worth of poking around the source code of that page, each item in the result set receives a score. When you sort by 'relevance', Brainchip comes up top because it has the highest 'score'.

The site is using a service called Coveo Search, as part of the Sitecore CMS. According to Coveo documentation, the relevance score is a combination of the index ranking algorithm in action during the index ranking phases, and other relevance modifiers such as query ranking expressions (QRE) and query ranking functions.

While the index ranking is supposedly algorithmic, I am not sure what it is actually indexing in order to produce a score.

In any case - we are obviously very 'relevant' :cool:
Top notch sleuthing Wilzy,

So does that mean that, each time the 1000 eyes searches the ARM partners page for "BrainChip", our score goes up?

Just one little daily chore for the 1000 eyes ...
 

wilzy123

Founding Member
So does that mean that, each time the 1000 eyes searches the ARM partners page for "BrainChip", our score goes up?

Haha, maybe. It all depends on what 'ranking factors' are used in determining a relevance score as part of that particular search engine.

Actual use of the page might be one - but I cannot see that far into their configuration.

We could see if the score changes over time. BRN has a score of 3732, while the 'next best result' (NXP) has a score of 2761. All of this information is visible from within the response to the AJAX request that the page makes each time it looks for results: https://www.arm.com/coveo/rest/search/v2?sitecoreItemUri=sitecore://web/{7502997D-E820-488F-8264-B8BA0B39CBA6}?lang=en&ver=3&siteName=arm-redesign-website

The score will likely change each time the pool gets re-indexed (i.e. to keep the 'relevance' fresh) - I suspect this is done maybe once a week for a site like this that doesn't change that much.
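If anyone wants to track it, here is a rough sketch of how you could poll that endpoint and log the scores over time. The field names ('results', 'score') and the 'q' parameter are assumptions based on how Coveo's search API typically responds - the real page may send extra headers or body parameters, which you would need to copy from the browser's network tab.

# Rough sketch only: poll the ARM/Coveo endpoint and log each result's relevance score.
# Assumed (not verified): a plain POST with a 'q' parameter works, and the JSON response
# contains a 'results' list where each item carries 'title' and 'score'.
import time
import requests

URL = ("https://www.arm.com/coveo/rest/search/v2"
       "?sitecoreItemUri=sitecore://web/{7502997D-E820-488F-8264-B8BA0B39CBA6}"
       "?lang=en&ver=3&siteName=arm-redesign-website")

def fetch_scores(query="BrainChip"):
    resp = requests.post(URL, data={"q": query}, timeout=30)
    resp.raise_for_status()
    return {r.get("title"): r.get("score") for r in resp.json().get("results", [])}

if __name__ == "__main__":
    while True:
        print(time.strftime("%Y-%m-%d %H:%M"), fetch_scores())
        time.sleep(24 * 60 * 60)  # once a day is plenty if re-indexing is roughly weekly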
 
The new black in computing, at least according to this lengthy article which I have scrolled through, on Earth and in Space, is ARD - Analysis Ready Data:


It is worth having just a little peek at it, if for no other reason than to realise that there are actually people in this world whose brains work in these mysterious ways.

However, once you master the convoluted writing style, effectively what they are trying to say is that sensors need to be made smart, in five million words or less, with about 500 supporting references.

My takeaway is that:

1. Valeo Brainchip Scala AKIDA Lidar with 3D point cloud would meet ARD;

2. Prophesee Brainchip powered event-based vision sensors would meet ARD; and

3. Nviso AKIDA Brainchip powered human monitoring apps for ADAS and medical applications would meet ARD.

All of which reminds me of a song from my youth:

‘What the world needs now is AKIDA ARD, it’s the only thing that there is just too little of….’ - Sorry, Burt.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Diogenese

Top 20
Haha, maybe. It all depends on what 'ranking factors' are used in determining a relevance score as part of that particular search engine.

Actual use of the page might be one - but I cannot see that far into their configuration.

We could see if the score changes over time. BRN has a score of 3732, while the 'next best result' (NXP) has a score of 2761. All of this information is visible from within the response to the AJAX request that the page makes each time it looks for results: https://www.arm.com/coveo/rest/search/v2?sitecoreItemUri=sitecore://web/{7502997D-E820-488F-8264-B8BA0B39CBA6}?lang=en&ver=3&siteName=arm-redesign-website
Never have I been so bethumped with words ... he cudgels our ears ... he gives the bastinado with his tongue ...
 
Haha, maybe. It all depends on what 'ranking factors' are used in determining a relevance score as part of that particular search engine.

Actual use of the page might be one - but I cannot see that far into their configuration.

We could see if the score changes over time. BRN has a score of 3732, while the 'next best result' (NXP) has a score of 2761. All of this information is visible from within the response to the AJAX request that the page makes each time it looks for results: https://www.arm.com/coveo/rest/search/v2?sitecoreItemUri=sitecore://web/{7502997D-E820-488F-8264-B8BA0B39CBA6}?lang=en&ver=3&siteName=arm-redesign-website

The score will likely change each time the pool gets re-indexed (i.e. to keep the 'relevance' fresh) - I suspect this is done maybe once a week for a site like this that doesn't change that much.
I have no idea really but on the ARM website:

1. to be rating close to 1,000 points higher

2. than the next highest ARM partner

3. when that partner is NXP which according to Wiki “is a Dutch semiconductor designer and manufacturer with headquarters in Eindhoven, Netherlands.

A company that employs approximately 31,000 people in more than 30 countries.

With reported revenue of $11.06 billion in 2021.

Traded as: Nasdaq: NXPI; NASDAQ-100 component; S&P 500 component

Revenue: US$11.063 billion (2021)

4. IS THIS NOT JUST LIKE THE HUGESTEST PIECE OF FACTUAL INFORMATION LIKE FOREVER, and,

Perhaps why the CEO Sean Hehir in the last 4C Report stated:

“We are seeing the greatest amount of sales activity and engagement in the Company’s history”

My opinion only DYOR
FF

AKIDA BALLISTA
 
Maybe just a little thing and maybe already known. I noticed that we appear at the top of the Arm partners page when sorted by relevance.
How should this be evaluated? Where I live, that means a lot!
And if it is the hugestest ever FACT then we have @Baneino to thank for kicking off the 1,000 Eye research machine.

My opinion only DYOR
FF

AKIDA BALLISTA
 

chapman89

Founding Member
I have no idea really but on the ARM website:

1. to be rating close to 1,000 points higher

2. than the next highest ARM partner

3. when that partner is NXP which according to Wiki “is a Dutch semiconductor designer and manufacturer with headquarters in Eindhoven, Netherlands.

A company that employs approximately 31,000 people in more than 30 countries.

With reported revenue of $11.06 billion in 2021.

Traded as: Nasdaq: NXPI; NASDAQ-100 component; S&P 500 component

Revenue: US$11.063 billion (2021)

4. IS THIS NOT JUST LIKE THE HUGESTEST PIECE OF FACTUAL INFORMATION LIKE FOREVER, and,

Perhaps why the CEO Sean Hehir in the last 4C Report stated:

“We are seeing the greatest amount of sales activity and engagement in the Company’s history”

My opinion only DYOR
FF

AKIDA BALLISTA
Without even typing anything in the search bar we sit at the top as well….5 above the largest company in the world…APPLE 👏
 



goodvibes

Regular
Who can check this…DNN…new competitor?


 
Who can check this…DNN…new competitor?


Not yet.

It only works with CNN models, is purely an accelerator, and has the following issues:

“Key takeaways:

The results in this subsection imply that it is important to use the minimum number of groups with consecutive processors for runtime performance optimization.

While the latency overhead seems small in absolute terms (≈ 200 µs), it adds up quickly for models with many layers and results in significant penalties in terms of inference time, and consequently energy consumption.

The optimal processor placement is still an open problem given that automatic tools are not provided.

We leave this for future work.

5 CONCLUSION
In this paper, we conducted a variety of benchmark studies to characterize the resource and performance of the ultra-low power DNN accelerator, MAX78000.

First, we analyzed the operational latency, power consumption, and memory footprint of five DNN models with various sizes and architecture.

Second, we further investigated the system implications in terms of the model architecture and convolutional processor selection in order to maximize the acceleration.

Beyond the numbers, our benchmark study further offers meaningful insights for the development of on-device AI systems on ultra-low power, tiny-scale AI accelerators”

My opinion only DYOR
FF

AKIDA BALLISTA
 
Without even typing anything in the search bar we sit at the top as well….5 above the largest company in the world…APPLE 👏
Just like every other verification engineer I have ever known. Bone idle, can't even be bothered to type into a search bar. No wonder you can't get a real job when you are happy to be just five places above APPLE. 😂🤣🤡🤣😂🤣😎
 

Diogenese

Top 20
Who can check this…DNN…new competitor?




US2021216868A1 SYSTEMS AND METHODS FOR REDUCING MEMORY REQUIREMENTS IN NEURAL NETWORKS






9 . A system for processing large amounts of neural network data, the system comprising:
a processor; and
a non-transitory computer-readable medium comprising instructions that, when executed by the processor, cause steps to be performed, the steps comprising:
determining one or more active layers in a neural network;
using the one or more active layers to process a subset of a set of input data of a first neural network layer, the subset having a data size that is substantially less than the size of the set of input data;
outputting a first set of output data from the first network layer;
using first set of output data in a second neural network layer; and
outputting a second set of output data from the second network layer prior to processing all of the set of input data.

Looks like an attempt to claim quasi-asynchronous processing of CNN data on a CPU/GPU.
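As a rough illustration only (not the patent's actual implementation), this is the shape of what that claim seems to describe: push small subsets of the input through two stand-in layers, so that second-layer output starts coming out before the full input set has been processed. The layer functions and chunk size below are made up.

# Illustration of the claimed idea, not the patentee's implementation: stream input
# subsets through two layers so layer-2 output is emitted before all input is processed.
import numpy as np

def layer1(chunk):
    return chunk * 2.0          # stand-in for a real first-layer computation

def layer2(chunk):
    return chunk + 1.0          # stand-in for the second layer

def streamed_inference(full_input, chunk_size=4):
    for start in range(0, len(full_input), chunk_size):
        subset = full_input[start:start + chunk_size]  # "substantially less" than the whole set
        out1 = layer1(subset)                          # first set of output data
        yield layer2(out1)                             # second-layer output, emitted early

data = np.arange(16, dtype=np.float32)
for partial in streamed_inference(data):
    print(partial)              # results appear before all 16 inputs have been consumed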
 

HopalongPetrovski

I'm Spartacus!
US2021216868A1 SYSTEMS AND METHODS FOR REDUCING MEMORY REQUIREMENTS IN NEURAL NETWORKS





9 . A system for processing large amounts of neural network data, the system comprising:
a processor; and
a non-transitory computer-readable medium comprising instructions that, when executed by the processor, cause steps to be performed, the steps comprising:
determining one or more active layers in a neural network;
using the one or more active layers to process a subset of a set of input data of a first neural network layer, the subset having a data size that is substantially less than the size of the set of input data;
outputting a first set of output data from the first network layer;
using first set of output data in a second neural network layer; and
outputting a second set of output data from the second network layer prior to processing all of the set of input data.

Looks like an attempt to claim quasi-asynchronous processing of CNN data on a CPU/GPU.



Yeah......that's what I thought too........🤣

 
US2021216868A1 SYSTEMS AND METHODS FOR REDUCING MEMORY REQUIREMENTS IN NEURAL NETWORKS





9 . A system for processing large amounts of neural network data, the system comprising:
a processor; and
a non-transitory computer-readable medium comprising instructions that, when executed by the processor, cause steps to be performed, the steps comprising:
determining one or more active layers in a neural network;
using the one or more active layers to process a subset of a set of input data of a first neural network layer, the subset having a data size that is substantially less than the size of the set of input data;
outputting a first set of output data from the first network layer;
using first set of output data in a second neural network layer; and
outputting a second set of output data from the second network layer prior to processing all of the set of input data.

Looks like an attempt to claim quasi-asynchronous processing of CNN data on a CPU/GPU.
Is that what I said too? It’s what I meant to say, if only I had had the words, the training, and the intellect. 😁🤡😁🤓
 
Just looking at WBT (Weebit Nano).
So just looking at their SP over the last few years, just to see how they went on their journey.
A very similar tale to Brainchip:
from highs of $2.20 down to $0.37 over a year, and then up and up.
But I think we will crush their SP over the next 12 months,
and then from there it skyrockets in all directions like World War 3.
 

HopalongPetrovski

I'm Spartacus!
Just looking at WBT (Weebit Nano).
So just looking at their SP over the last few years, just to see how they went on their journey.
A very similar tale to Brainchip:
from highs of $2.20 down to $0.37 over a year, and then up and up.
But I think we will crush their SP over the next 12 months,
and then from there it skyrockets in all directions like World War 3.
Don't mention the war!

 

skutza

Regular
Just looking at WBT (Weebit Nano).
So just looking at their SP over the last few years, just to see how they went on their journey.
A very similar tale to Brainchip:
from highs of $2.20 down to $0.37 over a year, and then up and up.
But I think we will crush their SP over the next 12 months,
and then from there it skyrockets in all directions like World War 3.
I think you're forgetting the fact that they did a 25-1 consolidation. Really, WBT has gone from 2.9c to 12c. So not even close. If BRN did the same, we would have a share price of $15.50 (roughly 62c × 25 - if my numbers are correct....).
 
And the proposed uses just keep lining up:

“3 CONCLUSION
We present a framework for symbolic computing with spiking neurons based on KGs and graph embedding algorithms. Compared to previous approaches based on semantic pointers [8], our method allows the learning of semantically meaningful, low-dimensional and purely spike-based representations of graph elements. Due to the differentiability of our approach, such semantic representations can be trained end-to-end in unison with other differentiable models or architectures, such as multi-layer or recurrent SNNs, for specific use-cases, as demonstrated with our spiking GNN. In addition, by changing the scoring function (Eq. 2), alternative spike-based embedding schemes can be devised in future work that leverage gradient-based optimization in SNNs – similarly to how a variety of embedding schemes exist in the graph embedding literature [27].
The combination of symbolic data and spike-based computing is of particular interest for emerging neuromorphic technologies [11], as it bears the potential of opening up new data formats and applications. Thus, we are convinced that our work constitutes an important step towards not only enabling, but also exploring neuro-symbolic reasoning with spiking systems.
ACKNOWLEDGMENTS
We thank Serghei Mogoreanu, Alexander Hadjiivanov and Gabriele Meoni for helpful discussions. We further thank our colleagues at the Semantics and Reasoning Research Group, the Siemens AI Lab and ESA’s Advanced Concepts Team for their support. This work was partially funded by the Federal Ministry for Economic Affairs and Energy of Germany (IIP-Ecosphere Project) and by the German Federal Ministry for Education and Research (“MLWin”, grant 01IS18050). DD acknowledges support through the European Space Agency fellowship programme.”
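(For anyone wondering what a graph-embedding 'scoring function' actually does, here is a minimal non-spiking sketch in the classic TransE style. The paper's Eq. 2 is a spike-based variant we don't have in front of us, so treat this purely as an illustration of the general idea - the entities, relation and dimensions below are made up.)

# TransE-style scoring for a knowledge-graph triple (head, relation, tail).
# Higher (less negative) score = the triple is judged more plausible.
import numpy as np

rng = np.random.default_rng(0)
dim = 8                                                     # embedding size (made up)
entities = {"BrainChip": rng.normal(size=dim), "Akida": rng.normal(size=dim)}
relations = {"develops": rng.normal(size=dim)}

def score(head, relation, tail):
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

print(score("BrainChip", "develops", "Akida"))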


What was it about AKIDA outperforming GPUs???😎

My opinion only DYOR
FF

AKIDA BALLISTA
 

miaeffect

Oat latte lover
And the proposed uses just keep lining up:

“3 CONCLUSION
We present a framework for symbolic computing with spiking neurons based on KGs and graph embedding algorithms. Compared to previous approaches based on semantic pointers [8], our method allows the learning of semantically meaningful, low-dimensional and purely spike-based representations of graph elements. Due to the differentiability of our approach, such semantic representations can be trained end-to-end in unison with other differentiable models or architectures, such as multi-layer or recurrent SNNs, for specific use-cases, as demonstrated with our spiking GNN. In addition, by changing the scoring function (Eq. 2), alternative spike-based embedding schemes can be devised in future work that leverage gradient-based optimization in SNNs – similarly to how a variety of embedding schemes exist in the graph embedding literature [27].
The combination of symbolic data and spike-based computing is of particular interest for emerging neuromorphic technologies [11], as it bears the potential of opening up new data formats and applications. Thus, we are convinced that our work constitutes an important step towards not only enabling, but also exploring neuro-symbolic reasoning with spiking systems.
ACKNOWLEDGMENTS
We thank Serghei Mogoreanu, Alexander Hadjiivanov and Gabriele Meoni for helpful discussions. We further thank our colleagues at the Semantics and Reasoning Research Group, the Siemens AI Lab and ESA’s Advanced Concepts Team for their support. This work was partially funded by the Federal Ministry for Economic Affairs and Energy of Germany (IIP-Ecosphere Project) and by the German Federal Ministry for Education and Research (“MLWin”, grant 01IS18050). DD acknowledges support through the European Space Agency fellowship programme.”


What was it about AKIDA outperforming GPUs???😎

My opinion only DYOR
FF

AKIDA BALLISTA


FF is so energetic today. Too much Milo in your blood.
 