BRN Discussion Ongoing

  • Like
Reactions: 6 users

mrgds

Regular
Top Gear made a video of the new Mercedes hypercar. Here's the link.


Thanks @mkg6R for posting,
$2.4 mil ................ ?????????? WTF :eek::eek::eek::eek:
Give me 12 of these for the same price .............. 🥰(y)🥰(y) ............... any day.
 
  • Like
  • Fire
Reactions: 5 users

JoMo68

Regular

Gotta love old mate Aaron.
 
  • Like
  • Love
Reactions: 10 users

Got to love Tim 😂
 
  • Like
  • Love
  • Fire
Reactions: 45 users

skutza

Regular
MegaChips are well on board now 😀
 
  • Like
  • Fire
  • Love
Reactions: 51 users

FJ-215

Regular
One of the big stories overnight was Sheryl Sandberg leaving Meta to focus on philanthropy. If she has any spare time up her sleeve, there are a few board positions becoming available at BRN soon. 😁😁
 
  • Like
  • Haha
Reactions: 19 users
I recently posted about the lead AKIDA had over Loihi 2. To achieve what these researchers have with Loihi 1, they needed 22 Loihi 1 chips, and they expect to improve these results with Loihi 2.

As you read this article, remember AKIDA 1 is already ahead of Loihi 2, and AKIDA 2, with LSTM, is about to be released on the world.

AKIDA replacing the GPU: now there is a market whose value we did not estimate, but Peter did. 😂🤣😂

My opinion only DYOR
FF

AKIDA BALLISTA

Artificial Neurons Could be 16 Times More Efficient Than GPUs​

Austrian researchers are making the processing of data sequences more efficient by optimizing short-term memory.​


Photo by Josh Riemer on Unsplash
Neuromorphic computers should make artificial intelligence more efficient by replicating real neurons. However, processing sequential data, such as sentences, normally requires additional circuitry, which negates the efficiency advantage. Researchers at the Graz University of Technology were able to show that these circuits are not required. Their results were published in Nature Machine Intelligence; a preliminary version is available free of charge.
Instead of large matrices, neuromorphic computers use artificial neurons. This goes hand in hand with a different network model, the spiking neural network (SNN). Here the neurons are inactive most of the time and only generate a short current impulse when their action potential is reached. Only when a neuron triggers an impulse do the potentials of the neurons connected to it have to be recalculated, which saves energy.
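
To make that event-driven idea concrete, here is a minimal toy sketch (my own illustration, not BrainChip's or Intel's code): the layer only does work for inputs that actually spiked, so a mostly silent network performs very few operations.

```python
import numpy as np

def lif_step(spikes_in, weights, v, threshold=1.0, leak=0.9):
    """One event-driven step of a toy leaky integrate-and-fire layer.
    Only the rows of `weights` whose inputs spiked are touched."""
    v *= leak                                  # passive decay of all membrane potentials
    active = np.flatnonzero(spikes_in)         # indices of inputs that fired this step
    if active.size:                            # work scales with the number of spike events
        v += weights[active].sum(axis=0)       # accumulate only the synapses that were driven
    fired = v >= threshold                     # neurons whose action potential is reached
    v[fired] = 0.0                             # reset the neurons that emitted a spike
    return fired.astype(np.int8), v

# toy usage: 100 inputs, 10 neurons, roughly 5% of inputs spiking this step
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=(100, 10))
v = np.zeros(10)
spikes_in = (rng.random(100) < 0.05).astype(np.int8)
spikes_out, v = lif_step(spikes_in, weights, v)
```
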
However, large neural networks that process sequences typically rely on long short-term memory (LSTM); only with it do such tasks become manageable at all. Neuromorphic computers such as Intel's Loihi, which the researchers used, normally need dedicated memory circuitry for this purpose.

Even more copied from the brain

This is not particularly efficient, which is why the researchers used a different approach, also inspired by real neurons. Real neurons, after firing once, are less excitable for a time. According to Wolfgang Maass, a professor at TU Graz, this is believed to be the basis of short-term memory in the real brain. With the artificial neurons, the lower sensitivity is achieved via a small current that counteracts the incoming activating current. It decreases over time, and this function is easy to implement on the Loihi chip. The neuron thus briefly remembers what happened in the past.
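
A rough sketch of how such a decaying counter-current might look in the same toy setting (illustrative parameter values of my own, not the paper's): every spike adds to an adaptation variable that opposes the input and fades over time, so a recently active neuron is briefly harder to excite, which is the short-term memory effect described above.

```python
import numpy as np

def adaptive_lif_step(i_in, v, a, threshold=1.0, leak=0.9,
                      adapt_decay=0.95, adapt_jump=0.5):
    """One step of a toy adaptive leaky integrate-and-fire layer.
    `a` is a slowly decaying current that counteracts the input,
    so neurons that fired recently are less excitable for a while."""
    a = a * adapt_decay              # the counter-current fades over time
    v = leak * v + i_in - a          # input is partly cancelled by the adaptation current
    fired = v >= threshold
    v[fired] = 0.0                   # reset neurons that spiked
    a[fired] += adapt_jump           # spiking raises the counter-current (short-term memory)
    return fired.astype(np.int8), v, a
```
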
The researchers tested their approach with the bAbI data set, in which the network reads a sequence of sentences and then has to answer a question about their content. To do this, they converted an existing network into an SNN and pitted it against the original, which ran on an RTX 2070 Super. The neuromorphic implementation was up to 16 times more efficient, but only for the smallest task with two sentences. If the network had to remember 20 sentences, the Loihi chips were only four times more efficient.

Even more efficient with a new chip​

The more sentences the model had to process, the smaller the gain in efficiency. The reason is that more artificial neurons, and thus more of the 32 Loihi chips on the Nahuku board used, are required. The network used to process a sequence of 20 sentences needed 22 chips. The researchers are hoping for further increases in efficiency from the successor to the Loihi chip they used: since it contains eight times as many artificial neurons, communication between the individual chips can largely be eliminated.
 
  • Like
  • Fire
  • Wow
Reactions: 43 users
In 2020, the global graphics processing unit (GPU) market was valued at 25.41 billion U.S. dollars, with forecasts suggesting that by 2028 this is likely to rise to 246.51 billion U.S. dollars, growing at a compound annual growth rate (CAGR) of 32.82 percent from 2021 to 2028.

Source: GPU market size worldwide 2020-2028 | Statista (https://www.statista.com)


In 2028 the Global GPU market will be worth US 246.51 billion dollars.

In Brainchip we trust but importantly so do:

1. Renesas
2. MegaChips
3. Valeo
4. Mercedes Benz
5. NASA

1% of the Global GPU market in 2028 is US 2.47 billion dollars.

0.5% of the Global GPU market in 2028 is US 1.23 billion dollars.

0.25% of the Global GPU market in 2028 is approximately US 616 million dollars.
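
For what it is worth, a quick back-of-the-envelope check of those figures (my own arithmetic on the Statista numbers quoted above):

```python
# Statista figures quoted above: US$25.41 B in 2020, 32.82% CAGR over 2021-2028
market_2028 = 25.41e9 * (1 + 0.3282) ** 8
print(f"2028 market: US${market_2028 / 1e9:.1f} B")       # ~US$246 B, matching the forecast

for share in (0.01, 0.005, 0.0025):
    print(f"{share:.2%} share: US${share * 246.51e9 / 1e9:.2f} B")
# 1.00% -> US$2.47 B, 0.50% -> US$1.23 B, 0.25% -> US$0.62 B
```
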

It seems like I am being ridiculous suggesting Brainchip will only capture a quarter of one percent of this market.

What did Nviso say? 1,000 frames per second with a few tweaks.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 52 users

Boab

I wish I could paint like Vincent
(Quoting FF's GPU market post above.)

I'd be more than happy if our SP could have a CAGR similar to the above prediction but I believe it will be much greater than that.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

M_C

Founding Member
Gidday @MC🐠

And a nice nugget it may be...
It says in two places that it uses "an IP licensed by Panasonic", which sounds great.
Do you think an NDA can still cover this up, or, given that they have publicly said "an IP licensed by Panasonic", should we have had an announcement if it's Akida?
Interested in your thoughts, cheers
Hey Macca,

Sorry for the delayed response to your question. *IF* it is AKIDA IP, then it will need to be disclosed once it becomes material in nature (my understanding and interpretation). It's a pretty grey area which can be argued either way, so it's hard to say when we would see an ann regarding it (if it isn't through one of our existing licensees, e.g. Renesas / MegaChips).

The article states that the evaluation kit is being presented between the 1st and 3rd of June and starts shipping in June, so...

Could we see an ann about it in June? Possibly. OR we may have to wait for Panasonic to pay Socionext before it becomes "material" in the eyes of the company.

OR we may only find out when we get revenue...

OR... it may simply not be our IP, although I am reasonably confident in our chances involving Socionext... and Panasonic for that matter.

Your guess is as good as mine, just my thoughts
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Mccabe84

Regular
(Quoting FF's GPU market post above.)
I guess I might have to buy some more shares today so I can retire early 😁
 
  • Like
  • Love
  • Fire
Reactions: 19 users

jtardif999

Regular
If we are turning over 800 billion and we still only have 100 staff, that's 8 billion each... should make the dividends paid interesting 🤔
 
  • Like
  • Love
  • Fire
Reactions: 9 users

JK200SX

Regular
If we are turning over 800 billion and we still only have 100 staff, that's 8 billion each... should make the dividends paid interesting 🤔
$88/share
 
  • Haha
  • Like
  • Love
Reactions: 16 users

M_C

Founding Member
  • Haha
  • Love
  • Fire
Reactions: 8 users
Using sparsity to advantage allows AKIDA to excel and outperform the incumbents.

The following extract is interesting because it highlights why systems that cannot embrace sparsity eventually hit roadblocks when sparse data is all there is, unless you spend time and energy generating synthetic data to allow traditional von Neumann deep learning to keep on processing.

Special Issue on Deep Learning for Time Series Data

CGAN-based synthetic multivariate time-series generation: a solution to data scarcity in solar flare forecasting​

Neural Computing and Applications (2022)

Abstract​

One of the major bottlenecks in refining supervised algorithms is data scarcity. This might be caused by a number of reasons, often rooted in extremely expensive and lengthy data collection processes. In natural domains such as Heliophysics, it may take decades to collect samples large enough for machine learning purposes. Inspired by the massive success of generative adversarial networks (GANs) in generating synthetic images, in this study we employed the conditional GAN (CGAN) on a recently released benchmark dataset tailored for solar flare forecasting. Our goal is to generate synthetic multivariate time-series data that (1) are statistically similar to the real data and (2) improve the performance of flare prediction when used to remedy the scarcity of strong flares. To evaluate the generated samples, first, we used the Kullback–Leibler divergence and adversarial accuracy measures to quantify the similarity between the real and synthetic data in terms of their descriptive statistics. Second, we evaluated the impact of the generated samples by training a predictive model on their descriptive statistics, which resulted in a significant improvement (over 1100% in TSS and 350% in HSS). Third, we used the generated time series to examine their high-dimensional contribution to mitigating the scarcity of the strong flares, where we also observed a significant improvement in terms of TSS (4%, 7%, and 31%) and HSS (75%, 35%, and 72%), compared to oversampling, undersampling, and synthetic oversampling methods, respectively. We believe our findings can open new doors toward more robust and accurate flare forecasting models.
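
For intuition only, a minimal sketch of the kind of descriptive-statistics comparison the abstract mentions (toy data and function names of my own, not the authors' code): histogram one summary statistic for the real and the synthetic samples and compute the Kullback–Leibler divergence between the two distributions.

```python
import numpy as np
from scipy.stats import entropy

def kl_between_stats(real_stat, synth_stat, bins=30):
    """KL divergence between histograms of one descriptive statistic
    (e.g. the per-sample mean of a multivariate time series) for real
    vs synthetic data; smaller values mean more similar distributions."""
    lo = min(real_stat.min(), synth_stat.min())
    hi = max(real_stat.max(), synth_stat.max())
    p, _ = np.histogram(real_stat, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(synth_stat, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12                      # avoid zero-probability bins
    return entropy(p + eps, q + eps) # scipy normalises and computes sum(p * log(p/q))

# toy usage with stand-in statistics for 500 real and 500 generated samples
rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=500)
synth = rng.normal(0.1, 1.1, size=500)
print(kl_between_stats(real, synth))
```
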
 
  • Like
  • Love
Reactions: 12 users

M_C

Founding Member
  • Haha
  • Love
  • Like
Reactions: 12 users

Shadow59

Regular
  • Like
  • Haha
  • Love
Reactions: 11 users

JK200SX

Regular
  • Like
  • Haha
  • Fire
Reactions: 15 users