BRN Discussion Ongoing

Diogenese

Top 20
@Diogenese do you know how ROHM’s von Neumann NN accelerator can produce such supposed efficiencies with on-chip learning? I see this was discussed on TSE back in November when the media was released. Did you find out anything further?

I wonder if their minimal 20,000 circuit gates would reduce speed of inference compared to Akida.
I'm sure you're right about the speed:

Introduction to ROHM’s On-Device Learning AI Chip BD15035

As you can see from the right-hand graph, their Matisse CPU is slower than two of the three comparison CPUs, and several times slower than Akida.

The ROHM SoC is a single-purpose device for detecting vibrations, hence the FFT (Fast Fourier Transform) input function. The Fourier transform breaks a signal down into its individual frequency components.
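For anyone who wants to see what that looks like in practice, here is a minimal Python sketch (my own toy vibration signal, not ROHM's actual pre-processing) showing an FFT pulling the frequency components out of a noisy trace:

import numpy as np

fs = 1000                                # sample rate in Hz (assumed for illustration)
t = np.arange(0, 1.0, 1 / fs)            # one second of samples
# synthetic "vibration": a 50 Hz and a 120 Hz component plus noise
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signal += 0.2 * np.random.randn(len(t))

spectrum = np.fft.rfft(signal)           # real-input FFT
freqs = np.fft.rfftfreq(len(t), 1 / fs)  # frequency axis in Hz
magnitude = np.abs(spectrum) / len(t)

# the two dominant peaks are the "individual frequency components"
peaks = freqs[np.argsort(magnitude)[-2:]]
print(sorted(peaks))                     # roughly [50.0, 120.0]

Presumably a magnitude spectrum like this is the kind of compact feature vector the ROHM chip's small 3-layer network then learns on.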



Couldn't find any patents by Professor Matsutani from Keio Uni, but here is a 2021 paper he jointly authored:

https://arxiv.org/pdf/2107.12824v1.pdf

Although high-performance deep neural networks are in high demand in edge environments, computation resources are strictly limited in edge devices, and light-weight neural network techniques, such as Depthwise Separable Convolution (DSC), have been developed. ResNet is one of conventional deep neural network models that stack a lot of layers and parameters for a higher accuracy. To reduce the parameter size of ResNet, by utilizing a similarity to ODE (Ordinary Differential Equation), Neural ODE repeatedly uses most of weight parameters instead of having a lot of different parameters. Thus, Neural ODE becomes significantly small compared to that of ResNet so that it can be implemented in resource-limited edge devices. In this paper, a combination of Neural ODE and DSC, called dsODENet, is designed and implemented for FPGAs (Field-Programmable Gate Arrays). dsODENet is then applied to edge domain adaptation as a practical use case and evaluated with image classification datasets. It is implemented on Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy, training speed, FPGA resource utilization, and speedup rate compared to a software execution. The results demonstrate that dsODENet is comparable to or slightly better than our baseline Neural ODE implementation in terms of domain adaptation accuracy, while the total parameter size without pre- and post-processing layers is reduced by 54.2% to 79.8%. The FPGA implementation accelerates the prediction tasks by 27.9 times faster than a software implementation.

Basically, Neural ODE is a NN model that reuses the same weight parameters across its layers, giving an abbreviated NN configuration (e.g. the 3-layer network in the ROHM vibration detector).
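To make the weight-reuse idea concrete, here is a rough PyTorch sketch of my own (not the paper's or ROHM's code): one shared convolution block is applied repeatedly as the ODE "integration steps", so the parameter count stays that of a single block no matter how many steps you unroll, whereas a ResNet would need a separate block per step.

import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """One shared conv block reused at every Euler integration step."""
    def __init__(self, channels, steps=3):
        super().__init__()
        self.steps = steps
        self.f = nn.Sequential(                    # f(h): the shared "derivative" network
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
        )

    def forward(self, h):
        dt = 1.0 / self.steps
        for _ in range(self.steps):                # the same weights are reused every step
            h = h + dt * self.f(h)                 # h_{t+1} = h_t + dt * f(h_t)
        return h

block = ODEBlock(channels=16, steps=3)
x = torch.randn(1, 16, 32, 32)
print(block(x).shape)                              # torch.Size([1, 16, 32, 32])
print(sum(p.numel() for p in block.parameters()))  # one block's worth of weights, however many steps

Swap the plain Conv2d for a depthwise-separable pair and you get, roughly, the dsODENet idea from the paper.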
 
  • Like
  • Love
Reactions: 18 users

HopalongPetrovski

I'm Spartacus!
Well, if we're going to brag about it, let's get them all together, shall we?

View attachment 33042
Good publicity when/if we win with any/all of these.
I wonder how legit these are though?
I certainly do not know where they rank on the scale from the Oscars to the Nobels.
My old boss was always entering us in these trophy/award competitions run by our industry bodies, year after year, for the bragging rights and to have on our letterhead etc., but I was left with the feeling that it was all a bit rigged. Sort of a boys' club mutual back-slapping/washing type of thing.
Still, lots of people paid to participate and seemed to have a good old time, and in this instance I do like our categories. 🤣
AKIDA BALLISTA
AKIDA EVERYWHERE
GLTA (Good Luck To Akida 🤣)
 
  • Like
  • Fire
  • Love
Reactions: 17 users

wilzy123

Founding Member
Good publicity when/if we win with any/all of these.
I wonder how legit these are though?
I certainly do not know where they rank on the scale from the Oscars to the Nobels.
My old boss was always entering us in these trophy/award competitions run by our industry bodies, year after year, for the bragging rights and to have on our letterhead etc., but I was left with the feeling that it was all a bit rigged. Sort of a boys' club mutual back-slapping/washing type of thing.
Still, lots of people paid to participate and seemed to have a good old time, and in this instance I do like our categories. 🤣
AKIDA BALLISTA
AKIDA EVERYWHERE
GLTA (Good Luck To Akida 🤣)

It has "Global" in the award name, so it surely must be legit.
 
  • Haha
  • Like
Reactions: 10 users

Diogenese

Top 20
Sorry if this has already been mentioned but I rarely visit the forum these days.


MANUFACTURING TECH DISRUPTOR OF THE YEAR

BrainChip Ltd. – Neuromorphic convolutional chip saves up to 97% of power while offering high performance Artificial Intelligence processing.

Phaos Technology

Porotech

SMARTECH Manewfacturing™ Technologies

MARCH 24 – Shortlist Announced

MAY 11 – Winners Announced

The Awards are judged by leading tech industry experts from around the world, and judging is ethical, transparent and fair, with all entrants receiving feedback on their submissions.
Further down the page:


BEST USE OF INNOVATION

Adthena’s Brand Activator – saving marketers up to 20% on brand search ads

Brainchip Ltd. – Smarter AI through neuromorphic cortical processing hardware.

Epsilon Telecommunications – Infiny by Epsilon

Cornerstone OnDemand – Skills Graph

Freyr Digital

GrowByData – Holistic Search Explorer

HighRadius – Fluree – Fluree Sense

Jenson8 – Apollo

Leap Telkom Digital

MIGGRA – TARO, a Collaborative Robot

Nobi – AI Powered Lamp

River Square Company Limited – River Chain


TRANSFORMATION THROUGH TECHNOLOGY

Adthena’s Brand Activator – saving marketers up to 20% on brand search ads

Agreeculture Telkom Digital

BigHand – Transformation Through Technology

Brainchip Ltd. – Saving the world’s energy resources through distributed high-performance and low power AI processing

Eazi-Business

Proxymity

River Square Company Limited – River Chain

Roads & Transport Authority (RTA) – Monitoring and Enforcement System (MES)

Yext & Optum – Delivering a Better Digital Experience



Edit: Did not even podium this time ...
 
  • Like
  • Fire
  • Haha
Reactions: 15 users

zeeb0t

Administrator
Staff member
Zeeb0t..........so the website does not have the info that BrainChip has BOUGHT JAST (owns it), only the old licensing info?
Some serious updating needed to the website to keep it current.

Yak52

Following up by asking, "Did Brainchip acquire IP rights to JAST?"

You get this response:

"Yes, BrainChip acquired full ownership of the IP rights related to JAST learning rule and algorithms from French technology transfer-based company Toulouse Tech Transfer (TTT), including issued patent EP3324344 and pending patents US2019/0286944 and EP3324343. The invention related to the acquired IP rights include pattern detection algorithms that provide BrainChip with significant competitive advantages. The company held an exclusive license for the IP prior to their acquisition."
 
  • Like
  • Fire
  • Love
Reactions: 13 users

TECH

Regular
Good afternoon,

Is it just me, or do any of you sense a change in the breeze? It's picking up a touch from the South.

Looking forward to LDA Capital's cash entering our accounts; news of the completion of our obligation must be very close now.

Some great posts being shared, thanks everyone.

Tech.
 
  • Like
  • Love
  • Haha
Reactions: 44 users

Yak52

Regular
Following up by asking, "Did Brainchip acquire IP rights to JAST?"

You get this response:

"Yes, BrainChip acquired full ownership of the IP rights related to JAST learning rule and algorithms from French technology transfer-based company Toulouse Tech Transfer (TTT), including issued patent EP3324344 and pending patents US2019/0286944 and EP3324343. The invention related to the acquired IP rights include pattern detection algorithms that provide BrainChip with significant competitive advantages. The company held an exclusive license for the IP prior to their acquisition."

Ah, good that this info is also on the website! Very important info.
Yes, the wording of the question can produce quite different results, though none bad, just different answers.
Did anyone confirm if Merc & NASA are mentioned at all?

Y.
 
  • Like
Reactions: 6 users

Boab

I wish I could paint like Vincent
I'm sure you're right about the speed:

Introduction to ROHM’s On-Device Learning AI Chip BD15035
View attachment 33039
As you can see from the right-hand graph, their Matisse CPU is slower than two of the three comparison CPUs, and several times slower than Akida.

The ROHM SoC is a single-purpose device for detecting vibrations, hence the FFT (Fast Fourier Transform) input function. The Fourier transform breaks a signal down into its individual frequency components.
View attachment 33040


Couldn't find any patents by Professor Matsutani from Keio Uni, but here is a 2021 paper he jointly authored:

https://arxiv.org/pdf/2107.12824v1.pdf

Although high-performance deep neural networks are in high demand in edge environments, computation resources are strictly limited in edge devices, and light-weight neural network techniques, such as Depthwise Separable Convolution (DSC), have been developed. ResNet is one of conventional deep neural network models that stack a lot of layers and parameters for a higher accuracy. To reduce the parameter size of ResNet, by utilizing a similarity to ODE (Ordinary Differential Equation), Neural ODE repeatedly uses most of weight parameters instead of having a lot of different parameters. Thus, Neural ODE becomes significantly small compared to that of ResNet so that it can be implemented in resource-limited edge devices. In this paper, a combination of Neural ODE and DSC, called dsODENet, is designed and implemented for FPGAs (Field-Programmable Gate Arrays). dsODENet is then applied to edge domain adaptation as a practical use case and evaluated with image classification datasets. It is implemented on Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy, training speed, FPGA resource utilization, and speedup rate compared to a software execution. The results demonstrate that dsODENet is comparable to or slightly better than our baseline Neural ODE implementation in terms of domain adaptation accuracy, while the total parameter size without pre- and post-processing layers is reduced by 54.2% to 79.8%. The FPGA implementation accelerates the prediction tasks by 27.9 times faster than a software implementation.

Basically, Neural ODE is a NN model that reuses the same weight parameters across its layers, giving an abbreviated NN configuration (e.g. the 3-layer network in the ROHM vibration detector).
This is the only Matisse that I understand.
Matisse.jpg
 
  • Like
  • Haha
  • Love
Reactions: 14 users

Cgc516

Regular
Here is an interesting thing: my watch showed the price was .480 at 3 PM today. (I set my watch 35 minutes earlier than normal time.)
Apple predicted today’s finish?


 
  • Haha
  • Like
  • Love
Reactions: 13 users
No, three times in fact.
My mother taught me not to 'brag'. I also learnt in the law that letting the Judge get to a conclusion first made it more likely he or she would give that conclusion greater weight. Wow, three times a nominee. :ROFLMAO:😂🤣😍😇:giggle:
 
  • Like
  • Haha
Reactions: 18 users

HopalongPetrovski

I'm Spartacus!
Here is an interesting thing: my watch showed the price was .480 at 3 PM today. (I set my watch 35 minutes earlier than normal time.)
Apple predicted today’s finish?


View attachment 33044
I do wish you, Bravo and Corsairs would stop messing around with time! No wonder I'm bloody dizzy! 🤣
 
Last edited:
  • Haha
  • Like
Reactions: 8 users

zeeb0t

Administrator
Staff member
Ah, good that this info is also on the website! Very important info.
Yes, the wording of the question can produce quite different results, though none bad, just different answers.
Did anyone confirm if Merc & NASA are mentioned at all?

Y.

I asked it: "Are Brainchip, NASA, and Vorago connected or partnered in any way?"

The answer it gave is: "Yes, BrainChip and Vorago have signed an agreement to collaborate through the Akida Early Access Program to support a Phase I NASA program for a radiation-hardened neuromorphic processor. The Akida neuromorphic processor is uniquely suited for spaceflight and aerospace applications. The device is a complete neural processor and does not require an external CPU, memory or Deep Learning Accelerator (DLA). VORAGO Technologies is a privately held, high technology company based in Austin, Texas with over 15 years of experience in providing radiation-hardened and extreme-temperature solutions for the Hi-reliability marketplace."

I think this one is a little nuanced, and so you have to ask the right question. Brainchip and Vorago both signed an agreement to support the NASA program. But Brainchip didn't put "NASA" in the title, so the AI looks elsewhere.

This would be more accessible had they mentioned NASA in the headline or had any other pages about it. Unfortunately just an image on the site doesn't cut the mustard as yet, as I do not have image recognition built in at this stage.
 
  • Like
  • Wow
  • Fire
Reactions: 15 users

HopalongPetrovski

I'm Spartacus!
Good afternoon,

Is it just me, or do any of you sense a change in the breeze? It's picking up a touch from the South.

Looking forward to LDA Capital's cash entering our accounts; news of the completion of our obligation must be very close now.

Some great posts being shared, thanks everyone.

Tech.
I hope you are right Tech, but looking at the course of trades leads me to believe we are still heavily under the influence of manipulators.
I think their unwanted attentions are atm primarily focused on WBT and AGY, amongst others, and they are perhaps just giving us a little rest whilst they are otherwise occupied. Would love some news or announcement catalyst to drop and catch them unawares, but am not relying upon it.
Whilst their embrace is unpleasant, our fundamentals and unfolding strategy will, over time, see us longs prevail.
 
  • Like
  • Fire
Reactions: 14 users

Foxdog

Regular
Very long list of panel judges, but I did note there are judges from Google, Microsoft and Ericsson. 🙈🙉🙊
Well they at least should be qualified enough to actually understand AKIDA technology. I guess this is where we find out if AKIDA really is science fiction. By all accounts we should blow the other entrants out of the water 🤔
 
  • Like
  • Thinking
Reactions: 6 users
What was the question you asked? :D

Hi @zeeb0t, just a suggestion: could you make it crawl past versions of the website, such as archived pages on the Wayback Machine? That way, information published previously (but not on the current version of the website) would still be accessible.
 
  • Like
  • Fire
Reactions: 3 users

zeeb0t

Administrator
Staff member
Hi @zeeb0t, just a suggestion: could you make it crawl past versions of the website, such as archived pages on the Wayback Machine? That way, information published previously (but not on the current version of the website) would still be accessible.

Nice idea, but I don't think I will do that with this particular software. The intended customers are the companies themselves, and they are probably only interested in publishing what they consider to be the most reliable set of facts as of today (an assumption of mine).
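For what it's worth, if I ever did revisit it, pulling old snapshots isn't hard in principle: the Wayback Machine exposes a public CDX API that lists archived captures of a URL. A rough, untested Python sketch (the wayback_snapshots helper and the brainchip.com/partners path are just placeholders of mine):

import requests

def wayback_snapshots(url, limit=5):
    """List archived snapshot URLs for a page via the Wayback Machine CDX API."""
    resp = requests.get(
        "http://web.archive.org/cdx/search/cdx",
        params={"url": url, "output": "json", "limit": limit, "filter": "statuscode:200"},
        timeout=30,
    )
    rows = resp.json()
    if not rows:                          # no captures found
        return []
    header, entries = rows[0], rows[1:]   # first row holds the column names
    ts = header.index("timestamp")
    return [f"https://web.archive.org/web/{row[ts]}/{url}" for row in entries]

for snapshot in wayback_snapshots("brainchip.com/partners"):
    print(snapshot)                       # each URL serves the page as it looked on that date

Each of those snapshot pages could then be fed through the same crawler as the live site. Still, as above, I suspect the companies themselves would only want the current version indexed.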
 
  • Like
Reactions: 7 users

Slade

Top 20
Had a feeling this week was going to be a good one. A nice Monday. More nice days to come. ❤️
 
  • Like
  • Love
  • Fire
Reactions: 24 users

skutza

Regular
Very long list of panel judges, but I did note there are judges from Google, Microsoft and Ericsson. 🙈🙉🙊

Well I can just imagine what the judges will think.

They'll probably look at all the other tech/competition and say, if these guys used AKIDA themselves, wouldn't they improve their own products/Tech?

(silence hits the room, other companies look down at their feet)

Well the winner is ................. 🤣🤣🤣🤣🤣🤣🤣🤣
 
  • Like
  • Fire
  • Haha
Reactions: 19 users

manny100

Top 20
Asked who BrainChip's competitors are:
" We do not provide information about BrainChip's competitors on our website. However, we are the worldwide leader in edge AI on-chip processing and learning of neuromorphic IP. Our focus is on optimizing compute, maximizing performance, and minimizing power in real-time. We believe that on-chip AI, close to the sensor, has a sensational future for our customers' products, as well as the planet."
Now that is an 'accumulate' statement if I ever heard one!!!
 
  • Like
  • Love
  • Fire
Reactions: 14 users