Morning Zeeb0t
Yes! the phrasing of the question has significant impact.
I tried a few variations again, with mixed results.
Here is the one I asked that offered no relationship:
What relationship does Megachips have with Brainchip:
Today there were alternate questions offered, which I selected, and they too came up blank.
Here are the variations offered when my first question is posed:
If not, try asking: "Is there any connection or relation between Megachips and Brainchip?"
If not, try asking: Are Megachips and Brainchip related or connected in any way?
If not, try asking: Is there any connection or relation between Megachips and Brainchip?
All gave the same response, which included: "However, there is no mention of Megachips or any connection or relation with Brainchip."
When this question is posed:
"Describe the Megachips and Brainchip partnership"
a more appropriate response is received:
"Megachips is incorporating BrainChip's Akida technology into its ASIC solutions service, enabling the development and support required to design and manufacture integrated circuits and systems on chips with intelligence that will drive AI into the next generation of edge-based devices. This means that Megachips is utilizing BrainChip's technology by designing the Akida IP into system-on-chip (SoC) products that will be ready-to-use systems or implementing Akida silicon into ready-to-use modules.
Thanks again for your work on the TSX
@Diogenese do you know how Rohm's Von Neumann NN accelerator can produce such supposed efficiencies with on-chip learning? I see this was discussed on TSE back in November when the media release came out. Did you find out anything further?
Wow, pretty good first attempt at a mini ChatGPT with a domain of just this website, or is it the Brainchipinc website? Anyway, it stumbled on my first question: I asked it "What is JAST?" and it said (paraphrasing) that this website contains no information about JAST. A little disappointing, but that's why you want the feedback, I'm guessing.
Zeeb0t..........so the website does not have the info that BrainChip has BOUGHT the JAST (owns it), only the old licencing info?

Upgrade complete... now when asking the question, you get:
"JAST is a new unsupervised learning algorithm developed by the Brain and Cognition Research Center (CERCO) in Toulouse, France. BrainChip Holdings Ltd has signed an agreement with French-based technology transfer company Toulouse Tech Transfer to license the exclusive rights to the JAST learning rules and algorithms. The groundbreaking JAST technology significantly enhances the Company’s existing neural network design and will be implemented on the Company’s proprietary Spiking Neural Adaptive Processor (SNAP)."
Let me know whatever other feedback you have! Thanks!
Also:
Sorry if this has already been mentioned but I rarely visit the forum these days.
2023 Shortlist - Global Business Tech Awards
Congratulations to our 2023 Global Business Tech Awards finalists.
globalbusinesstechawards.com
MANUFACTURING TECH DISRUPTOR OF THE YEAR
BrainChip Ltd. – Neuromorphic convolutional chip saves up to 97% of power while offering high performance Artificial Intelligence processing.
Phaos Technology
Porotech
SMARTECH Manewfacturing™ Technologies
MARCH 24
Shortlist Announced
MAY 11
Winners Announced
The Awards are judged by leading tech industry experts from around the world, and judging is ethical, transparent and fair, with all entrants receiving feedback on their submissions.
No, three times in fact.

Also:
BEST USE OF INNOVATION
Brainchip Ltd. – Smarter AI through neuromorphic cortical processing hardware.
Short listed twice in different categories.
My opinion only DYOR
FF
AKIDA BALLISTA
I'm sure you're right about the speed:

@Diogenese do you know how Rohm's Von Neumann NN accelerator can produce such supposed efficiencies with on-chip learning? I see this was discussed on TSE back in November when the media release came out. Did you find out anything further?
I wonder if their minimal 20,000 circuit gates would reduce speed of inference compared to Akida.
Good publicity when/if we win with any/all of these.
I wonder how legit these are though?
I certainly do not know where they rank from the Oscars to the Nobels?
My old boss was always entering us in these trophy /awards competitions run by our Industry bodies, year after year for the bragging rights and to have on our letterhead etc, but I was left with the feeling that it was all a bit rigged. Sort of a boys club mutual back slapping/washing type of thing.
Still, lots of people paid to participate and seemed to have a good old time and in this instance I do like our categories.
AKIDA BALLISTA
AKIDA EVERYWHERE
GLTA (Good Luck To Akida )
Zeeb0t..........so the website does not have the info that BrainChip has BOUGHT the JAST (owns it), only the old licencing info?
Some serious updating needed to the website to keep it current.
Yak52
Following up by asking, "Did Brainchip acquire IP rights to JAST?"
You get this response:
"Yes, BrainChip acquired full ownership of the IP rights related to JAST learning rule and algorithms from French technology transfer-based company Toulouse Tech Transfer (TTT), including issued patent EP3324344 and pending patents US2019/0286944 and EP3324343. The invention related to the acquired IP rights include pattern detection algorithms that provide BrainChip with significant competitive advantages. The company held an exclusive license for the IP prior to their acquisition."
This is the only Matisse that I understand.

I'm sure you're right about the speed:
Introduction to ROHM’s On-Device Learning AI Chip BD15035
View attachment 33039
As you can see from the right-hand graph, their Matisse CPU is slower than two of the three comparison CPUs, and several times slower than Akida.
The ROHM SoC is a single purpose device for detecting vibrations, hence the FFT (Fast Fourier Transform) input function. Fourier transform breaks a signal down to its individual frequency components.
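To illustrate that front-end idea, here is a minimal NumPy sketch (the sample rate and signal frequencies are made up for illustration, not taken from ROHM's spec): an FFT turns a time-domain vibration signal into a spectrum whose peaks identify the component frequencies.

```python
import numpy as np

# Illustrative only: a 1-second "vibration" sampled at 1 kHz,
# built from a 50 Hz and a 120 Hz component.
fs = 1000                               # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Real-input FFT: decompose the signal into its frequency components
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
magnitude = np.abs(spectrum) / len(signal) * 2   # scale to component amplitudes

# The two injected frequencies dominate the spectrum
peaks = sorted(float(f) for f in freqs[np.argsort(magnitude)[-2:]])
print(peaks)  # → [50.0, 120.0]
```

A front-end like this hands the NN a compact frequency-domain feature vector instead of the raw waveform, which is what makes a tiny 3-layer network workable for vibration detection.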
View attachment 33040
Couldn't find any patents by Professor Matsutani from Keio Uni, but here is a 2021 paper he jointly authored:
https://arxiv.org/pdf/2107.12824v1.pdf
Although high-performance deep neural networks are in high demand in edge environments, computation resources are strictly limited in edge devices, and light-weight neural network techniques, such as Depthwise Separable Convolution (DSC), have been developed. ResNet is one of conventional deep neural network models that stack a lot of layers and parameters for a higher accuracy. To reduce the parameter size of ResNet, by utilizing a similarity to ODE (Ordinary Differential Equation), Neural ODE repeatedly uses most of weight parameters instead of having a lot of different parameters. Thus, Neural ODE becomes significantly small compared to that of ResNet so that it can be implemented in resource-limited edge devices. In this paper, a combination of Neural ODE and DSC, called dsODENet, is designed and implemented for FPGAs (Field-Programmable Gate Arrays). dsODENet is then applied to edge domain adaptation as a practical use case and evaluated with image classification datasets. It is implemented on Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy, training speed, FPGA resource utilization, and speedup rate compared to a software execution. The results demonstrate that dsODENet is comparable to or slightly better than our baseline Neural ODE implementation in terms of domain adaptation accuracy, while the total parameter size without pre- and post-processing layers is reduced by 54.2% to 79.8%. The FPGA implementation accelerates the prediction tasks by 27.9 times faster than a software implementation.
Basically, Neural ODE is an NN model that uses an abbreviated NN configuration, reusing the same weight parameters across repeated layer evaluations instead of storing separate weights per layer (eg, 3 layers in the ROHM vibration detector).
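A toy NumPy sketch of that weight-reuse idea (the dimensions, depth and initialisation here are arbitrary, and this is not the paper's implementation): a ResNet stacks N residual blocks with N separate weight sets, while a Neural-ODE-style model applies one shared block N times, cutting the stored parameter count by a factor of N.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16   # feature width (arbitrary for this sketch)
N = 10     # depth / number of residual steps (arbitrary)

def block(x, W, b):
    # One residual step: x + f(x), with f a small dense layer
    return x + np.tanh(x @ W + b)

# ResNet-style: each of the N stacked blocks has its own weights
resnet_params = [(rng.standard_normal((dim, dim)) * 0.1, np.zeros(dim))
                 for _ in range(N)]

# Neural-ODE-style: ONE set of weights reused at every step
ode_params = (rng.standard_normal((dim, dim)) * 0.1, np.zeros(dim))

def resnet_forward(x):
    for W, b in resnet_params:
        x = block(x, W, b)
    return x

def ode_forward(x, steps=N):
    W, b = ode_params
    for _ in range(steps):      # repeated evaluations of one shared function
        x = block(x, W, b)
    return x

count = lambda params: sum(p.size for p in params)
resnet_size = sum(count(p) for p in resnet_params)
ode_size = count(ode_params)
print(resnet_size, ode_size)    # the ResNet stores N times the parameters
```

The shared-weight forward pass does the same amount of compute, but stores only one block's worth of parameters regardless of depth, which is the property that makes this attractive for resource-limited edge devices.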