BRN Discussion Ongoing

Slade

Top 20
I check Gert’s profile every few weeks, so I would have seen the patent before.



So what exactly are you saying then, Slade?
Nothing, I’m taking a break.
 
  • Fire
Reactions: 1 users

gilti

Regular
There exists a video of Jeff Krichmar, Gert Cauwenberghs and Nicholas Spitzer talking up BrainChip.



Maybe the really long-term holders remember it and can source it?
I remember there being a video of the SAB outside, like on a bush walk, using an early version of the technology.
Anyone else?
 
  • Like
Reactions: 2 users

uiux

Regular
I remember there being a video of the SAB outside, like on a bush walk, using an early version of the technology.
Anyone else?



They are using TrueNorth
 
  • Like
Reactions: 3 users

Diogenese

Top 20
By the sound of it, this is a student project which has been running for a few years, involving a couple of generations of students. Compute-in-memory and analog NPUs were around when PvdM started. His invention solves the problems with analog ReRAM.

"Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago," Cauwenberghs said. "What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms."

This is a patent application related to the compute-in-memory chip discussed:

US2021342678A1 COMPUTE-IN-MEMORY ARCHITECTURE FOR NEURAL NETWORKS

Inventors

MOSTAFA HESHAM [US]; KUBENDRAN RAJKUMAR CHINNAKONDA [US]; CAUWENBERGHS GERT [US]

A neural network architecture for inference and learning comprising:
a plurality of network modules,
each network module comprising a combination of CMOS neural circuits and RRAM synaptic crossbar memory structures interconnected by bit lines and source lines,
each network module having an input port and an output port,
wherein weights are stored in the crossbar memory structures, and
wherein learning is effected using approximate backpropagation with ternary errors;
wherein the CMOS neural circuits include a source line block having dynamic comparators, and
wherein inference is effected by clamping pairs of bit lines in a differential manner and comparing, within the dynamic comparator, voltages on each differential bit line pair to obtain a binary output activation for output neurons.
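To make the "ternary errors" and binary output activations in that claim a bit more concrete, here is a toy Python sketch of the general idea (my own illustration, not code from the patent; the threshold value and the differential-pair readout model are assumptions):

```python
import numpy as np

def ternary_error(error, threshold=0.05):
    """Quantize a real-valued backprop error to {-1, 0, +1}.
    Errors within +/- threshold are treated as zero (no weight update)."""
    return np.where(error > threshold, 1, np.where(error < -threshold, -1, 0))

def differential_binary_activation(i_plus, i_minus):
    """Binary output activation from a differential bit-line pair:
    the "dynamic comparator" simply checks which column current is larger."""
    return (i_plus > i_minus).astype(np.int8)

# Toy forward pass: each weight is encoded as a pair of crossbar conductances
rng = np.random.default_rng(0)
g_plus = rng.uniform(0.0, 1.0, (4, 8))    # "positive" conductances
g_minus = rng.uniform(0.0, 1.0, (4, 8))   # "negative" conductances
v_in = rng.uniform(0.0, 1.0, 4)           # input activations as analog voltages

y = differential_binary_activation(v_in @ g_plus, v_in @ g_minus)
err = ternary_error(y - rng.uniform(0.0, 1.0, 8))  # error against a toy target
print(y, err)
```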

[FIGs. from patent application US2021342678A1]


[005] ... [Prior ReRAM systems] By storing the weights of the neural network as conductance values of the memory elements, and by arranging these elements in a crossbar configuration as shown in FIG. 1, the crossbar memory structure can be used to perform a matrix-vector product operation in the analog domain. In the illustrated example, the input layer 10 neural activity, yl-1, is encoded as analog voltages. The output neurons 12 maintain a virtual ground at their input terminals and their input currents represent weighted sums of the activities of the neurons in the previous layer, where the weights are encoded in the memory-resistor, or “memristor”, conductances 14a-14n. The output neurons generate an output voltage proportional to their input currents. Additional details are provided by S. Hamdioui, et al., in “Memristor For Computing: Myth or Reality?”, Proceedings of the Conference on Design, Automation & Test in Europe (DATE), IEEE, pp. 722-731, 2017. This approach has two advantages: (1) weights do not need to be shuttled between memory and a compute device as computation is done directly within the memory structure; and (2) minimal computing hardware is needed around the crossbar array as most of the computation is done through Kirchhoff's current and voltage laws. A common issue with this type of memory structure is a data-dependent problem called “sneak paths”. This phenomenon occurs when a resistor in the high-resistance state is being read while a series of resistors in the low-resistance state exists parallel to it, causing it to be erroneously read as low-resistance. The “sneak path” problem in analog crossbar array architectures can be avoided by driving all input lines with voltages from the input neurons. Other approaches involve including diodes or transistors to isolate each device, which limits array density and increases cost.
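For anyone who wants to see the matrix-vector product described above in code form, here is a minimal Python sketch of an idealized crossbar: weights as conductances, inputs as row voltages, and each column current given by Ohm's law plus Kirchhoff's current law (ideal devices only, so no sneak paths or other non-idealities; the numbers are made up):

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Idealized ReRAM crossbar read-out: column current j is the sum of
    G[i, j] * V[i] over all rows (Ohm's law + Kirchhoff's current law),
    with the output neurons holding the columns at virtual ground."""
    return voltages @ conductances   # one analog multiply-accumulate per column

# A 3x2 layer's weights mapped onto conductances (in siemens)
G = np.array([[1e-6, 5e-6],
              [2e-6, 1e-6],
              [4e-6, 3e-6]])
V = np.array([0.1, 0.2, 0.3])        # input activations encoded as analog voltages

print(crossbar_mvm(G, V))            # column currents = weighted sums, computed in memory
```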

[0006] Deep neural networks have demonstrated state-of-the-art performance on a variety of tasks such as image classification and automatic speech recognition. Before neural networks can be deployed, however, they must first be trained. The training phase for deep neural networks can be very power-hungry and is typically executed on centralized and powerful computing systems. The network is subsequently deployed and operated in the “inference mode” where the network becomes static and its parameters fixed. This use scenario is dictated by the prohibitively high power costs of the “learning mode” which makes it impractical for use on power-constrained deployment devices such as mobile phones or drones. This use scenario, in which the network does not change after deployment, is inadequate in situations where the network needs to adapt online to new stimuli, or to personalize its output to the characteristics of different environments or users.

This looks like genuine competition
 
  • Like
  • Fire
  • Love
Reactions: 28 users

Dang Son

Regular
By the sound of it, this is a student project which has been running for a few years, involving a couple of generations of students. Compute-in-memory and analog NPUs were around when PvdM started. His invention solves the problems with analog ReRAM.

"Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago," Cauwenberghs said. "What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms."

This is a patent application related to the compute-in-memory chip discussed:

US2021342678A1 COMPUTE-IN-MEMORY ARCHITECTURE FOR NEURAL NETWORKS

Inventors

MOSTAFA HESHAM [US]; KUBENDRAN RAJKUMAR CHINNAKONDA [US]; CAUWENBERGHS GERT [US]

A neural network architecture for inference and learning comprising:
a plurality of network modules,
each network module comprising a combination of CMOS neural circuits and RRAM synaptic crossbar memory structures interconnected by bit lines and source lines,
each network module having an input port and an output port,
wherein weights are stored in the crossbar memory structures, and
wherein learning is effected using approximate backpropagation with ternary errors;
wherein the CMOS neural circuits include a source line block having dynamic comparators, and
wherein inference is effected by clamping pairs of bit lines in a differential manner and comparing, within the dynamic comparator, voltages on each differential bit line pair to obtain a binary output activation for output neurons.

[FIGs. from patent application US2021342678A1]

[005] ... [Prior ReRAM systems] By storing the weights of the neural network as conductance values of the memory elements, and by arranging these elements in a crossbar configuration as shown in FIG. 1, the crossbar memory structure can be used to perform a matrix-vector product operation in the analog domain. In the illustrated example, the input layer 10 neural activity, yl-1, is encoded as analog voltages. The output neurons 12 maintain a virtual ground at their input terminals and their input currents represent weighted sums of the activities of the neurons in the previous layer, where the weights are encoded in the memory-resistor, or “memristor”, conductances 14a-14n. The output neurons generate an output voltage proportional to their input currents. Additional details are provided by S. Hamdioui, et al., in “Memristor For Computing: Myth or Reality?”, Proceedings of the Conference on Design, Automation & Test in Europe (DATE), IEEE, pp. 722-731, 2017. This approach has two advantages: (1) weights do not need to be shuttled between memory and a compute device as computation is done directly within the memory structure; and (2) minimal computing hardware is needed around the crossbar array as most of the computation is done through Kirchhoff's current and voltage laws. A common issue with this type of memory structure is a data-dependent problem called “sneak paths”. This phenomenon occurs when a resistor in the high-resistance state is being read while a series of resistors in the low-resistance state exists parallel to it, causing it to be erroneously read as low-resistance. The “sneak path” problem in analog crossbar array architectures can be avoided by driving all input lines with voltages from the input neurons. Other approaches involve including diodes or transistors to isolate each device, which limits array density and increases cost.

[0006] Deep neural networks have demonstrated state-of-the-art performance on a variety of tasks such as image classification and automatic speech recognition. Before neural networks can be deployed, however, they must first be trained. The training phase for deep neural networks can be very power-hungry and is typically executed on centralized and powerful computing systems. The network is subsequently deployed and operated in the “inference mode” where the network becomes static and its parameters fixed. This use scenario is dictated by the prohibitively high power costs of the “learning mode” which makes it impractical for use on power-constrained deployment devices such as mobile phones or drones. This use scenario, in which the network does not change after deployment, is inadequate in situations where the network needs to adapt online to new stimuli, or to personalize its output to the characteristics of different environments or users.
IMO members of our scientific advisory board should have signed a Stat Dec declaring no conflict of interest, to guard against our trade secrets being shared with the competition.
It seems very suss to me that Gert goes off and releases a competing chip after being privy to AKIDA IP.
How else could he be an advisor without knowing intimate details of our chip?
 
  • Like
Reactions: 6 users

uiux

Regular
IMO members of our scientific advisory board should have signed a Stat Dec declaring no conflict of interest, to guard against our trade secrets being shared with the competition.
It seems very suss to me that Gert goes off and releases a competing chip after being privy to AKIDA IP.
How else could he be an advisor without knowing intimate details of our chip?

His technology is compute-in-memory


It's completely different
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Taproot

Regular
  • Like
  • Fire
Reactions: 5 users

cosors

👀
Hi cosors,

This is something which I have noted before - academics confine their research to peer-reviewed publications because anything that is not peer reviewed is not "proven" scientifically.

In fact, finding Akida in peer reviewed papers may be a benefit of the Carnegie Mellon University project, as the students and academics will be experimenting with Akida and producing peer reviewed papers.
Your explanation helps me understand it better. Nevertheless, some things remain open for me. Until now I was used to scientific reports giving me a glimpse into the future of what might come. The main topic is where the world stands regarding neuromorphic computing. For example, if they only do research with Loihi, does only Loihi exist scientifically in the report? The question was also whether things are already far enough along that there are applications and products. For me as a non-scientist this is clear, and it could have been clear to the moderator too: yes, there are applications and there are first products, even if it is very difficult to prove scientifically that there is a neuromorphic chip in the EQXX or Nicobo, or, first of all, our PCI board. But I hear you. Let's wait and see. Maybe their next scientific paper, with the podcast that follows it, will give a more concrete outlook on where the world stands with neuromorphic computing. And now they don't need to research other neuromorphic chips besides Loihi; they can just contact BrainChip's sales team. But my statements are unfair, because we don't have that podcast in English.
___
To be fair, I should add that they clearly mentioned that there are start-ups.
 
Last edited:
  • Like
Reactions: 6 users

Labsy

Regular
So I noticed an after-market trade of over 100k shares at 1.07.
I wonder why it’s not registered as the official closing price?

I’m feeling the warm and fuzzies for next week...
I was hoping to accumulate tomorrow afternoon, but I fear I may not get the price I’m after...
AKIDA BALLISTA!
 
  • Like
  • Fire
Reactions: 9 users

alwaysgreen

Top 20
So I noticed an after-market trade of over 100k shares at 1.07.
I wonder why it’s not registered as the official closing price?

I’m feeling the warm and fuzzies for next week...
I was hoping to accumulate tomorrow afternoon, but I fear I may not get the price I’m after...
AKIDA BALLISTA!
Plus some large after-market trades... :unsure:

[Screenshot of after-market trades]
 
  • Like
  • Fire
  • Wow
Reactions: 15 users

Makeme 2020

Regular
  • Like
  • Love
  • Fire
Reactions: 19 users

Quercuskid

Regular
I hope members leaving or taking a break have not been monstered out by the palpable aggression which has made its way onto the site.
 
  • Like
  • Sad
  • Thinking
Reactions: 24 users

cosors

👀
I’m no tech head, but I can’t see NeuRRAM being in the same century as Akida, much less the same market. My understanding is it uses convolution processing (analogue to digital). They are yet to tackle a spiking architecture, and their power efficiency is something like 30 to 40 times worse than Akida’s. Furthermore, the chip achieved 87% accuracy on image classification - imagine using it in self-driving cars; it would make Tesla look good.
I’m not an IT tech head either, and a beginner on top of that. I have quickly skimmed the report and come to the same conclusion. It’s a CNN, and in addition the chip doesn’t seem to be finished, or not all parts are integrated yet. I also read about the accuracy.

"As a result, when performing multi-core parallel inference on a deep CNN, ResNet-20, the measured accuracy on CIFAR-10 classification (83.67%) is still 3.36% lower than that of a 4-bit-weight software model (87.03%)."

"The intermediate data buffers and partial-sum accumulators are implemented by a field-programmable gate array (FPGA) integrated on the same board as the NeuRRAM chip. Although these digital peripheral modules are not the focus of this study, they will eventually need to be integrated within the same chip in production-ready RRAM-CIM hardware."
Of course I don't know what that means in detail. But for me it doesn't seem to be ready. Maybe one of you who knows about this will comment.

"We use CNN models for the CIFAR-10 and MNIST image classification tasks. The CIFAR-10 dataset consists of 50,000 training images and 10,000 testing images belonging to 10 object classes."
https://www.nature.com/articles/s41586-022-04992-8

@Slymeat I was on the topic of our manufacturing technology (WBT) the other day, so these details jumped out at me. They don’t do what Weebit deliberately chose to do. You mentioned that this is exactly where Weebit’s advantage lies: far cheaper and easier to manufacture.

"The RRAM device stack consists of a titanium nitride (TiN) bottom-electrode layer, a hafnium oxide (HfOx) switching layer, a tantalum oxide (TaOx) thermal-enhancement layer and a TiN top-electrode layer."

"The current RRAM array density under a 1T1R configuration is limited not by the fabrication process but by the RRAM write current and voltage. The current NeuRRAM chip uses large thick-oxide I/O transistors as the ‘T’ to withstand >4-V RRAM forming voltage and provide enough write current. Only if we lower both the forming voltage and the write current can we obtain higher density and therefore lower parasitic capacitance for improved energy efficiency."

https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-119627
https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-119495
 
  • Like
  • Fire
Reactions: 12 users

Cgc516

Regular
Normally, when orders this big happen after market, there will be an SP drop the next day. Fingers crossed! 🙏

Zedjack33

Regular
Normally, when orders this big happen after market, there will be an SP drop the next day. Fingers crossed! 🙏
Agreed. Seems to be the norm atm.
 
  • Like
Reactions: 1 users

cosors

👀
I hope members leaving or taking a break have not been monstered out by the palpable aggression which has made its way onto the site.
Perhaps it would be more pleasant in general to leave the ego aside and not aggressively insist on being right. Don’t get me wrong, I love DIScussions. They just have to lead somewhere - not apart, but forward. That’s only possible with some movement. Just my thought.

"Here, you will find everything from friendly discussion through to advanced research and analysis contributed freely by very engaged and supportive members." zeeb0t
 
  • Like
  • Fire
Reactions: 24 users

Deadpool

hyper-efficient Ai
So I noticed an after-market trade of over 100k shares at 1.07.
I wonder why it’s not registered as the official closing price?

I’m feeling the warm and fuzzies for next week...
I was hoping to accumulate tomorrow afternoon, but I fear I may not get the price I’m after...
AKIDA BALLISTA!
Hey there @Labsy, I always thought your avatar looked vaguely familiar, so I finally decided to click on it and, lo and behold, Les Grossman appears. Not a fan of Tom Cruise in general, but he pulled this character off spectacularly. I can imagine stock shorters having this same personality.
Anyway, regards - I’m sure you’re not that way inclined.
[Tom Cruise dance GIF]
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

Slymeat

Move on, nothing to see.
I’m not an IT tech head either, and a beginner on top of that. I have quickly skimmed the report and come to the same conclusion. It’s a CNN, and in addition the chip doesn’t seem to be finished, or not all parts are integrated yet. I also read about the accuracy.

"As a result, when performing multi-core parallel inference on a deep CNN, ResNet-20, the measured accuracy on CIFAR-10 classification (83.67%) is still 3.36% lower than that of a 4-bit-weight software model (87.03%)."

"The intermediate data buffers and partial-sum accumulators are implemented by a field-programmable gate array (FPGA) integrated on the same board as the NeuRRAM chip. Although these digital peripheral modules are not the focus of this study, they will eventually need to be integrated within the same chip in production-ready RRAM-CIM hardware."
Of course I don't know what that means in detail. But for me it doesn't seem to be ready. Maybe one of you who knows about this will comment.

"We use CNN models for the CIFAR-10 and MNIST image classification tasks. The CIFAR-10 dataset consists of 50,000 training images and 10,000 testing images belonging to 10 object classes."
https://www.nature.com/articles/s41586-022-04992-8

@Slymeat I was on the topic of our manufacturing technology (WBT) the other day, so these details jumped out at me. They don’t do what Weebit deliberately chose to do. You mentioned that this is exactly where Weebit’s advantage lies: far cheaper and easier to manufacture.

"The RRAM device stack consists of a titanium nitride (TiN) bottom-electrode layer, a hafnium oxide (HfOx) switching layer, a tantalum oxide (TaOx) thermal-enhancement layer and a TiN top-electrode layer."

"The current RRAM array density under a 1T1R configuration is limited not by the fabrication process but by the RRAM write current and voltage. The current NeuRRAM chip uses large thick-oxide I/O transistors as the ‘T’ to withstand >4-V RRAM forming voltage and provide enough write current. Only if we lower both the forming voltage and the write current can we obtain higher density and therefore lower parasitic capacitance for improved energy efficiency."

https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-119627
https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-119495
I fear people are over-thinking things.

As investors, even just as human beings, we all need to accept that we don‘t know everything, and we don‘t need to know everything. Some things we simply need to accept, especially if they come directly from the company or from reputable research institutions.

Weebit Nano state their product is cheaper to produce as it contains no exotic materials and can be mass produced by fabs with no need to re-tool. So let’s start by simply accepting that.

In a previous post, I think it was on the WBT forum, I summarised a detailed article that @cosors brought to my attention. It used a physical 16kb Weebit ReRAM chip to perform limited neuromorphic processing. From the physically measured attributes of the 16kb chip, they used industry-standard software (SPICE) to simulate a 20Mb block of ReRAM to process trained images. They needed to simulate the 20Mb chip as a physical device did not yet exist. From this we should accept that ReRAM CAN be used to perform some neuromorphic operations. And that again is all we need to accept.

In this article, weights are stored in memory cells and ReRAM is used to create logic paths that simulate synapses. ReRAM is also used to store results and even provide some degree of LSTM. The system didn‘t, however, have the ability to learn on the fly.
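Just to make "weights are stored in memory cells" concrete, here is a small illustrative Python sketch of mapping trained floating-point weights onto a handful of discrete ReRAM conductance levels. The level count and conductance range are invented for illustration - they are not Weebit's figures or anything from the article:

```python
import numpy as np

def weights_to_conductances(weights, levels=16, g_min=1e-6, g_max=1e-4):
    """Map real-valued weights onto `levels` evenly spaced conductance
    states between g_min and g_max (a deliberately crude programming model)."""
    w_min, w_max = weights.min(), weights.max()
    norm = (weights - w_min) / (w_max - w_min)   # rescale weights to [0, 1]
    codes = np.round(norm * (levels - 1))        # snap to the nearest discrete state
    return g_min + codes / (levels - 1) * (g_max - g_min)

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.3, (4, 4))                 # toy "trained" weight matrix
print(weights_to_conductances(w))
```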

I stated that I think such systems will have a place; they can be viewed as a poor-man’s/scaled-down version of Akida. And in a lot of situations, that is all that will be needed. They will have a place, and that place may simply be a feeder to future Akida development.

Once developers have something to play with, they may either realize the limitations of their system or decide they need more functionality—functions that Akida already provides. It’s quite a natural development cycle to not fully understand what you need, or what is possible, until you first build a prototype.

I believe part of the problem with general uptake/acceptance of BrainChip’s Akida technology is that it is so bloody powerful, it is a foreign concept, and so many people don’t know what the hell to do with it. And the “it” extends to neuromorphic computing in general, let alone when further complicated by LSTM, cortical columns and the like.

There truly are a lot of WANCAs out there and not all of them are wankers!

Getting people to think of sparse neuromorphic spiking neural networks, as well as the concept of a power-restricted and connectionless edge, may be a bit too much all at once.

Sure, BrainChip supplies tools that effortlessly port standard trained CNNs to Akida, but we on this forum have heard evidence that developers do this porting and start looking at their designs in a different light. Some of them even decide to take a backward step when they realise the vast improvements that Akida opens up to them.
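For anyone curious what that porting looks like in practice, the published MetaTF flow is roughly quantize-then-convert using the cnn2snn package. The sketch below is only indicative: the function names are from cnn2snn as I understand it, but the exact arguments and supported bit-widths vary between tool versions, so treat them as assumptions and check the current docs:

```python
# Indicative sketch of porting a trained Keras CNN to Akida with BrainChip's
# cnn2snn package (MetaTF). Argument names/values are assumptions and may
# differ between versions - consult the current MetaTF documentation.
import tensorflow as tf
from cnn2snn import quantize, convert

keras_model = tf.keras.models.load_model("my_trained_cnn.h5")   # hypothetical model file

# Quantize weights and activations down to the low bit-widths Akida expects
quantized = quantize(keras_model, weight_quantization=4, activ_quantization=4)

# Convert the quantized Keras model into an Akida (event-based) model
akida_model = convert(quantized)
akida_model.summary()
```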

As investors we want products out and in the hands of consumers, but the companies developing them don‘t want to release something that has less than optimal potential, and in some cases looks quite bad compared to what they see Akida can achieve for them.

That’s why I applaud BrainChip’s initiative of taking their technologies to universities for the next generation of technical innovators to play with it. Empowering these students with AI concepts that they WILL need in the decades to come. What a brilliant move.
 
  • Like
  • Love
  • Fire
Reactions: 66 users

Earlyrelease

Regular
I fear people are over-thinking things.

As investors, even just as human beings, we all need to accept that we don‘t know everything, and we don‘t need to know everything. Some things we simply need to accept, especially if they come directly from the company or from reputable research institutions.

Weebit Nano state their product is cheaper to produce as it contains no exotic materials and can be mass produced by fabs with no need to re-tool. So let’s start by simply accepting that.

In a previous post, I think it was on the WBT forum, I summarised a detailed article that @cosors brought to my attention. It used a physical 16kb Weebit ReRAM chip to perform limited neuromorphic processing. From the physically measured attributes of the 16kb chip, they used industry-standard software (SPICE) to simulate a 20Mb block of ReRAM to process trained images. They needed to simulate the 20Mb chip as a physical device did not yet exist. From this we should accept that ReRAM CAN be used to perform some neuromorphic operations. And that again is all we need to accept.

In this article, weights are stored in memory cells and ReRAM is used to create logic paths that simulate synapses. ReRAM is also used to store results and even provide some degree of LSTM. The system didn‘t, however, have the ability to learn on the fly.

I stated that I think such systems will have a place; they can be viewed as a poor-man’s/scaled-down version of Akida. And in a lot of situations, that is all that will be needed. They will have a place, and that place may simply be a feeder to future Akida development.

Once developers have something to play with, they may either realize the limitations of their system or decide they need more functionality—functions that Akida already provides. It’s quite a natural development cycle to not fully understand what you need, or what is possible, until you first build a prototype.

I believe part of the problem with general uptake/acceptance of BrainChip’s Akida technology is that it is so bloody powerful, it is a foreign concept, and so many people don’t know what the hell to do with it. And the “it” extends to neuromorphic computing in general, let alone when further complicated by LSTM, cortical columns and the like.

There truly are a lot of WANCAs out there and not all of them are wankers!

Getting people to think of sparse neuromorphic spiking neural networks, as well as the concept of a power-restricted and connectionless edge, may be a bit too much all at once.

Sure, BrainChip supplies tools that effortlessly port standard trained CNNs to Akida, but we on this forum have heard evidence that developers do this porting and start looking at their designs in a different light. Some of them even decide to take a backward step when they realise the vast improvements that Akida opens up to them.

As investors we want products out and in the hands of consumers, but the companies developing them don‘t want to release something that has less than optimal potential, and in some cases looks quite bad compared to what they see Akida can achieve for them.

That’s why I applaud BrainChip’s initiative of taking their technologies to universities for the next generation of technical innovators to play with it. Empowering these students with AI concepts that they WILL need in the decades to come. What a brilliant move.
Sly.
And that’s why I believe our founders have licensed a two-mode model and allowed the chip to be scaleable. Also, the price originally talked about was kept low to a) capture the market and b) keep others from spending millions on research they can never recoup, since people won’t pay a higher price if there is a better model for cheaper. So while this is interesting, I totally agree with you: to me (glass always half full) this just shows how good our product is and what hurdles the others must cross to be equal, and that’s before they hit the patent hurdles. Panteen

Brainers stay strong, stay long.
 
  • Like
  • Fire
  • Love
Reactions: 41 users

Cgc516

Regular
Agreed. Seems to be the norm atm.

The best thing we can do is hold our shares even tighter than ever.
 
  • Like
  • Fire
Reactions: 10 users