Hi Frederick,
I'm so bloody slow sometimes; only now has it hit me that neuromorphic computing in a way solves the problem of scaling down the process node, by enabling more performance in different ways:
1) Now that we have such low power consumption and so little dissipated heat, it must be possible to run them more aggressively.
2) As I see it, neuromorphic computing lends itself perfectly to expanding the amount of silicon used, like connecting multiple chiplets to support larger models and/or running multiple models that make use of each other (there's a rough sketch of the chiplet idea after this post). They can even be stacked so as not to take up any significant space.
3) It's a young technology that is already beating the old technology and has a long runway of innovation ahead of it, like the jump from Akida 1 to 2. I bet that there's a vast space of possibilities to be explored, like hardware support for n-dimensional models.
While Nvidia has hit the brick wall and others are struggling with Moore's law, we have just got started and are seemingly already way ahead of their Jetson.
So now I think neuromorphic computing is going to be indispensable for future performance gains, and it might branch out like the three suggestions above, with combinations and further branches appearing.
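To make point 2 a bit more concrete, here is a rough Python toy of the chiplet idea (purely my own illustration, nothing to do with BrainChip's actual tooling; the layer sizes and per-chip capacity are made-up numbers):

```python
# Toy sketch only: shows point 2 above, i.e. splitting a network's layers
# across several identical chiplets so a model too big for one chip still
# fits across the group. All numbers below are invented for illustration.

def partition_layers(layer_params, chip_capacity):
    """Greedily pack consecutive layers onto chips, opening a new chip
    whenever the next layer would overflow the current one."""
    chips, current, used = [], [], 0
    for layer, params in layer_params:
        if params > chip_capacity:
            raise ValueError(f"{layer} alone exceeds one chip's capacity")
        if used + params > chip_capacity:
            chips.append(current)
            current, used = [], 0
        current.append(layer)
        used += params
    if current:
        chips.append(current)
    return chips

model = [("conv1", 0.3e6), ("conv2", 0.9e6), ("conv3", 1.8e6),
         ("dense1", 1.9e6), ("dense2", 0.6e6)]     # ~5.5M parameters in total
CHIP_CAPACITY = 2.0e6                              # hypothetical per-chip budget

for i, chip in enumerate(partition_layers(model, CHIP_CAPACITY), 1):
    print(f"chip {i}: {', '.join(chip)}")
# Four such chiplets hold the whole model; no single chip could hold it alone.
```

The real engineering questions (inter-chip bandwidth, latency across the die boundary, and so on) are glossed over here; the point is only that capacity can be added sideways instead of waiting for a smaller process node.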
Unemployment in Russia and overcrowding in jails being reduced would surely be a direct result of Putin sending every able body to the Ukraine to fight!!! Well, every cloud ...
Unemployment in Russia is down 30%, overcrowding in jails has been reduced ...
Hi Frederick,
1. Akida is asynchronous, responding to input events, so I don't think it can be overclocked. However, producing it at, say, 7 nm will increase speed and reduce power. (There's a little toy sketch of the event-driven idea below, after point 2.)
2. Because Akida is so low power that it does not need cooling, it would be an ideal buried layer in a stacked chip arrangement. One potential application would be the Sony/Prophesee image sensor, which uses Sony's 3D stacking technology. I imagine the Prophesee DVS is also low power, as it only fires on events. Some IR night-vision systems also need to be cooled, so Akida would be a good fit there, as it would reduce the cooling requirements. Akida is designed for multiple-chip applications.
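To illustrate point 1, here is a little toy simulation, purely my own sketch rather than anything about how Akida is actually built, of why an event-driven design has no clock worth "overclocking": its activity tracks the input event rate, not a clock rate.

```python
# Toy sketch (my own illustration, not how Akida is implemented).
# A clocked design does work every tick whether or not anything arrived;
# an event-driven design does work only when an event turns up, so its
# activity (and energy) follows the input event rate, not a clock rate.

import random

random.seed(0)
SIM_TIME_US = 10_000                      # simulate 10 ms
EVENT_RATE_PER_US = 0.01                  # sparse input: roughly 1 event per 100 us
CLOCK_PERIOD_US = 1                       # a 1 MHz polling clock, for contrast

events = [t for t in range(SIM_TIME_US) if random.random() < EVENT_RATE_PER_US]

clocked_ops = SIM_TIME_US // CLOCK_PERIOD_US     # one wake-up per tick, busy or not
event_driven_ops = len(events)                   # one wake-up per input event

print(f"input events          : {len(events)}")
print(f"clocked wake-ups      : {clocked_ops}")
print(f"event-driven wake-ups : {event_driven_ops}")
# Doubling the clock would double the clocked wake-ups (and power) but would
# not change the event-driven count at all; the input rate is the limit.
```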
One of my friends, whom I convinced to buy BRN, was asking about AI following the recent ChatGPT publicity, so I sent the following:
You asked whether BrainChip's Akida was involved in AI, and I subsequently realized that ChatGPT has been in the news lately, not always favourably.
It is not related to ChatGPT. It could be used on the input side, but, at this stage, not on the output side generating the responses. That is all done with software using a web browser and language interpreting software.
Akida is a spiking neural network which imitates the brain's neural/synaptic processes. It does not use the same digital mathematical methods as conventional computers, which consume enormous amounts of processing power in performing object recognition. Think of an object in a field of view as having a line around its edge, i.e., where the pixel illumination changes. Akida compares that edge line with stored edge lines in a library, whereas an object recognition program on a conventional computer does a pixel-by-pixel comparison of full-frame images. Akida also does it in silicon, not in software: it is loaded with its compact object library data and does a hardware comparison, not a software comparison.
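A rough back-of-the-envelope illustration of that point (my own toy numbers, not BrainChip figures): matching an outline touches far fewer values than matching every pixel of a full frame.

```python
# Toy comparison: full-frame pixel-by-pixel matching vs comparing only the
# edge/outline pixels, where the image actually changes. Numbers are illustrative.

import numpy as np

H, W = 480, 640                     # a modest VGA frame
frame = np.zeros((H, W), dtype=np.uint8)
frame[100:300, 200:450] = 255       # one bright rectangular "object"

# Edge pixels: where a pixel differs from its right or lower neighbour.
edges = (np.abs(np.diff(frame.astype(int), axis=0)) > 0)[:, :-1] | \
        (np.abs(np.diff(frame.astype(int), axis=1)) > 0)[:-1, :]

full_frame_comparisons = H * W               # pixel-by-pixel template match
edge_comparisons = int(edges.sum())          # compare only the outline

print(f"full-frame comparisons: {full_frame_comparisons:,}")
print(f"edge-only comparisons : {edge_comparisons:,}")
print(f"ratio                 : {full_frame_comparisons / edge_comparisons:.0f}x fewer")
```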
Akida is used to classify input signals, voice, video, etc. It does not generate replies.
It can be used for autonomous driving sensors such as LiDAR, radar, video, and event cameras (shutterless cameras which detect only changes in pixel illumination), aka DVS (Dynamic Vision Sensors), to recognize the surroundings. It can also be used for in-cabin driver monitoring, speech recognition, and any application which involves sensor signal interpretation, such as vibration sensing to anticipate the need for maintenance. I suppose it could detect when a Boeing spits out a turbine blade.
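For anyone unfamiliar with event cameras, here is a tiny generic sketch (not the Prophesee or Akida pipeline) of the principle: a pixel only "fires" when its brightness changes by more than a threshold, so a mostly static scene produces almost no data.

```python
# Generic illustration of a DVS-style sensor: instead of sending whole frames,
# each pixel emits an event only when its (log) brightness changes enough.

import numpy as np

rng = np.random.default_rng(1)
H, W = 120, 160
THRESHOLD = 0.2                                   # log-intensity change needed to fire

prev = rng.uniform(0.1, 1.0, size=(H, W))         # previous frame (intensities)
curr = prev.copy()
curr[40:60, 50:90] *= 1.8                         # a small patch gets brighter

diff = np.log(curr) - np.log(prev)
on_events = np.argwhere(diff > THRESHOLD)         # pixels reporting "brighter"
off_events = np.argwhere(diff < -THRESHOLD)       # pixels reporting "darker"

print(f"pixels in a full frame : {H * W}")
print(f"events this frame      : {len(on_events) + len(off_events)}")
# Only the 20x40 changed patch fires (~800 events); the other ~18,000 pixels
# stay silent, which is where the data and power saving comes from.
```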
It is used in the Mercedes EQXX concept car, which achieved over 1,000 km on a single charge. Akida ran the in-vehicle voice recognition system, where it performed 5 to 10 times more efficiently than the other systems they had tested, in a car where every watt counts.
It can be used for autonomous drone navigation and image detection.
NASA is trialing it for navigation and Mars rovers, as well as for wireless communication via a company called Intellisense.
Similarly, ISL (Information Systems Laboratories) is using it in USAF trials.
Akida is listed by ARM as being compatible with all their processors, and it is also compatible with ARM's upstart challenger SiFive, which uses the RISC-V architecture rather than ARM's own instruction set.
It is also available to Intel Foundry Services (IFS) customers.
It is also part of a few US university computer science courses, including at Carnegie Mellon.
BrainChip's main business is licensing the design of Akida, a similar business model to ARM. This does limit the customer base to those who can afford the licence fee and the cost of incorporating the design in their product and manufacturing the chips. It also introduces a large time lag for designing, making and testing the chips.
A company called Renesas has licensed the Akida IC design and will be bringing out microprocessors capable of handling 30 frames per second (fps) later this year. Akida itself is capable of much higher frame rates (>1,000 fps), but Renesas only licensed two Akida nodes out of a possible 80.
Similarly, MegaChips is also in the process of producing chips containing the Akida design.
A second generation of Akida will also be available later this year. It will be able to determine the speed and direction of objects in hardware, rather than relying on software to process the images identified by Akida; software is much slower and more power-hungry.
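To show what "speed and direction in hardware" means in principle (this is just my own sketch of the geometry, I don't know how the second-generation silicon actually implements it): once an object has been located in two successive detections, its speed and heading drop straight out of the displacement over the time between them.

```python
# Hedged sketch of the geometry only, with made-up numbers.

import math

def speed_and_heading(c0, c1, dt_s, metres_per_px):
    """Estimate speed (m/s) and heading (degrees) from two centroids in pixels."""
    dx = (c1[0] - c0[0]) * metres_per_px
    dy = (c1[1] - c0[1]) * metres_per_px
    speed = math.hypot(dx, dy) / dt_s
    heading_deg = math.degrees(math.atan2(dy, dx))
    return speed, heading_deg

# Hypothetical numbers: the centroid moves 30 px in x and 10 px in y between
# detections 0.1 s apart, with 0.05 m per pixel at that range.
speed, heading = speed_and_heading((100, 200), (130, 210), dt_s=0.1, metres_per_px=0.05)
print(f"speed ≈ {speed:.1f} m/s (~{speed * 3.6:.0f} km/h), heading ≈ {heading:.0f}°")
```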
Sorry, missed this response. Thank you for the lesson in comprehension. I'll interpret what you said how I spelled it out last time. You implied that our involvement with Valeo and their $1 billion sale would be a company maker for us. It will be a very nice little earner but I disagree that it will be a company maker. Let's move on.
I think you may need to take some of your own advice regarding what I wrote in relation to my holding and "sale" not purchase of shares at $2.13.
Blackberry has been hinted at as being an EAP before:
Maybe they’ll be the first one to adopt the Prophesee/Akida combo?
Well done! I thought that the obvious application for a low-SWaP neuromorphic system would be to analyse patterns in network packet data, so I actually did the introductions between Rob and a senior executive at BB over email in 2021, though unfortunately, of course, I'm not privy to the outcome of those discussions. Obviously I'm also hoping that something comes to fruition.
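For what it's worth, here is a speculative toy of that packet idea (my own illustration only, not anything BB or BrainChip have shown): packet metadata is naturally event-like, each packet being an "event" with a timestamp and size, and even a trivial stand-in detector can flag a burst against the baseline rate.

```python
# Speculative sketch: treat each packet as an event and flag windows whose
# packet rate is far above the baseline. Synthetic data, trivial "detector".

from collections import Counter

# (timestamp_ms, bytes) pairs; the flood around 500 ms is the "anomaly".
packets = [(t, 200) for t in range(0, 1000, 20)] + \
          [(t, 1400) for t in range(500, 520)]          # synthetic burst

WINDOW_MS = 100
counts = Counter(t // WINDOW_MS for t, _ in packets)
baseline = sorted(counts.values())[len(counts) // 2]     # median window count

for window, n in sorted(counts.items()):
    flag = "  <-- unusual burst" if n > 3 * baseline else ""
    print(f"{window * WINDOW_MS:4d}-{(window + 1) * WINDOW_MS:4d} ms: {n:3d} packets{flag}")
```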
BrainChip should employ you to proofread all future marketing releases - not a single mention of Alida or Akido!
Well done!
That is several levels above and beyond dot joining.
Let's hope something comes of it.
Nintendo won’t release a ‘Switch 2’ before April 2024
The president of Nintendo stated that the company doesn’t have any plans to release a successor to the Switch or any other hardware before March 31, 2024. (en.as.com)
No new Nintendo Switch model until 2024.
The Nintendo Switch has been showing its age, with performance issues in newer game releases.
Hopefully the delay is due to implementing new technology… neuromorphic technology. Nintendo is keeping very tight-lipped about their new hardware. I can’t imagine a world where MegaChips hasn’t brought Akida to their biggest customer's attention.
Ahh yeees. It is a massive step change in technology, as long as it is fully adopted! And from what we can see, many big players have embraced it … and are adopting and developing it.
Absolutely @RobjHunt ......................... PANTENE
Nice and GREEN to finish off the week.
Pantene Peeps
Yes there was some strong buying “up” of stock yesterday and today. Great to see that momentum is shifting.
In fact it was a good week and I believe the ARM/BRN Collaboration/Tech Talk is just starting to take effect ..............
Followers of ARM asking, who is this BrainChip? ............... so doing some DD
............................................ "OH CRAP", ..... have you seen how many partnerships they've created?
............................................. "OH CRAP", ...... have you seen/listened to their series of podcasts?
............................................. "OH CRAP", ....... have you seen how cheap the s/p is as of now?
And I do believe that the high volume of shorts ................................. are now starting to "OH CRAP" themselves !!!!!
Watched some of the trading action today, and IMO ............ there is both accumulation and covering going on!
Some probably taking profits after getting in around the mid/high 30s ...................... Good for us, ............ .............. I mean them.
So to celebrate, ............... I'm treating myself to a "GREEN GINGER WINE" HA HA
AKIDA ( a GO GO ) BALLISTO
Solid post FF. Glad to see you are back.
Specsavers, Mr Pope, could come in handy. I wish FF had/has returned, or did I miss something, maybe? Have another red, Popey, and start praying for his return, imho. I hope all's well with FF, because we miss him heaps.