BRN Discussion Ongoing

Damo4

Regular

[GIF: rain drops water splash]
 

  • Like
Reactions: 3 users
RT is a fan of Hugging Face 🤗, and Edge Impulse is excited and has commented on this as well. Looks like it's another software technology advancement that will help with the implementation of our transformers, due to be released soon:




🤗 We just released Transformers' boldest feature: Transformers Agents.

This removes the barrier of entry to machine learning: it offers more than 100,000 HF models to be controlled with natural language.

This is a truly fully multimodal agent: text, images, video, audio, docs, and way more to come.

Read the documentation here: https://lnkd.in/eAmxV_zi

What is it?

🗣️ Create an agent using LLMs (OpenAssistant, StarCoder, OpenAI ...) and start talking to transformers and diffusers through curated tools.

How does it work?

It's straightforward prompt-building:

• Tell the agent what it aims to do
• Give it tools
• Show examples
• Give it a task

The agent uses chain-of-thought reasoning to identify its task and outputs Python code using the tools. It comes with a myriad of built-in tools, such as document, text and image QA, speech-to-text and text-to-speech, text classification, summarization, translation, image editing tools, text-to-video...
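Here's a minimal sketch of what driving an agent looks like in practice (the HfAgent class, the StarCoder endpoint URL and the run() call are taken from the documentation linked above as assumptions, so treat it as illustrative rather than verified):

```python
# Minimal sketch of Transformers Agents, based on the linked documentation.
# HfAgent, the endpoint URL and run()'s keyword handling are assumptions here.
from transformers import HfAgent

# Build an agent backed by an open LLM (StarCoder) via the HF inference API.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

text = (
    "Akida is a digital neuromorphic processor aimed at low-power edge "
    "inference across vision, audio and other sensor workloads."
)

# The agent turns the natural-language task into a prompt, picks from its
# curated tools (summarization here), generates Python code calling them,
# and executes it to return the result.
summary = agent.run("Summarize the following `text`.", text=text)
print(summary)
```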

🪄 But it is EXTENSIBLE by design.

Tools are elementary: a name, a description, a function. Designing a tool and pushing it to the Hub can be done in a few lines of code.
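And here is roughly how small a custom tool can be (the Tool base class and the name/description/__call__ convention are assumed from the docs above, so again an illustrative sketch rather than gospel):

```python
# Illustrative sketch of a custom tool, assuming the Tool base class and the
# name / description / __call__ convention described in the docs above.
from transformers import Tool


class WordCountTool(Tool):
    # The name and description are what the agent's LLM reads when deciding
    # which tool to call for a given task.
    name = "word_counter"
    description = "Counts the number of words in a piece of text."

    def __call__(self, text: str) -> int:
        return len(text.split())


tool = WordCountTool()
print(tool("how many words are in this sentence"))  # -> 7

# Per the announcement, a tool like this can then be pushed to the Hub and
# handed to an agent alongside the built-in toolkit.
```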

The toolkit of the agent serves as a base: extend it with your tools, or with other community-contributed tools:

Please play with it, add your tools, and let's create super-powerful agents together.

Here's a notebook to get started: https://lnkd.in/eYsh9eqG

😀
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Diogenese

Top 20
I'm so bloody slow sometimes; only now has it hit me that neuromorphic computing in a way solves the problem of scaling down the process node, by enabling more performance in different ways:
1) Now that we have such low power consumption and dissipated heat, it must be possible to run them more aggressively.
2) As I see it, neuromorphic computing lends itself perfectly to expanding the amount of silicon used, like connecting multiple chiplets to support bigger models and/or running multiple models that make use of each other. They can even be stacked so as not to take up any significant space.
3) It's a young technology that is already beating the old technology and has a long runway of innovation ahead of it, like the jump from Akida 1 to 2. I bet there's a vast space of possibilities to be explored, like hardware support for n-dimensional models.

While Nvidia has hit the brick wall and others are struggling with Moore's law, we have just got started and are seemingly already way ahead of their Jetson.

So, now I think neuromorphic computing is going to be indispensable for future performance gains, and it might branch out like the three suggestions above, and combinations/more branches may appear.
Hi Frederick,

1. Akida is asynchronous, responding to input events, so I don't think it can be overclocked. However, producing it at, say, 7 nm will increase speed and reduce power.

2. Because Akida is so low power that it does not need cooling, it would be an ideal buried layer in a stacked chip arrangement. One potential application would be the Sony/Prophesee image sensor, which uses Sony's 3D technology. I imagine the Prophesee DVS is also low power as it only fires on events. Also, some IR night-vision systems need to be cooled, so Akida would be a good fit there as it would reduce cooling requirements. Akida is designed for multiple-chip applications.

One of my friends, whom I convinced to buy BRN, was asking about AI following the recent ChatGPT publicity, so I sent the following:

You asked whether BrainChip's Akida was involved in AI, and I subsequently realized that ChatGPT has been in the news lately, not always favourably.

It is not related to ChatGPT. It could be used on the input side but, at this stage, not on the output side generating the responses. That is all done in software, via a web browser and language-interpreting software.

Akida is a spiking neural network which imitates the brain's neural/synaptic processes. It does not use the same digital mathematical methods as conventional computers, which consume enormous amounts of processing power in performing object recognition. Think of objects in a field of view as having a line around the edge, i.e., where the pixel illumination changes. Akida compares that edge line with stored edge lines in a library of edge lines, whereas an object-recognition program in a conventional computer does a pixel-by-pixel comparison of full-frame images. Akida also does it in silicon, not in software: it is loaded with its compact object library data and does a hardware comparison, not a software comparison.
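To put rough numbers on that difference, here is a toy illustration (this is not how Akida is implemented in silicon, just a sketch of why matching a sparse edge outline is so much cheaper than comparing every pixel of every frame):

```python
# Toy illustration only: NOT Akida's actual method, just the operation-count
# argument for edge/outline matching versus full-frame pixel comparison.
H, W = 480, 640                  # a modest VGA frame

# Conventional full-frame matching: one comparison per pixel per stored image.
full_frame_ops = H * W           # 307,200 comparisons

# Edge/outline matching: only the pixels where illumination changes (the
# object's edge) carry information, so only those are matched against the
# stored outline "library".
edge_pixels = 2 * (H + W)        # ballpark length of one object's outline
edge_ops = edge_pixels           # 2,240 comparisons

print(f"full frame: {full_frame_ops} ops, edge only: {edge_ops} ops, "
      f"~{full_frame_ops // edge_ops}x fewer")
```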

Akida is used to classify input signals, voice, video, etc. It does not generate replies.

It can be used for autonomous driving sensors such as LiDAR, radar, video, and event cameras (shutterless cameras which detect only changes in pixel illumination), aka DVS (dynamic vision sensors), to recognize the surroundings. It can also be used for in-cabin driver monitoring, speech recognition, and any application which involves sensor signal interpretation, such as vibration sensing to anticipate the need for maintenance. I suppose it could detect when a Boeing spits a turbine blade out.

It is used in the Mercedes EQXX concept car (the one that achieved 1,000 km on a single charge) in the in-vehicle voice recognition system, where it performed 5 to 10 times more efficiently than the other systems they had tested, in a vehicle where every watt counts.

It can be used for autonomous drone navigation and image detection.

NASA is trialling it for navigation and for Mars rovers, as well as for wireless communication via a company called Intellisense.

Similarly, ISL (Information Systems Laboratories) is using it in USAF trials.

Akida is listed by ARM as being compatible with all their processors, and it is also compatible with ARM's upstart challenger SiFive, whose processors use the open RISC-V architecture rather than ARM's own proprietary RISC instruction set.

It is also available to Intel Foundry Services (IFS) customers.

It is also part of a few US university computer courses including Carnegie Mellon.

BrainChip's main business is licensing the design of Akida, a similar business model to ARM. This does limit the customer base to those who can afford the licence fee and the cost of incorporating the design in their product and manufacturing the chips. It also introduces a large time lag for designing, making and testing the chips.

A company called Renesas has licensed the Akida IC design and will be bringing out microprocessors capable of handling 30 frames per second (fps) later this year. Akida is capable of much higher frame rates (>1,000 fps), but Renesas only licensed two Akida nodes out of a possible 80.

Similarly, MegaChips is also in the process of producing chips containing the Akida design.

A second generation of Akida will also be available later this year. This will have the capability to determine the speed and direction of objects in hardware, rather than relying on software to process the images identified by Akida. Software is much slower and more power hungry.
 
  • Like
  • Love
  • Fire
Reactions: 116 users

Dougie54

Regular
Well every cloud ...

Unemployment in Russia is down 30%, overcrowding in jails has been reduced ...
Unemployment in Russia and overcrowding in jails being reduced would surely be a direct result of Putin sending every able body to Ukraine to fight!!!
 
  • Like
  • Fire
  • Sad
Reactions: 8 users

Damo4

Regular
[Quoting Diogenese's "Hi Frederick" reply above.]

Great post Dio!

It's also important to zoom out and think about who wants Akida and why.
They aren't chasing extra FLOPS (or equivalent) per watt; they are trying to reduce watts per operation.
The supercomputers and Nordic self-cooling warehouses can handle the big stuff, but only low-power, edge neuromorphic technology will solve the use cases you mention above.

Great post again, it's easy to lose track of all the connections and reasons why Akida excels.
 
  • Like
  • Fire
  • Love
Reactions: 31 users
Blackberry has been hinted as being an EAP before:

[Attached screenshot.]



Maybe they’ll be the first one to adopt the Prophesee/Akida combo?

😀
 
  • Like
  • Fire
Reactions: 9 users

Draed

Regular
Bottomed out at 40c with low buying volumes... I think the shorters will try to close soon, or are starting to... I'm hoping they start feeling some panic... maybe something coming towards the AGM?
 
  • Like
  • Thinking
Reactions: 10 users

SERA2g

Founding Member
Thank you for the lesson in comprehension. I'll interpret what you said how I spelled it out last time. You implied that our involvement with Valeo and their $1 billion sale would be a company maker for us. It will be a very nice little earner but I disagree that it will be a company maker. Let's move on.

I think you may need to take some of your own advice regarding what I wrote in relation to my holding and "sale" not purchase of shares at $2.13. :cool:
Sorry, missed this response.

My bad on not reading your post correctly. Well played on the $2.13 sale!

For the rest of it, let's agree to disagree :)
 
  • Like
  • Fire
Reactions: 12 users
[Quoting the BlackBerry EAP post above.]

I thought that the obvious application for a low-SWaP neuromorphic system would be to analyse patterns in network packet data, so I actually did the introductions between Rob and a senior executive at BB (whom I cold-called) over email in 2021, though unfortunately, of course, I'm not privy to the outcome of those discussions. Obviously I'm also hoping that something comes to fruition.

Edit: Rob liking a Blackberry executive's post in light of the above... :unsure:
 
  • Like
  • Fire
  • Wow
Reactions: 38 users

Diogenese

Top 20
[Quoting the post above about introducing Rob to a senior BlackBerry executive.]
Well done!

That is several levels above and beyond dot joining.

Let's hope something comes of it.
 
  • Like
  • Fire
Reactions: 24 users
[Quoting Diogenese's "Hi Frederick" reply above.]
Brainchip should employ you to proofread all future marketing releases - not a single mention of Alida or Akido!
 
  • Haha
  • Like
Reactions: 17 users

No new Nintendo Switch model until 2024.
The Nintendo Switch has been showing its age, with performance issues on newer game releases.

Hopefully the delay is due to implementing new technology… neuromorphic technology. Nintendo is keeping very tight-lipped about their new hardware. I can't imagine a world where MegaChips hasn't brought Akida to their biggest customer's attention.
 
  • Like
  • Love
  • Fire
Reactions: 37 users

alwaysgreen

Top 20

[Quoting the Nintendo Switch post above.]

Agree. Nintendo is known to think outside the box with their hardware. This has me hopeful and excited.
 
  • Like
Reactions: 14 users
[Quoting Frederick's post above about neuromorphic computing and process nodes.]
Ahh yeees 😁. It is a massive step change in technology, as long as it is fully adopted! And from what we can see, many big players have embraced it… and are adopting and developing it.

In very basic terms, a similar step change in the "ICT world" is what fibre-optic cable adoption meant compared with traditional copper cabling.

Fibre-optic cable was much, much faster, carried more bandwidth, was more physically compact relative to that bandwidth, and even produced less heat when operating in a confined bundle. Think about it… that was a massive step change in transmission cable.

Akida is that - a technical step change, but in processing. (Yes, that is a very simplified statement.)

What Akida also looks like it can offer is mass affordability compared with traditional Moore's-law products. We wait and see, but yeah, freaking exciting stuff if Akida is adopted en masse! All signs point to mass adoption here, looking at the partnerships in play… now we wait for the mass-market products…
 
  • Like
  • Love
  • Fire
Reactions: 38 users

RobjHunt

Regular
Nice and GREEN to finish off the week.

Pantene Peeps ;)
 
  • Like
  • Love
  • Fire
Reactions: 29 users

mrgds

Regular
Nice and GREEN to finish off the week.

Pantene Peeps ;)
Absolutely @RobjHunt ......................... PANTENE
In fact it was a good week, and I believe the ARM/BRN Collaboration/Tech Talk is just starting to take effect ..............;)

Followers of ARM asking, who is this Brainchip? ............... so doing some DD
............................................ "OH CRAP", .....have you seen how many partnerships they've created?
............................................. "OH CRAP" , ...... have you seen/listened to their series of podcasts?
............................................. "OH CRAP" , .......have you seen how cheap the s/p is as of now?

And I do believe that the high volume of shorts ................................. are now starting to "OH CRAP " themselves !!!!! :ROFLMAO:

Watched some of the trading action today, and IMO ............There is both accumulation and covering going on !
Some probably taking profits after getting in around the mid/high 30s ...................... Good for us, ............:censored: .............. i mean them. ;)

So to celebrate, ...............
I'm treating myself to a "GREEN GINGER WINE" 🤏 HA HA

AKIDA ( a GO GO ) BALLISTO
 
  • Like
  • Love
  • Fire
Reactions: 39 users
[Quoting mrgds' post above.]
Yes, there was some strong buying up of stock yesterday and today. Great to see that the momentum is shifting.

So strong was the buying today that I observed whole SP levels being bought out within seconds at some points, and I am talking 300k to 400k worth of shares bought in a single chunk. That is instos or shorters covering, and either way it's great, as it's a sign that they know the price is cheap and that it's not staying here for very much longer.
 
  • Like
  • Love
  • Fire
Reactions: 29 users

The Pope

Regular
[Quoting Diogenese's "Hi Frederick" reply above.]
Solid post FF. Glad to see you are back. 😊
 
  • Haha
  • Like
  • Love
Reactions: 21 users

hotty4040

Regular
Solid post FF. Glad to see you are back. 😊
Specsavers, Mr Pope, could come in handy - I wish FF had returned, or did I miss something, maybe? Have another red, Popey, and start praying for his return, imho. I hope all's well with FF, because we miss him heaps.

Anyway, back to the tiges/cats game which miraculously is favoring the tiges atm.

Good week, chippers, so please may it continue next week. Some nice reveals over the last few days. Everybody "hang in there".

Akida Ballista is gaining momentum gradually.

hotty...
 
  • Like
  • Fire
  • Haha
Reactions: 16 users