BRN Discussion Ongoing

stuart888

Regular
This guy is a 20-year programmer, a hard-core deep learning engineer. Sharp guy, good videos.

He says he now uses ChatGPT to do 80% of his coding. That is disruptive. He supervises the code and learns from it.

Thought it was interesting. Plus he summarizes ChatGPT and Transformer neural networks, "Attention Is All You Need", the famous 2017 Google paper that introduced Transformers.

I feel Brainchip is going to win in TinyML: you don't need big data to train edge AI on industrial patterns, and it runs at ultra-low power.

https://hub.packtpub.com/paper-in-two-minutes-attention-is-all-you-need/

 
  • Like
  • Fire
  • Wow
Reactions: 24 users

stuart888

Regular
Australian Space TV, he is a neural spiker! The first published (non-secret) SNN application in space, which I believe started here.

Good video, yeah Australia! :coffee::coffee::coffee:

Same thing: you don't need big data to train SNN edge AI on industrial patterns, at ultra-low power.
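For anyone newer to the spiking side, here is what "spiking" actually means at the neuron level. This is a minimal, generic leaky integrate-and-fire sketch in Python, not Akida's actual neuron model, and all the constants are made-up illustration values:

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward rest,
    integrates the input current, and fires a spike when it crosses threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v += (dt / tau) * (-v + i)  # leak toward 0 while integrating input
        if v >= v_thresh:
            spikes.append(1)        # spike event
            v = v_reset             # reset the membrane after firing
        else:
            spikes.append(0)
    return spikes

# A constant drive yields a sparse, regular spike train: events, not frames.
train = lif_neuron([1.5] * 100)
print(sum(train), "spikes in 100 steps")
```

The point for edge AI: compute only happens when spikes happen, which is where the ultra-low power comes from.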

1676240209103.png


 
  • Like
  • Fire
Reactions: 10 users

stuart888

Regular
Really glad we have Transformers high on the Brainchip radar. This stuff is too deep for me; I really don't need to know all the fancy math equations.

The transformer math takes time to unpack. Forget about that part.

When dealing with text and speech: human language has lots of double-meaning words. In one part of a sentence a word means one thing, yet the same word, with the same spelling, can mean something different later in the same paragraph.

Attention solves that, so ChatGPT is like a teenager that can be further trained for specific industries, say to automatically produce the legal paperwork required for signing when buying a house.
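For the curious, the core of the paper is one formula, scaled dot-product attention: every word builds its representation from a weighted mix of all the other words, which is how the same spelling ends up meaning different things in different contexts. A minimal numpy sketch, with toy sizes I made up for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    the formula from "Attention Is All You Need" (2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each word attends to each other word
    weights = softmax(scores)        # each row sums to 1
    return weights @ V               # context-dependent mix of the value vectors

# Toy example: a "sentence" of 4 words, embedding size 8 (arbitrary sizes).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8): one contextual vector per word
```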

I would skip the deep math; just know "Attention Is All You Need".

 
  • Like
Reactions: 8 users

stuart888

Regular
Explaining all the various AI Accelerators

This particular video is good for the non-spiking engineer, like me. He gives an exceptional overview of AI progress at the chip level.

This High Yield guy knows hardware, SoCs, boards, etc.



1676241681864.png
 
  • Like
Reactions: 5 users

TheFunkMachine

seeds have the potential to become trees.
Hi D,

Thanks, I must have missed that SBIR. That explains why there's been talk about not needing a processor.
The timing of the SBIR and the announcement of our 22nm AKD1500 GF reference chip aligns very nicely.


For anyone else that missed it:

Release Date:
January 10, 2023
Open Date:
January 10, 2023
Application Due Date:
March 13, 2023
Close Date:
March 13, 2023 (closing in 29 days)

The preference is for a prototype processor fabricated in a technology node suitable for the space environment, such as 22-nm FDSOI, which has become increasingly affordable.


Neuromorphic and deep neural net software for point applications has become widespread and is state of the art. Integrated solutions that achieve space-relevant mission capabilities with high throughput and energy efficiency is a critical gap. For example, terrestrial neuromorphic processors such as Intel Corporation's Loihi™, Brainchip's Akida™, and Google Inc.'s Tensor Processing Unit (TPU™) require full host processors for integration for their software development kit (SDK) that are power hungry or limit throughput. This by itself is inhibiting the use of neuromorphic processors for low SWaP space missions.
“For example, terrestrial neuromorphic processors such as Intel Corporation's Loihi™, Brainchip's Akida™, and Google Inc.'s Tensor Processing Unit (TPU™) require full host processors for integration for their software development kit (SDK) that are power hungry or limit throughput. This by itself is inhibiting the use of neuromorphic processors for low SWaP space missions.”

Am I reading this wrong? Are they saying that Akida needs a full host processor for integration for their SDK? I thought one of the main selling points of Akida is that it doesn’t need a host processor and can do all the computation alone?
 
  • Like
  • Thinking
Reactions: 2 users

Dhm

Regular
Australian Space TV, he is a neural spiker! The first published (non-secret) SNN application in space, which I believe started here.

Good video, yeah Australia! :coffee::coffee::coffee:

Same thing: you don't need big data to train SNN edge AI on industrial patterns, at ultra-low power.

View attachment 29376


I wrote to Greg following publicity surrounding Jarli in August last year. At the time they weren’t using Akida. The question now is whether they have had a chance to explore and implement Akida in their most recent projects.

8289B464-3289-4ACC-8C98-A2DA320BBEBE.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 37 users

stuart888

Regular
Thanks for the detail @Dhm! He really just highlighted everything Brainchip is going after. Neuromorphic event-based cameras and spiking smarts blow away the old fps, send-me-all-the-pixels approach. They are hiring for all things neuromorphic.

It just cements the overall Brainchip thesis. They are in the perfect position to win. Mr Chapman said it well earlier: too many dots.

Love hearing it from a person like this, preaching the Brainchip philosophy/focus.

1676242884391.png
 
  • Like
  • Fire
Reactions: 10 users

Calsco

Regular
Again we are just sitting at resistance. Here’s hoping for some positive news this week, as the share price is ready to bounce. We really are just sitting on the launch pad; we just need a spark to start the engines!
AC95F7F9-8AE6-4FCA-8FBA-CBB5850DD352.jpeg
 
  • Like
  • Fire
  • Haha
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
“For example, terrestrial neuromorphic processors such as Intel Corporation's Loihi™, Brainchip's Akida™, and Google Inc.'s Tensor Processing Unit (TPU™) require full host processors for integration for their software development kit (SDK) that are power hungry or limit throughput. This by itself is inhibiting the use of neuromorphic processors for low SWaP space missions.”

Am I reading this wrong? Are they saying that Akida needs a full host processor for integration for their SDK? I thought one of the main selling points of Akida is that it doesn’t need a host processor and can do all the computation alone?

Hi Funky, what about SiFive + BrainChip sitting in a tree, K I S S I N G?😚


Extract 1
14.png




Extract 2
15,.png
 
  • Like
  • Love
  • Fire
Reactions: 31 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Wow
  • Like
  • Love
Reactions: 21 users

stuart888

Regular
Hi Funky, what about SiFive + BrainChip sitting in a tree, K I S S I N G?😚


Extract 1
View attachment 29384



Extract 2
View attachment 29385
Sitting in a tree, watching the Super Bowl. Babyface, the old music artist, just did a bit live. Fantastic.

This is the old hit.

Personally: Brainchip is a super bet. No stock is a sure thing this early on, when you don't yet know. I am fine with all things Brainchip.

 
  • Like
Reactions: 5 users

jk6199

Regular
How about some of the huge transactions on the market so far?

Is that distant drums I hear?
 
  • Like
  • Fire
  • Love
Reactions: 6 users

stuart888

Regular
While I enjoy the neuromorphic love, these videos with computer-generated text and voice, seemingly straight out of ChatGPT, are a bit poor.

Perhaps over time, a person could come to trust certain sources of video content.

I am kind of turned off by stuff that seems artificial. This will be a challenge: proving accuracy will be important. A new benchmark: truth, or 97% truth.

AI in cars to save lives is a no-brainer. It can outperform us, and that will be proven long before Level 4. On accidents per mile driven, AI is already way better than humans.

 
  • Like
  • Wow
Reactions: 7 users

Deadpool

hyper-efficient Ai
  • Haha
  • Like
  • Love
Reactions: 10 users

jtardif999

Regular
“For example, terrestrial neuromorphic processors such as Intel Corporation's Loihi™, Brainchip's Akida™, and Google Inc.'s Tensor Processing Unit (TPU™) require full host processors for integration for their software development kit (SDK) that are power hungry or limit throughput. This by itself is inhibiting the use of neuromorphic processors for low SWaP space missions.”

Am I reading this wrong? Are they saying that Akida needs a full host processor for integration for their SDK? I thought one of the main selling points of Akida is that it doesn’t need a host processor and can do all the computation alone?
Yeah, they appear to have lumped all the chips listed into the same basket. Akida only needs an external processor to set up the initial conditions, i.e. the trained weights and other configuration. Then all processing is done within the Akida neural fabric.
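For what it's worth, that matches the publicly documented MetaTF flow: the host does one-time setup (load the trained model, map it onto the device), then inference runs in the fabric. A rough sketch from memory of BrainChip's akida Python package; the model file name is hypothetical and the exact calls may differ from the current API:

```python
import akida
import numpy as np

# One-time host-side setup: load the trained weights/configuration
# and map the model onto the attached Akida hardware.
model = akida.Model("my_network.fbz")  # hypothetical pre-trained model file
device = akida.devices()[0]            # first Akida device found on this host
model.map(device)

# After mapping, the computation itself runs in the Akida neural fabric;
# the host just feeds inputs and reads results.
frame = np.zeros((1, 224, 224, 3), dtype=np.uint8)  # dummy uint8 input frame
outputs = model.predict(frame)
print(outputs.shape)
```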
 
  • Like
  • Fire
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
FEBRUARY 2, 2023

Next-gen Siri: The future of personal assistant AI

What advancements and features could make Siri a more powerful personal assistant in the future?

hey-siri-banner-apple.jpg

With the rapid advancements in artificial intelligence, it’s no surprise that many users are looking forward to what the next generation of the Siri personal assistant will have to offer. From improved emotional recognition to autonomous management, the possibilities are endless. But what exactly are people looking for in the next-gen Siri?
One of the most requested features is improved contextual language capabilities. At the moment, a lack of such capabilities can make it difficult for some users to have a smooth conversation with their assistants. By incorporating more advanced voice recognition technologies, Siri could better understand contexts and intentions.
Another highly requested feature is the ability to multitask. Currently, Siri can only handle one task at a time, which can be frustrating for users who want to accomplish multiple things at once. The incorporation of multitasking could enable the assistant to handle complex requests simultaneously, thereby improving efficiency.

Contextual language

Many users are asking for Siri to have improved natural language processing capabilities. This would allow for more seamless conversations with the AI, as it would be able to understand more complex and nuanced requests. This would also make it easier for users to ask for specific information, as Siri would be able to understand more context.
apple-siri.jpg

Machine learning (ML)

Users have expressed a wish for Siri to become more proactive. With the increasing popularity of smart home devices, users also request that Siri integrates with daily habits and routines, improving awareness of the different spaces and rooms using machine learning (ML).
This could include autonomous actions like sending reminders, providing updates, asking for deliveries in perfect timing, optimizing e-vehicle charging, watering gardens only when needed, and even making suggestions based on the user’s behavior, or external data like weather forecasts and traffic conditions. This would make Siri a more helpful personal assistant that could anticipate needs, making the home itself more proactive.
sirikit-og.png

Top breakthroughs in AI: what to expect

  • Natural Language Processing (NLP): the ability to understand and interpret human language, allowing for more accurate and natural dialogue;
  • Emotion detection: the ability to detect and respond to human emotions, allowing for more personalized and empathetic interactions;
  • Machine learning (ML): a method of teaching AI through data and experience, allowing it to remember, adapt, and improve over time;
  • Contextual understanding: the ability to understand and respond to the context of a conversation or request, providing more accurate and relevant results, answers, and actions;
  • Explainable AI: the ability to analyze complex data and scenarios, providing clear explanations and the best options for decision-making processes, increasing transparency and trust;
  • Autonomous awareness: the ability to connect and control multiple devices directly, creating a seamless awareness environment;
  • Predictive analytics: in the future, Siri will be able to analyze data and predict future events, allowing for proactive problem-solving over the “Internet of Things” (IoT) without human interference;
  • Computer vision: the ability to interpret and understand visual data, such as images or video, to improve image recognition and object detection, acting accordingly;
  • Autonomous services: the integration with robotics, or automated systems (drone delivery, lawn mowing, vacuum cleaning, pool maintenance, etc) and third-party services to improve the home’s efficiency.
18xe2HIAAYRNNfaVa2PFVWw.png

The next generation of Siri has the potential to revolutionize the way we interact with AI, with advancements in integration capabilities. Siri could definitely become part of the family.
Stay tuned to AppleMagazine for more updates in relation to the latest advancements in personal assistants and artificial intelligence.

 
  • Like
  • Fire
  • Love
Reactions: 21 users

alwaysgreen

Top 20
FEBRUARY 2, 2023

Next-gen Siri: The future of personal assistant AI
(full article quoted above)

Sounds promising!

I'll switch from Google to Apple if they implement Akida. Unless Google decides they need us too.
 
  • Like
  • Haha
Reactions: 17 users

stuart888

Regular
Deep stuff from the Georgia Tech School of Engineering. In the USA, Georgia Tech is top of the class.
Brainchip = Biologically Plausible Intelligence! I think so!

Fancy math and lots of work, but at the bottom: Few-Shot Learning of Simple Classification Tasks.

AKD1000/500/1500 should win here. AKD2000+ in the stuff that needs Attention.
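Few-shot classification can be as simple as a nearest-prototype rule: average a handful of labelled examples per class, then assign new samples to the closest average. A generic sketch of that idea in Python, not the paper's method or Akida's on-chip learning, with made-up feature sizes:

```python
import numpy as np

def fit_prototypes(features, labels):
    """One prototype per class: the mean of its few labelled examples."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(prototypes, x):
    """Assign x to the class whose prototype is nearest (Euclidean distance)."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# Five shots per class on made-up 16-dimensional feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (5, 16)), rng.normal(3, 1, (5, 16))])
y = np.array([0] * 5 + [1] * 5)
protos = fit_prototypes(X, y)
print(predict(protos, rng.normal(3, 1, 16)))  # expect class 1
```

No big dataset, no gradient descent: a few examples per class is enough for simple tasks, which is the TinyML angle.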

1676247715272.png
 
  • Like
  • Fire
Reactions: 4 users

Dozzaman1977

Regular
FEBRUARY 2, 2023

Next-gen Siri: The future of personal assistant AI
(full article quoted above)

You would hope that BRN management have been banging down their door, showing them what Akida could do to improve Siri.
I guess we will never know whether management have been proactive in this respect, due to the NDAs.
 
  • Like
  • Fire
  • Love
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Sounds promising!

I'll switch from Google to Apple if they implement Akida. Unless Google decides they need us too.

Agreed AG, especially when you think about how today's natural language processing models consume massive amounts of energy. Among the many benefits of neuromorphic computing is its ability to perform complex computations with far less energy, which is perfect for embedded systems where real-time processing is required for features like speech recognition. What's the point of sophisticated NLP in a mobile phone if it drains your battery after an hour of use?
 
  • Like
  • Fire
  • Love
Reactions: 19 users