BRN Discussion Ongoing

stuart888

Regular
Wow.

It's a must-watch video, very informative. Fantastic to have Nandan as CMO.

It's great to be a shareholder 🏖
Only when needed is the key! Fantastic video, thanks a bunch Learning.

Energy-efficient SNN spiking smarts!

 
  • Like
  • Fire
Reactions: 21 users

Diogenese

Top 20
Will Renesas do an Oliver Twist?

“We see an increasing demand for real-time, on-device, intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”

... even better than DRP-AI.
 
  • Like
  • Love
  • Fire
Reactions: 66 users

Evermont

Stealth Mode
Will Renesas do an Oliver Twist?

... even better than DRP-AI.

Wouldn't that be a nice message to the market.
 
  • Like
  • Fire
  • Love
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Interesting write up


I don't think I've ever seen this map before.

View attachment 31498

Yes @MadMayHam, and I thought it was very interesting that SiFive specify that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors.




 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 33 users

Diogenese

Top 20
Thanks for the wake-up call @Rocket577 but unfortunately I slept like a log through the Cerence Conference and they haven't put a webcast or transcript of it up on their website yet. But don't worry, I'll be keeping my eyes peeled for it.

While I'm at it, I thought I might use this opportunity to remind everyone why I'm completely obsessed with Cerence and why I'm 99.999999999999999999999999999999999999999999999999999% convinced that we'll be incorporated in the "Cerence Immersive Companion" due in FY23/24. Aside from the other zillion odd posts I've managed to devote to Cerence, of which this one is a pretty good example #43,639, here is yet another post to add to the pile.

For some context, Nils Schanz is the Chief Product Officer at Cerence. But prior to joining Cerence he was at Mercedes, and it was Nils who was responsible for user interaction and voice control on the Vision EQXX (the system that incorporated BrainChip's technology to make wake word detection 5-10 times faster than conventional voice control systems).

Check out this LinkedIn post from Nils when he was at Mercedes. It says "this is a demo to show the performance of our voice assistant in the #EQS: no Wake-up word needed to start a conversation & plenty of use-cases in less than 45 seconds". You can click the link below to watch the demo. But you can also see that there is a comment from Holger Quast (Product Strategy and Innovation at Cerence).

The other attachment is a screenshot of a testimonial from Daimler on Cerence's website.

As I say, just add this post to the list until we get proof irrefutable, which won't be too far away IMO.

View attachment 31550

View attachment 31551





Hi @Bravo ,

Here are a couple of Cerence patent applications:

US2022415318A1 VOICE ASSISTANT ACTIVATION SYSTEM WITH CONTEXT DETERMINATION BASED ON MULTIMODAL DATA


A vehicle system for classifying spoken utterance within a vehicle cabin as one of system-directed and non-system directed may include at least one microphone to detect at least one acoustic utterance from at least one occupant of the vehicle, at least one camera to detect occupant data indicative of occupant behavior within the vehicle corresponding to the acoustic utterance, and a processor programmed to receive the acoustic utterance, receive the occupant data, determine whether the occupant data is indicative of a vehicle feature, classify the acoustic utterance as a system-directed utterance in response to the occupant data being indicative of a vehicle feature, and process the acoustic utterance.
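In plain English, the claimed decision flow boils down to: listen, look at what the occupant is doing, and only treat the utterance as a command when the camera data points at a vehicle feature. Here's a minimal Python sketch of that logic, purely my own paraphrase of the claim language; the function and field names are invented for illustration, not from the patent or from Cerence:

```python
from dataclasses import dataclass
from typing import Optional

# Things the claim calls "vehicle features" - an illustrative list only
VEHICLE_FEATURES = {"sunroof", "window_switch", "infotainment", "climate_control"}

@dataclass
class OccupantData:
    """Stand-in for the camera-derived occupant behaviour data."""
    gaze_target: str                 # e.g. "sunroof" or "passenger"
    gesture_target: Optional[str]    # e.g. "window_switch", or None

def is_system_directed(occupant: OccupantData) -> bool:
    """Claim step: is the occupant data indicative of a vehicle feature?"""
    return (occupant.gaze_target in VEHICLE_FEATURES
            or occupant.gesture_target in VEHICLE_FEATURES)

def handle_utterance(audio: bytes, occupant: OccupantData) -> str:
    """Classify the utterance, and only process it when system-directed."""
    if is_system_directed(occupant):
        return "system-directed: pass to ASR/NLU"   # placeholder for the speech pipeline
    return "non-system-directed: ignore"

# Driver speaking while looking at the sunroof -> treated as a command
print(handle_utterance(b"open it", OccupantData(gaze_target="sunroof", gesture_target=None)))
```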



WO2020142717A1 METHODS AND SYSTEMS FOR INCREASING AUTONOMOUS VEHICLE SAFETY AND FLEXIBILITY USING VOICE INTERACTION




The specifications seem oblivious of SNNs.
 
  • Like
  • Sad
  • Fire
Reactions: 10 users
Has somebody already posted the NVISO newsletter? Just noticed an email that was received about 10 hrs ago.
Edit... Just seen @Tothemoon24's post
 
Last edited:
  • Like
Reactions: 3 users
Some bigger buying in the market has just started again. Buyers now double the sellers! Some line wiping just occurred at $0.55.

The sneaky mass accumulation is continuing as they know BRN is going to fly and these are giveaway share prices.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
SAY WHAT?????


Tenstorrent talking about SNNs in an article dated 12 May 2020!!!

Remember that we're in like Flynn with the SiFive Intelligence X280, seeing that they have just specified that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors. And Tenstorrent have licensed the SiFive Intelligence X280 as a platform for its Tensix NPU.









Tenstorrent Is Changing the Way We Think About AI Chips

GPUs and CPUs are reaching their limits as far as AI is concerned. That’s why Tenstorrent is creating something different.
Chris Wiltz | May 12, 2020



GPUs and CPUs are not going to be enough to ensure a stable future for artificial intelligence. “GPUs are essentially at the end of their evolutionary curve,” Ljubisa Bajic, CEO of AI chip startup Tenstorrent told Design News. “[GPUs] have done a great job; they’ve pushed the field to the point where it is now. But in order to make any kind of order of magnitude type jumps GPUs are going to have to go.”
Tenstorrent's Grayskull processor is capable of operating at up to 368 TOPS with an architecture much different than any CPU or GPU (Image source: Tenstorrent)

Bajic knows quite a bit about GPU technology. He spent some time at Nvidia, the house that GPUs built, working as senior architect. He’s also spent a few years working as an IC designer and architect at AMD. While he doesn’t think companies like Nvidia are going away any time soon, he thinks it’s only a matter of time before the company releases an AI chip product that is not a GPU.

But an entire ecosystem of AI chip startups is already heading in that direction. Engineers and developers are looking at new, novel chip architectures capable of handling the unique demands of AI and its related technologies – both in data centers and the edge.
Bajic is the founder of one such company – Toronto-based Tenstorrent, which was founded in 2016 and emerged from stealth earlier this year. Tenstorrent’s goal is both simple and largely ambitious – creating chip hardware for AI capable of delivering the best all around performance in both the data center and the edge. The company has created its own proprietary processor core called the Tensix, which contains a high utilization packet processor, a programmable SIMD, a dense math computational block, along with five single-issue RISC cores. By combining Tensix cores into an array using a network on a chip (NoC) Tenstorrent says it can create high-powered chips that can handle both inference and training and scale from small embedded devices all the way up to large data center deployments.

The company’s first product Grayskull (yes, that is a He-Man reference) is a processor targeted at inference tasks. According to company specs, Grayskull is capable of operating at up to 368 tera operations per second (TOPS). To put that into perspective as far as what Grayskull could be capable of, consider Qualcomm’s AI Engine used in its latest SoCs such as the Snapdragon 865. The Qualcomm engine offers up to 15 TOPS of performance for various mobile applications. A single Grayskull processor is capable of handling the volume of calculations of about two dozen of the chips found in the highest-end smartphones on the market today.
The Grayskull PCIe card (Image source: Tenstorrent)

Nature Versus Neural
If you want to design a chip that mimics cognition then taking cues from the human brain is the obvious way to go. Whereas AI draws a clear functional distinction between training (learning a task) and inference (implementing or acting on what’s been learned), the human brain does no such thing.
“We figured if we're going after imitating Mother Nature that we should really do a good job of it and not miss some key features,” Bajic said. “If you look at the natural world, there’s the same architecture between small things and big things. They can all learn; it's not inference or training. And they all achieve extreme efficiency by relying on natural sparsity, so only a small percentage of the neurons in the brain are doing anything at any given time and which ones are working depends on what you're doing.”
Bajic said he and his team wanted to build a computer that would have all these features and also not compromise on any of them. “In the world of artificial neural networks today, there are two camps that have popped up,” he said. “One is CPUs and GPUs and all the startup hardware that's coming up. They tend to be doing dense matrix math on hardware that's built for it, like single instruction, multiple data [SIMD] machines, and if they're scaled out they tend to talk over Ethernet. On the flip side you've got the spiking artificial neural network, which is a lot less popular and has had a lot less success in broad applications.”

Spiking neural networks (SNNs) more closely mimic the functions of biological neurons, which send information via spikes in electrical activity. “Here people try to simulate natural neurons almost directly by writing out the differential equations that describe their operation and then implementing them as close as we can in hardware,” Bajic explained. “So to an engineer this comes down to basically having many scalar processor cores connected to the scalar network.”
This is very inefficient from a hardware standpoint. But Bajic said that SNNs have an efficiency that biological neurons have in that only a certain percentage of neurons are activated depending on what the neural net is doing – something that’s highly desirable in terms of power consumption in particular.
“Spiking neural nets have this conditional efficiency, but no hardware efficiency. The other end of the spectrum has both. We wanted to build a machine that has both,” Bajic said. “We wanted to pick a place in the spectrum where we could get the best of both worlds.”
Behind the Power of Grayskull
With that in mind there are four overall goals Tenstorrent is shooting for in its chip development – hardware efficiency, conditional efficiency, storage efficiency, and a high degree of scalability (exceeding 100,000 chips).
“So how did we do this? We implemented a machine that can run fine grain conditional execution by factoring the computation from huge groups of numbers to computations of small groups, so 16 by 4 or 16 by 16 groups to be precise,” Bajic said.

“We enable control flow on these groups with no performance penalty. So essentially we can run small matrices and we can put “if” statements around them and decide whether to run them at all. And if we’re going to run them we can decide whether to run them in reduced precision or full precision or anywhere in between.”
He said this also means rethinking the software stack. “The problem is that the software stacks that a lot of the other companies in the space have brought out assume that there's a fixed set of dimensions and a fixed set of work to run. So in order to enable adaptation at runtime normally hardware needs to be supportive of it and the full software stack as well.
“So many decisions that are currently made at compile time for us are moved into runtime so that we can accept exactly the right sized inputs. That we know exactly how big stuff is after we've chosen to eliminate some things at runtime so there's a fairly large software challenge to keep up with what the hardware enables.”
(Image source: Tenstorrent)

Creating an architecture that can scale to over 100,000 nodes means operating at a scale where you can’t have a shared memory space. “You basically need a bunch of processors with private memory,” Bajic said. “Cache coherency is another thing that's impossible to scale for across more than a couple hundred nodes, so that had to go as well.”
Bajic explained that each of Tenstorrent’s Tensix cores is really a grid of five single-issue RISC cores that are networked together. Each Tensix is capable of roughly 3 TOPS of compute.
“All of our processors can pretty much be viewed as packet processors,” Bajic said. “The way that works on a single processor level is that you have a core and every one of them has a megabyte of SRAM. Packets arrive into buffers in this SRAM, which triggers software to fetch them and run a hardware unpacketization engine – this removes all the packet framing, interprets what it means, and decompresses the packet so it leaves compressed at all times, except when it’s being computed on.

“It essentially recreates that little tensor that made the packet. We run a bunch of computations on those tensors and eventually we're ready to send them onward. What happens then is they get repacketized, recompressed, deposited into SRAM, and then from there our network functionality picks them up and forwards them to all the other cores that they need to go to under the directional compiler.”
While Tenstorrent is rolling out Grayskull it is actively developing its second Tensix core-based processor, dubbed Wormhole. Tenstorrent is targeting a Fall 2020 release for Wormhole and says it will focus even more on scale. “It’s essentially built around the same architecture [as Grayskull], but it has a lot of Ethernet links on it for scaling out,” Bajic said. “It's not going to be a PCI card chip – it’s the same architecture, but for big systems.”
Searching for the iPhone Moment
There are a lot of lofty goals for AI on the horizon. Researchers and major companies alike are hoping new chip hardware will help along the path toward big projects like Level 5 autonomous cars all the way to some idea of general artificial intelligence.
Bajic agrees with these ideas, but he also believes that there’s a simple matter of cost savings that makes chips like the ones being developed by his company an attractive commodity.
“The metric that everybody cares about is this concept of total cost of ownership (TCO),” he said. “If you think of companies like Google, Microsoft, and Amazon, these are big organizations that run an inordinate amount of computing machinery and spend a lot of money doing it. Essentially they calculate the cost of everything to do with running a computer system over some set of years including how much the machine costs to begin with – the upfront cost, how much it costs to pipe through wires and cooling so that you can live with its power consumption, and the cost of how much the power itself costs. They add all of that together and get this TCO metric.
“For them minimizing that metric is important because they spend billions of dollars on this. Machine learning and AI has become a very sizable percentage of all their compute activity and it’s trending towards becoming half of all that activity in the next couple years. So if your hardware can perform, say, 10 times better then it's a very meaningful financial indicator. If you can convince the market that you've got an order of magnitude TCO advantage that is going to persist for a few years, it's a super powerful story. It's a completely valid premise to build a business around, but it's kind of an optimization thing as opposed to something super exciting.”
For Bajic those more exciting areas come in the form of large scale AI projects like using machine learning to track diseases and discover vaccines and medications as well as in emerging fields such as emotional AI and affective computing. “Imagine if you had a device on your wrist that could interpret all of your mannerisms and gestures. As you’re sitting there watching a movie it could tell if you’re bored or disgusted and change the channel. Or it could automatically order food if you appear to be hungry – something pretty intelligent that can also be situationally aware,” he said.
“The key engine that enables this level of awareness is an AI, but at this point these solutions are too power hungry and too big to put on your wrist or to put anywhere that can follow you. By providing an architecture that will give an order of magnitude boost you can start unlocking whole new technologies and creating things that will have an impact on the level of the first iPhone release.”
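To make Bajic's "conditional execution on small groups" comment a bit more concrete, here is a rough NumPy sketch of what block-level gating with selectable precision could look like. It's purely my own illustration of the general idea, not Tenstorrent code and not how Tensix actually implements it:

```python
import numpy as np

def blocked_matmul_conditional(A, B, block=16, threshold=1e-3):
    """Compute A @ B one (block x block) tile at a time, skipping tiles whose
    inputs are essentially zero and using reduced precision for small-valued
    tiles. Illustrative only - real hardware makes these choices differently."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=np.float32)
    for i in range(0, m, block):
        for j in range(0, n, block):
            for p in range(0, k, block):
                a = A[i:i + block, p:p + block]
                b = B[p:p + block, j:j + block]
                # The "if statement around a small matrix": skip the tile when
                # its contribution would be negligible (conditional efficiency).
                if np.abs(a).max() < threshold or np.abs(b).max() < threshold:
                    continue
                # Pick a precision per tile: small-magnitude tiles in float16,
                # everything else in float32 ("reduced or full precision").
                if np.abs(a).max() < 1.0 and np.abs(b).max() < 1.0:
                    prod = a.astype(np.float16) @ b.astype(np.float16)
                    C[i:i + block, j:j + block] += prod.astype(np.float32)
                else:
                    C[i:i + block, j:j + block] += a @ b
    return C

# Sparse-ish inputs: most tiles get skipped, which is where the power saving comes from.
A = np.zeros((64, 64), dtype=np.float32); A[:16, :16] = np.random.randn(16, 16)
B = np.zeros((64, 64), dtype=np.float32); B[:16, :16] = np.random.randn(16, 16)
print(np.allclose(blocked_matmul_conditional(A, B), A @ B, atol=1e-2))
```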

 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 36 users

Damo4

Regular
SAY WHAT?????


Tenstorrent are talking about SNNs. Remember that we're in like Flynn with the SiFive Intelligence X280, and that Tenstorrent have licensed the SiFive Intelligence X280 as a platform for its Tensix NPU.




Holy s***
 
  • Haha
  • Like
  • Thinking
Reactions: 11 users
I think the below segment is important for some of those who have been expecting things ahead of BrainChip's timeline:

Mike Vizard: So how long before we started to see these use cases? You guys just launched the processors; it usually takes some time for all this to come together. And for it to manifest itself somewhere, what’s your kind of timeline?

Nandan Nayampally: So I think the way to think about it is, the real kind of growth in more radical, innovative use cases is probably, you know, a few months out, a year out. But what I think we're saying is there are use cases that exist on more high-powered devices today, that actually can now migrate to much more efficient edge devices, right? And so I do want to make sure people understand when we talk about edge, it's not kind of the brick that's sitting next to your network and still driven by a fan, right? It's smaller than the bigger bricks, but it is still a brick. What we're talking about is literally at-sensor, always-on intelligence, let's say whether it's a heart rate monitor, for example, or, you know, a respiratory rate monitor – you could actually have a very, very compact device of that kind. And so one of the big benefits that we see is, let's say video object detection today needs quite a bit of high-power compute, to do HD video object detection, target tracking. Now imagine you could do that in a battery-operated or very low form factor, cost-effective device, right? So suddenly, your dash cam, with additional capabilities built into that, could become much more cost effective or more capable. So we see a lot of the use cases that exist today, coming in. And then we see a number of use cases like vital signs predictions much closer, or remote healthcare, now getting cheaper, because you don't have to send everything to cloud. You can get a really good idea before you have to send anything to cloud, and then you're sending less data, and it's already pre-qualified before you send it rather than finding out through the cycle that it's taken a lot more time. Does that make sense?
Thanks for posting that, bookmarked for later to read. Tsunami of information to sift through, and I just recalled zee has a bookmark feature here.
 
  • Like
Reactions: 3 users

Quiltman

Regular


Love this diagram.

The biggest technology event of historical significance over the lifetime of contributors on this forum is the development of AI.
It will have the biggest impact on human society: how we work, where we work, our economies, our ethics, where humans explore and apply our endeavour ...

Personally, you can look like a deer in the headlights at this momentous change, or be part of the future & embrace it by partnering with a leading edge AI company like BrainChip.

No Brainer really.
 
  • Like
  • Fire
  • Love
Reactions: 37 users
Thought I'd share this, as this is how I view the way BrainChip runs the company.
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Diogenese

Top 20
SAY WHAT?????


Tenstorrent talking about SNNs in an article dated 12 May 2020!!!

Remember that we're in like Flynn with the SiFive Intelligence X280, seeing that they have just specified that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors. And Tenstorrent have licensed the SiFive Intelligence X280 as a platform for its Tensix NPU.



USPTO shows the inventor is Heath Robinson:

The way that works on a single processor level is that you have a core and every one of them has a megabyte of SRAM. Packets arrive into buffers in this SRAM, which triggers software to fetch them and run a hardware unpacketization engine – this removes all the packet framing, interprets what it means, and decompresses the packet so it leaves compressed at all times, except when it’s being computed on.

“It essentially recreates that little tensor that made the packet. We run a bunch of computations on those tensors and eventually we're ready to send them onward. What happens then is they get repacketized, recompressed, deposited into SRAM, and then from there our network functionality picks them up and forwards them to all the other cores that they need to go to under the directional compiler.”

 
Last edited:
  • Haha
  • Like
  • Thinking
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
GM unveils Hummer SUV EV & Cadillac EVs in China

NEWS

GM’s new autonomous driving system follows Mercedes, not Tesla​

CREDIT: GM - CADILLAC CELESTIQ


By William Johnson
Posted on March 7, 2023
General Motors (GM) has announced some crucial details about its upcoming Ultra Cruise autonomous driving system.
With the mass proliferation of autonomous driving, thanks largely to Tesla, more and more companies have begun working on their own systems. This includes GM, which has already released its Super Cruise system but has now released details about its next iteration, Ultra Cruise.
In the design process of autonomous systems, two leaders with two very different design philosophies have emerged. Tesla is the first, heavily relying on AI while focusing on visual sensor systems to guide the vehicle. This has been seen most clearly in Tesla’s upcoming hardware 4, which eliminates ultra-sonic sensors, instead opting to dramatically increase the quality of the visual sensing systems around the vehicle. The second camp is currently headed by Mercedes.


Mercedes has taken the complete opposite approach to Tesla. While still relying on AI guidance, Mercedes uses a combination of three different sensor arrays, visual, ultra-sonic, and LiDAR, to help guide the vehicle.
That takes us to GM’s Ultra Cruise, which was revealed in detail today. Much like Mercedes, GM has chosen to use three sensor arrays: visual, ultra-sonic, and LiDAR. Further emulating the premium German auto group, GM’s system “will have a 360-degree view of the vehicle,” according to the automaker.
According to GM, this architecture allows redundancy and sensor specialization, whereby each sensor group will help focus on a single task. The camera and short-range ultra-sonic radar systems focus on object detection, primarily at low speeds and in urban environments. These systems will help the vehicle detect other vehicles, traffic signals and signs, and pedestrians. At higher speeds, the long-range radar and LiDAR systems also come into play, helping to detect vehicles and road features from further away.
GM also points out that, thanks to the capabilities of radar and LiDAR systems in poor visibility conditions, the system benefits from better overall uptime. GM aims to create an autonomous driving system allowing hands-free driving in 95% of situations.
As for the Tesla approach, the leader in autonomous driving certainly has credibility in its design. According to Tesla’s blog post about removing the ultra-sonic sensor capabilities from its vehicles, “Tesla Vision” equipped vehicles perform just as well, if not better, in tests like the pedestrian automatic emergency braking (AEB) test. Though it should be noted that the lack of secondary sensors is also likely to help reduce vehicle manufacturing costs.
Ultra Cruise will first be available on the upcoming Cadillac Celestiq. Still, with a growing number of vehicles coming with GM’s Super Cruise, it’s likely only a matter of time before the more advanced ADAS system makes its way to mass market offerings as well.
“GM’s fundamental strategy for all ADAS features, including Ultra Cruise, is safely deploying these technologies,” said Jason Ditman, GM chief engineer, Ultra Cruise. “A deep knowledge of what Ultra Cruise is capable of, along with the detailed picture provided by its sensors, will help us understand when Ultra Cruise can be engaged and when to hand control back to the driver. We believe consistent, clear operation can help build drivers’ confidence in Ultra Cruise.”
With more and more automakers entering the autonomous driving space every year, it will be interesting to see which architecture they choose to invest in. But what could prove to be the defining trait is which system performs better in the real world. And as of now, it isn’t immediately clear who the victor is.
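Just to illustrate the speed-dependent sensor-fusion split the article describes, here's a toy Python sketch of how a perception stack might choose which sensor groups to consult. It's entirely my own simplification for discussion, with made-up names and thresholds; it has nothing to do with GM's actual Ultra Cruise software:

```python
from typing import Dict, List

SensorFrame = Dict[str, List[str]]  # hypothetical: detections keyed by sensor group

def active_sensor_groups(speed_kph: float) -> List[str]:
    """Low speed: camera + short-range radar carry the load.
    Higher speed: long-range radar and LiDAR are consulted as well."""
    groups = ["camera", "short_range_radar"]
    if speed_kph > 60:  # arbitrary threshold, purely for the sketch
        groups += ["long_range_radar", "lidar"]
    return groups

def fuse_detections(frame: SensorFrame, speed_kph: float) -> List[str]:
    """Union of detections from whichever sensor groups are active,
    which is the redundancy/specialisation idea described in the article."""
    detections: List[str] = []
    for group in active_sensor_groups(speed_kph):
        detections.extend(frame.get(group, []))
    return sorted(set(detections))

frame = {
    "camera": ["pedestrian", "traffic_light"],
    "short_range_radar": ["pedestrian"],
    "long_range_radar": ["vehicle_ahead"],
    "lidar": ["vehicle_ahead", "lane_barrier"],
}
print(fuse_detections(frame, speed_kph=30))   # urban: camera + short-range radar only
print(fuse_detections(frame, speed_kph=110))  # highway: all four sensor groups
```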

 
  • Like
  • Fire
  • Love
Reactions: 55 users

Steve10

Regular


the 2023 Edge AI Hardware Report from VDC Research estimates that the market for Edge AI hardware processors will be $35B by 2030.

I found the pie chart from HTF Market Intelligence.





The pie chart indicates about a 12% market share for BRN. 12% x $35B TAM by 2030 = USD $4.2B, or about AUD $6.37B revenue if accurate.

USD $35B = AUD $53.08B, so 1% market share = AUD $530.8M revenue.

AUD $530.8M x 60% NPAT margin (similar to ARM) = AUD $318.5M NPAT x PE 60 = AUD $19.1B MC per 1% market share.

The forecast 12% market share would equate to AUD $229.2B MC / 1.8B SOI = $127.33 SP AUD.

That would mean BRN SP rises from 51c in March 2023 to $127.33 = x249.7 by March 2030.

PLS was 1c SP low in 2013 to $5.66 peak in November 2022 = x566 within 10 years.

BRN was 3.5c SP low in 2020 to $127.33 in 2030 = x3,638 within 10 years.

Appears impossible, however, BRN has breakthrough tech whereas there are lithium mines everywhere.

At anything under a $100M MC, buying BRN was like investing pre-IPO.

I will be very happy with AUD $50B MC by 2030 or about $27.78 SP.

It will require 2.6% market share. Anything above will be a big bonus.
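For anyone who wants to sanity-check the arithmetic, here is the same back-of-envelope model in a few lines of Python. The exchange rate, 60% NPAT margin and PE of 60 are my assumptions from above, not facts:

```python
# Back-of-envelope model of the post above - every input is an assumption, not a forecast
TAM_AUD_B = 53.08        # USD $35B TAM converted to AUD, as per the post
NPAT_MARGIN = 0.60       # assumed "similar to ARM"
PE = 60                  # assumed earnings multiple
SHARES_ON_ISSUE_B = 1.8  # billions of shares

def share_price_at(market_share_pct: float) -> float:
    revenue_b = TAM_AUD_B * market_share_pct / 100    # AUD revenue, billions
    market_cap_b = revenue_b * NPAT_MARGIN * PE       # NPAT x PE
    return market_cap_b / SHARES_ON_ISSUE_B           # AUD per share

print(round(share_price_at(12), 2))   # ~127 (the post's $127.33, give or take rounding)
print(round(share_price_at(2.6), 2))  # ~27.6, i.e. roughly an AUD $50B market cap
```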
 
  • Like
  • Fire
  • Love
Reactions: 63 users
Some big buys going through now - Insto Analysts said BRN is a BUY! Only blue sky from here. DYOR

Over 300,000 shares wanted in this one order.

1:19:59 PM 0.560 366,481 $205,229.36
 
  • Like
  • Fire
  • Love
Reactions: 28 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi @Bravo ,

Here are a couple of Cerence patent applications:

US2022415318A1 VOICE ASSISTANT ACTIVATION SYSTEM WITH CONTEXT DETERMINATION BASED ON MULTIMODAL DATA




WO2020142717A1 METHODS AND SYSTEMS FOR INCREASING AUTONOMOUS VEHICLE SAFETY AND FLEXIBILITY USING VOICE INTERACTION



The specifications seem oblivious of SNNs.

Hi Dodgy Knees, forgive me if I sound like a complete eejit by asking this question. But is it necessary for a company to have IP describing SNNs in order for them to incorporate SNNs into their products? I mean, can't they just sprinkle them in there and Bob's your uncle?
 
  • Like
  • Haha
Reactions: 9 users

Damo4

Regular
Some big buys going through now - Insto Analysts said BRN is a BUY! Only blue sky from here. DYOR

Over 300,000 shares wanted in this one order.

1:19:59 PM 0.560 366,481 $205,229.360
Yep, and watch the ~400k walls at +0.005, +0.010, +0.015 being moved up. They used to sit at the sell price and now they're being shifted higher.
One 79k order forced them to move up.
 
  • Like
Reactions: 10 users

Bloodsy

Regular
GM unveils Hummer SUV EV & Cadillac EVs in China

NEWS

GM’s new autonomous driving system follows Mercedes, not Tesla​

Interesting link here also about GM and a tie-up with GlobalFoundries, one of our new partners.

GM securing chip supply through GF.

Within this article there is an embedded link to a similar article on Ford doing the same thing!

Could be coincidence but who knows!
 
  • Like
  • Fire
  • Thinking
Reactions: 31 users

Quercuskid

Regular
I have found out how the share price goes up: I put in a buy just below the sell price, and it never gets there, it just goes up!!! I will try this every day just to make the price go up 😂
 
  • Haha
  • Like
  • Love
Reactions: 40 users