BRN Discussion Ongoing

Damo4

Regular
SAY WHAT?????


Tenstorrent are talking about SNNs. Remember that we're in like Flynn with the SiFive Intelligence X280, and that Tenstorrent have licensed the SiFive Intelligence X280 as a platform for its Tensix NPU.







Tenstorrent Is Changing the Way We Think About AI Chips

GPUs and CPUs are reaching their limits as far as AI is concerned. That’s why Tenstorrent is creating something different.
Chris Wiltz | May 12, 2020


GPUs and CPUs are not going to be enough to ensure a stable future for artificial intelligence. “GPUs are essentially at the end of their evolutionary curve,” Ljubisa Bajic, CEO of AI chip startup Tenstorrent told Design News. “[GPUs] have done a great job; they’ve pushed the field to the point where it is now. But in order to make any kind of order of magnitude type jumps GPUs are going to have to go.”
Tenstorrent's Grayskull processor is capable of operating at up to 368 TOPS with an architecture much different than any CPU or GPU (Image source: Tenstorrent)

Bajic knows quite a bit about GPU technology. He spent some time at Nvidia, the house that GPUs built, working as senior architect. He’s also spent a few years working as an IC designer and architect at AMD. While he doesn’t think companies like Nvidia are going away any time soon, he thinks it’s only a matter of time before the company releases an AI chip product that is not a GPU.

But an entire ecosystem of AI chip startups is already heading in that direction. Engineers and developers are looking at new, novel chip architectures capable of handling the unique demands of AI and its related technologies – both in data centers and the edge.
Bajic is the founder of one such company – Toronto-based Tenstorrent, which was founded in 2016 and emerged from stealth earlier this year. Tenstorrent’s goal is both simple and largely ambitious – creating chip hardware for AI capable of delivering the best all around performance in both the data center and the edge. The company has created its own proprietary processor core called the Tensix, which contains a high utilization packet processor, a programmable SIMD, a dense math computational block, along with five single-issue RISC cores. By combining Tensix cores into an array using a network on a chip (NoC) Tenstorrent says it can create high-powered chips that can handle both inference and training and scale from small embedded devices all the way up to large data center deployments.

The company’s first product Grayskull (yes, that is a He-Man reference) is a processor targeted at inference tasks. According to company specs, Grayskull is capable of operating at up to 368 tera operations per second (TOPS). To put that into perspective as far as what Grayskull could be capable of, consider Qualcomm’s AI Engine used in its latest SoCs such as the Snapdragon 865. The Qualcomm engine offers up to 15 TOPS of performance for various mobile applications. A single Grayskull processor is capable of handling the volume of calculations of about two dozen of the chips found in the highest-end smartphones on the market today.
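The “about two dozen” figure follows directly from the specs quoted above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the article's comparison, using only the
# figures quoted above (368 TOPS for Grayskull, 15 TOPS for the
# Snapdragon 865's AI Engine).
grayskull_tops = 368
snapdragon_tops = 15

ratio = grayskull_tops / snapdragon_tops
print(f"one Grayskull ~ {ratio:.1f} flagship-phone AI engines")
```

The ratio works out to roughly 24.5, i.e. about two dozen flagship-phone AI engines per Grayskull.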
The Grayskull PCIe card (Image source: Tenstorrent)

Nature Versus Neural
If you want to design a chip that mimics cognition then taking cues from the human brain is the obvious way to go. Whereas AI draws a clear functional distinction between training (learning a task) and inference (implementing or acting on what’s been learned), the human brain does no such thing.
“We figured if we're going after imitating Mother Nature that we should really do a good job of it and not miss some key features,” Bajic said. “If you look at the natural world, there’s the same architecture between small things and big things. They can all learn; it's not inference or training. And they all achieve extreme efficiency by relying on natural sparsity, so only a small percentage of the neurons in the brain are doing anything at any given time and which ones are working depends on what you're doing.”
Bajic said he and his team wanted to build a computer that would have all these features and also not compromise on any of them. “In the world of artificial neural networks today, there are two camps that have popped up,” he said. “One is CPUs and GPUs and all the startup hardware that's coming up. They tend to be doing dense matrix math on hardware that's built for it, like single instruction, multiple data [SIMD] machines, and if they're scaled out they tend to talk over Ethernet. On the flip side you've got the spiking artificial neural network, which is a lot less popular and has had a lot less success in broad applications.”

Spiking neural networks (SNNs) more closely mimic the functions of biological neurons, which send information via spikes in electrical activity. “Here people try to simulate natural neurons almost directly by writing out the differential equations that describe their operation and then implementing them as close as we can in hardware,” Bajic explained. “So to an engineer this comes down to basically having many scalar processor cores connected to the scalar network.”
This is very inefficient from a hardware standpoint. But Bajic said that SNNs share an efficiency with biological neurons in that only a certain percentage of neurons are activated depending on what the neural net is doing – something that’s highly desirable in terms of power consumption in particular.
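The article doesn't name a specific neuron model, but the leaky integrate-and-fire (LIF) neuron is the textbook example of the differential-equation approach Bajic describes. A minimal forward-Euler sketch, with all constants chosen purely for illustration:

```python
# Illustrative only: a leaky integrate-and-fire neuron, the classic example
# of the kind of differential equation SNN hardware tries to implement.
# dV/dt = (-(V - V_rest) + R * I) / tau, with a spike-and-reset at threshold.
def simulate_lif(current, dt=1e-3, tau=20e-3, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r=10.0):
    """Return spike times (s) for a list of input-current samples."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(current):
        dv = (-(v - v_rest) + r * i_in) / tau   # the LIF differential equation
        v += dv * dt                            # forward-Euler step
        if v >= v_thresh:                       # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                         # reset membrane potential
    return spikes

spikes = simulate_lif([2.0] * 1000)             # 1 s of constant drive
print(len(spikes), "spikes")
```

The sparsity Bajic mentions falls out naturally: a neuron only emits a spike when its membrane potential crosses threshold, and stays silent otherwise.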
“Spiking neural nets have this conditional efficiency, but no hardware efficiency. The other end of the spectrum has both. We wanted to build a machine that has both,” Bajic said. “We wanted to pick a place in the spectrum where we could get the best of both worlds.”
Behind the Power of Grayskull
With that in mind there are four overall goals Tenstorrent is shooting for in its chip development – hardware efficiency, conditional efficiency, storage efficiency, and a high degree of scalability (exceeding 100,000 chips).
“So how did we do this? We implemented a machine that can run fine grain conditional execution by factoring the computation from huge groups of numbers to computations of small groups, so 16 by 4 or 16 by 16 groups to be precise,” Bajic said.

“We enable control flow on these groups with no performance penalty. So essentially we can run small matrices and we can put “if” statements around them and decide whether to run them at all. And if we’re going to run them we can decide whether to run them in reduced precision or full precision or anywhere in between.”
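A rough sketch of the idea in plain Python (an illustration of fine-grained conditional execution generally, not Tenstorrent's actual hardware scheme): split a matrix multiply into small tiles and put an "if" around each one, skipping tiles whose inputs are all zero.

```python
import random

# Illustrative sketch: tile a matrix multiply into small blocks and wrap a
# conditional around each block, skipping all-zero input tiles entirely --
# the "if statements around small matrices" idea Bajic describes.
def tiled_conditional_matmul(a, b, tile=16):
    n = len(a)                                   # assume square, n % tile == 0
    out = [[0.0] * n for _ in range(n)]
    skipped = 0
    for i0 in range(0, n, tile):
        for j0 in range(0, n, tile):
            for k0 in range(0, n, tile):
                # conditional execution: skip an all-zero input tile
                if all(a[i][k] == 0.0
                       for i in range(i0, i0 + tile)
                       for k in range(k0, k0 + tile)):
                    skipped += 1
                    continue
                for i in range(i0, i0 + tile):
                    for j in range(j0, j0 + tile):
                        out[i][j] += sum(a[i][k] * b[k][j]
                                         for k in range(k0, k0 + tile))
    return out, skipped

random.seed(0)
n = 32
# half the columns of `a` are zero, so half its tiles can be skipped
a = [[random.random() if k < n // 2 else 0.0 for k in range(n)] for _ in range(n)]
b = [[random.random() for _ in range(n)] for _ in range(n)]
result, skipped = tiled_conditional_matmul(a, b)
print("tiles skipped:", skipped)
```

Skipping a zero tile changes nothing in the result, which is exactly why sparsity can be turned into saved work; the reduced/full-precision choice Bajic mentions would be a per-tile decision at the same point in the loop.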
He said this also means rethinking the software stack. “The problem is that the software stacks that a lot of the other companies in the space have brought out assume that there's a fixed set of dimensions and a fixed set of work to run. So in order to enable adaptation at runtime normally hardware needs to be supportive of it and the full software stack as well.
“So many decisions that are currently made at compile time for us are moved into runtime so that we can accept exactly the right sized inputs. That we know exactly how big stuff is after we've chosen to eliminate some things at runtime so there's a fairly large software challenge to keep up with what the hardware enables.”
(Image source: Tenstorrent)

Creating an architecture that can scale to over 100,000 nodes means operating at a scale where you can’t have a shared memory space. “You basically need a bunch of processors with private memory,” Bajic said. “Cache coherency is another thing that's impossible to scale for across more than a couple hundred nodes, so that had to go as well.”
Bajic explained that each of Tenstorrent’s Tensix cores is really a grid of five single-issue RISC cores that are networked together. Each Tensix is capable of roughly 3 TOPS of compute.
“All of our processors can pretty much be viewed as packet processors,” Bajic said. “The way that works on a single processor level is that you have a core and every one of them has a megabyte of SRAM. Packets arrive into buffers in this SRAM, which triggers software to fetch them and run a hardware unpacketization engine – this removes all the packet framing, interprets what it means, and decompresses the packet so it leaves compressed at all times, except when it’s being computed on.

“It essentially recreates that little tensor that made the packet. We run a bunch of computations on those tensors and eventually we're ready to send them onward. What happens then is they get repacketized, recompressed, deposited into SRAM, and then from there our network functionality picks them up and forwards them to all the other cores that they need to go to under the direction of the compiler.”
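A toy software analogue of the flow Bajic describes, where data stays framed and compressed except while being computed on. zlib and pickle stand in for the hardware compression and packetization engines, and the function names are hypothetical:

```python
import zlib
import pickle

# Hypothetical sketch of the unpacketize -> compute -> repacketize flow.
# The payload is only expanded while it is being computed on.
def make_packet(tensor, dest_core):
    payload = zlib.compress(pickle.dumps(tensor))     # compress + frame
    return {"dest": dest_core, "payload": payload}

def process_packet(packet, compute, next_core):
    # unpacketize: strip framing and decompress the tensor
    tensor = pickle.loads(zlib.decompress(packet["payload"]))
    result = compute(tensor)                          # compute on expanded data
    return make_packet(result, next_core)             # repacketize and forward

pkt = make_packet([1.0, 2.0, 3.0], dest_core=0)
out = process_packet(pkt, compute=lambda t: [x * 2 for x in t], next_core=1)
print(out["dest"], pickle.loads(zlib.decompress(out["payload"])))
```

In the real chip the "network functionality" would route the repacketized result over the NoC to whichever cores the compiler has scheduled next; here the destination is just a field in a dict.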
While Tenstorrent is rolling out Grayskull it is actively developing its second Tensix core-based processor, dubbed Wormhole. Tenstorrent is targeting a Fall 2020 release for Wormhole and says it will focus even more on scale. “It’s essentially built around the same architecture [as Grayskull], but it has a lot of Ethernet links on it for scaling out,” Bajic said. “It's not going to be a PCI card chip – it’s the same architecture, but for big systems.”
Searching for the iPhone Moment
There are a lot of lofty goals for AI on the horizon. Researchers and major companies alike are hoping new chip hardware will help along the path toward big projects like Level 5 autonomous cars all the way to some idea of general artificial intelligence.
Bajic agrees with these ideas, but he also believes that there’s a simple matter of cost savings that makes chips like the ones being developed by his company an attractive commodity.
“The metric that everybody cares about is this concept of total cost of ownership (TCO),” he said. “If you think of companies like Google, Microsoft, and Amazon, these are big organizations that run an inordinate amount of computing machinery and spend a lot of money doing it. Essentially they calculate the cost of everything to do with running a computer system over some set of years including how much the machine costs to begin with – the upfront cost, how much it costs to pipe through wires and cooling so that you can live with its power consumption, and the cost of how much the power itself costs. They add all of that together and get this TCO metric.
“For them minimizing that metric is important because they spend billions of dollars on this. Machine learning and AI has become a very sizable percentage of all their compute activity and it’s trending towards becoming half of all that activity in the next couple years. So if your hardware can perform, say, 10 times better then it's a very meaningful financial indicator. If you can convince the market that you've got an order of magnitude in TCO advantage that is going to persist for a few years, it's a super powerful story. It's a completely valid premise to build a business around, but it's kind of an optimization thing as opposed to something super exciting.”
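Bajic's TCO definition reduces to simple arithmetic; a sketch with entirely made-up numbers, just to show how an efficiency advantage feeds through:

```python
# TCO as Bajic defines it: upfront hardware cost, plus power/cooling
# infrastructure, plus the electricity itself over the system's lifetime.
# Every number below is invented purely for illustration.
def total_cost_of_ownership(upfront, infra, power_kw, usd_per_kwh, years):
    energy_cost = power_kw * 24 * 365 * years * usd_per_kwh
    return upfront + infra + energy_cost

# Same upfront and infrastructure cost assumed; only the power draw differs.
baseline = total_cost_of_ownership(upfront=10_000, infra=2_000,
                                   power_kw=0.5, usd_per_kwh=0.10, years=5)
tenx     = total_cost_of_ownership(upfront=10_000, infra=2_000,
                                   power_kw=0.05, usd_per_kwh=0.10, years=5)
print(baseline, tenx)
```

At hyperscaler volume the energy and cooling terms dominate, which is why a claimed order-of-magnitude efficiency advantage translates directly into the TCO story Bajic describes.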
For Bajic those more exciting areas come in the form of large scale AI projects like using machine learning to track diseases and discover vaccines and medications as well as in emerging fields such as emotional AI and affective computing. “Imagine if you had a device on your wrist that could interpret all of your mannerisms and gestures. As you’re sitting there watching a movie it could tell if you’re bored or disgusted and change the channel. Or it could automatically order food if you appear to be hungry – something pretty intelligent that can also be situationally aware,” he said.
“The key engine that enables this level of awareness is an AI, but at this point these solutions are too power hungry and too big to put on your wrist or to put anywhere that can follow you. By providing an architecture that will give an order of magnitude boost you can start unlocking whole new technologies and creating things that will have an impact on the level of the first iPhone release.”



Holy s***
 
  • Haha
  • Like
  • Thinking
Reactions: 11 users
I think the below segment is important for some of those who have been expecting things ahead of BrainChip's timeline:

Mike Vizard: So how long before we start to see these use cases? You guys just launched the processors; it usually takes some time for all this to come together. And for it to manifest itself somewhere, what’s your kind of timeline?

Nandan Nayampally: So I think the way to think about it is, the real kind of growth in more radical, innovative use cases is probably, you know, a few months out, a year out. But what I think we’re saying is there are use cases that exist on more high powered devices today that actually can now migrate to much more efficient edge devices, right? And so I do want to make sure people understand when we talk about edge, it’s not kind of the brick that’s sitting next to your network and still driven by a fan, right? It’s smaller than the bigger bricks, but it is still a brick. What we’re talking about is literally at-sensor, always-on intelligence – let’s say whether it’s a heart rate monitor, for example, or, you know, a respiratory rate monitor – you could actually have a very, very compact device of that kind. And so one of the big benefits that we see is, let’s say, video object detection today needs quite a bit of high power compute to do HD video object detection and target tracking. Now imagine you could do that in a battery operated, very low form factor, cost effective device, right? So suddenly your dash cam, with additional capabilities built into that, could become much more cost effective or more capable. So we see a lot of the use cases that exist today coming in. And then we see a number of use cases like vital signs prediction much closer, or remote healthcare, now getting cheaper, because you don’t have to send everything to cloud. You can get a really good idea before you have to send anything to cloud, and then you’re sending less data – it’s already pre-qualified before you send it, rather than finding out through the cycle that it’s taken a lot more time. Does that make sense?
Thanks for posting that, bookmarked for later to read. Tsunami of information to sift through and just recalled zee has a bookmark feature here .
 
  • Like
Reactions: 3 users

Quiltman

Regular


Love this diagram.

The biggest technology event of historical significance over the lifetime of contributors on this forum is the development of AI.
The biggest impact on human society, how we work, where we work, our economies, our ethics, where humans explore & apply our endeavour ...

Personally, I think you can either freeze like a deer in the headlights at this momentous change, or be part of the future & embrace it by partnering with a leading-edge AI company like BrainChip.

No Brainer really.
 
  • Like
  • Fire
  • Love
Reactions: 37 users
Thought I'd share this, as this is how I view the way BrainChip runs the company.
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Diogenese

Top 20
SAY WHAT?????


Tenstorrent talking about SNNs in an article dated 12 May 2020!!!

Remember that we're in like Flynn with the SiFive Intelligence X280, seeing that they have just specified that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors. And Tenstorrent have licensed the SiFive Intelligence X280 as a platform for its Tensix NPU.










USPTO shows the inventor is Heath Robinson:

The way that works on a single processor level is that you have a core and every one of them has a megabyte of SRAM. Packets arrive into buffers in this SRAM, which triggers software to fetch them and run a hardware unpacketization engine – this removes all the packet framing, interprets what it means, and decompresses the packet so it leaves compressed at all times, except when it’s being computed on.

“It essentially recreates that little tensor that made the packet. We run a bunch of computations on those tensors and eventually we're ready to send them onward. What happens then is they get repacketized, recompressed, deposited into SRAM, and then from there our network functionality picks them up and forwards them to all the other cores that they need to go to under the direction of the compiler.”

 
Last edited:
  • Haha
  • Like
  • Thinking
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

GM’s new autonomous driving system follows Mercedes, not Tesla

CREDIT: GM - CADILLAC CELESTIQ


By William Johnson
Posted on March 7, 2023
General Motors (GM) has announced some crucial details about its upcoming Ultra Cruise autonomous driving system.
With the mass proliferation of autonomous driving, thanks largely to Tesla, more and more companies have begun working on their own systems. This includes GM, which has already released its Super Cruise system but has now released details about its next iteration, Ultra Cruise.
In the design process of autonomous systems, two leaders with two very different design philosophies have emerged. Tesla is the first, heavily relying on AI while focusing on visual sensor systems to guide the vehicle. This has been seen most clearly in Tesla’s upcoming hardware 4, which eliminates ultra-sonic sensors, instead opting to dramatically increase the quality of the visual sensing systems around the vehicle. The second camp is currently headed by Mercedes.


Mercedes has taken the complete opposite approach to Tesla. While still relying on AI guidance, Mercedes uses a combination of three different sensor arrays, visual, ultra-sonic, and LiDAR, to help guide the vehicle.
That takes us to GM’s Ultra Cruise, which was revealed in detail today. Much like Mercedes, GM has chosen to use three sensor arrays: visual, ultra-sonic, and LiDAR. Further emulating the premium German auto group, GM’s system “will have a 360-degree view of the vehicle,” according to the automaker.
According to GM, this architecture allows redundancy and sensor specialization, whereby each sensor group will help focus on a single task. The camera and short-range ultra-sonic radar systems focus on object detection, primarily at low speeds and in urban environments. These systems will help the vehicle detect other vehicles, traffic signals and signs, and pedestrians. At higher speeds, the long-range radar and LiDAR systems also come into play, helping to detect vehicles and road features from further away.
GM also points out that, thanks to the capabilities of radar and LiDAR systems in poor visibility conditions, the system benefits from better overall uptime. GM aims to create an autonomous driving system allowing hands-free driving in 95% of situations.
As for the Tesla approach, the leader in autonomous driving certainly has credibility in its design. According to Tesla’s blog post about removing the ultra-sonic sensor capabilities from its vehicles, “Tesla Vision” equipped vehicles perform just as well, if not better, in tests like the pedestrian automatic emergency braking (AEB) test. Though it should be noted that the lack of secondary sensors is also likely to help reduce vehicle manufacturing costs.
Ultra Cruise will first be available on the upcoming Cadillac Celestiq. Still, with a growing number of vehicles coming with GM’s Super Cruise, it’s likely only a matter of time before the more advanced ADAS system makes its way to mass market offerings as well.
“GM’s fundamental strategy for all ADAS features, including Ultra Cruise, is safely deploying these technologies,” said Jason Ditman, GM chief engineer, Ultra Cruise. “A deep knowledge of what Ultra Cruise is capable of, along with the detailed picture provided by its sensors, will help us understand when Ultra Cruise can be engaged and when to hand control back to the driver. We believe consistent, clear operation can help build drivers’ confidence in Ultra Cruise.”
With more and more automakers entering the autonomous driving space every year, it will be interesting to see which architecture they choose to invest in. But what could prove to be the defining trait is which system performs better in the real world. And as of now, it isn’t immediately clear who the victor is.

 
  • Like
  • Fire
  • Love
Reactions: 55 users

Steve10

Regular


The 2023 Edge AI Hardware Report from VDC Research estimates that the market for edge AI hardware processors will be $35B by 2030.

I found the pie chart from HTF Market Intelligence.


1678239287080.png



The pie chart indicates about 12% market share for BRN x $35B TAM by 2030 = $4.2B. That's AUD $6.37B revenue if accurate.

USD $35B = AUD $53.08B x 1 % = $530.8M AUD revenue per 1% market share.

$530.8M AUD x 60% NPAT similar to ARM = $318.5M NPAT x PE60 = $19.1B AUD MC.

The forecast 12% market share would equate to AUD $229.2B MC / 1.8B SOI = $127.33 SP AUD.

That would mean BRN SP rises from 51c in March 2023 to $127.33 = x249.7 by March 2030.

PLS was 1c SP low in 2013 to $5.66 peak in November 2022 = x566 within 10 years.

BRN was 3.5c SP low in 2020 to $127.33 in 2030 = x3,638 within 10 years.

Appears impossible, however, BRN has breakthrough tech whereas there are lithium mines everywhere.

Anything under $100M MC BRN was like investing in pre-IPO.

I will be very happy with AUD $50B MC by 2030 or about $27.78 SP.

It will require 2.6% market share. Anything above will be a big bonus.
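For anyone who wants to sanity-check the arithmetic above, here's a quick back-of-envelope script. All the inputs (TAM, FX rate, NPAT margin, PE, shares on issue) are the post's assumptions, not verified figures:

```python
# Back-of-envelope check of the market-share scenarios above.
usd_tam = 35e9          # VDC's 2030 edge-AI processor TAM (USD)
aud_per_usd = 1.5166    # FX rate implied by the post (USD 35B ≈ AUD 53.08B)
npat_margin = 0.60      # assumed ARM-like NPAT margin
pe = 60                 # assumed price/earnings multiple
shares = 1.8e9          # shares on issue

# AUD market cap generated per 1% of market share
mc_per_pct = usd_tam * aud_per_usd * 0.01 * npat_margin * pe

def share_price(market_share_pct):
    return mc_per_pct * market_share_pct / shares

print(f"MC per 1% share: AUD {mc_per_pct / 1e9:.1f}B")   # ~19.1
print(f"SP at 12% share: AUD {share_price(12):.2f}")     # ~127 (the post's 127.33 rounds NPAT first)
print(f"SP at 2.6% share: AUD {share_price(2.6):.2f}")   # ~27.6
```

The small differences from the post's figures ($127.33, $27.78) come from where you round intermediate values.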
 
  • Like
  • Fire
  • Love
Reactions: 63 users
Some big buys going through now - Insto Analysts said BRN is a BUY! Only blue sky from here. DYOR

Over 300,000 shares wanted in this one order.

1:19:59 PM 0.560 366,481 $205,229.36
 
  • Like
  • Fire
  • Love
Reactions: 28 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi @Bravo ,

Here are a couple of Cerence patent applications:

US2022415318A1 VOICE ASSISTANT ACTIVATION SYSTEM WITH CONTEXT DETERMINATION BASED ON MULTIMODAL DATA

View attachment 31562

View attachment 31561
A vehicle system for classifying spoken utterance within a vehicle cabin as one of system-directed and non-system directed may include at least one microphone to detect at least one acoustic utterance from at least one occupant of the vehicle, at least one camera to detect occupant data indicative of occupant behavior within the vehicle corresponding to the acoustic utterance, and a processor programmed to receive the acoustic utterance, receive the occupant data, determine whether the occupant data is indicative of a vehicle feature, classify the acoustic utterance as a system-directed utterance in response to the occupant data being indicative of a vehicle feature, and process the acoustic utterance.



WO2020142717A1 METHODS AND SYSTEMS FOR INCREASING AUTONOMOUS VEHICLE SAFETY AND FLEXIBILITY USING VOICE INTERACTION

View attachment 31565


The specifications seem oblivious of SNNs.

Hi Dodgy Knees, forgive me if I sound like a complete eejit by asking this question. But is it necessary for a company to have an IP describing SNN's in order for them to incorporate SNN's into their products? I mean, can't they just sprinkle them in there and Bob's your uncle?
 
  • Like
  • Haha
Reactions: 9 users

Damo4

Regular
Some big buys going through now - Insto Analysts said BRN is a BUY! Only blue sky from here. DYOR

Over 300,000 shares wanted in this one order.

1:19:59 PM 0.560 366,481 $205,229.36
Yep, and watch the 400k-ish walls at +0.005, +0.010, +0.015 being moved up. These used to sit at the sell price and now they're being moved up.
One 79k order forced them to move up.
 
  • Like
Reactions: 10 users

Bloodsy

Regular

NEWS

GM’s new autonomous driving system follows Mercedes, not Tesla​

CREDIT: GM - CADILLAC CELESTIQ


By William Johnson
Posted on March 7, 2023
General Motors (GM) has announced some crucial details about its upcoming Ultra Cruise autonomous driving system.
With the mass proliferation of autonomous driving, thanks largely to Tesla, more and more companies have begun working on their own systems. This includes GM, which has already released its Super Cruise system but has now released details about its next iteration, Ultra Cruise.
In the design process of autonomous systems, two leaders with two very different design philosophies have emerged. Tesla is the first, heavily relying on AI while focusing on visual sensor systems to guide the vehicle. This has been seen most clearly in Tesla’s upcoming hardware 4, which eliminates ultra-sonic sensors, instead opting to dramatically increase the quality of the visual sensing systems around the vehicle. The second camp is currently headed by Mercedes.


Mercedes has taken the complete opposite approach to Tesla. While still relying on AI guidance, Mercedes uses a combination of three different sensor arrays, visual, ultra-sonic, and LiDAR, to help guide the vehicle.
That takes us to GM’s Ultra Cruise, which was revealed in detail today. Much like Mercedes, GM has chosen to use three sensor arrays; visual, ultra-sonic, and LiDAR. Further emulating the premium German auto group, GM’s system “will have a 360-degree view of the vehicle,” according to the automaker.
According to GM, this architecture allows redundancy and sensor specialization, whereby each sensor group will help focus on a single task. The camera and short-range ultra-sonic radar systems focus on object detection, primarily at low speeds and in urban environments. These systems will help the vehicle detect other vehicles, traffic signals and signs, and pedestrians. At higher speeds, the long-range radar and LiDAR systems also come into play, helping to detect vehicles and road features from further away.
GM also points out that, thanks to the capabilities of radar and LiDAR systems in poor visibility conditions, the system benefits from better overall uptime. GM aims to create an autonomous driving system allowing hands-free driving in 95% of situations.
As for the Tesla approach, the leader in autonomous driving certainly has credibility in its design. According to Tesla’s blog post about removing the ultra-sonic sensor capabilities from its vehicles, “Tesla Vision” equipped vehicles perform just as well, if not better, in tests like the pedestrian automatic emergency braking (AEB) test. Though it should be noted that the lack of secondary sensors is also likely to help reduce vehicle manufacturing costs.
Ultra Cruise will first be available on the upcoming Cadillac Celestiq. Still, with a growing number of vehicles coming with GM’s Super Cruise, it’s likely only a matter of time before the more advanced ADAS system makes its way to mass market offerings as well.
“GM’s fundamental strategy for all ADAS features, including Ultra Cruise, is safely deploying these technologies,” said Jason Ditman, GM chief engineer, Ultra Cruise. “A deep knowledge of what Ultra Cruise is capable of, along with the detailed picture provided by its sensors, will help us understand when Ultra Cruise can be engaged and when to hand control back to the driver. We believe consistent, clear operation can help build drivers’ confidence in Ultra Cruise.”
With more and more automakers entering the autonomous driving space every year, it will be interesting to see which architecture they choose to invest in. But what could prove to be the defining trait is which system performs better in the real world. And as of now, it isn’t immediately clear who the victor is.



Interesting link here also about GM and a tie up with Global Foundries one of our new partners.

GM securing chip supply through GF.

Within this article there is an embedded link to a similar article on Ford doing the same thing!

Could be coincidence but who knows!
 
  • Like
  • Fire
  • Thinking
Reactions: 31 users

Quercuskid

Regular
I have found out how the share price goes up, I put in a buy just below the sell price and it never gets there it just goes up!!! I will try this every day just to make the price go up 😂
 
  • Haha
  • Like
  • Love
Reactions: 40 users

Diogenese

Top 20
Hi Dodgy Knees, forgive me if I sound like a complete eejit by asking this question. But is it necessary for a company to have an IP describing SNN's in order for them to incorporate SNN's into their products? I mean, can't they just sprinkle them in there and Bob's your uncle?
Quite rite!

It's just fairy dust like on the wings of a butterfly.
 
  • Haha
  • Like
Reactions: 13 users

Interesting link here also about GM and a tie up with Global Foundries one of our new partners.

GM securing chip supply through GF.

Within this article there is an embedded link to a similar article on Ford doing the same thing!

Could be coincidence but who knows!
excited whats going on GIF
:ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO: too funny, but seriously there are some serious ducks being lined up here around our BRN tech
 
  • Like
  • Haha
  • Fire
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Quite rite!

It's just fairy dust like on the wings of a butterfly.

Speaking of fairy dust, I'm hoping that we can get an exhaust fan to blow a shed load of it on this #49,748.
 
  • Like
  • Fire
  • Love
Reactions: 5 users

Tothemoon24

Top 20



Recent articles

Avnet Unveils the RASynBoard Edge AI Board, with Renesas MCU, Syntiant NDP, and TDK Sensors​

Featuring a microcontroller, edge AI accelerator, Wi-Fi and Bluetooth connectivity, plus on-board sensors, this little board packs a punch.​



5 minutes ago • Machine Learning & AI / Internet of Things /

Avnet has announced pre-orders for the RASynBoard, a development kit which combines a Renesas microcontroller and a Syntiant Neural Decision Processor (NDP) with on-board TDK sensors to provide a platform for ultra-low-power edge AI projects — with immediate support in Edge Impulse.
“Avnet’s strong relationship with industry leading suppliers such as Renesas, TDK, and our new supplier Syntiant allows us to combine the unique offerings and capabilities of each partner into a complete system-level, production-ready solution," explains Avnet's Jim Beneke of the new board. "Our design customization and manufacturing capabilities also allow us to offer production-ready solutions for customers needing a full-turnkey option."
Avnet's latest development board aims squarely at energy-efficient on-device machine learning at the edge: the RASynBoard. (📷: Avnet)
The RASynBoard Core Board is a compact module with USB Type-C connectivity which includes a Renesas RA6M4 microcontroller with a single Arm Cortex-M33 core running at 200MHz, 256kB of static RAM, and 1MB of flash memory, a Syntiant NDP120 Neural Decision Processor (NDP) with Core 2 deep neural network hardware, Arm Cortex-M0 core, and Cadence HiFi 3 Digital Signal Processor (DSP). Elsewhere on the board is 2MB of SPI flash, a Renesas DA16600 802.11b/g/n Wi-Fi and Bluetooth 5.1 radio module, a lithium-polymer battery management circuit, and a six-axis inertial measurement unit (IMU) and digital microphone from TDK.
With all of the above in a compact 25×30mm (around 0.98×1.18") footprint, the RASynBoard aims to be a tiny titan for on-device machine learning work — loading models from the SPI flash storage for execution on the NDP120. "Our hardware technology brings advanced multimodal neural processing to Avnet's new RASynBoard with very low power consumption," explains Syntiant's Mallik Moturi. "The NDP120's ability to provide highly accurate, always-on sensor processing with relatively no impact to battery life was a key edge AI requirement in the development of the module, which also makes the device ideal for supporting a wide range of field applications in smart buildings, factories and cities."
For those growing beyond the RASynBoard Core Board's built-in capabilities, two 28-pin board-to-board connectors provide expansion — with an optional IO Board making use of these to offer an on-board debugger and USB-serial interface, a MikroE Click shuttle box header, a Pmod Type-6A socket, a 14-pin microcontroller expansion header, microSD storage, a user-definable button and RGB LED.
A bundled IO Board provides expansion including a Pmod connector, general-purpose input/output (GPIO) pins, and microSD storage. (📷: Avnet)
On the software front, Avnet has announced a partnership with Edge Impulse to provide support for the RASynBoard within the company's popular Studio machine learning platform. "Edge Impulse is excited to collaborate with Avnet on the launch of the RASynBoard, an ideal solution for ultra-low-power machine learning applications thanks to its Renesas MCU and sub-mW Syntiant NDP120 processor, and flexible array of sensors," says Edge Impulse's Raul Vergara. "Customers can quickly develop advanced models in Edge Impulse Studio and deploy them to the board for always-on inferencing in almost any location or environment."
The RASynBoard Core Board and IO Board bundle is now up for pre-order on Avnet's site at $99, with delivery expected to take place late in the second quarter of 2023.
 
  • Like
  • Love
  • Fire
Reactions: 11 users

Deadpool

hyper-efficient Ai
Teksun's Machine Learning section on their website, again right up our alley & they do Natural Language Processing amongst others.


MACHINE LEARNING


We assist you in developing and deploying personalized and data-intensive solutions based on Machine Learning Services, to let you counter business challenges.

Instilling Intelligence​


Teksun delivers you the new-age apps empowered with pattern recognition, artificial intelligence, and mathematical predictability, which collectively provide you higher scalability. Our technical developers are experts in optimally utilizing and placing machine learning in anomaly detection, algorithm design, future forecasting, data modeling, spam filtering, predictive analytics, product recommendations, etc.

Get Your First Consultation for FREE

Our Offerings

The offerings that we present here are just a gist of options and alternatives that we have for you inside the box. Catch sight of these to know the scope of our services:

  • Deep Learning
  • Predictive Analytics
  • Image Analytics
  • Video Analytics
  • Natural Language Processing



We also provide for Neural Network Development and Machine Learning Solutions. Looking for a better start for your project! Partner with our expert consultants to draft out the premier ways of undertaking it.

Get Started

It’s an apt time to take-off with us!


What makes us unique

What makes us unique is our ability to serve you in a ceaseless manner, with real-time updates at every project phase.

1. We provide Machine Learning Consulting, assisting you all the way from project initiation to deployment.

2. We furnish you with Supervised/Unsupervised ML services on both structured and unstructured data.

3. Our experts apply different algorithms and models to deliver the required service, such as NLP, Decision Trees, etc.

4. The tools and technologies we use are the best in the market, a few of which are MongoDB, Cassandra, and so on.

5. Our constantly updated and wide range of AI Models imparts your business with high performance & scalability.

6. Our experts take a personalized approach while delivering you the finest of Machine Learning Services.




Take a Look at

QA & Project Execution


Hire Developer​

Develop with the industry masters!
It's the selection of technologies that carves out a solution's full potential. Our top developers assure your Machine Learning solutions of the finest tools as per the project and budget needs.


Industries we serve

We bring a broad gamut of services, along with a versatile approach. Hence we are also able to facilitate a wide range of industries, whether it be Forensic, Financial, Healthcare, Defence, or any other.
  • Consumer Electronics
  • Wearable Devices
  • Industrial Automation / Industry 4.0
  • Biotech
  • Home Automation
  • Agritech
  • Security & Surveillance
  • Health Care
  • Drones & Autonomy
  • Automotive

Every project needs a different kind of attention and service. Our highly experienced consultants and technicians arrange tailor-made plans and strategies to manage your varied projects.

Kick-Off Project

Surge on your success journey!
Golly TechGirl, this partnership is looking to be sensational for us.

Running Man Happy Dance GIF by MOODMAN
 
  • Haha
  • Like
  • Love
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
The forecast 12% market share would equate to AUD $229.2B MC / 1.8B SOI = $127.33 SP AUD.

That would mean BRN SP rises from 51c in March 2023 to $127.33 = x249.7 by March 2030.

PLS was 1c SP low in 2013 to $5.66 peak in November 2022 = x566 within 10 years.

BRN was 3.5c SP low in 2020 to $127.33 in 2030 = x3,638 within 10 years.

Appears impossible, however, BRN has breakthrough tech whereas there are lithium mines everywhere.

Anything under $100M MC BRN was like investing in pre-IPO.

I will be very happy with AUD $50B MC by 2030 or about $27.78 SP.

It will require 2.6% market share. Anything above will be a big bonus.

Yeah, but you gotta admit you wouldn't be too disappointed if it actually hit 12% as predicted. 😝
 
  • Like
  • Fire
Reactions: 21 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
USPTO shows the inventor is Heath Robinson:

The way that works on a single processor level is that you have a core and every one of them has a megabyte of SRAM. Packets arrive into buffers in this SRAM, which triggers software to fetch them and run a hardware unpacketization engine – this removes all the packet framing, interprets what it means, and decompresses the packet, so it lives compressed at all times except when it's being computed on.

“It essentially recreates that little tensor that made the packet. We run a bunch of computations on those tensors and eventually we're ready to send them onward. What happens then is they get repacketized, recompressed, deposited into SRAM, and then from there our network functionality picks them up and forwards them to all the other cores that they need to go to under the direction of the compiler.”
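For what it's worth, the cycle Bajic describes — tensors living compressed as packets in SRAM except at the moment of compute — can be caricatured in a few lines of Python. Purely illustrative: the function names and the zlib/pickle stand-ins for Tenstorrent's hardware compression and packet framing are my own assumptions, not their actual scheme.

```python
import pickle, zlib

def packetize(tensor):
    # Compress the tensor into a "packet" — its at-rest form in SRAM.
    return zlib.compress(pickle.dumps(tensor))

def unpacketize(packet):
    # The "unpacketization engine": strip framing, decompress, recreate the tensor.
    return pickle.loads(zlib.decompress(packet))

def core_step(packet, op):
    # One core's cycle: unpack, compute, repack, forward over the NoC.
    tensor = unpacketize(packet)
    result = op(tensor)          # only here does the data exist uncompressed
    return packetize(result)     # recompressed before it re-enters SRAM

packet = packetize([1.0, 2.0, 3.0])
packet = core_step(packet, lambda t: [x * 2 for x in t])
print(unpacketize(packet))       # [2.0, 4.0, 6.0]
```

The point of the design is that the data only exists in its expanded form inside `op`; everywhere else it moves and rests compressed.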
OK. I see how it all works now.🥴😝


 
  • Haha
  • Love
  • Thinking
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
OK. I see how it all works now.🥴😝


View attachment 31572

Meaning I might need to brush up a bit on my technological prowess as it seems to have absconded somewhere without a trace. This is coming from someone who valiantly tried to read, if not understand Peter Van Der Made's book "Higher Intelligence", but in doing so managed to forget nearly every sentence the second after reading it. Maybe Peter could write his next book using me as test case and this time he could call the book "Lower Intelligence".
 
Last edited:
  • Haha
  • Love
  • Like
Reactions: 18 users
Top Bottom