stuart888
Regular
Only when needed is the key! Fantastic video, thanks a bunch. Learning. Wow.
It's a must watch Video, very informative. Fantastic to have Nandan as CMO.
It's great to be a shareholder
Energy efficient SNN spiking smarts!
Will Renesas do an Oliver Twist?
"We see an increasing demand for real-time, on-device intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices," said Roger Wendelken, Senior Vice President in Renesas' IoT and Infrastructure Business Unit. "We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today's mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few."
... even better than DRP-AI.
Interesting write up
Brainchip Extends AI, Machine Learning In Space And Time With Bio-Inspired Neural Networks
Brainchip has introduced a new generation of its unique, bio-inspired Akida line of licensable, configurable neural processing IP. (www.forbes.com)
I don't think I've ever seen this map before.
View attachment 31498
Hi @Bravo, thanks for the wake-up call @Rocket577, but unfortunately I slept like a log through the Cerence Conference and they haven't put a webcast or transcript of it up on their website yet. But don't worry, I'll be keeping my eyes peeled for it.
While I'm at it, I thought I might use this opportunity to remind everyone why I'm completely obsessed with Cerence and why I'm 99.999999999999999999999999999999999999999999999999999% convinced that we'll be incorporated in the "Cerence Immersive Companion" due in FY23/24. Aside from the other zillion-odd posts I've managed to devote to Cerence, of which this one (#43,639) is a pretty good example, here is yet another post to add to the pile.
For some context, Nils Shanz is the Chief Product Officer at Cerence. But prior to joining Cerence he was at Mercedes. And it was Nils who was responsible for user interaction and voice control on the Vision EQXX (the voice control system that incorporated BrainChip's technology to make wake-word detection 5-10 times faster than conventional voice control systems).
Check out this LinkedIn post from Nils when he was at Mercedes. It says "this is a demo to show the performance of our voice assistant in the #EQS: no Wake-up word needed to start a conversation & plenty of use-cases in less than 45 seconds". You can click the link below to watch the demo. But you can also see that there is a comment from Holger Quast (Product Strategy and Innovation at Cerence).
Also attached is a screenshot of a testimonial from Daimler on Cerence's website.
As I say, just add this post to the list until we get proof irrefutable, which won't be too far away IMO.
View attachment 31550
View attachment 31551
SAY WHAT?????
Tenstorrent are talking about SNNs. Remember that we're in like Flynn with the SiFive Intelligence X280, and that Tenstorrent have licensed the SiFive Intelligence X280 as a platform for its Tensix NPU.
View attachment 31568
Tenstorrent Is Changing the Way We Think About AI Chips
GPUs and CPUs are reaching their limits as far as AI is concerned. That's why Tenstorrent is creating something different.
Chris Wiltz | May 12, 2020
GPUs and CPUs are not going to be enough to ensure a stable future for artificial intelligence. "GPUs are essentially at the end of their evolutionary curve," Ljubisa Bajic, CEO of AI chip startup Tenstorrent, told Design News. "[GPUs] have done a great job; they've pushed the field to the point where it is now. But in order to make any kind of order-of-magnitude jumps, GPUs are going to have to go."
Tenstorrent's Grayskull processor is capable of operating at up to 368 TOPS with an architecture much different than any CPU or GPU (Image source: Tenstorrent)
Bajic knows quite a bit about GPU technology. He spent some time at Nvidia, the house that GPUs built, working as a senior architect. He's also spent a few years working as an IC designer and architect at AMD. While he doesn't think companies like Nvidia are going away any time soon, he thinks it's only a matter of time before the company releases an AI chip product that is not a GPU.
But an entire ecosystem of AI chip startups is already heading in that direction. Engineers and developers are looking at new, novel chip architectures capable of handling the unique demands of AI and its related technologies, both in data centers and at the edge.
Bajic is the founder of one such company: Toronto-based Tenstorrent, which was founded in 2016 and emerged from stealth earlier this year. Tenstorrent's goal is both simple and hugely ambitious: creating chip hardware for AI capable of delivering the best all-around performance in both the data center and the edge. The company has created its own proprietary processor core, called the Tensix, which contains a high-utilization packet processor, a programmable SIMD unit, and a dense math computational block, along with five single-issue RISC cores. By combining Tensix cores into an array using a network-on-chip (NoC), Tenstorrent says it can create high-powered chips that handle both inference and training and scale from small embedded devices all the way up to large data center deployments.
The company's first product, Grayskull (yes, that is a He-Man reference), is a processor targeted at inference tasks. According to company specs, Grayskull is capable of operating at up to 368 tera-operations per second (TOPS). To put that into perspective, consider Qualcomm's AI Engine used in its latest SoCs, such as the Snapdragon 865. The Qualcomm engine offers up to 15 TOPS of performance for various mobile applications. A single Grayskull processor can handle the calculation volume of roughly two dozen of the chips found in the highest-end smartphones on the market today.
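The "two dozen" claim checks out against the quoted specs; here is a quick sanity check using only the two figures from the article:

```python
# Back-of-the-envelope check of the article's comparison.
GRAYSKULL_TOPS = 368      # Tenstorrent Grayskull, per company specs
SNAPDRAGON_AI_TOPS = 15   # Qualcomm AI Engine in the Snapdragon 865

def equivalent_phone_chips(accelerator_tops: float, phone_tops: float) -> int:
    """How many phone-class AI engines one accelerator matches, rounded down."""
    return int(accelerator_tops // phone_tops)

print(equivalent_phone_chips(GRAYSKULL_TOPS, SNAPDRAGON_AI_TOPS))  # 24 -- "about two dozen"
```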
The Grayskull PCIe card (Image source: Tenstorrent)
Nature Versus Neural
If you want to design a chip that mimics cognition, then taking cues from the human brain is the obvious way to go. Whereas AI draws a clear functional distinction between training (learning a task) and inference (implementing or acting on what's been learned), the human brain does no such thing.
"We figured if we're going after imitating Mother Nature, we should really do a good job of it and not miss some key features," Bajic said. "If you look at the natural world, there's the same architecture between small things and big things. They can all learn; it's not inference or training. And they all achieve extreme efficiency by relying on natural sparsity, so only a small percentage of the neurons in the brain are doing anything at any given time, and which ones are working depends on what you're doing."
Bajic said he and his team wanted to build a computer that would have all these features without compromising on any of them. "In the world of artificial neural networks today, there are two camps that have popped up," he said. "One is CPUs and GPUs and all the startup hardware that's coming up. They tend to be doing dense matrix math on hardware that's built for it, like single instruction, multiple data [SIMD] machines, and if they're scaled out they tend to talk over Ethernet. On the flip side you've got the spiking artificial neural network, which is a lot less popular and has had a lot less success in broad applications."
Spiking neural networks (SNNs) more closely mimic the functions of biological neurons, which send information via spikes in electrical activity. "Here people try to simulate natural neurons almost directly by writing out the differential equations that describe their operation and then implementing them as close as we can in hardware," Bajic explained. "So to an engineer this comes down to basically having many scalar processor cores connected to a scalar network."
This is very inefficient from a hardware standpoint. But Bajic said SNNs share an efficiency with biological neurons: only a certain percentage of neurons are activated depending on what the neural net is doing, something that's highly desirable in terms of power consumption in particular.
"Spiking neural nets have this conditional efficiency, but no hardware efficiency. The other end of the spectrum has both. We wanted to build a machine that has both," Bajic said. "We wanted to pick a place in the spectrum where we could get the best of both worlds."
Behind the Power of Grayskull
With that in mind, there are four overall goals Tenstorrent is shooting for in its chip development: hardware efficiency, conditional efficiency, storage efficiency, and a high degree of scalability (exceeding 100,000 chips).
"So how did we do this? We implemented a machine that can run fine-grained conditional execution by factoring the computation from huge groups of numbers into computations on small groups (16-by-4 or 16-by-16 groups, to be precise)," Bajic said.
"We enable control flow on these groups with no performance penalty. So essentially we can run small matrices, and we can put 'if' statements around them and decide whether to run them at all. And if we're going to run them, we can decide whether to run them in reduced precision or full precision or anywhere in between."
He said this also means rethinking the software stack. "The problem is that the software stacks a lot of the other companies in the space have brought out assume that there's a fixed set of dimensions and a fixed set of work to run. So in order to enable adaptation at runtime, the hardware normally needs to support it, and the full software stack as well.
"So many decisions that are currently made at compile time are, for us, moved into runtime so that we can accept exactly the right-sized inputs, and so that we know exactly how big things are after we've chosen to eliminate some things at runtime. There's a fairly large software challenge in keeping up with what the hardware enables."
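Bajic's "'if' statements around small matrices" can be caricatured in a few lines. The sketch below is a toy simplification (tile size and skip threshold are invented, and real hardware makes the decision with no performance penalty), but it shows the shape of the idea: tile the matrix multiply and skip any tile whose contents are effectively zero.

```python
import numpy as np

def conditional_tiled_matmul(a, b, tile=16, skip_thresh=1e-3):
    """Toy fine-grained conditional execution: compute A @ B tile by tile,
    with an 'if' around each tile of A that skips it entirely when its
    largest magnitude falls below skip_thresh. Illustrative only."""
    out = np.zeros((a.shape[0], b.shape[1]))
    for i in range(0, a.shape[0], tile):
        for k in range(0, a.shape[1], tile):
            a_tile = a[i:i + tile, k:k + tile]
            if np.abs(a_tile).max() < skip_thresh:
                continue  # the 'if' statement around a small matrix
            out[i:i + tile, :] += a_tile @ b[k:k + tile, :]
    return out
```

On sparse inputs most tiles are skipped, which is exactly the "conditional efficiency" of SNNs grafted onto dense-math hardware.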
(Image source: Tenstorrent)
Creating an architecture that can scale to over 100,000 nodes means operating at a scale where you can't have a shared memory space. "You basically need a bunch of processors with private memory," Bajic said. "Cache coherency is another thing that's impossible to scale across more than a couple hundred nodes, so that had to go as well."
Bajic explained that each of Tenstorrent's Tensix cores is really a grid of five single-issue RISC cores that are networked together. Each Tensix is capable of roughly 3 TOPS of compute.
"All of our processors can pretty much be viewed as packet processors," Bajic said. "The way that works on a single-processor level is that you have a core, and every one of them has a megabyte of SRAM. Packets arrive into buffers in this SRAM, which triggers software to fetch them and run a hardware unpacketization engine. This removes all the packet framing, interprets what it means, and decompresses the packet, so data stays compressed at all times except when it's being computed on.
"It essentially recreates the little tensor that made up the packet. We run a bunch of computations on those tensors and eventually we're ready to send them onward. What happens then is they get repacketized, recompressed, deposited into SRAM, and from there our network functionality picks them up and forwards them to all the other cores they need to go to, under the direction of the compiler."
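The unpack, compute, repack loop Bajic describes can be paraphrased in software (the function names and the use of zlib/pickle as the "compression" are invented for illustration; the real engines are dedicated hardware blocks):

```python
import pickle
import zlib

def packetize(tensor):
    """Serialize and compress a tensor into a 'packet'; it stays
    compressed while sitting in SRAM or moving between cores."""
    return zlib.compress(pickle.dumps(tensor))

def process_packet(packet, compute):
    """One core's turn: unpacketize, compute on the tensor, repacketize."""
    tensor = pickle.loads(zlib.decompress(packet))  # unpacketization engine
    result = compute(tensor)                        # math block runs here
    return packetize(result)                        # recompress for the NoC

# e.g. double every element as the packet passes through a 'core'
out = process_packet(packetize([1, 2, 3]), lambda t: [2 * x for x in t])
```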
While Tenstorrent is rolling out Grayskull, it is actively developing its second Tensix-based processor, dubbed Wormhole. Tenstorrent is targeting a Fall 2020 release for Wormhole and says it will focus even more on scale. "It's essentially built around the same architecture [as Grayskull], but it has a lot of Ethernet links on it for scaling out," Bajic said. "It's not going to be a PCI card chip; it's the same architecture, but for big systems."
Searching for the iPhone Moment
There are a lot of lofty goals for AI on the horizon. Researchers and major companies alike are hoping new chip hardware will help along the path toward big projects, from Level 5 autonomous cars all the way to some notion of general artificial intelligence.
Bajic agrees with these ideas, but he also believes that thereâs a simple matter of cost savings that makes chips like the ones being developed by his company an attractive commodity.
"The metric that everybody cares about is this concept of total cost of ownership (TCO)," he said. "If you think of companies like Google, Microsoft, and Amazon, these are big organizations that run an inordinate amount of computing machinery and spend a lot of money doing it. Essentially, they calculate the cost of everything to do with running a computer system over some set of years, including how much the machine costs to begin with (the upfront cost), how much it costs to pipe through wires and cooling so that you can live with its power consumption, and how much the power itself costs. They add all of that together and get this TCO metric.
"For them, minimizing that metric is important because they spend billions of dollars on this. Machine learning and AI have become a very sizable percentage of all their compute activity, and it's trending towards becoming half of all that activity in the next couple of years. So if your hardware can perform, say, 10 times better, then it's a very meaningful financial indicator. If you can convince the market that you've got an order-of-magnitude TCO advantage that is going to persist for a few years, it's a super powerful story. It's a completely valid premise to build a business around, but it's kind of an optimization thing as opposed to something super exciting."
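The TCO arithmetic Bajic sketches is simple enough to write down. All figures below are hypothetical, and modelling cooling as a multiplier on the power bill is a common simplification, not anything from the article:

```python
def total_cost_of_ownership(upfront, power_kw, usd_per_kwh, cooling_overhead, years):
    """TCO over a service life: purchase price plus the energy (and cooling)
    needed to run the machine. Cooling is a fractional overhead on power."""
    hours = years * 365 * 24
    energy_cost = power_kw * hours * usd_per_kwh * (1 + cooling_overhead)
    return upfront + energy_cost

# Hypothetical: a 10x performance-per-watt advantage shrinks the energy term 10x.
baseline = total_cost_of_ownership(10_000, 1.0, 0.10, 0.5, 5)   # 16570.0
efficient = total_cost_of_ownership(10_000, 0.1, 0.10, 0.5, 5)  # 10657.0
```

At data-center fleet scale, that per-machine gap is the "billions of dollars" story Bajic refers to.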
For Bajic, the more exciting areas come in the form of large-scale AI projects, like using machine learning to track diseases and discover vaccines and medications, as well as in emerging fields such as emotional AI and affective computing. "Imagine if you had a device on your wrist that could interpret all of your mannerisms and gestures. As you're sitting there watching a movie, it could tell if you're bored or disgusted and change the channel. Or it could automatically order food if you appear to be hungry; something pretty intelligent that can also be situationally aware," he said.
"The key engine that enables this level of awareness is an AI, but at this point these solutions are too power-hungry and too big to put on your wrist or to put anywhere that can follow you. By providing an architecture that will give an order-of-magnitude boost, you can start unlocking whole new technologies and creating things that will have an impact on the level of the first iPhone release."
Tenstorrent Is Changing the Way We Think About AI Chips
GPUs and CPUs are reaching their limits as far as AI is concerned. That's why Tenstorrent is creating something different. (www.designnews.com)
Thanks for posting that; bookmarked to read later. There's a tsunami of information to sift through, and I just recalled that zee has a bookmark feature here. I think the segment below is important for some of those who have been expecting things ahead of BrainChip's timeline:
Mike Vizard: So how long before we start to see these use cases? You guys just launched the processors; it usually takes some time for all this to come together, and for it to manifest itself somewhere. What's your kind of timeline?
Nandan Nayampally: So I think the way to think about it is, the real growth in more radical, innovative use cases is probably, you know, a few months out, a year out. But what we're saying is that there are use cases existing on more high-powered devices today that can now migrate to much more efficient edge devices, right? And I do want to make sure people understand that when we talk about edge, it's not the brick that's sitting next to your network and still driven by a fan. It's smaller than the bigger bricks, but it is still a brick. What we're talking about is literally at-sensor, always-on intelligence. Let's say it's a heart rate monitor, for example, or, you know, a respiratory rate monitor; you could actually have a very, very compact device of that kind. And so one of the big benefits we see is, let's say, video object detection today needs quite a bit of high-power compute to do HD video object detection and target tracking. Now imagine you could do that in a battery-operated, very low form factor, cost-effective device, right? So suddenly your dash cam, with additional capabilities built into it, could become much more cost-effective or more capable. So we see a lot of the use cases that exist today coming in. And then we see a number of use cases, like vital-signs prediction or remote healthcare, getting much closer and much cheaper, because you don't have to send everything to the cloud. You can get a really good idea before you have to send anything to the cloud, and then you're sending less data; it's already pre-qualified before you send it, rather than finding out through the cycle that it's taken a lot more time. Does that make sense?
SAY WHAT?????
Tenstorrent were talking about SNNs in an article dated 12 May 2020!!!
Remember that we're in like Flynn with the SiFive Intelligence X280, seeing that they have just specified that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors. And Tenstorrent have licensed the SiFive Intelligence X280 as a platform for its Tensix NPU.
Hi @Bravo ,
Here are a couple of Cerence patent applications:
US2022415318A1 VOICE ASSISTANT ACTIVATION SYSTEM WITH CONTEXT DETERMINATION BASED ON MULTIMODAL DATA
View attachment 31562
View attachment 31561
A vehicle system for classifying a spoken utterance within a vehicle cabin as one of system-directed and non-system-directed may include at least one microphone to detect at least one acoustic utterance from at least one occupant of the vehicle, at least one camera to detect occupant data indicative of occupant behavior within the vehicle corresponding to the acoustic utterance, and a processor programmed to receive the acoustic utterance, receive the occupant data, determine whether the occupant data is indicative of a vehicle feature, classify the acoustic utterance as a system-directed utterance in response to the occupant data being indicative of a vehicle feature, and process the acoustic utterance.
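Stripped of the patentese, the claim is a decision rule: gate the speech pipeline on whether camera-derived occupant data points at a vehicle feature. A hypothetical paraphrase in code (the function, the gaze signal, and the feature list are all invented for illustration; this is not Cerence's implementation):

```python
def classify_utterance(utterance: str, occupant_gaze_target: str) -> str:
    """Per the claim: if occupant behavior (here, gaze) indicates a vehicle
    feature, treat the utterance as system-directed; otherwise it is
    ordinary conversation and is not processed as a command."""
    VEHICLE_FEATURES = {"sunroof", "climate_control", "navigation", "media"}  # illustrative
    if occupant_gaze_target in VEHICLE_FEATURES:
        return "system-directed"
    return "non-system-directed"
```

The appeal for us, of course, is that this kind of always-on multimodal gating is exactly where a low-power at-sensor NPU would earn its keep.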
WO2020142717A1 METHODS AND SYSTEMS FOR INCREASING AUTONOMOUS VEHICLE SAFETY AND FLEXIBILITY USING VOICE INTERACTION
View attachment 31565
The specifications make no mention of SNNs.
Yep, and watch the 400k-ish walls at +0.005, +0.010, +0.015 being moved up. These used to sit on the sell price, and now they're being moved up. Some big buys going through now. Insto analysts said BRN is a BUY! Only blue sky from here. DYOR.
Over 300,000 shares wanted in this one order.
1:19:59 PM 0.560 366,481 $205,229.36
NEWS
GMâs new autonomous driving system follows Mercedes, not Tesla
CREDIT: GM - CADILLAC CELESTIQ
By William Johnson
Posted on March 7, 2023
General Motors (GM) has announced some crucial details about its upcoming Ultra Cruise autonomous driving system.
With the mass proliferation of autonomous driving, thanks largely to Tesla, more and more companies have begun working on their own systems. This includes GM, which has already released its Super Cruise system and has now shared details about its next iteration, Ultra Cruise.
In the design process of autonomous systems, two leaders with two very different design philosophies have emerged. Tesla is the first, heavily relying on AI while focusing on visual sensor systems to guide the vehicle. This has been seen most clearly in Tesla's upcoming hardware 4, which eliminates ultra-sonic sensors, instead opting to dramatically increase the quality of the visual sensing systems around the vehicle. The second camp is currently headed by Mercedes.
Mercedes has taken the complete opposite approach to Tesla. While still relying on AI guidance, Mercedes uses a combination of three different sensor arrays, visual, ultra-sonic, and LiDAR, to help guide the vehicle.
That takes us to GM's Ultra Cruise, which was revealed in detail today. Much like Mercedes, GM has chosen to use three sensor arrays: visual, ultra-sonic, and LiDAR. Further emulating the premium German auto group, GM's system "will have a 360-degree view of the vehicle," according to the automaker.
According to GM, this architecture allows redundancy and sensor specialization, whereby each sensor group focuses on a specific task. The camera and short-range ultra-sonic radar systems focus on object detection, primarily at low speeds and in urban environments. These systems help the vehicle detect other vehicles, traffic signals and signs, and pedestrians. At higher speeds, the long-range radar and LiDAR systems also come into play, helping to detect vehicles and road features from further away.
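That division of labor can be sketched as a toy gating function. The 60 km/h threshold, the sensor names, and the `visibility_ok` flag are all invented for illustration; GM's actual fusion logic is not public:

```python
def active_sensors(speed_kph: float, visibility_ok: bool = True) -> set[str]:
    """Toy sketch of speed-based sensor specialization.
    Threshold and sensor names are hypothetical."""
    # Camera and short-range sensors handle low-speed, urban object detection.
    sensors = {"camera", "short_range_radar"}
    # At higher speeds, long-range radar and LiDAR come into play.
    if speed_kph > 60:
        sensors |= {"long_range_radar", "lidar"}
    # Radar and LiDAR keep working in poor visibility, improving uptime.
    if not visibility_ok:
        sensors |= {"long_range_radar", "lidar"}
    return sensors

print(sorted(active_sensors(30)))    # urban, low speed
print(sorted(active_sensors(110)))   # highway
```

The redundancy argument is visible in the poor-visibility branch: when the camera degrades, the radar/LiDAR group is still active, which is what lets GM claim better overall uptime than a vision-only stack.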
GM also points out that, thanks to the capabilities of radar and LiDAR systems in poor visibility conditions, the system benefits from better overall uptime. GM aims to create an autonomous driving system allowing hands-free driving in 95% of situations.
As for the Tesla approach, the leader in autonomous driving certainly has credibility in its design. According to Tesla's blog post about removing the ultra-sonic sensor capabilities from its vehicles, "Tesla Vision" equipped vehicles perform just as well, if not better, in tests like the pedestrian automatic emergency braking (AEB) test. Though it should be noted that the lack of secondary sensors is also likely to help reduce vehicle manufacturing costs.
Ultra Cruise will first be available on the upcoming Cadillac Celestiq. Still, with a growing number of vehicles coming with GM's Super Cruise, it's likely only a matter of time before the more advanced ADAS system makes its way to mass market offerings as well.
"GM's fundamental strategy for all ADAS features, including Ultra Cruise, is safely deploying these technologies," said Jason Ditman, GM chief engineer, Ultra Cruise. "A deep knowledge of what Ultra Cruise is capable of, along with the detailed picture provided by its sensors, will help us understand when Ultra Cruise can be engaged and when to hand control back to the driver. We believe consistent, clear operation can help build drivers' confidence in Ultra Cruise."
With more and more automakers entering the autonomous driving space every year, it will be interesting to see which architecture they choose to invest in. But what could prove to be the defining trait is which system performs better in the real world. And as of now, it isn't immediately clear who the victor is.
Source: "GM's new autonomous driving system follows Mercedes, not Tesla" (www.teslarati.com)