Boab
I wish I could paint like Vincent
Exciting times ahead but for now...

Drop in sp then.
Thank you so much, both @Esq.111 and @cosors, for your kind and helpful suggestions on how best to get our dining table repaired - I was very touched!

Morning Frangipani,
May help with leg re-attachment.

Two options...
Regards,
Esq.
Won't be long now Brain Fam!

Could someone please ring Rob Telson and ask him to give Rene Haas (CEO, Arm) a call, to see if Rene would mind giving Masayoshi Son (CEO, SoftBank) a call, and to remind Son, when he is speaking to Sam Altman (CEO, OpenAI) tomorrow, if he could please quickly mention Shyamal Anadkat from OpenAI's (ChatGPT) recent LinkedIn post below, which mentions us?
TIA
View attachment 38620
https://lnkd.in/gEN3NdXt
View attachment 38619
Your happy faces are getting ...very funny! :-/

Won't be long now Brain Fam!
Masayoshi Son (CEO SoftBank) said today that Arm is at the beginning of an “explosive” period of growth!
View attachment 38654
View attachment 38652
View attachment 38653
SoftBank’s Son Goes Back on Offense to Cement His Tech Legacy
(Bloomberg) -- SoftBank Group Corp. founder Masayoshi Son declared in an emotional meeting with investors that he will go back on the offensive in tech investing soon, seeking to establish his credentials in the burgeoning field of artificial intelligence.
au.finance.yahoo.com
In the course of sales between 11:15am and 12:20pm, I counted 46 individual trades of either 125 or 126 shares. The 125 and 126 lots generally seem to be adjacent to each other. Interesting sort of activity going on there.
Your happy faces are getting ...very funny! :-/
Interesting short article from the recent Autosens as an insight into the vehicle sensors required.
Also a snippet of a general statement from a Mercedes-Benz presentation. That's a shitload of data moving - and how much latency is added?
Solutions like NVIDIA's GPU AI may be one of the interim/transitional ones (imo) given their overall size, resources and supply ability; however, as per my recent post of a conference paper, this processing comes at a substantial power cost.
There will come a point where the power consideration becomes a selling point of differentiation and / or an absolute necessity to end users / developers.
That's where we must be ready, developed and at the cutting edge after all this DD, testing etc that's been going on with EAP, continuing partnerships and potential early adopters.
How Many Senses Do You Need To Drive A Car?
Automotive computing, sensing, and data transport requirements are growing enormously.
semiengineering.com
JUNE 1ST, 2023 - BY: FRANK SCHIRRMEISTER
The recent AutoSens conference in Detroit left me questioning whether I should turn in my driver’s license. The answer the attending OEMs gave to all the discussions about the advantages of RGB cameras, ultrasound, radar, lidar, and thermal sensors was a unanimous “We probably need all of them in some form of combination” to make autonomy a reality in automotive. Together, these sensors are much better than my eyes and ears.
Technology progress is speedy in Automated Driving Assistance Systems (ADAS) and autonomous driving, but we are not there yet holistically. So I am keeping my license for some more time.
I traveled to Auto City to speak on a panel organized by Ann Mutschler that focused on the design chain aspects, together with Siemens EDA and GlobalFoundries. Ann’s write-up “Automotive Relationships Shifting With Chiplets” summarizes the panel well. The conference was a great experience as the networking allowed talking to the whole design chain from OEMs through Tier 1 system suppliers, Tier 2 semis and software developers, to Tier 3s like us in semiconductor IP. Given that the panel had a foundry, an IP vendor, and an EDA vendor, we quickly focused our discussions on chiplets.
Concerning the sensing aspects, Owl.Ai’s CEO and co-founder, Chuck Gershman, gave an excellent presentation summarizing the problem the industry is trying to solve: 700K annual worldwide pedestrian fatalities, a 59% increase in pedestrian deaths in the last decade in the US, and 76% of fatalities occurring at night. Government regulations are coming for pedestrian nighttime safety worldwide. Owl.Ai and Flir showcased thermal camera-related technologies, motivated by only 1 out of 23 vehicles passing all tests in a nighttime IIHS test using cameras and radar, and by RGB image sensors not being able to see in complete darkness (just like me, I should say, but I am still keeping my driver’s license).
Source: Owl.Ai, AutoSens 2023, Detroit
Chuck nicely introduced the four critical phases of “detection” – is something there – “classification” – is it a person, car, or deer – “range estimation” – what distance in meters is the object – and “acting” – warning the driver or acting automatically. I liked Owl.Ai’s slide above, which shows the various sensing methods’ different use cases and flaws.
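The four phases can be sketched as a toy pipeline. Everything below (the class names, the 30 m braking threshold, the policy itself) is an illustrative assumption for this post, not Owl.Ai's actual system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    """One sensed object, as it moves through the four phases."""
    detected: bool                     # detection: is something there?
    label: Optional[str] = None        # classification: person, car, or deer?
    range_m: Optional[float] = None    # range estimation: distance in meters

def act(track: Track, brake_range_m: float = 30.0) -> str:
    """Acting phase (toy policy): warn on a classified object, brake when close."""
    if not track.detected:
        return "no action"
    if track.range_m is not None and track.range_m < brake_range_m:
        return "automatic braking"
    if track.label is not None:
        return "warn driver"
    return "no action"

print(act(Track(detected=True, label="pedestrian", range_m=12.0)))  # automatic braking
print(act(Track(detected=True, label="deer", range_m=80.0)))        # warn driver
```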
And in the discussions I had during the conference, the OEMs agreed that multiple sensors are needed.
Regarding the transition of driving from L3 to L4 robot taxis, Rivian’s Abdullah Zaidi showed the slide below outlining the different needs for cameras, radars, and lidars, and also the compute requirements.
Source: Rivian, AutoSens 2023, Detroit
No wonder automotive is such an attractive space for semiconductors. Computing, sensing, and data transport requirements are just growing enormously. And mind you that the picture above does not mention other cameras for in-cabin monitoring.
Besides the computing requirements, data transport is core to my day-to-day work. In one of his slides, Mercedes-Benz AG’s Konstantin Fichtner presented that the DrivePilot system records 33.73 GB of trigger measurements per minute – 281 times as much as it takes to watch a Netflix 4K stream. That’s a lot of data to transport across networks-on-chips (NoCs), between chips and chiplets. And it, of course, raises the question of on-board vs. off-board processing.
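The quoted 281x multiple can be sanity-checked by working backwards from the 33.73 GB/minute figure: it implies a 4K stream rate of roughly 7.2 GB per hour, close to the commonly cited ~7 GB/hour for 4K streaming (the streaming baseline is an assumption here, not from the article):

```python
# Working backwards from the Mercedes-Benz figures quoted above:
# 33.73 GB/min of trigger measurements, stated to be 281x a Netflix 4K stream.
drivepilot_gb_per_min = 33.73
ratio = 281

implied_4k_gb_per_hour = drivepilot_gb_per_min * 60 / ratio
print(f"Implied 4K stream rate: {implied_4k_gb_per_hour:.1f} GB/hour")  # ~7.2
```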
Are we there yet? Not quite, but we are getting closer. On the last day of the conference, Carnegie Mellon University’s Prof. Philip Koopman sobered up the audience with his talk “Defining Safety For Shared Human/Computer Driver Responsibility.” His keynote walked the audience through the accountability dilemma when a highly automated vehicle crashes and made some constructive suggestions on updating state laws. In their recently published essay “Winning the Imitation Game: Setting Safety Expectations for Automated Vehicles,” Prof. Koopman and William H. Widen from the University of Miami School of Law suggest that legislatures amend existing laws to create a new legal category of “computer driver” to allow a plaintiff to make a negligence claim.
To make that exact point, the universe created the situation the next day, which I took a picture of below. Can you see what’s wrong here?
Source: Frank Schirrmeister, May 2023
Yep, a pedestrian ghost in the machine.
In technology’s defense, a jaywalking pedestrian had crossed about 30 seconds earlier, so the system probably erred on the side of caution. But still, this was a good reminder that future sensors will hopefully be better than my eyes - and a thermal sensor would have helped here, too.
All soberness and glitches aside, let’s not forget the end goal: Reducing fatalities due to traffic situations. And as I joked in my last blog on ISO 26262 safety, How Safe is Safe Enough: “If aliens would arrive and assess how to reduce traffic-related deaths, they would most certainly take humans off the streets.”
Brave new autonomous world, here we come. And I am keeping my license. That 1997 Miata doesn’t drive itself!
If power efficiency is Arm's forte, that must mean power efficiency is our PIANO FORTE!!!
View attachment 38656
View attachment 38655
I know how a glove puppet works - how does a cat puppet work?
“We believe that our joint solution, combining radar technology with advanced imaging algorithms, will be a game-changer in drowsiness detection and child presence detection, making our roads safer for everyone. In the meantime, it provides a perfect multi-sensor fusion for smart in-cabin applications.”
Wonder if we get a look-in with emotion3D as a processor on their new hook-ups and the solution to be shown in Sept '23?
They're looking at real-time drowsiness detection plus others in a sensor-fusion set-up with their software stack.
From the partnership blurb... note the situations it's being used for.
BrainChip Partners with emotion3D to Improve Driver Safety and User Experience - emotion3D
Laguna Hills, Calif. – February 26, 2023 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced that it has entered into a partnership with emotion3D to demonstrate...
emotion3d.ai
The partnership will allow emotion3D to leverage BrainChip’s technology to achieve an ultra-low-power working environment with on-chip learning while processing everything locally on device within the vehicle to ensure data privacy.
“We are committed to setting the standard in driving safety and user experience through the development of camera-based, in-cabin understanding,” says Florian Seitner, CEO at emotion3D. “In combining our in-cabin analysis software with BrainChip’s on-chip compute, we are able to elevate that standard in a faster, safer and smarter way. This partnership will provide a cascading number of benefits that will continue to disrupt the mobility industry.”
Among some of the situations covered by this optimized driver monitoring functionality are warnings for driver distractions and drowsiness, device personalization, gesture recognition, passenger detection, and more.
emotion3D, Chuhang Technologies and Sleep Advice Technologies Announce Innovative Partnership to Develop Multi-sensor In-cabin Analysis System Leveraging Camera and Radar Technology
emotion3D, a leading company for camera-based in-cabin analysis software, Chuhang Tech, a visionary startup developing radar technologies for AD and A...
www.businesswire.com
June 14, 2023 08:00 AM Eastern Daylight Time
VIENNA--(BUSINESS WIRE)--emotion3D, a leading company for camera-based in-cabin analysis software, Chuhang Tech, a visionary startup developing radar technologies for AD and ADAS applications and SAT, an innovative startup specialized in sleep onset prediction have entered into a collaboration for the development of a multi-sensor in-cabin analysis system.
Drowsiness detection is a critical aspect of road safety, as fatigue-related accidents account for 10-20% of crashes and near-crashes, according to the Mobility & Transport department of the European Commission. Drowsiness is a highly complex state to analyse; therefore, multiple complementary sensors and solutions can be applied to get more accurate results. With the aim of creating the new standard of automotive drowsiness detection, emotion3D, Chuhang Tech and SAT have started their joint work.
Within this collaboration, the companies will develop a multi-sensing solution for drowsiness detection. emotion3D’s human analysis software stack CABIN EYE derives valuable information on the driver from camera images, while Chuhang Tech’s radar solutions analyse the driver’s vital signs. These two sensing modalities, combined with the sleep onset prediction algorithms of SAT, will accurately identify signs of drowsiness in real time. As a result, the system shall become the benchmark for the next generation of in-cabin monitoring systems, setting a new standard in the industry.
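As a rough illustration of three-modality fusion like the one described above, here is a minimal weighted-average combiner. The modality names, weights, and the averaging scheme are all hypothetical assumptions for this sketch, not the companies' actual algorithms:

```python
from typing import Dict, Optional

def fuse_drowsiness(scores: Dict[str, float],
                    weights: Optional[Dict[str, float]] = None) -> float:
    """Combine per-modality drowsiness scores (each in [0, 1]) into one score.

    Assumed modalities: camera (in-cabin analysis), radar_vitals (vital signs),
    sleep_onset (sleep onset prediction). Weights are illustrative only.
    """
    if weights is None:
        weights = {"camera": 0.4, "radar_vitals": 0.3, "sleep_onset": 0.3}
    total = sum(weights[k] for k in scores)  # renormalize if a modality is missing
    return sum(scores[k] * weights[k] for k in scores) / total

score = fuse_drowsiness({"camera": 0.8, "radar_vitals": 0.6, "sleep_onset": 0.7})
print(f"fused drowsiness score: {score:.2f}")  # 0.71
```

Renormalizing over the modalities actually present means the combiner degrades gracefully if, say, the radar channel drops out.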
In addition to drowsiness detection, this solution will bring a wide range of additional features required by Euro NCAP such as distraction warnings with the help of emotion3D’s CABIN EYE software and child presence detection enabled by Chuhang Tech’s mmWave radar technology. Moreover, numerous user experience features can be realized such as personalization, intuitive interaction and much more.
“We are working towards Vision Zero within all our projects and partnerships and we are happy to have made a step forward in drowsiness detection together with SAT and Chuhang Tech.” says Florian Seitner, CEO of emotion3D.
Similarly, Riccardo Groppo, CEO and Co-Founder of SAT, expressed his thoughts on the partnership by stating “We are glad to combine our sleep onset prediction algorithm with a non-intrusive sensing technology for vital signs. This is the way forward for automotive drowsiness detection.”
Wogong Zhang, the CTO and Co-Founder of Chuhang Tech mentioned "We believe that our joint solution, combining radar technology with advanced imaging algorithms, will be a game-changer in drowsiness detection and child presence detection, making our roads safer for everyone. In the meantime, it provides a perfect multi-sensor fusion for smart in-cabin applications.”
emotion3D, SAT, and Chuhang Tech are committed to the joint development and successful implementation of this advanced solution. By pooling their resources and expertise, the three companies aim to drive innovation and set new benchmarks in automotive safety, improving the lives of drivers, passengers, and pedestrians.
A first solution will be showcased at IAA Mobility in Munich starting September 5th 2023.
Wasn't the podcast out today?

Yes, was out this morning. Check earlier in this thread.