There was supposed to be a stake ...
Can someone please do a health check on @Diogenese?? He just reacted with fire to my post (attached), so please be careful when approaching him; he may be behaving wildly irrationally.
Very concerned.
tyia
That makes far more sense.
There was supposed to be a stake ...
Loihi, am I right?
Hello All, this is for dummies to answer, so I expect no responses.
Sunday's question: how many companies can currently state that they have on-chip learning, one-shot, few-shot and continuous learning, learning on top of learning, power consumption ranging from thousandths of a watt down to as low as millionths of a watt, can run for approximately 6 months on 2 AAA batteries, doesn't require an internet connection, doesn't require something called "a cloud", protects your security and privacy like no other, doesn't require a VPN, doesn't require 5G/6G, is fully commercialized now, has its brother in production now who is smarter, smaller and hoping to go to Mars one day, is available in IP blocks or as an NSoC for as little as $50 USD, can see, smell, taste, hear, feel, can outsmart a human through pure speed, was first developed in Belmont, Perth, Western Australia and is nicknamed "The Brain"?
Have a great night.
Tech x
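As a rough sanity check on the "approximately 6 months on 2 AAA batteries" figure in the question above, here is a quick back-of-the-envelope calculation in Python. The per-cell energy figure is my own assumption (typical alkaline AAA capacity), not something stated in the post.

# Back-of-the-envelope check of the battery-life claim above.
wh_per_cell = 1.7            # assumed energy in one AAA alkaline cell (~1100 mAh at 1.5 V)
total_wh = 2 * wh_per_cell   # two cells
draw_w = 1e-3                # one thousandth of a watt (1 mW), per the post
hours = total_wh / draw_w
print(f"{hours:.0f} h = {hours / 24:.0f} days = {hours / (24 * 30):.1f} months")
# Roughly 3400 h, i.e. about 4-5 months at a constant 1 mW draw, so the
# "approximately 6 months" claim is in the right ballpark; at microwatt-level
# draw, the cells' own shelf life becomes the limiting factor.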
Do I have this right ... a $50 chip equivalent or superior to Nvidia's $30,000 GPU?
If so, BrainChip has plenty of room to raise the price, or NVDA should be pursuing a buyout.
Or am I delusional?? Ill-informed?
Valencia-based start-up iPronics, the developer of a plug-and-play, programmable photonic microchip, has announced delivery of its first shipments to several companies in distinct sectors. Photonic chips use up to 10 times less power and can be 20 times faster than electronic chips, while processing far more information.
Anything Apple can do, TATA can too:
Apple has a secret team working to bring noninvasive glucose monitoring to its smartwatch, but that's not all it's pursuing. Also: the company's upcoming headset likely won't need an iPhone and follow-up models are already in the works, plus schedule changes are in store for Apple retail employees.
Last week in Power On: Apple’s upcoming mixed-reality headset and WWDC are a perfect match.
The Starters
The Apple Park campus.
Photographer: Sam Hall/Bloomberg
Apple Inc. is famous for keeping its future products under wraps, but even by those standards the company’s Exploratory Design Group is secretive.
As I first revealed last week, this covert team is the brains behind future no-prick glucose tracking technology for the Apple Watch. And that’s not all it’s working on. The group is akin to X, Alphabet Inc.’s “moonshot factory,” which helped develop Waymo self-driving car technology, Google Glass and Loon internet balloons.
Though the Apple team — better known inside the company as XDG — is primarily focused on the glucose work, there are several other projects underway and it’s made key contributions to existing Apple devices.
The team originated several years ago and was long led by Bill Athas, one of the few people to have the title of engineering fellow at Apple, until he passed away unexpectedly at the end of last year. Athas was seen by the late co-founder Steve Jobs and current Chief Executive Officer Tim Cook as one of the brightest engineering minds at the company.
The XDG team sits within Apple’s Hardware Technologies group, led by Senior Vice President Johny Srouji, and works at a building known as Tantau 9 right outside of the Apple Park spaceship-shaped ring.
The team is now run on a day-to-day basis by a number of Athas lieutenants, including top Apple engineers and scientists Jeff Koller, Dave Simon, Heather Sullens, Bryan Raines and Jared Zerbe. Koller, Simon and Raines are involved in the glucose project, while Sullens and Zerbe manage other groups within the larger team.
Apple’s Johny Srouji, who oversees the XDG team.
Source: Apple
The Exploratory Design Group operates as a startup within Apple and is made up of only a few hundred people, mostly engineers and academic types. That’s a far cry from the many hundreds of people in the Special Projects Group, which is focused on Apple’s self-driving car, or the more than a thousand engineers in Apple’s Technology Development Group, the team building the mixed-reality headset.
Beyond the glucose work, XDG is working on next-generation display technology, artificial intelligence and features for AR/VR headsets that help people with eye diseases. The team originally came together under Athas to work on low-power processor technologies and next-generation batteries for smartphones, efforts that continue.
Like Alphabet’s moonshot team — and those at other Silicon Valley companies — the XDG staff is given vast financial resources and headroom to explore countless ideas. The members have a different remit than the engineering teams churning out new iPhones, iPads and Apple Watches annually. Instead, they’re instructed to work on projects until they can determine whether or not an idea is feasible.
The unit is even more secretive than Alphabet’s X but it’s not a pie-in-the-sky operation. It has already had breakthroughs that made their way into Apple products. Many of the chip and battery technologies developed by XDG have been shipping for years in iPhones, iPads and Macs.
While the team operates as a startup, it is still compartmentalized like any other Apple division: People working on one project within XDG aren’t allowed to communicate about their work with other members of XDG that are assigned to different projects.
But the team’s members are organized by skill sets rather than individual projects. That means that one engineer could be working on several initiatives that fit their skills, rather than on one specific product.
The Bench
An HTC headset at the Apple WWDC conference in 2017.
Photographer: David Paul Morris/Bloomberg
The Apple headset probably won’t require an iPhone, and other new models are in development. The company’s first mixed-reality headset, unlike the original Apple Watch, probably won’t require an iPhone for setup or use. I’m told that the latest test versions of the device and its onboard xrOS operating system can be set up without an iPhone and can download a user’s content and iCloud data directly from the cloud.
You will, however, be able to transfer your data from an iPhone or iPad, just as you can today when setting up a new device. As I’ve written previously, the headset doesn’t have a remote control but instead is operated by a user’s eyes and hands.
A key feature for text input — in-air typing — is available on the latest internal prototypes, I’m told. But it’s been finicky in testing. So if you get the first headset, you still may want to pair an iPhone to use its touch-screen keyboard. The hope within Apple is to make rapid improvements after the device is released. The company expects its headset to follow the same path as the original Apple Watch in that respect.
Apple is currently planning to unveil its first headset, which may be dubbed the Reality Pro, at WWDC in June. The product would then ship toward the end of 2023 at the earliest. But there are already follow-ups in the works, too.
As I wrote in January, Apple is planning to launch a cheaper headset with lower-end display and processor components at the end of 2024 or in 2025. That will help address users who don’t want to pay around $3,000 for the high-end model. Based on trademark filings, the cheaper version may be dubbed the Reality One. And, unsurprisingly, there’s already a second-generation version of the Reality Pro underway.
I’m told the focus of the second Pro headset is performance. While the first model will have an M2 chip — plus a secondary chip for AR and VR processing — it’s not powerful enough to output graphics at a level Apple would ideally like. For instance, FaceTime will only support realistic VR representations of two people at a time, not everyone in a conference call.
Apple’s first headset was initially planned to be even more powerful, featuring a separate hub with additional processing power that could be beamed to the device across a home wirelessly. But former Apple design chief Jony Ive nixed that idea. Now the company is working to add a more powerful processor (perhaps a variant of the M3 or M4) for the second model, helping bridge that gap.
An Apple store employee helps a customer.
Photographer: Alejandro Cegarra/Bloomberg
Apple rolls out scheduling changes to all US and Canada retail employees. Over the past year, some Apple retail employees have voiced concerns about pay, benefits and working hours. To address some of those issues, Apple has revamped its scheduling process, including the number of hours it requires for both full-time and part-time workers. I first wrote about those changes last June when they were rolled out to some stores as a pilot.
Beginning on April 29, Apple will bring the new policies to all of its roughly 300 stores in the US and Canada, according to a recent memo. The main changes:
- A maximum of five consecutive workdays, down from the prior limit of six.
- More weekend time off for part-time employees.
- A consistent weekend workday or day off for full-time employees.
The caveat is that these new rules could go out the window during peak shopping periods (a new iPhone launch, for instance) or for all-hands meetings. Another change is that time off must be requested at least four weeks in advance, a slight adjustment from a prior requirement of about three weeks.
Still, some part-time Apple retail employees are concerned about another change that they say is being introduced by managers at some stores: a requirement to work on weekends. They fear that they’ll be terminated from the company if they don’t agree to work on those days, despite not having to do so when they joined Apple.
Other part-time employees have also said that their managers are asking them to work a few more hours per week than they were previously required to.
The Schedule
The Steve Jobs Theater, where Apple held its shareholder meetings before the pandemic.
Photographer: Nic Coury/Bloomberg
March 10: Apple’s annual shareholder meeting. Cook and his lieutenants, such as General Counsel Kate Adams, will take the virtual stage to field carefully selected questions from shareholders and give some company updates. Major news rarely breaks at these conferences, but there will be shareholder votes on Apple’s board, executive pay, labor and other matters.
Post-Game Q&A
Q: Do you think the Apple Card will expand to Europe anytime soon?
Q: How risky is the Apple glucose initiative to current medical device makers?
Q: Should we expect new AirPods Pro earbuds this year?
Email me, or you can always send me a tweet or DM @markgurman.
I’m on Signal at 413-340-6295; Wickr and Telegram at GurmanMark; or ProtonMail at markgurman@protonmail.com.
Great find FF, looks like science fiction to me.
Anything Apple can do, TATA can too:
EchoWrite-SNN: Acoustic Based Air-Written Shape Recognition Using Spiking Neural Networks
AM George, A Gigie, AA Kumar, S Dey… - … on Circuits and …, 2022 - ieeexplore.ieee.org
… on different available neuromorphic platforms such as Brainchip Akida [22], Intel Loihi [21] etc. … to enhance intuitive surgical robot teleoperation," arXiv preprint arXiv:2102.10585, 2021. …
EchoWrite-SNN: Acoustic Based Air-Written Shape Recognition Using Spiking Neural Networks
Arun M George, Andrew Gigie, A Anil Kumar, Sounak Dey, Arpan Pal, K Aditi
2022 IEEE International Symposium on Circuits and Systems (ISCAS), 1067-1071, 2022
In this paper, we propose EchoWrite-SNN, a robust edge-compatible air-writing recognition system (used in applications such as AR/VR, HRI, etc.) based on principles of SONAR and neuromorphic computing. The bare finger movements in air are captured by a commonly available speaker-microphone pair. A new tracking algorithm based on windowed difference cross-correlation and ESPRIT is employed, which shows better tracking accuracy than state-of-the-art methods with a median tracking error of only 3.31 mm. To classify these air-written shapes, a 5-layer CNN is trained and then converted to a Spiking Neural Network (SNN) using an ANN-to-SNN conversion technique to reap the benefits of low-power neuromorphic computing on the edge. Experimental results show that the converted SNN achieves 92% accuracy (a mere 3% less than the CNN) while showing a 4.4× reduction in the number of operations compared to the CNN, resulting in further energy benefits when run on actual neuromorphic computation platforms.
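For anyone wondering what the "ANN-to-SNN conversion technique" mentioned in the abstract actually amounts to, below is a minimal, purely illustrative Python sketch of the common rate-coding approach: keep the trained weights, replace each ReLU with an integrate-and-fire neuron, and let firing rates over many timesteps stand in for the activations. The layer sizes, random weights and dummy input are my own placeholders (the paper converts a trained 5-layer CNN), so treat this as a sketch of the general idea rather than the authors' implementation or anything Akida-specific.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, size=(64, 32))   # stand-in for trained layer-1 weights
W2 = rng.normal(0.0, 0.1, size=(32, 10))   # stand-in for trained layer-2 weights

def ann_hidden(x):
    # Reference ANN hidden layer: ReLU activations.
    return np.maximum(0.0, x @ W1)

def snn_forward(x, T=500, v_th=1.0):
    # Same weights, but each ReLU unit becomes an integrate-and-fire neuron.
    # Over T timesteps a unit's firing rate approximates its ReLU activation.
    v1, v2 = np.zeros(32), np.zeros(10)        # membrane potentials
    h_spikes, out_spikes = np.zeros(32), np.zeros(10)
    for _ in range(T):
        v1 += x @ W1                           # constant input current
        s1 = (v1 >= v_th).astype(float)        # hidden-layer spikes (0/1)
        v1 -= s1 * v_th                        # "soft reset": subtract threshold
        h_spikes += s1
        v2 += s1 @ W2                          # spikes drive the next layer
        s2 = (v2 >= v_th).astype(float)
        v2 -= s2 * v_th
        out_spikes += s2
    return h_spikes / T, out_spikes            # firing rates, output spike counts

x = rng.random(64)                             # dummy input feature vector
rates, out = snn_forward(x)
print("ReLU activations (first 5):", np.round(ann_hidden(x)[:5], 2))
print("SNN firing rates (first 5):", np.round(rates[:5], 2))
print("Output spike counts:", out)

The point of the exercise is that the converted network only does additions when spikes occur, which is where the paper's reported reduction in operations (and the energy benefit on neuromorphic hardware) comes from.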
My opinion only DYOR
FF
AKIDA BALLISTA
Thanks FF. Just wondering why it is that numerous articles mention AKIDA in the same space as Loihi. As far as I understand, Loihi is still being developed and is far from commercially available (if ever). AKIDA is proven and ready to implement. Is Loihi a direct competitor now, or are we still 3+ years ahead? Or is it that Intel gets a mention because they are well known and BRN is not (yet)? Surely AKIDA would outperform Loihi and would be the preferred choice after comparison.
on different available neuromorphic platforms such as Brainchip Akida [22], Intel Loihi [21]
Good morning! We have a media release:
BrainChip Partners with emotion3D to Improve Driver Safety and User Experience
LAGUNA HILLS, CA / ACCESSWIRE / February 26, 2023 / BrainChip Holdings Ltd (ASX:BRN) (OTCQX:BRCHF) (ADR:BCHPY), the world's first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced that it has entered into a partnership with emotion3D to demonstrate in-cabin analysis that makes driving safer and enables next-level user experience.
emotion3D offers state-of-the-art computer vision and machine learning software for image-based analysis of in-cabin environments. This analysis enables a comprehensive understanding of humans and objects inside a vehicle. The partnership will allow emotion3D to leverage BrainChip's technology to achieve an ultra-low-power working environment with on-chip learning while processing everything locally on device within the vehicle to ensure data privacy.
"We are committed to setting the standard in driving safety and user experience through the development of camera-based, in-cabin understanding," says Florian Seitner, CEO at emotion3D. "In combining our in-cabin analysis software with BrainChip's on-chip compute, we are able to elevate that standard in a faster, safer and smarter way. This partnership will provide a cascading number of benefits that will continue to disrupt the mobility industry."
Among the situations covered by this optimized driver monitoring functionality are warnings for driver distraction and drowsiness, device personalization, gesture recognition, passenger detection, and more.
"Processing in-cabin data requires significant compute and associated power," said Sean Hehir, BrainChip CEO. "By leveraging BrainChip's Akida processor IP, emotion3D is able to improve intelligent safety and user experience functions by analyzing the data in real time and forward inference data to the automobile's central processor. Together, we improve the next generation of intelligent vehicles and give drivers a safer, enhanced user experience."
Emotion 3D Customers & Partners:
emotion3D - Garmin connection
emotion3D on LinkedIn: SAT and emotion3D showcase next-level sensor fusion driver monitoring…
We are excited to announce the development of a system enabling enhanced driver drowsiness detection in collaboration with SAT - Sleep Advice Technologies… www.linkedin.com
Head office is in Austria.
emotion3D - Garmin connection
And this:
Interesting where some of their research is taking them in the automotive space:
Smart-RCS
Worldwide, over 1.4 million people die each year in road accidents (WHO) and millions more suffer injuries. Mandatory passive safety systems trigger airbags and tension seat belts in the event of a crash to reduce the number of fatalities and serious injuries. However, these systems follow a "few-sizes-fit-all" development approach and thus perform best for a small number of specified body physiques; the most common one is the "average male": 175 cm, 78 kg. This is suboptimal for everybody who deviates from these averages: children, elderly people and even women.
Studies have shown that any seatbelt-wearing female occupant is at 73% greater risk of suffering serious injuries than seatbelt-wearing male occupants (Univ. of Virginia). Female occupants are also at up to 17% higher risk of being killed in an accident than male occupants (NHTSA).
As long as passive safety systems cannot distinguish between occupants' individual characteristics, it is impossible to achieve optimal protection for everybody.
For the first time, touchless 3D imaging sensors are used to derive precise real-time information about each occupant, such as body position and pose, body physique, age, gender, etc. Based on this information, the Smart-RCS computes the optimal deployment strategy tailored to each individual occupant.
By taking those relevant factors into account, Smart-RCS optimizes the protective function while simultaneously mitigating the risks of doing unnecessary harm.
Smart-RCS aims to disrupt the passive safety systems market by introducing personalized and situation-aware protection.
Learn more on our project website: www.smart-rcs.eu
Funding agency: This project is funded by the European Commission within the Horizon 2020 FTI program
My opinion only DYOR
FF
AKIDA BALLISTA