BRN Discussion Ongoing

Diogenese

Top 20
Here's a more detailed description.
The system sounds a little over the top, but..


We could have a shoo-in here..

"The Valeo interior lighting system is composed of projection modules, a smart adaptable user-interface based on gesture detection and a software dedicated to projection and content management"


https://www.valeo.com/wp-content/uploads/2023/12/press-release_valeo-at-ces2024.pdf

Valeo Unveils Groundbreaking Innovations at CES 2024, Paving the Way for Greener, Safer Mobility for All, Everywhere

CES2024 Innovation Award Honoree Valeo SCALA™3 LiDAR:
For the first time, we will give visitors the opportunity to experience and learn more about our AI-based perception software and how it helps classify objects identified by the LiDAR in its point cloud.

I suppose they could use software for an ICE, but surely there must be a better way for EVs?
 
  • Like
  • Fire
  • Love
Reactions: 19 users
https://www.valeo.com/wp-content/uploads/2023/12/press-release_valeo-at-ces2024.pdf

Valeo Unveils Groundbreaking Innovations at CES 2024, Paving the Way for Greener, Safer Mobility for All, Everywhere

CES2024 Innovation Award Honoree Valeo SCALA™3 LiDAR:
For the first time, we will give visitors the opportunity to experience and learn more about our AI-based perception software and how it helps classify objects identified by the LiDAR in its point cloud.

I suppose they could use software for an ICE, but surely there must be a better way for EVs?
"I suppose they could use software for an ICE, but surely there must be a better way for EVs?"

Pursuing technology that only really works with ICE at this stage of the game (while I disagree with a pure EV future) doesn't really make sense..
 
  • Like
Reactions: 5 users

Diogenese

Top 20
Here's a more detailed description.
The system sounds a little over the top, but..


We could have a shoo-in here..

"The Valeo interior lighting system is composed of projection modules, a smart adaptable user-interface based on gesture detection and a software dedicated to projection and content management"
Hi DB,

This seems to be the relevant Valeo patent. It is bereft of any reference to spikes.

US2023146935A1 CONTENT CAPTURE OF AN ENVIRONMENT OF A VEHICLE USING A PRIORI CONFIDENCE LEVELS




A method for the content capture of an environment of a vehicle is disclosed. The method uses an artificial intelligence neural network, on the basis of a point cloud generated by an environment sensor of the vehicle. The method involves performing reference measurements by the environment sensor to capture reference objects depending on positions in the environment in relation to the environment sensor, generating confidence values depending on positions in the environment in relation to the environment sensor on the basis of the reference measurements by the environment sensor to capture the reference objects, training the artificial intelligence for the content capture of the environment on the basis of training point clouds for the environment sensor, capturing the environment by the environment sensor to generate the point cloud, and processing the point cloud generated by the environment sensor using the trained artificial intelligence for the content capture of the environment.
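Paraphrasing the abstract as a runnable toy (my reading only, with made-up numbers and a weighted least-squares model standing in for the neural network; not Valeo's implementation), the a-priori confidence idea is roughly:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Reference measurements: detection reliability of known reference
#    objects, binned by range from the sensor (made-up stand-in numbers).
ranges = np.linspace(5, 150, 30)                       # metres
confidence = np.clip(1.2 - ranges / 150.0, 0.1, 1.0)   # nearer -> more reliable

def a_priori_confidence(r):
    """Confidence value for a point at range r, from the reference map."""
    return confidence[np.argmin(np.abs(ranges - r))]

# 2) Training: weight each training point by its a-priori confidence so
#    unreliable regions contribute less (weighted least squares standing
#    in for training the neural network).
X = rng.normal(size=(200, 3))                 # toy per-point features
r = np.abs(X[:, 0]) * 50 + 5                  # toy range of each point
y = (X[:, 1] > 0).astype(float)               # toy content labels
w = np.array([a_priori_confidence(ri) for ri in r])
theta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# 3) Capture: run the trained model on a newly generated point cloud.
new_cloud = rng.normal(size=(5, 3))
print((new_cloud @ theta > 0.5).astype(int))
```

The key claim is step 1: the confidence map is derived from reference measurements of known objects at known positions, and then weights the training.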
 
Last edited:
  • Like
  • Sad
Reactions: 7 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 27 users
https://www.valeo.com/wp-content/uploads/2023/12/press-release_valeo-at-ces2024.pdf

Valeo Unveils Groundbreaking Innovations at CES 2024, Paving the Way for Greener, Safer Mobility for All, Everywhere

CES2024 Innovation Award Honoree Valeo SCALA™3 LiDAR:
For the first time, we will give visitors the opportunity to experience and learn more about our AI-based perception software and how it helps classify objects identified by the LiDAR in its point cloud.

I suppose they could use software for an ICE, but surely there must be a better way for EVs?
A question if you have looked into @Diogenese

Valeo were running a project / program Spikili with Tempo and this is a conclusion to one paper I was reading.

Thoughts on whether we would fit within the CNN2SNN development and / or the hardware implementation as part of our joint Dev agreement?

Was in 4 bits as well.



SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection for Autonomous Driving​

Sambit Mohapatra*¹, Thomas Mesquida*², Mona Hodaei¹, Senthil Yogamani³, Heinrich Gotzig¹, Patrick Mäder⁴
*Equal contribution. ¹Valeo, Germany; ²CEA-List, France; ³Valeo, Ireland; ⁴TU Ilmenau, Germany

Conclusion

We have presented CNN to SNN conversion and simulation that adapts existing CNN building blocks to simulate spiking behavior for a complex real world application. We have shown that SNNs can be applied to complex tasks such as object detection and achieve comparable performance while achieving much better energy efficiency when implemented in hardware. This is a nascent research area, and we aim to progress from simulations to hardware implementation to realize the full potential of SNNs using other vital techniques such as event-based processing, differential signal processing, power-saving using sparsity, and low activation of neurons.
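For anyone unfamiliar with the CNN-to-SNN conversion the paper describes, here is a minimal rate-coding sketch (illustrative only, not the authors' pipeline): a trained ReLU activation is approximated by the firing rate of an integrate-and-fire neuron, which is what lets existing CNN building blocks "simulate spiking behavior".

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def if_neuron_rate(drive, t_steps=1000, v_thresh=1.0):
    """Simulate an integrate-and-fire neuron with constant input drive
    and return its firing rate, which approximates relu(drive)."""
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += drive
        if v >= v_thresh:
            spikes += 1
            v -= v_thresh   # reset-by-subtraction keeps the rate linear
    return spikes / t_steps

for drive in (-0.3, 0.1, 0.5, 0.9):
    print(f"drive={drive:+.1f}  relu={relu(drive):.2f}  "
          f"spike_rate={if_neuron_rate(drive):.2f}")
```

Negative drive never crosses threshold (rate 0, matching ReLU); positive drive yields a rate proportional to the activation. Energy savings come from only computing on the spikes.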

 
  • Like
  • Thinking
  • Fire
Reactions: 19 users

Frangipani

Regular

If we are involved, Sennheiser could well be one of the mystery hearing aid customers we are working with behind an NDA.

Musicians and audiophiles worldwide are familiar with Sennheiser headphones, speakers and microphones, but may not be aware that the world’s largest hearing aid manufacturer Sonova acquired Sennheiser’s consumer electronics business in 2021 and entered the OTC hearing aid market under the Sennheiser brand name last year. Sennheiser’s professional audio division (which includes Sennheiser Mobility) remains family-owned.


Sonova acquires Sennheiser Consumer Business​

Sennheiser Company News
7 May 2021


Both companies will work together under the Sennheiser brand in the future​

Sennheiser and Sonova Holding AG, with headquarters in Stäfa, Switzerland, today announced their future cooperation under the Sennheiser brand. The global provider of medical hearing solutions will fully take over Sennheiser's Consumer Electronics business. Subject to regulatory approval, the plan is to complete the transfer of the business to Sonova by the end of 2021. Sennheiser had announced in February that it would focus on the Professional business in the future while seeking a partner for the Consumer Electronics business.

Consumer Electronics products from Sennheiser stand for the best sound and a unique audio experience. With the takeover of the Sennheiser Consumer business, Sonova is adding headphones and soundbars to its hearing care portfolio, which includes hearing aids and cochlear implants, among other hearing solutions. Sonova will leverage the complementary competencies of both companies to strengthen and further expand its business areas in the future. Sennheiser's many years of expertise as one of the world's leading companies in the audio industry and the resulting reputation and appeal of the brand are an excellent complement to Sonova's extensive technological and audiological expertise in the field of medical hearing solutions. A permanent cooperation is planned under the joint Sennheiser brand umbrella in order to continue offering Sennheiser customers first-class audio solutions in the future. A license agreement for future use of the Sennheiser brand has been made.

"We couldn't have asked for a better partner than Sonova for our Consumer Electronics business," says Daniel Sennheiser, co-CEO at Sennheiser. "Sonova is a strong, well-positioned company. Not only do we share a passion for unique audio experiences, we also share very similar corporate values. This gives us an excellent foundation for a successful future together." Co-CEO Dr. Andreas Sennheiser adds: "The combination of our strengths provides an very good starting point for future growth. We are convinced that Sonova will strengthen the Sennheiser Consumer Business in the long term and capture the major growth opportunities." Both partners see great potential in particular in the market for speech-enhanced hearables and for true wireless and audiophile headphones.
As part of the partnership, a complete transfer of operations of the Consumer Electronics business to Sonova is planned. This will be aligned with the Sennheiser works councils. For the employees who will transfer to Sonova, the move to the internationally operating and well-positioned company, headquartered in Switzerland, opens up very good opportunities for the future. Currently, a total of around 600 Sennheiser employees work for the Sennheiser consumer business.

Arnd Kaldowski, CEO of Sonova, says: “I am very pleased that Sennheiser has chosen Sonova to further develop the well-renowned Consumer Division. We look forward to welcoming our new colleagues and to building on the combined strengths of both organizations to successfully shape our joint future. The fast-growing market for personal audio devices is rapidly evolving. Combining our audiological expertise with Sennheiser’s know-how in sound delivery, their great reputation as well as their high-quality products will allow us to expand our offering and to create important touchpoints with consumers earlier in their hearing journey. Combining our market-leading technology with the strong brand and well-established distribution network of Sennheiser creates a strong foundation for future growth.”

The Sennheiser brand has been a synonym for first-class sound and excellent product quality for over 75 years. Sonova will also take over the development and production areas of Sennheiser Consumer Electronics so that Sennheiser customers will continue to benefit from this in the future.

With the partnership for the Consumer business, Sennheiser will now concentrate its own strengths and resources on the Pro Audio, Business Communications and Neumann business areas. In these three business units, the company plans to continue to grow at an above-average rate under its own power and to expand its already strong position in the global market. In addition, Sennheiser will successively expand its business units.
Here you can find the message from Daniel and Andreas Sennheiser.





 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 24 users

Crestman

Regular
Here is a link I found regarding Valeo at CES 2024.

It has more info but also quite a lot of ads.

https://inf.news/en/auto/82114ffb5c9c664534fc32a89d6a178b.html





Valeo SCALA3 lidar: improving the reliability of autonomous driving

Lidar is an important part of a car's autonomous driving system, allowing the vehicle to detect and respond to obstacles in the surrounding environment, which is difficult to achieve with other sensing systems. For example, lidar sensors produce more accurate and comprehensive images than millimeter-wave radar; they have a wider detection range than ultrasonic sensors; and, compared to cameras, they deliver better results in adverse weather and lighting conditions. In addition, lidar systems are better at generating real-time visualization information for autonomous vehicles. Valeo SCALA3 is Valeo's third-generation lidar (light detection and ranging) system. Innovations in hardware and sensing software make its performance industry-leading. Valeo SCALA3 delivers powerful sensing capabilities in any condition and meets the highest standards of quality and safety in the automotive industry.
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Neuromorphia

fact collector
Here is a link I found regarding Valeo at CES 2024.

It has more info but also quite a lot of ads.

https://inf.news/en/auto/82114ffb5c9c664534fc32a89d6a178b.html




Valeo SCALA3 lidar: improving the reliability of autonomous driving

Lidar is an important part of a car's autonomous driving system, allowing the vehicle to detect and respond to obstacles in the surrounding environment, which is difficult to achieve with other sensing systems. For example, lidar sensors produce more accurate and comprehensive images than millimeter-wave radar; they have a wider detection range than ultrasonic sensors; and, compared to cameras, they deliver better results in adverse weather and lighting conditions. In addition, lidar systems are better at generating real-time visualization information for autonomous vehicles. Valeo SCALA3 is Valeo's third-generation lidar (light detection and ranging) system. Innovations in hardware and sensing software make its performance industry-leading. Valeo SCALA3 delivers powerful sensing capabilities in any condition and meets the highest standards of quality and safety in the automotive industry.
Valeo will display a number of black technologies at CES 2024

‘Black technology’ (heikeji 黑科技) is a Chinese term used to describe cutting-edge and futuristic technologies, so advanced that they defy comprehension.
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Diogenese

Top 20
A question if you have looked into @Diogenese

Valeo were running a project / program Spikili with Tempo and this is a conclusion to one paper I was reading.

Thoughts on whether we would fit within the CNN2SNN development and / or the hardware implementation as part of our joint Dev agreement?

Was in 4 bits as well.



SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection for Autonomous Driving​

Sambit Mohapatra*¹, Thomas Mesquida*², Mona Hodaei¹, Senthil Yogamani³, Heinrich Gotzig¹, Patrick Mäder⁴
*Equal contribution. ¹Valeo, Germany; ²CEA-List, France; ³Valeo, Ireland; ⁴TU Ilmenau, Germany

Conclusion

We have presented CNN to SNN conversion and simulation that adapts existing CNN building blocks to simulate spiking behavior for a complex real world application. We have shown that SNNs can be applied to complex tasks such as object detection and achieve comparable performance while achieving much better energy efficiency when implemented in hardware. This is a nascent research area, and we aim to progress from simulations to hardware implementation to realize the full potential of SNNs using other vital techniques such as event-based processing, differential signal processing, power-saving using sparsity, and low activation of neurons.

Well, the paper's from December 2022, and they appear to be trying to reinvent the wheel: BrainChip already does CNN2SNN in software, while the authors propose a hardware implementation only as a next step.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

skutza

Regular
  • Like
  • Love
Reactions: 5 users

Frangipani

Regular
Hi All
As the cost of developing an academic theory or a product has become a topic, and being a technophobe, I thought I would investigate this question.

It turns out if you use the Edge Impulse platform it is likely to be free for most academic and individual users.

Who would have thought? Struggling academics and developers no longer have to put up their hard-earned to run an idea through Brainchip AKIDA or any other company's supported hardware.

I guess that’s why Brainchip dropped including support with their boards, something which they promoted strongly.

Rather a clever move, this Edge Impulse partnership. Who would have thought: completely free access to experiment with Brainchip AKIDA technology.

“Edge Impulse​

Edge Impulse is the leading development platform for machine learning on edge devices, free for developers and trusted by enterprises. Founded in 2019 by Zach Shelby and Jan Jongboom, we are on a mission to enable developers to create the next generation of intelligent devices. We believe that machine learning can enable positive change in society, and we are dedicated to support applications for good.”


My opinion only DYOR
Fact Finder

Point taken, but if you wanted to ultimately deploy your built and trained model to hardware, for use in low power environments, you’d still need to buy the standalone Mini PCIe Board or one of the Development Kits, right?
Which brings us back to the uni researcher’s cost comparison…

Absolutely correct. These Brainchip products were low-volume, partly assembled and packaged by staff at Brainchip in limited numbers (literally a few hundred), primarily as demonstrators for new and existing customers. There were three: the $499 Raspberry Pi, then the two larger board packages at $4,999 and $9,999.

Not quite correct, by the way. It is the Mini PCIe Board that was/is sold for US$499, not the Raspberry Pi Development Kit. For that one you had to spend US$4995.

 
  • Like
  • Love
  • Fire
Reactions: 6 users
Well, the paper's from December 2022, and they appear to be trying to reinvent the wheel: BrainChip already does CNN2SNN in software, while the authors propose a hardware implementation only as a next step.
Thanks D.

Yeah saw the date too.

That was the link I was considering, as in we already do CNN2SNN, and also the comment re hardware, e.g. the Akida chip.

Just musing if they were using us in a parallel Dev program to measure against or get diff ideas to complement / advance.

In looking through Valeo stuff I've seen a bit of their work appears to go through their R&D team in Tuam Ireland.
 
  • Like
Reactions: 5 users
Whilst I doubt it just yet, having a hand in here would be a dream.

Our Sth Korean friends need to do something given the mkt share performance of Sony and also their hookup with Prophesee these days.

I do believe our CES press release said we would also be demo'ing the use of TOF sensors.



Wednesday, January 03, 2024​

Samsung/SK hynix bet on intelligent image sensors​


From Business Korea: https://www.businesskorea.co.kr/news/articleView.html?idxno=208769
Samsung, SK hynix Advance in AI-embedded Image Sensors
Samsung Electronics and SK hynix are making strides in commercializing “On-sensor AI” technology for image sensors, aiming to elevate their image sensor technologies centered around AI and challenge the market leader, Japan’s Sony, in dominating the next-generation market.
At the “SK Tech Summit 2023” held last month, SK hynix revealed its progress in developing On-sensor AI technology. This technology embeds an image sensor onto an AI chip, processing data directly at the sensor level, unlike traditional sensors that relay image information to the Central Processing Unit (CPU) for computation and inference. This advance is expected to be a key technology in enabling evolved Internet of Things (IoT) and smart home services, reducing power consumption and processing time.
SK hynix’s approach involves integrating an AI accelerator into the image sensor. The company is currently conducting proof-of-concept research focused on facial and object recognition features, using a Computing In Memory (CIM) accelerator, a next-generation technology capable of performing multiplication and addition operations required for AI model computations.
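As a conceptual sketch of what a CIM accelerator does (illustrative only, not SK hynix's actual design): the multiply-and-add happens in the memory array itself, modeled here as a quantized conductance matrix applied to an input vector, with the limited precision of analog storage modeled as quantization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights live in the memory array itself, stored with limited analog
# precision (modeled here as ~4-bit quantization to 16 levels).
weights = rng.normal(size=(4, 8))
levels = np.linspace(weights.min(), weights.max(), 16)
conductance = levels[np.abs(weights[..., None] - levels).argmin(-1)]

x = rng.normal(size=8)   # input activations applied as "voltages"

# The array computes all dot products in one step: per-line currents sum,
# so the multiply-accumulate never leaves the memory.
in_memory = conductance @ x
digital = weights @ x
print(np.max(np.abs(in_memory - digital)))   # small quantization error
```

The point of the design is that the data never travels to a CPU for the multiply-accumulate, which is where the power and latency savings come from.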
Additionally, SK hynix has presented its technologies for implementing On-sensor AI, including AI software and AI lightweighting, at major academic conferences like the International Conference on Computer Vision and the IEEE EDTM seminar on semiconductor manufacturing and next-generation devices.
Samsung Electronics is also rapidly incorporating AI into its image sensor business. This year, the company unveiled a 200-megapixel image sensor with an advanced zoom feature called Zoom Anyplace, which uses AI technology for automatic object tracking during close-ups. Samsung has set a long-term business goal to commercialize “Humanoid Sensors” capable of sensing and replicating human senses, with a road map to develop image sensors that can capture even the invisible by 2027.
In October, Park Yong-in, president of Samsung Electronics’ System LSI Business, emphasized at the Samsung System LSI Tech Day in Silicon Valley, the goal of pioneering the era of “Proactive AI,” advancing from generative AI through high-performance IP, short and long-range communication solutions, and System LSI Humanoids based on sensors mimicking human senses.
The push by both companies into On-sensor AI technology development is seen as a strategy to capture new AI-specific demands and increase their market share. The image sensor market, which temporarily contracted post-COVID-19 due to a downturn in the smartphone market, is now entering a new growth phase, expanding its applications from mobile to autonomous vehicles, extended reality devices, and robotics.
According to Counterpoint Research, Sony dominated the global image sensor market with a 54% share in the last year, while Samsung Electronics held second place with 29%, and SK hynix, struggling to close the gap, barely made it into the top five with 5%.



Friday, January 05, 2024​

Samsung announces new Isocell Vizion sensors​


Samsung press release: https://semiconductor.samsung.com/e...rs-tailored-for-robotics-and-xr-applications/
Samsung Unveils Two New ISOCELL Vizion Sensors Tailored for Robotics and XR Applications
The ISOCELL Vizion 63D, a time-of-flight sensor, captures high-resolution 3D images with exceptional detail
The ISOCELL Vizion 931, a global shutter sensor, captures dynamic moments with clarity and precision


Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today introduced two new ISOCELL Vizion sensors — a time-of-flight (ToF) sensor, the ISOCELL Vizion 63D, and a global shutter sensor, the ISOCELL Vizion 931. First introduced in 2020, Samsung’s ISOCELL Vizion lineup includes ToF and global shutter sensors specifically designed to offer visual capabilities across an extensive range of next-generation mobile, commercial and industrial use cases.
“Engineered with state-of-the-art sensor technologies, Samsung’s ISOCELL Vizion 63D and ISOCELL Vizion 931 will be essential in facilitating machine vision for future high-tech applications like robotics and extended reality (XR),” said Haechang Lee, Executive Vice President of the Next Generation Sensor Development Team at Samsung Electronics. “Leveraging our rich history in technological innovation, we are committed to driving the rapidly expanding image sensor market forward.”
ISOCELL Vizion 63D: Tailored for capturing high-resolution 3D images with exceptional detail
Similar to how bats use echolocation to navigate in the dark, ToF sensors measure distance and depth by calculating the time it takes the emitted light to travel to and from an object.
Particularly, Samsung’s ISOCELL Vizion 63D is an indirect ToF (iToF) sensor that measures the phase shift between emitted and reflected light to sense its surroundings in three dimensions. With exceptional accuracy and clarity, the Vizion 63D is ideal for service and industrial robots as well as XR devices and facial authentication where high-resolution and precise depth measuring are crucial.
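As a rough worked example of the iToF principle (my numbers below; the 100 MHz modulation frequency is a hypothetical choice, not a published Samsung spec), the phase shift maps to distance via d = c·Δφ / (4π·f):

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 100e6       # assumed modulation frequency (hypothetical)

def itof_depth(phase_shift_rad: float) -> float:
    """Depth from the phase shift between emitted and reflected light.
    The light travels out and back, hence the extra factor of 2:
    d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4 * np.pi * F_MOD)

print(f"{itof_depth(np.pi / 2):.3f} m")   # quarter-cycle shift -> ~0.375 m
print(f"{itof_depth(2 * np.pi):.3f} m")   # full wrap -> ~1.5 m unambiguous range
```

A full 2π wrap bounds the unambiguous range at c/(2f), so longer distances need lower modulation frequencies or phase-unwrapping tricks.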
The ISOCELL Vizion 63D sensor is the industry’s first iToF sensor with an integrated depth-sensing hardware image signal processor (ISP). With this innovative one-chip design, it can precisely capture 3D depth information without the help of another chip, enabling up to a 40% reduction in system power consumption compared to the previous ISOCELL Vizion 33D product. The sensor can also process images at up to 60 frames per second in QVGA resolution (320x240), which is a high-demand display resolution used in commercial and industrial markets.
Based on the industry’s smallest 3.5㎛ pixel size in iToF sensors, the ISOCELL Vizion 63D achieves high Video Graphics Array (VGA) resolution (640x480) within a 1/6.4” optical format, making it an ideal fit for compact, on-the-go devices.
Thanks to backside scattering technology (BST) that enhances light absorption, the Vizion 63D sensor boasts the highest level of quantum efficiency in the industry, reaching 38% at an infrared light wavelength of 940 nanometers (nm). This enables enhanced light sensitivity and reduced noise, resulting in sharper image quality with minimal motion blur.
Moreover, the ISOCELL Vizion 63D supports both flood (high-resolution at short-range) and spot (long-range) lighting modes, significantly extending its measurable distance range from its predecessor’s five meters to 10.
ISOCELL Vizion 931: Optimized for capturing dynamic movements without distortion
The ISOCELL Vizion 931 is a global shutter image sensor tailored for capturing rapid movements without the “jello effect”. Unlike rolling shutter sensors that scan the scene line by line from top to bottom in a “rolling” manner, global shutter sensors capture the entire scene at once or “globally,” similar to how human eyes see. This allows the ISOCELL Vizion 931 to capture sharp, undistorted images of moving objects, making it well-suited for motion-tracking in XR devices, gaming systems, service and logistics robots as well as drones.
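A toy simulation of that difference (illustrative only): a rolling shutter samples each row at a successively later time, so a horizontally moving object comes out sheared, while a global shutter samples every row at the same instant.

```python
import numpy as np

H, W, speed = 8, 24, 2   # rows, cols, pixels the object moves per row-time

def column_of_object(t):
    return 4 + speed * t  # object's x position at time t

# Global shutter: all rows sampled at t=0 -> a straight vertical edge.
global_img = np.zeros((H, W), int)
for row in range(H):
    global_img[row, int(column_of_object(0))] = 1

# Rolling shutter: row r sampled at t=r -> the edge is sheared ("jello").
rolling_img = np.zeros((H, W), int)
for row in range(H):
    rolling_img[row, int(column_of_object(row))] = 1

for img in (global_img, rolling_img):
    print("\n".join("".join(".#"[v] for v in line) for line in img), end="\n\n")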
Designed in a one-to-one ratio VGA resolution (640 x 640) that packs more pixels in a smaller form factor, the ISOCELL Vizion 931 is optimal for iris recognition, eye tracking as well as facial and gesture detection in head-mounted display devices like XR headsets.
The ISOCELL Vizion 931 also achieves the industry’s highest level of quantum efficiency, delivering an impressive 60% at 850nm infrared light wavelength. This feat was made possible by incorporating Front Deep Trench Isolation (FDTI) which places an insulation layer between pixels to maximize light absorption, in addition to the BST method used in the ISOCELL Vizion 63D.
The Vizion 931 supports multi-drop, which can seamlessly connect up to four cameras to the application processor using a single wire. With minimal wiring required, the sensor provides greater design flexibility for device manufacturers.
Samsung ISOCELL Vizion 63D and ISOCELL Vizion 931 sensors are currently sampling to OEMs worldwide.

The Isocell Vizion 63D is a time-of-flight (ToF) image sensor, while the Isocell Vizion 931 is a global shutter image sensor. These sensors are the latest in Samsung’s Vizion line of ToF and global shutter sensors, first announced in 2020. The Vizion lineup is designed with next-generation mobile, commercial, and industrial use cases in mind.

“Engineered with state-of-the-art sensor technologies, Samsung’s Isocell Vizion 63D and Isocell Vizion 931 will be essential in facilitating machine vision for future high-tech applications like robotics and extended reality (XR),” says Haechang Lee, Executive Vice President of the Next Generation Sensor Development Team at Samsung Electronics. “Leveraging our rich history in technological innovation, we are committed to driving the rapidly expanding image sensor market forward.”
The Vizion 63D works similarly to how bats use echolocation to navigate in dim conditions. A ToF sensor measures distance and depth by calculating the amount of time it takes for emitted photons to travel to and back from an object in a scene. The Isocell Vizion 63D is an “indirect time-of-flight (iToF) sensor,” meaning it measures the “phase shift between emitted and reflected light to sense its surroundings in three dimensions.”

The sophisticated sensor is the first of its kind to integrate depth-sensing hardware into the image signal processor. The one-chip design enables 3D data capture without a separate chip, thereby reducing the overall power demand of the system. The technology is still relatively new, so its resolution is not exceptionally high. The 63D processes QVGA (320×240) images at up to 60 frames per second. The sensor has a 3.5-micron pixel size and can measure distances up to 10 meters from the sensor, double what its predecessor could achieve.

As for the Isocell Vizion 931, the global shutter sensor is designed to capture motion without rolling shutter artifacts, much like has been seen with Sony’s new a9 III full-frame interchangeable lens camera.

“Unlike rolling shutter sensors that scan the scene line by line from top to bottom in a ‘rolling’ manner, global shutter sensors capture the entire scene at once or ‘globally,’ similar to how human eyes see,” Samsung explains. “This allows the Isocell Vizion 931 to capture sharp, undistorted images of moving objects, making it well-suited for motion-tracking in XR devices, gaming systems, service and logistics robots as well as drones.”

The Vizion 931 boasts the industry’s highest quantum efficiency, joining the 63D, which makes the same claim in its specialized class of sensors.

While these sensors are unlikely to find their way into photography-oriented consumer products, it is always fascinating to learn about the newest advancements in image sensor technology. Samsung is sending samples of its new sensors to OEMs worldwide, so it won’t be long before they make their way into products of some kind.
 
  • Like
  • Fire
Reactions: 19 users
Bit of a blurb on MagikEye's CES attendance. @MDhere mentioned they were resurfacing not long ago.

Haven't heard much from them or BRN so wonder 🤔

MagikEye's Pico Image Sensor: Pioneering the Eyes of AI for the Robotics Age at CES

From Businesswire.

December 20, 2023 09:00 AM Eastern Standard Time
STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc. (www.magik-eye.com), a trailblazer in 3D sensing technology, is set to showcase its groundbreaking Pico Depth Sensor at the 2024 Consumer Electronics Show (CES) in Las Vegas, Nevada. Embarking on a mission to "Provide the Eyes of AI for the Robotics Age," the Pico Depth Sensor represents a key milestone in MagikEye’s journey towards AI and robotics excellence.

The heart of the Pico Depth Sensor’s innovation lies in its use of MagikEye’s proprietary Invertible Light™ Technology (ILT), which operates efficiently on a “bare-metal” ARM M0 processor within the Raspberry Pi RP2040. This noteworthy feature underscores the sensor's ability to deliver high-quality 3D sensing without the need for specialized silicon. Moreover, while the Pico Sensor showcases its capabilities using the RP2040, the underlying technology is designed with adaptability in mind, allowing seamless operation on a variety of microcontroller cores, including those based on the popular RISC-V architecture. This flexibility signifies a major leap forward in making advanced 3D sensing accessible and adaptable across different platforms.

Takeo Miyazawa, Founder & CEO of MagikEye, emphasizes the sensor's transformative potential: “Just as personal computers democratized access to technology and spurred a revolution in productivity, the Pico Depth Sensor is set to ignite a similar transformation in the realms of AI and robotics. It is not just an innovative product; it’s a gateway to new possibilities in fields like autonomous vehicles, smart home systems, and beyond, where AI and depth sensing converge to create smarter, more intuitive solutions.”

Attendees at CES 2024 are cordially invited to visit MagikEye's booth for an exclusive first-hand experience of the Pico Sensor. Live demonstrations of MagikEye’s latest ILT solutions for next-gen 3D sensing solutions will be held from January 9-11 at the Embassy Suites by Hilton Convention Center Las Vegas. Demonstration times are limited and private reservations will be accommodated by contacting ces2024@magik-eye.com.
 
  • Like
  • Fire
  • Love
Reactions: 24 users
Bit of a blurb on MagikEye's CES attendance. @MDhere mentioned they were resurfacing not long ago.

Haven't heard much from them or BRN so wonder 🤔

MagikEye's Pico Image Sensor: Pioneering the Eyes of AI for the Robotics Age at CES

From Businesswire.

December 20, 2023 09:00 AM Eastern Standard Time
STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc. (www.magik-eye.com), a trailblazer in 3D sensing technology, is set to showcase its groundbreaking Pico Depth Sensor at the 2024 Consumer Electronics Show (CES) in Las Vegas, Nevada. Embarking on a mission to "Provide the Eyes of AI for the Robotics Age," the Pico Depth Sensor represents a key milestone in MagikEye’s journey towards AI and robotics excellence.

The heart of the Pico Depth Sensor’s innovation lies in its use of MagikEye’s proprietary Invertible Light™ Technology (ILT), which operates efficiently on a “bare-metal” ARM M0 processor within the Raspberry Pi RP2040. This noteworthy feature underscores the sensor's ability to deliver high-quality 3D sensing without the need for specialized silicon. Moreover, while the Pico Sensor showcases its capabilities using the RP2040, the underlying technology is designed with adaptability in mind, allowing seamless operation on a variety of microcontroller cores, including those based on the popular RISC-V architecture. This flexibility signifies a major leap forward in making advanced 3D sensing accessible and adaptable across different platforms.

Takeo Miyazawa, Founder & CEO of MagikEye, emphasizes the sensor's transformative potential: “Just as personal computers democratized access to technology and spurred a revolution in productivity, the Pico Depth Sensor is set to ignite a similar transformation in the realms of AI and robotics. It is not just an innovative product; it’s a gateway to new possibilities in fields like autonomous vehicles, smart home systems, and beyond, where AI and depth sensing converge to create smarter, more intuitive solutions.”

Attendees at CES 2024 are cordially invited to visit MagikEye's booth for an exclusive first-hand experience of the Pico Sensor. Live demonstrations of MagikEye’s latest ILT solutions for next-gen 3D sensing solutions will be held from January 9-11 at the Embassy Suites by Hilton Convention Center Las Vegas. Demonstration times are limited and private reservations will be accommodated by contacting ces2024@magik-eye.com.

Originally, the partnership was only for one year from August 2020 (though it allowed for extensions), and it's on FactFinder's list of 50 companies that our Company has confirmed continued engagements with (post #72,465).

So the partnership has obviously been extended, and there is a strong chance that MagikEye's Pico Image Sensor has some BrainChip sauce in it 😉
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 33 users

Quiltman

Regular
  • Like
Reactions: 21 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 22 users

Frangipani

Regular
AKIDA development kits aren't specifically intended for backyard hobbyists and are not priced as such.

Hi DingoBorat,

I agree that the Dev Kits are not targeted at hobbyists - however, the Mini PCIe Board, which is only ~ 1/10th of the price of a Raspberry Pi Development Kit, but still sets you back US$499, was intended “for anyone from hobbyists to companies who wants [sic!] to just plug 10 boards in for specific applications, all the way through to licensing IP”, according to Rob Telson in the January 2022 interview with Sally Ward-Foxton:






The Quantum Ventura AKIDA USB stick is ~US$50.00 (should that product eventuate), and although he can't possibly do a price comparison on a product that doesn't exist yet, his "product" at ~US$100 doesn't exist yet either?..

Erm, yes, it does. He literally writes “We build and test a technically simple prototype of the proposed physical RC system employing an inexpensive Arduino microcontroller”. How else could he have written his paper? 🤔
So he obviously knows exactly how much money it cost them to build the prototype.

The way I see it, this research is not about commercialising a product and pitting it against the “competition” such as Brainchip (which was just an example to put the prototype’s inexpensiveness in relation to commercially available neuromorphic hardware), but a life hack for researchers on a tight budget, so to speak. If a US$50 AKD1000 USB stick had been available, he might have gladly picked that one instead of the Arduino microcontroller, for all we know.
Lots of academic research is fundamental research, which aims to improve humans’ understanding of the natural world - primarily the pursuit of knowledge for more knowledge.
But maybe my cursory glance at the paper missed the part where he expressed his intention to commercialise the prototype. Did you spot any evidence of that?


Note that he doesn’t consider his prototype a panacea: “… we argue that in certain practical situations the efficacy of the physical RC system may exceed the one of optimised machine learning software run on a high-performance workstation.”

And while it is admittedly embarrassing for the preprint’s author holding a PhD in Electrical Engineering to also have gotten the Akida Dev Kit’s power consumption wrong (thanks for explaining, @Diogenese!), to me this error still doesn’t alter the result of his cost comparison (see my prior post on that matter), as his prototype is very low-power itself - the entire experimental setup apparently consumes less than 1 W.
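(For anyone wondering what a reservoir computing system actually is, here's a generic echo state network sketch, not the paper's Arduino setup: a fixed random reservoir does the nonlinear work, and only a cheap linear readout is trained. A physical RC system replaces the simulated reservoir with real analog dynamics, which is why it can be built so inexpensively.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Echo state network: fixed random reservoir; only the readout is trained.
N, T = 100, 500
W = rng.normal(size=(N, N)) * 0.9 / np.sqrt(N)   # spectral radius ~0.9
w_in = rng.normal(size=N)

u = np.sin(np.linspace(0, 20, T))                # input signal
target = np.roll(u, -5)                          # task: predict 5 steps ahead

x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # reservoir update (never trained)
    states.append(x.copy())
S = np.array(states)

# Train the linear readout only, via least squares.
w_out, *_ = np.linalg.lstsq(S[:-5], target[:-5], rcond=None)
pred = S[:-5] @ w_out
print("readout MSE:", np.mean((pred - target[:-5]) ** 2))
```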



I still can't see how you can agree with his argument on price point..

But I guess we'll just have to agree to disagree, again 😛

Yep, looks like it. 😊
 
  • Like
  • Love
Reactions: 8 users

Tothemoon24

Top 20
Last edited:
  • Like
Reactions: 22 users

IloveLamp

Top 20
Posting on behalf of @Patient

TATA demoing with BRN at CES along with NVISO and VVDN

 
  • Like
  • Love
  • Fire
Reactions: 90 users