BRN Discussion Ongoing

I’m dropping this report here for those new to BrainChip querying defence revenue. I’ve posted it a few times, but in light of some of the recent news it’s worth a reminder that there is a roadmap to success, and we’re firmly on it; in pole position!


The easiest way to navigate around it is to hit the “View the interactive report” button I circled in yellow (on the link supplied).

There are a plethora of chapters we are perfect for. I’m sure once the scientists/engineers get their heads around how to use Akida it will become ubiquitous throughout Defence. It’s just a matter of going from proof of concept to operational/commercial production!

Enjoy :)


(attachment: 1681276302048.png)
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Qualcomm's Snapdragon 8 Gen 3 Rumored For A Massive GPU Uplift For Android Devices​

by Aaron Klotz — Tuesday, April 11, 2023, 05:20 PM EDT

A new rumor suggests that Qualcomm’s upcoming Snapdragon 8 Gen 3 SoC will feature a whopping 50% graphics performance uplift over its already super strong predecessor, the Snapdragon 8 Gen 2. This could enable future Gen 3-equipped devices to really excel in gaming performance and far outpace even the likes of Apple's A16 Bionic chipset.
The performance claim comes to us by way of 数码闲聊站 (translated as "Digital chatter") on Weibo. We don’t know how accurate this 50% figure is, but it's not completely unreasonable: Qualcomm’s current Snapdragon 8 Gen 2 is consistently around 30% faster than the Snapdragon 8 Gen 1 in most graphical applications, and slightly less ahead of Qualcomm’s mid-cycle refresh, the Snapdragon 8+ Gen 1, based on our review of the Samsung Galaxy S23 Ultra.


A 50% performance lead would make the Snapdragon 8 Gen 3 the dominant SoC for gaming and any GPU applications by a long shot. It would also solve niche performance issues found in highly demanding mobile games and emulators, like the notoriously demanding Genshin Impact, which can struggle to hit a steady 60FPS even on the Gen 2-equipped Galaxy S23 Ultra. Apple’s closest competitor to the Gen 3 will be its upcoming A17 Bionic, but current rumors peg that chipset at only a 35% efficiency uplift over the previous generation. So at best, the A17 could be 35% faster in the same power envelope, if the rumors are true.
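For anyone wanting to sanity-check the rumour mill, compounding the claimed generational uplifts is a one-liner. All inputs below are the article's own rumoured/reported figures, nothing measured:

```python
# Quick compounding of the claimed generational GPU uplifts.
# All figures are rumoured/reported numbers from the article above.
gen2_vs_gen1 = 1.30   # Gen 2 "consistently 30% faster" than Gen 1
gen3_vs_gen2 = 1.50   # rumoured 50% uplift for Gen 3
a17_vs_a16 = 1.35     # rumoured 35% uplift at the same power for Apple

print(f"Gen 3 vs Gen 1: {gen2_vs_gen1 * gen3_vs_gen2:.2f}x")  # ~1.95x
print(f"Gen 3 vs Gen 2: {gen3_vs_gen2:.2f}x; A17 vs A16: {a17_vs_a16:.2f}x")
```

So if both rumours hold, Gen 3 would land at roughly double the Gen 1 GPU, while Apple's 35% gain is measured against its own previous chip, not against Qualcomm's.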

The Snapdragon 8 Gen 3 is rumored to arrive as soon as Q4 2023 and to feature a unique (1+5+2) core configuration, consisting of one speedy X4 mega-core clocked at 3.75 GHz, five A720 performance cores clocked at 3 GHz, and two A520 efficiency cores clocked at 2 GHz. The Gen 3 chipset is also rumored to be manufactured on TSMC’s more efficient N4P node, which promises up to 11% greater performance and 22% higher efficiency compared to the standard N4 node.

Gen 3’s rumored upgrades are shaping up to deliver a massive performance uplift over the Snapdragon 8 Gen 2 and any of Apple’s current Bionic chipsets. Hopefully, Qualcomm can hit this 50% performance target without killing the chip’s thermal performance as it did with the Snapdragon 8 Gen 1.

 
  • Like
  • Fire
  • Love
Reactions: 24 users


jtardif999

Regular
If this has anything to do with Akida (and I haven’t looked at it), then there would be no need to teach it to learn; it would just learn in the course of events. Nobody with a brain has to be taught to learn 🤷‍♂️, which suggests it is describing some computer process with no real learning outcome. Am I right? I should read it, I guess, but I felt compelled by the wording ‘taught to learn’.
 
Probably already posted, but here's the link: they appear to be testing/using BrainChip to assist with contraband ID etc. in the US. Could be huge?????
From the other XXX.
Robust Classification of Contraband Substances using Longwave Hyperspectral Imaging and Full Precision and Neuromorphic Convolutional Neural Networks
Abstract
Several agencies such as the US Department of Homeland Security (DHS) seek to improve the detection of illegal threats and materials passing through Ports of Entry (POE). A combined hardware/software solution that is portable, non-ionizing, handheld, low cost, and fast would represent a significant contribution towards that goal as existing systems do not fulfil many or all of these requirements. To design such a system, Quantum Ventura partnered with Bodkin Design and Engineering to combine long-wave infrared (LWIR) hyperspectral imaging (HSI) with convolutional neural networks (CNNs), implemented on full precision GPUs and neuromorphic computing modules.
Neuromorphic processors implement CNNs with dramatically reduced size, weight, power and cost (SWaP-C) compared to GPU versions. Here we describe converting the 3D CNN into a format that can be run on neuromorphic platforms. We had early access to BrainChip’s software developer kit (SDK) and simulator, thus we focused our efforts on this. We now have access to the Intel Neuromorphic Research Consortium and are using it for other projects [11]. BrainChip can support many features of CNNs but not all. For example, it can only accept grayscale or RGB images, not hyperspectral images (HSIs), for convolutional input layers. (For regular input layers, it may be possible to input HSIs, but only 4-bit precision can be used at this time.) Because of this, we had to remap the 61 bands of the HSI image into separate “grayscale” input channels and then fuse across input channels in groups. Furthermore, the skip connections in the original 3D CNN are implemented by copying activation values from one neural processor unit (NPU) to another, and then copying them to the original NPU with identical weights of value 1. This was the recommendation from BrainChip. The Akida™ chip has 80 NPUs, so using a handful of extra NPUs to implement the skip connections would not prevent neuromorphic implementation [12]. In Figure 5, we show the translated CNN compatible with the BrainChip hardware.
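To make the band-remapping workaround above concrete, here's a minimal NumPy sketch of the idea. To be clear, this is not the paper's code: the cube shape, the group size of 4 and the helper names are my own assumptions, and the 4-bit quantisation just mirrors the input-precision limit the excerpt mentions:

```python
# Hypothetical sketch of remapping a 61-band LWIR hyperspectral cube into
# per-band "grayscale" stacks that are fused in groups, as described above.
import numpy as np

def remap_hsi_to_groups(cube, group_size=4):
    """cube: (H, W, 61) float array in [0, 1]; returns per-group stacks."""
    h, w, bands = cube.shape
    groups = []
    for start in range(0, bands, group_size):
        stack = cube[:, :, start:start + group_size]
        if stack.shape[2] < group_size:          # pad the final short group
            pad = group_size - stack.shape[2]
            stack = np.pad(stack, ((0, 0), (0, 0), (0, pad)))
        groups.append(stack)
    return groups

def quantize_4bit(x):
    """Quantize [0, 1] floats to 16 levels (the 4-bit input limit)."""
    return np.round(x * 15).astype(np.uint8)

cube = np.random.rand(64, 64, 61)                # stand-in for a real capture
inputs = [quantize_4bit(g) for g in remap_hsi_to_groups(cube)]
print(len(inputs), inputs[0].shape)              # 16 groups of (64, 64, 4)
```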
Very interesting! If Akida 1 can do an Nvidia A100's job reasonably well at a fraction of the cost and size, then we're very competitive.

It also sounds like they may benefit greatly from Akida 2's raw sensor input, TENNs and 8-bit weights. Akida-P's 50 TOPS and probably much larger neuron capacity could help a lot.
 
  • Like
  • Love
Reactions: 8 users
Oh, AI-stralia, you keep giving. Our family humbly thanks you for handing over another 220,000 shares.
 
  • Like
  • Fire
  • Wow
Reactions: 15 users

jtardif999

Regular
I am right, this is just another article about extending generative AI.
 
  • Like
  • Thinking
Reactions: 2 users

Steve10

Regular
Crucial moment for the markets tonight.

US CPI due at 10.30pm AEST. Forecast to drop sharply to 5.2–5.3% YoY from 6% YoY.

However, core CPI is forecast to remain flattish or tick up slightly to 5.5–5.6% YoY from 5.5% YoY.

If the market rises it will rise fast, as shorts will get sizzled. April is seasonally a very good month.
 
  • Like
  • Fire
  • Love
Reactions: 40 users

Taproot

Regular
Hi @Boab

That was a different device; from memory it used a handheld radar to scan people and ML to identify objects on people's bodies which were a threat, e.g. a gun/knife. It was a law enforcement company creating it. It'd be a bit like the TASER: once one state has it, they will all want it. If it helps reduce the issues with shoot/no-shoot scenarios, and with stop, search and detain laws, to make things safer for the public/police, then all the better!


Very interesting article that was shared. One of the reasons BrainChip will be successful is price: how good is it that a $50 item is being compared with Nvidia's $30K device?



Nvidia CEO Jensen Huang couldn’t stop talking about AI on a call with analysts on Wednesday, suggesting that the recent boom in artificial intelligence is at the center of the company’s strategy.
“The activity around the AI infrastructure that we built, and the activity around inferencing using Hopper and Ampere to inference large language models has just gone through the roof in the last 60 days,” Huang said. “There’s no question that whatever our views are of this year as we enter the year has been fairly dramatically changed as a result of the last 60, 90 days.”
Ampere is Nvidia’s code name for the A100 generation of chips. Hopper is the code name for the new generation, including H100, which recently started shipping.

More computers needed​

Nvidia A100 processor (image: Nvidia)
Compared to other kinds of software, like serving a webpage, which uses processing power occasionally in bursts for microseconds, machine learning tasks can take up the whole computer’s processing power, sometimes for hours or days.
This means companies that find themselves with a hit AI product often need to acquire more GPUs to handle peak periods or improve their models.
These GPUs aren’t cheap. In addition to a single A100 on a card that can be slotted into an existing server, many data centers use a system that includes eight A100 GPUs working together.
This system, Nvidia’s DGX A100, has a suggested price of nearly $200,000, although it comes with the chips needed. On Wednesday, Nvidia said it would sell cloud access to DGX systems directly, which will likely reduce the entry cost for tinkerers and researchers.
It’s easy to see how the cost of A100s can add up.
For example, an estimate from New Street Research found that the OpenAI-based ChatGPT model inside Bing’s search could require 8 GPUs to deliver a response to a question in less than one second.
At that rate, Microsoft would need over 20,000 8-GPU servers just to deploy the model in Bing to everyone, suggesting Microsoft’s feature could cost $4 billion in infrastructure spending.
“If you’re from Microsoft, and you want to scale that, at the scale of Bing, that’s maybe $4 billion. If you want to scale at the scale of Google, which serves 8 or 9 billion queries every day, you actually need to spend $80 billion on DGXs,” said Antoine Chkaiban, a technology analyst at New Street Research. “The numbers we came up with are huge. But they’re simply the reflection of the fact that every single user taking to such a large language model requires a massive supercomputer while they’re using it.”
The latest version of Stable Diffusion, an image generator, was trained on 256 A100 GPUs, or 32 machines with 8 A100s each, according to information online posted by Stability AI, totaling 200,000 compute hours.
At the market price, training the model alone cost $600,000, Stability AI CEO Emad Mostaque said on Twitter, suggesting in a tweet exchange that the price was unusually inexpensive compared to rivals. That doesn’t count the cost of “inference,” or deploying the model.
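The article's dollar figures check out on the back of an envelope, using only its own estimates as inputs:

```python
# Back-of-envelope check of the article's own estimates (not measured data).
dgx_price = 200_000            # USD, suggested price of an 8x A100 DGX system
servers_for_bing = 20_000      # New Street Research's Bing-scale estimate
print(f"Bing-scale spend: ${dgx_price * servers_for_bing / 1e9:.0f}B")  # ~$4B

gpu_hours = 200_000            # Stable Diffusion training, per Stability AI
training_cost = 600_000        # USD, per Mostaque
print(f"Implied A100 rate: ${training_cost / gpu_hours:.2f}/GPU-hour")  # ~$3
```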
 
  • Like
  • Wow
  • Fire
Reactions: 16 users

We definitely need another chip in the server room.
 
  • Like
  • Fire
Reactions: 8 users
Is there anybody who has any idea how many neurons Akida 2.0 (Akida-P) can emulate at 8-bit weights?

I know about the 50 TOPS.
 

cassip

Regular
Hi to all,

I came across a company called "Masimo". @Tothemoon24 posted their watch. They produce a lot of products and sensors for medical applications. Maybe their patents are worth a look; they have many partners/licensees for these solutions:




Masimo Unveils the Future of Health Tracking and Smart Wearables with the Pre-Sale Launch of the Freedom™ Smartwatch

First of Its Kind Smartwatch Combines Masimo’s Accurate, Continuous Health Tracking Technology and Breakthrough Features in a Premium, Elegant Watch

Irvine, California - March 28, 2023 - Masimo (NASDAQ: MASI), a global leader in pulse oximetry and innovative noninvasive monitoring technologies, unveiled the newest addition to its wearable product line: the Masimo Freedom™ smartwatch. Developed to revolutionize the wearable technology industry, the Masimo Freedom smartwatch offers you the freedom to take control of your personal health and privacy through accurate and continuous health tracking, including a novel hardware feature designed to reduce radiation and free you from privacy infringement.

Masimo Root® with PVi®
Masimo Freedom™
The Masimo Freedom smartwatch builds on the success of the Masimo W1™ advanced biosensing watch by leveraging its state-of-the-art sensor and digital signal processing technology, enabling it to provide continuous, accurate, and reliable readings of key health data, like arterial blood oxygen saturation (SpO2), hydration index (Hi™), pulse rate, heart rate, and respiration rate. The Freedom smartwatch seamlessly integrates Masimo’s cutting-edge health tracking technology into an elegant, user-friendly wearable with premium connectivity through Bluetooth®, Wi-Fi, and LTE technologies, enabling the everyday conveniences of texting, calling, music, and third-party app compatibility. Masimo Freedom is an ideal companion for active individuals looking for better control of their day-to-day health and wellness data. Unlike other products on the market, Masimo Freedom features a physical privacy switch that instantly stops all data sharing beyond the device, including user data, as well as microphone, location, and metadata.

“We are thrilled to finally unveil Freedom,” said Joe Kiani, Founder and CEO of Masimo. “We believe that this groundbreaking new product will revolutionize wearable technology and health tracking. We are allowing people to take control of their health with continuous and accurate actionable biosensing information along with the convenience of being connected, but without compromising their freedom.”

The Masimo Freedom smartwatch utilizes technology based on Masimo SET® pulse oximetry to optimize the capture of health data on the wrist. Masimo SET® is the primary pulse oximetry technology used at 9 of the top 10 hospitals, as ranked in the 2022-23 U.S. News & World Report Best Hospitals Honor Roll,1 and has been shown in over 100 studies to outperform other pulse oximetry technologies in clinical settings.2

In addition to Masimo Freedom, Masimo is also introducing a sleek, screenless band built on the innovations developed for Masimo W1 and Freedom. The band is designed to work in tandem and seamlessly synchronize with Freedom, so that you can wear one while charging the other – a 24/7 continuous wear ecosystem maximizing your ability to track your health both during the active day and at night while you sleep.

Consumers can pre-order Masimo Freedom now by placing a fully refundable $100 deposit, with expected shipping this fall. Masimo W1 is currently shipping. The Masimo band is expected to be ready for sale this summer. Those who would like to begin taking advantage of Masimo’s advanced biosensing technology now can purchase a Masimo W1 and automatically reserve their place in line to purchase Freedom with a $400 discount off the retail price. For more information about the Masimo family of wearables, and to order Masimo W1 and reserve your Masimo Freedom, visit freedom.masimoconsumer.com.

Masimo Freedom is not cleared for use in medical applications in the U.S.

@Masimo || #Masimo

About Masimo

Masimo (NASDAQ: MASI) is a global medical technology company that develops and produces a wide array of industry-leading monitoring technologies, including innovative measurements, sensors, patient monitors, and automation and connectivity solutions. In addition, Masimo Consumer Audio is home to eight legendary audio brands, including Bowers & Wilkins, Denon, Marantz, and Polk Audio. Our mission is to improve life, improve patient outcomes, and reduce the cost of care. Masimo SET® Measure-through Motion and Low Perfusion™ pulse oximetry, introduced in 1995, has been shown in over 100 independent and objective studies to outperform other pulse oximetry technologies.2 Masimo SET® has also been shown to help clinicians reduce severe retinopathy of prematurity in neonates,3 improve CCHD screening in newborns,4 and, when used for continuous monitoring with Masimo Patient SafetyNet™ in post-surgical wards, reduce rapid response team activations, ICU transfers, and costs.5-8 Masimo SET® is estimated to be used on more than 200 million patients in leading hospitals and other healthcare settings around the world,9 and is the primary pulse oximetry at 9 of the top 10 hospitals as ranked in the 2022-23 U.S. News and World Report Best Hospitals Honor Roll.1 In 2005, Masimo introduced rainbow® Pulse CO-Oximetry technology, allowing noninvasive and continuous monitoring of blood constituents that previously could only be measured invasively, including total hemoglobin (SpHb®), oxygen content (SpOC™), carboxyhemoglobin (SpCO®), methemoglobin (SpMet®), Pleth Variability Index (PVi®), RPVi™ (rainbow® PVi), and Oxygen Reserve Index (ORi™). In 2013, Masimo introduced the Root® Patient Monitoring and Connectivity Platform, built from the ground up to be as flexible and expandable as possible to facilitate the addition of other Masimo and third-party monitoring technologies; key Masimo additions include Next Generation SedLine® Brain Function Monitoring, O3® Regional Oximetry, and ISA™ Capnography with NomoLine® sampling lines. Masimo’s family of continuous and spot-check monitoring Pulse CO-Oximeters® includes devices designed for use in a variety of clinical and non-clinical scenarios, including tetherless, wearable technology, such as Radius-7®, Radius PPG®, and Radius VSM™, portable devices like Rad-67®, fingertip pulse oximeters like MightySat® Rx, and devices available for use both in the hospital and at home, such as Rad-97®. Masimo hospital and home automation and connectivity solutions are centered around the Masimo Hospital Automation™ platform, and include Iris® Gateway, iSirona™, Patient SafetyNet, Replica®, Halo ION®, UniView®, UniView :60™, and Masimo SafetyNet®. Its growing portfolio of health and wellness solutions includes Radius T°® and the Masimo W1™ watch. Additional information about Masimo and its products may be found at www.masimo.com. Published clinical studies on Masimo products can be found at www.masimo.com/evidence/featured-studies/feature.

ORi and RPVi have not received FDA 510(k) clearance and are not available for sale in the United States. The use of the trademark Patient SafetyNet is under license from University HealthSystem Consortium.
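For those wondering what pulse oximetry is actually computing under the hood: the textbook two-wavelength method is the "ratio of ratios". The sketch below is purely illustrative; Masimo SET® itself is proprietary and far more sophisticated (motion and low-perfusion handling), and the calibration constants here are just the generic published approximation:

```python
# Toy "ratio of ratios" SpO2 estimate -- generic textbook pulse oximetry,
# NOT Masimo's algorithm. AC = pulsatile amplitude, DC = baseline signal,
# measured at the red and infrared LED wavelengths.
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r    # common empirical calibration curve

print(f"{spo2_estimate(0.02, 1.0, 0.04, 1.0):.1f}%")  # R = 0.5 -> ~97.5%
```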
 
  • Like
  • Love
Reactions: 7 users

Derby1990

Regular
(chart attachment: 1681293754831.png)


That line's getting vertical real fast.
I reckon BRN SP will follow suit.
 
  • Like
  • Love
  • Haha
Reactions: 21 users

Something is missing in this forum.

Where's Fact Finder?

@Fact Finder
 
  • Like
  • Fire
  • Haha
Reactions: 5 users


Tothemoon24

Top 20
Some familiar quotes below, inside a newly published article worth a read:





Why you will be seeing much more from event cameras
14 February 2023

February/March 2023
Advances in sensors that capture images like real eyes, plus in the software and hardware to process them, are bringing a paradigm shift in imaging, finds Andrei Mihai

The field of neuromorphic vision, where electronic cameras mimic the biological eye, has been around for some 30 years.

Neuromorphic cameras (also called event cameras) mimic the function of the retina, the part of the eye that contains light-sensitive cells. This is a fundamental change from conventional cameras – and why applications for event cameras for industry and research are also different.
Conventional cameras are built for capturing images and visually reproducing them.
They take pictures at fixed points in time, capturing the field of view and snapping frames at predefined intervals, regardless of how the image is changing. These frame-based cameras work excellently for their purpose, but they are not optimised for sensing or machine vision. They capture a great deal of information but, from a sensing perspective, much of that information is useless, because it is not changing.
Event cameras suppress this redundancy and have fundamental benefits in terms of efficiency, speed, and dynamic range. Event-based vision sensors can achieve a better speed-versus-power-consumption trade-off, by up to three orders of magnitude. By relying on a different way of acquiring information compared with a conventional camera, they also address applications in the field of machine vision and AI.

Event camera systems can quickly and efficiently monitor particle size and movement

“Essentially, what we’re bringing to the table is a new approach to sensing information, very different to conventional cameras that have been around for many years,” says Luca Verre, CEO of Prophesee, a market leader in the field.
Whereas most commercial cameras are essentially optimised to produce attractive images, the needs of the automotive, industrial, and Internet of Things (IoT) industries, and even some consumer products, often demand different performance. If you are monitoring change, for instance, as much as 90% of the scene is useless information, because it does not change. Event cameras bypass that: they only report when light goes up or down by a certain relative amount, which produces a so-called “change event”.
In modern neuromorphic cameras, each pixel of the sensor works independently (asynchronously) and records continuously, so there is no downtime, even when you go down to microseconds. Also, since they only monitor changing data, they do not monitor redundant data. This is one of the key aspects driving the field forward.
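A minimal sketch of that event-generation principle, simulated from ordinary frames. Real sensors work asynchronously per pixel with no frame clock, so treat this as conceptual only:

```python
# Simulated event camera: a pixel emits an event only when its log-intensity
# changes by more than a threshold relative to its last reference level.
import numpy as np

def frames_to_events(frames, threshold=0.15, eps=1e-6):
    """frames: (T, H, W) floats in [0, 1]. Yields (t, y, x, polarity)."""
    ref = np.log(frames[0] + eps)                 # per-pixel reference level
    for t in range(1, len(frames)):
        cur = np.log(frames[t] + eps)
        diff = cur - ref
        for y, x in zip(*np.where(np.abs(diff) >= threshold)):
            yield (t, y, x, 1 if diff[y, x] > 0 else -1)
            ref[y, x] = cur[y, x]                 # reset reference after event

frames = np.random.rand(5, 8, 8)                  # stand-in for a short clip
print(sum(1 for _ in frames_to_events(frames)), "events emitted")
```

A static pixel never crosses the threshold and so never produces data, which is exactly the redundancy suppression described above.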

Innovation in neuromorphic vision

Vision sensors typically gather a lot of data, but increasingly there is a drive to use edge processing for these sensors. For many machine vision applications, edge computation has become a bottleneck. But for event cameras, it is the opposite.
“More and more, sensor cameras are used for some local processing, some edge processing, and this is where we believe we have a technology and an approach that can bring value to this application,” says Verre.
“We are enabling fully fledged edge computing by the fact that our sensors produce very low data volumes. So, you can afford to have a cost-reasonable, low-power system on a chip at the edge, because you can simply generate a few event data that this processor can easily interface with and process locally.

“Instead of feeding this processor with tons of frames that overload them and hinder their capability to process data in real-time, our event camera can enable them to do real-time across a scene. We believe that event cameras are finally unlocking this edge processing.”
Making sensors smaller and cheaper is also a key innovation because it opens up a range of potential applications, such as in IoT sensing or smartphones. For this, Prophesee partnered with Sony, mixing its expertise in event cameras with Sony’s infrastructure and experience in vision sensors to develop a smaller, more efficient, and cheaper event camera evaluation kit. Verre thinks the pricing of event cameras is at a point where they can be realistically introduced into smartphones.
Another area companies are eyeing is fusion kits – the basic idea is to mix the capability of a neuromorphic camera with another vision sensor, such as lidar or a conventional camera, into a single system.
“From both the spatial information of a frame-based camera and from the information of an event-based camera, you can actually open the door to many other applications,” says Verre. “Definitely, there is potential in sensor fusion… by combining event-based sensors with some lidar technologies, for instance, in navigation, localisation, and mapping.”

Neuromorphic computing progress

However, while neuromorphic cameras mimic the human eye, the processing chips they work with are far from mimicking the human brain. Most neuromorphic computing, including work on event camera computing, is carried out using deep learning algorithms that perform processing on CPUs or GPUs, which are not optimised for neuromorphic processing. This is where new chips such as Intel’s Loihi 2 (a neuromorphic research chip) and Lava (an open-source software framework) come in.
“Our second-generation chip greatly improves the speed, programmability, and capacity of neuromorphic processing, broadening its usages in power and latency-constrained intelligent computing applications,” says Mike Davies, Director of Intel’s Neuromorphic Computing Lab.

BrainChip, a neuromorphic computing IP vendor, also partnered with Prophesee to deliver event-based vision systems with integrated low-power technology coupled with high AI performance.
It is not only industry accelerating the field of neuromorphic chips for vision – there is also an emerging but already active academic field. Neuromorphic systems have enormous potential, yet they are rarely used in a non-academic context; in particular, there are as yet few industrial deployments of these bio-inspired technologies. Nevertheless, event-based solutions are already far superior to conventional algorithms in terms of latency and energy efficiency.
Working with the first iteration of the Loihi chip in 2019, Alpha Renner et al (‘Event-based attention and tracking on neuromorphic hardware’) developed the first set-up that interfaces an event-based camera with the spiking neuromorphic system Loihi, creating a purely event-driven sensing and processing system. The system selects a single object out of a number of moving objects and tracks it in the visual field, even in cases when movement stops, and the event stream is interrupted.
In 2021, Viale et al demonstrated the first spiking neural network (SNN) on a chip used for a neuromorphic vision-based controller solving a high-speed UAV control task. Ongoing research is looking at ways to use neuromorphic neural networks to integrate chips and event cameras for autonomous cars. Since many of these applications use the Loihi chip, newer generations, such as Loihi 2, should speed development. Other neuromorphic chips are also emerging, allowing quick learning and training of the algorithm even with a small dataset. Specialised SNN algorithms operating on neuromorphic chips can further help edge processing and general computing in event vision.
“The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint – time. Due to their asynchronous course of operation, considering the precise occurrence of spikes, spiking neural networks take advantage of this constraint,” writes Lea Steffen and colleagues (‘Neuromorphic Stereo Vision: A Survey of Bio-Inspired Sensors and Algorithms’).
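For readers new to SNNs, the basic unit these papers build on is the leaky integrate-and-fire (LIF) neuron. The toy NumPy version below is a generic textbook model, not Loihi's or Akida's actual neuron implementation:

```python
# One LIF timestep: integrate weighted input spikes, leak, fire, reset.
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, v_thresh=1.0):
    v = leak * v + weights @ spikes_in      # leaky membrane integration
    fired = v >= v_thresh                   # neurons crossing threshold spike
    v[fired] = 0.0                          # reset fired neurons
    return v, fired.astype(float)

rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, size=(4, 8))         # 8 inputs -> 4 neurons
v = np.zeros(4)
for _ in range(20):                         # drive with random input spikes
    v, out = lif_step(v, (rng.random(8) < 0.3).astype(float), w)
print("final membrane potentials:", np.round(v, 2))
```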
Lighting is another aspect the field of neuromorphic vision is increasingly looking at. An advantage of event cameras compared with frame-based cameras is their ability to deal with a range of extreme light conditions – whether high or low. But event cameras can now use light itself in a different way.
Prophesee and CIS have started work on the industry’s first evaluation kit for implementing 3D sensing based on structured light. This uses event-based vision and point cloud generation to produce an accurate 3D Point Cloud.
“You can then use this principle to project the light pattern in the scene and, because you know the geometry of the setting, you can compute the disparity map and then estimate the 3D and depth information,” says Verre. “We can reach this 3D point cloud at a refresh rate of 1kHz or above. So, any 3D application, such as 3D measurement or 3D navigation, that requires high speed and time precision really benefits from this technology. There are no comparable 3D approaches available today that can reach this time resolution.”
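The depth-from-disparity step Verre describes is plain triangulation. A tiny sketch with made-up optics (the focal length and baseline below are hypothetical, not the specs of the Prophesee/CIS kit):

```python
# Classic structured-light / stereo triangulation: Z = f * B / d.
def depth_from_disparity(d_pixels, focal_px=1000.0, baseline_m=0.10):
    return focal_px * baseline_m / d_pixels

for d in (5, 20, 80):
    print(f"disparity {d:3d}px -> depth {depth_from_disparity(d):.2f} m")
```

The event sensor's microsecond timing is what lets the disparity map refresh at 1 kHz or more; the geometry itself is unchanged from conventional structured light.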

Industrial applications of event vision

Due to its inherent advantages, as well as progress in the field of peripherals (such as neuromorphic chips and lighting systems) and algorithms, we can expect the deployment of neuromorphic vision systems to continue – especially as systems become increasingly cost-effective.

Event vision can trace particles or monitor vibrations with low latency, low energy consumption, and relatively low amounts of data
We have mentioned some of the applications of event cameras here at IMVE before, from helping restore people’s vision to tracking and managing space debris. But in the near future perhaps the biggest impact will be at an industrial level.
From tracing particles or quality control to monitoring vibrations, all with low latency, low energy consumption, and relatively low amounts of data that favour edge computing, event vision is promising to become a mainstay in many industrial processes. Lowering costs through scaling production and better sensor design is opening even more doors.
Smartphones are one field where event cameras may make an unexpected entrance, but Verre says this is just the tip of the iceberg. He is looking forward to a paradigm shift and is most excited about all the applications that will soon pop up for event cameras – some of which we probably cannot yet envision.
“I see these technologies and new tech sensing modalities as a new paradigm that will create a new standard in the market. And in serving many, many applications, so we will see more event-based cameras all around us. This is so exciting.”
Lead image credit: Vector Tradition/Shutterstock.com
 
  • Like
  • Love
  • Fire
Reactions: 31 users
I posted this in a different thread in error, so I deleted it and am posting it here.

I see our friends at Prophesee doing a presso at the upcoming tinyML forum.

Be great if one day, sooner rather than later preferably, it's revealed to have a little Akida sauce in it... or will have. :)


tinyML EMEA Innovation Forum - June 26-28, 2023​



Event sensors for embedded edge AI vision applications

Christoph POSCH, CTO, PROPHESEE

Abstract (English)

Event-based vision is a term naming an emerging paradigm for the acquisition and processing of visual information, used in numerous artificial vision applications in industrial, surveillance, IoT, AR/VR, automotive and more. The highly efficient way of acquiring sparse data and the robustness to uncontrolled lighting conditions are characteristics of the event sensing process that make event-based vision attractive for at-the-edge visual perception systems that need to cope with limited resources and a high degree of autonomy.

However, the unconventional format of the event data, non-constant data rates, non-standard interfaces and, in general, the way, dynamic visual information is encoded inside the data, pose challenges to the usage and integration of event sensors in an embedded vision system.

Prophesee has recently developed the first of a new generation of event sensors, designed with the explicit goal of improving the integrability and usability of event sensing technology in embedded at-the-edge vision systems.

Particular emphasis has been put on event data pre-processing and formatting, data interface compatibility, and low-latency connectivity to various processing platforms, including low-power uCs and neuromorphic processor architectures.

Furthermore, the sensor has been optimized for ultra-low power operation, featuring a hierarchy of low-power modes and application-specific modes of operation. On-chip power management and an embedded uC core further improve sensor flexibility and usability at the edge.
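One common way around the "unconventional data format" problem the abstract mentions is to accumulate raw events into fixed-interval count frames that conventional CNN tooling can consume. A rough sketch, assuming generic (t, x, y, polarity) tuples rather than Prophesee's actual wire format:

```python
# Accumulate sparse events into dense per-window count frames.
import numpy as np

def events_to_frames(events, shape, window_us=10_000):
    """events: iterable of (t_us, x, y, p); returns (N, H, W) int frames."""
    h, w = shape
    frames, frame, window_end = [], np.zeros((h, w), dtype=np.int32), None
    for t, x, y, p in sorted(events):
        if window_end is None:
            window_end = t + window_us
        while t >= window_end:               # close out elapsed windows
            frames.append(frame)
            frame = np.zeros((h, w), dtype=np.int32)
            window_end += window_us
        frame[y, x] += p                      # signed event count
    frames.append(frame)
    return np.stack(frames)

evts = [(100, 1, 1, 1), (9_000, 2, 3, -1), (25_000, 1, 1, 1)]
print(events_to_frames(evts, (4, 4)).shape)   # (3, 4, 4)
```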
 
  • Like
  • Fire
  • Love
Reactions: 20 users

MrRomper

Regular
@Diogenese another patent from Toshiba referencing SNNs. It does not mention BrainChip. But am I getting closer?

Source USPTO
 

Attachments

  • 11625579.pdf
    1.6 MB · Views: 90
  • Like
  • Love
Reactions: 9 users