BRN Discussion Ongoing

robsmark

Regular
Hi Rob


I think this summarises it well.

But for additional context, the website wasn't made live all of a sudden, with all of us piling in at once.

There was planning that went into it in the weeks/days leading up to the website going live. Those with Founding Member status were in discussion with Zeebot during that period, bouncing ideas off of one another to help with the development of the forum. Once it was made live, we set up our accounts and then flooded HC with links and 'advertisements' to get the 1000 eyes across, hence why your join date is the same as ours.

Cheers

Thank you for the civil response Sera.
 
  • Like
  • Love
  • Fire
Reactions: 12 users

Reuben

Founding Member
Hi Rob


I think this summarises it well.

But for additional context, the website wasn't made live all of a sudden, with all of us piling in at once.

There was planning that went into it in the weeks/days leading up to the website going live. Those with Founding Member status were in discussion with Zeebot during that period, bouncing ideas off of one another to help with the development of the forum. Once it was made live, we set up our accounts and then flooded HC with links and 'advertisements' to get the 1000 eyes across, hence why your join date is the same as ours.

Cheers
And many of us are banned from HC as well....🤣🤣🤣
 
  • Haha
  • Like
  • Fire
Reactions: 16 users

misslou

Founding Member
  • Haha
  • Like
  • Love
Reactions: 30 users

IloveLamp

Top 20
Screenshot_20231011_173127_LinkedIn.jpg
 
  • Like
  • Fire
  • Love
Reactions: 35 users

Boab

I wish I could paint like Vincent
This is from 2022 but I don't recall seeing it.
I just needed something positive after the reversal today. We're still heading in the right direction.
Twitter.jpg
 
  • Like
  • Fire
  • Love
Reactions: 19 users

cosors

👀
  • Like
  • Love
  • Fire
Reactions: 18 users

IloveLamp

Top 20
View attachment 46820
Screenshot_20231011_204601_LinkedIn.jpg
 
  • Like
  • Love
  • Fire
Reactions: 14 users
Recent article on ScienceDirect.

Doesn't mention us, but it's obviously related to neuromorphic cameras and pose estimation in gaming.

I recall Nviso were doing pose as well, and probably emotion3D?

Anyway, I just found the end interesting: where some of their financial support comes from, the fact that they plan to test properly on Loihi 2 in the future (obvious research-chip access), and where one of the co-authors works.



28 August 2023

Neuromorphic high-frequency 3D dancing pose estimation in dynamic environment


Abstract

Technology-mediated dance experiences, as a medium of entertainment, are a key element in both traditional and virtual reality-based gaming platforms. These platforms predominantly depend on unobtrusive and continuous human pose estimation as a means of capturing input. Current solutions primarily employ RGB or RGB-Depth cameras for dance gaming applications; however, the former is hindered by low-light conditions due to motion blur and reduced sensitivity, while the latter exhibits excessive power consumption, diminished frame rates, and restricted operational distance. Boasting ultra-low latency, energy efficiency, and a wide dynamic range, neuromorphic cameras present a viable solution to surmount these limitations. Here, we introduce YeLan, a neuromorphic camera-driven, three-dimensional, high-frequency human pose estimation (HPE) system capable of withstanding low-light environments and dynamic backgrounds. We have compiled the first-ever neuromorphic camera dance HPE dataset and devised a fully adaptable motion-to-event, physics-conscious simulator. YeLan surpasses baseline models under strenuous conditions and exhibits resilience against varying clothing types, background motion, viewing angles, occlusions, and lighting fluctuations.


9. Limitation and Future Works

... Additionally, we aim to investigate energy-efficient implementations on neuromorphic computing platforms such as Intel Loihi 2.


Declaration of Competing Interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Hava Siegelmann reports financial support was provided by the Defense Advanced Research Projects Agency. Tauhidur Rahman reports financial support was provided by the Defense Advanced Research Projects Agency. Tauhidur Rahman reports financial support was provided by the National Science Foundation. Co-author Upal Mahbub is employed by Qualcomm Technologies, Inc.
 
  • Like
  • Thinking
Reactions: 12 users

"Zebra Technologies Corporation, a digital solution provider enabling businesses to intelligently connect data, assets, and people has successfully demonstrated a Generative Artificial Intelligence (GenAI) large language model (LLM) running on Zebra handheld mobile computers and tablets without needing connectivity to the cloud."
 
  • Like
  • Wow
  • Fire
Reactions: 14 users

Quiltman

Regular
As the SP manipulation continues this week ... I'm no trading expert, far from it in fact, but it looks to me that every trick in the book is being used to wrestle shares free and accumulate. I've taken heed, and used the low SP to accumulate.

I hold long on the relationship with Tata alone, but Renesas comes close behind.

Can I remind readers of this article from December 2022 ... its content is moving close to commercial reality.

Developing Neuromorphic Devices for TinyML

The article was written by Eldar Sido from Renesas

Eldar Sido works in the product marketing management team for the Arm-based MCU family at Renesas Electronics. He specializes in the technical aspect of endpoint AI implementation on microcontrollers. He received his master's degree in nanotechnology from the University of Tokyo.

An extract :

To summarize, the advantages of using neuromorphic devices and SNNs at the endpoint include:

  • Ultra-low power consumption (millijoule to microjoule per inference)
  • Lower MAC requirements as compared to conventional NNs
  • Lower parameter memory usage as compared to conventional NNs
  • On-edge learning capabilities
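
The "lower MAC" claim is easy to make concrete. This isn't from the article, just a toy back-of-envelope sketch in Python with made-up layer sizes and spike rates, showing why an event-driven layer does far less arithmetic than a dense one:

```python
import numpy as np

np.random.seed(0)

# Toy comparison: dense MAC count vs. event-driven accumulate count for one
# fully connected layer. Layer sizes and spike rate are made-up numbers.
n_in, n_out = 256, 128
weights = np.random.randn(n_in, n_out)

# Conventional NN: every input multiplies every weight.
dense_macs = n_in * n_out

# SNN-style layer: inputs are binary spikes, so only active inputs trigger
# any work, and each "MAC" degrades to a plain accumulate (no multiply).
spike_rate = 0.05                           # 5% of inputs spike this timestep
spikes = np.random.rand(n_in) < spike_rate
event_accs = int(spikes.sum()) * n_out      # one add per active input, per output

# The layer update itself: just sum the weight rows of the spiking inputs.
membrane_update = weights[spikes].sum(axis=0)

print(f"dense MACs:        {dense_macs}")
print(f"event accumulates: {event_accs}")
print(f"reduction:         {dense_macs / max(event_accs, 1):.1f}x")
```

The sparser the input spikes, the fewer accumulates, which is presumably where the millijoule-to-microjoule-per-inference figures come from.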

Neuromorphic TinyML Use Cases

Microcontrollers with neuromorphic cores can excel in use cases throughout the industry (Fig. 3), thanks to distinct characteristics such as on-edge learning:

  • In anomaly-detection applications for existing industrial equipment, using the cloud to train a model is inefficient. Adding an endpoint AI device on the motor and training on the edge would allow for ease of scalability, as equipment aging tends to differ from machine to machine even if they’re the same model.
  • In robotics, as time passes, the joints of robotic arms tend to wear down, becoming untuned and no longer operating as needed. Re-tuning the controller on the edge without human intervention mitigates the need to call in a professional, reducing downtime and saving time and money.
  • In face-recognition applications, a user would have to add their face to the dataset and retrain the model on the cloud. With a few snaps of a person’s face, the neuromorphic device can identify the end-user via on-edge learning. Thus, users’ data can be secured on the device, and there’s a more seamless experience. This can be employed in cars, where different users have different preferences on seat position, climate control, etc.
  • In keyword-spotting applications, extra words can be added to your device to recognize on the edge. It can be used in biometric applications, where a person would add a “secret word” that they would want to keep secure on the device.
The balance of ultra-low power and enhanced performance makes neuromorphic endpoint devices suitable for prolonged battery-powered applications, executing algorithms not possible on other low-power devices, which are computationally constrained (Fig. 4). They can also stand in for higher-end devices that offer similar processing power but are too power-hungry. Use cases include:

  • Smartwatches that monitor and process the data at the endpoint, sending only relevant information to the cloud.
  • Smart camera sensors for people detection that execute a logical command: for instance, automatically opening a door when a person approaches, where current technology relies on proximity sensors.
  • Areas with no connectivity or charging capabilities, such as forests for smart animal tracking, or undersea pipes monitored for potential cracks using real-time vibration, vision, and sound data.
  • Infrastructure monitoring, where a neuromorphic MCU continuously monitors movements, vibrations, and structural changes in bridges (via images) to identify potential failures.
On this front, Renesas has acknowledged the vast potential of neuromorphic devices and SNNs. The company licensed a neuromorphic core from Brainchip [3, 4], the world's first commercial producer of neuromorphic IP.
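
To picture the anomaly-detection bullet above: here's a minimal sketch (mine, not Renesas code) of what "training on the edge" can amount to. The device learns its own machine's normal vibration level with a running mean and variance, then flags outliers, with no cloud round trip:

```python
import math
import random

class EdgeAnomalyDetector:
    """Streaming anomaly detector: learns this machine's own 'normal'
    vibration level on-device via Welford's running mean/variance, then
    flags readings more than k standard deviations from that baseline."""

    def __init__(self, k=4.0, warmup=100):
        self.k, self.warmup = k, warmup
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def _update(self, x):
        # Welford's online update: O(1) memory, no stored history.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x):
        if self.n < self.warmup:            # still learning the baseline
            self._update(x)
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        anomalous = abs(x - self.mean) > self.k * std
        if not anomalous:                   # keep adapting to slow drift/aging
            self._update(x)
        return anomalous

# Each unit learns its own baseline on the edge: no cloud round trip.
random.seed(0)
det = EdgeAnomalyDetector()
readings = [random.gauss(1.0, 0.05) for _ in range(500)] + [3.0]
print([i for i, r in enumerate(readings) if det.is_anomaly(r)])  # -> [500]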
 
  • Like
  • Fire
  • Love
Reactions: 84 users
Another company connection and the price gets pushed down!

How many times are they going to give me a chance to top up? 😂

Gotta laugh, gonna be rich or a cheap alcoholic.
1697054415579.gif
 
  • Haha
  • Like
Reactions: 2 users

Getupthere

Regular
  • Like
Reactions: 5 users

Frangipani

Regular
Mmmh, nice article about event-based cameras and their potential use cases, but what to think of those last two paragraphs? 🤔


Human Vision Inspires a New Generation of Cameras—And More

October 11, 2023, by Pat Brans
Thanks to a few lessons in biology, researchers have developed new sensor technology that opens up a world of new opportunities—including high-speed cameras that operate at low data rates.

In the broadest sense, the term neuromorphic applies to any computing system that borrows engineering concepts from biology. One set of notions that is particularly interesting for the development of electronic sensors is the spiking nature of neurons. Rather than fire right away, neurons build potential each time they receive a certain stimulus, firing only when a threshold is passed. The neurons are also leaky, losing membrane potential, which produces a filtering effect: If nothing new happens, the level goes down over time. “These behaviors can be emulated by electronics,” said Ilja Ocket, program manager for Neuromorphic Computing at imec. “And this is the basis for a new generation of sensors.”
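
As an aside, that description maps almost line for line onto code. A minimal discrete-time leaky integrate-and-fire sketch of my own, with purely illustrative constants:

```python
# Minimal discrete-time leaky integrate-and-fire neuron: "build potential,
# leak it away, fire only on threshold", as described above. All constants
# are illustrative, not taken from any particular chip.
def lif(input_current, leak=0.9, threshold=1.0):
    v = 0.0                      # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i         # leak a little, then integrate the stimulus
        if v >= threshold:       # fire only when the threshold is crossed
            spikes.append(1)
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes

# Steady weak drive accumulates into periodic spikes; during the quiet
# stretch in the middle, the potential simply leaks back toward zero.
print(lif([0.3] * 10 + [0.0] * 5 + [0.3] * 10))
```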

Ilja Ocket, program manager for Neuromorphic Computing at imec


The best illustration of how these ideas improve sensors is the event-based camera, also called the retinomorphic camera. Rather than accumulate photons in capacitive buckets and propagate them as images to a back-end system, these cameras treat each pixel autonomously. Each pixel can decide whether enough change has occurred in photon streams to convey that information downstream in the form of an event.
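
To make that pixel model concrete, here's a rough frame-based emulation (my own sketch, with an arbitrary threshold; a real sensor does this asynchronously in analog circuitry):

```python
import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-6):
    """Toy emulation of an event-based pixel array: each pixel compares the
    current log-intensity against the level at which it last fired, and
    emits a +1/-1 event where the change exceeds the threshold. (A real
    sensor does this asynchronously, per pixel, in analog circuitry.)"""
    ref = np.log(frames[0] + eps)          # per-pixel reference level
    events = []                            # (t, x, y, polarity) tuples
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        ys, xs = np.where(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(x), int(y), int(np.sign(diff[y, x]))))
            ref[y, x] = log_i[y, x]        # the pixel resets its own reference
    return events

# A static scene produces no events at all; only the changed pixel reports.
f0 = np.full((4, 4), 0.5)
f1 = f0.copy()
f1[2, 3] = 1.0
print(frames_to_events([f0, f0, f1]))      # -> [(2, 3, 2, 1)]
```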

“Imec gets involved when sensors do not produce classical arrays or tensors or matrices, but rather events,” Ilja Ocket said. “We figure out how to adapt the AI to absorb event-based data and perform the necessary processing. Our spiking neural networks do not work with regular data. Instead, they take input from a time-encoded stream.”

“One of the important benefits of these techniques is the reduced energy consumption—completely changing the game,” Ocket said. “We do a lot of work on AI and application development in areas where this benefit is the greatest— including robotics, smart sensors, wearables, AR/VR and automotive.”

One of the companies imec has been working with is Prophesee, a nine-year-old business based in Paris. Its 120 employees in France, China, Japan and the U.S. design vision sensors and develop software to overcome some of the challenges that plague traditional cameras.

Event-based vision sensors

“Our sensor is fundamentally different from a conventional image sensor,” said Luca Verre, CEO of Prophesee. “It produces events as changes occur in the scene, as opposed to a full frame at a fixed point in time. A regular camera captures images one after the other at fixed points in time, maybe 20 frames per second.”

Luca Verre, CEO of Prophesee

This method, which is as old as cinematography, works fine if you just want to display an image or make a movie. But it has three major shortcomings for more modern use cases, especially when AI is involved. The first is that, because entire frames are captured and propagated even when very little of the scene has changed, a lot of redundant data is sent for processing.

The second problem is that movement between frames is missed. Since snapshots are taken at regular intervals several times a second, anything that happens between data capture events doesn’t get picked up.

The third problem is that traditional cameras have a fixed exposure time, which means each pixel's acquisition can be compromised by the lighting conditions. If there are bright and dark areas in the same scene, you may end up with some pixels overexposed and others underexposed—often at the same time.

“Our approach, which is inspired by the human eye, is to have the acquisition driven by the scene, rather than having a sensor that acquires frames regardless of what’s changing,” Verre said. “Our pixels are independent and asynchronous, making for a very fast and efficient system. This suppresses data redundancy at the sensor level, and it captures movement, regardless of when it occurs—with microsecond precision.”

“While this is not time continuous, it is a very high time granularity for any natural phenomenon,” Verre said. “Most of the applications we target don’t need such high time precision. We don’t capture unnecessary data and we don’t miss information—two features that make a neuromorphic camera a high-speed camera, but at a low data rate.”

“Because the pixels are independent, we don’t have the problem of fixed exposure time,” Verre added. “Pixels that look at the dark part of the scene are independent from the ones looking at bright parts of the scene, so we have a very wide dynamic range.”

Because less redundant data is transmitted to AI systems, less processing is needed and less power is consumed. It becomes much easier to implement edge AI, putting inference closer to the sensor.

The IMX 636 event-based camera module, developed with Sony, is a fourth-generation product. Last year, Prophesee released the EVK4 evaluation kit for the IMX 636, aimed at industrial vision with its rugged housing, but it will work for all applications. (Source: Prophesee)

Audio sensors and beyond neuromorphic

“Automotive is an important market for companies like Prophesee, but it’s a long play,” Ocket said. “If you want to develop a product for autonomous cars, you’ll need to think seven to 10 years ahead. And you’ll need the patience and deep pockets to sustain your company until the market really takes off.”

In the meantime, event-based cameras are meeting the needs of several other markets. These include industrial use cases that require ultra-high-speed counting, particle size monitoring and vibration monitoring for predictive maintenance. Other applications include eye tracking, visual odometry and gesture detection for AR and VR. And in China, there is a growing market for small cameras in toy animals. The cameras need to operate at low power—and the most important thing for them to detect is movement. Neuromorphic cameras meet this need, operating on very little power, and fitting nicely into toys.

Neuromorphic principles can also be applied to audio sensors. Like the retina, the cochlea does not sample spectrograms at fixed intervals. It just conveys changes in sensory input. So far, there are not many examples of neuromorphic audio sensors, but that’s likely to change soon since audio-based AI is now in high demand. Neuromorphic principles can also be applied to sensors with no biological counterpart, like radar or LiDAR.
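
The same send-on-change idea is easy to sketch for audio. This is a toy level-crossing sampler of my own, not anyone's product; a silicon cochlea would do something like this per frequency band, in analog:

```python
import math

def level_crossing_sample(signal, delta=0.1):
    """Send-on-delta / level-crossing sampling: emit a sample only when the
    signal has moved by at least `delta` since the last emitted value.
    A crude, frame-free stand-in for what a 'silicon cochlea' might do
    per frequency band. Purely illustrative."""
    last = signal[0]
    events = [(0, last)]                    # (sample index, value)
    for t, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= delta:
            events.append((t, x))
            last = x
    return events

# A 1 kHz burst in a 16 kHz stream: the silent stretches produce no events.
sig = ([0.0] * 50
       + [0.5 * math.sin(2 * math.pi * 1000 * t / 16000) for t in range(100)]
       + [0.0] * 50)
ev = level_crossing_sample(sig)
print(f"{len(ev)} events for {len(sig)} samples")
```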

But researchers are increasingly convinced that making a silicon version of the biological structures is not the best idea. The biggest impact may lie beyond neuromorphic, making the best use of both biology and electronics.

“If you strip it down to its computational behavior, you could improve on biology,” Ocket said. “Instead of emulating spiking neurons with thresholds, you can just apply time-based computational behavior on very simple timing circuits—technology from the 1950s and 1960s. If you hook them together and find a way to train them, you can go much lower in power consumption than if you simply emulate spiking neurons in electronic form.”
 
  • Like
  • Love
  • Fire
Reactions: 30 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 16 users

IloveLamp

Top 20

BOS Semiconductors CEO Jaehong Park said, “Through our collaboration with Tenstorrent, we expect to develop low-power and high-performance automotive SoC semiconductors that improve processing speed, accuracy, and power efficiency. This will enable us to compete successfully in the automotive semiconductor market that is rapidly evolving around new technologies such as connected vehicles and autonomous driving.”
Screenshot_20231012_063748_LinkedIn.jpg

Screenshot_20231012_064235_LinkedIn.jpg


Hyundai Motor Group and Samsung Catalyst Fund are leading a financing round that will raise more than $100 million for Tenstorrent, the Canadian AI chip startup said. The company added that it has verbal commitments that would increase the investment round to more than $130 million by the time it closes next month—bringing the total raised to date to more than $350 million.

The money will be used to fund team growth, as well as develop AI chiplets and its machine-learning software roadmap.

Founded in 2016, Tenstorrent sells AI processors and licenses AI and RISC-V IP to customers that want to own and customize their silicon. The company has become synonymous with AI chips, given CEO Jim Keller’s illustrious career designing central processing units (CPUs) and systems with the likes of AMD, Apple, Tesla and Intel.
The group expects to leverage Tenstorrent’s technologies and experience to jointly develop optimized semiconductors while strengthening its own technological capabilities. These will be applied to CPUs and neural processing units (NPUs) for future vehicles and mobility solutions. Hyundai’s 2023 CEO Investor Day in June provided a taste of its ambitions in these areas. Hyundai Motor Co. and Kia Corp. will invest a total of $50 million ($30 million and $20 million, respectively).

Heung-soo Kim, executive VP and head of the global strategy office at Hyundai Motor Group, said, “Tenstorrent’s high growth potential and high-performance AI semiconductors will help the group secure competitive technologies for future mobilities.

With this investment, the group expects to develop optimized but differentiated semiconductor technology that will aid future mobilities and strengthen internal capabilities in AI technology development.”
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users

FJ-215

Regular

BOS Semiconductors CEO Jaehong Park said, “Through our collaboration with Tenstorrent, we expect to develop low-power and high-performance automotive SoC semiconductors that improve processing speed, accuracy, and power efficiency. This will enable us to compete successfully in the automotive semiconductor market that is rapidly evolving around new technologies such as connected vehicles and autonomous driving.” View attachment 46854

View attachment 46855

I did enjoy the last podcast with Keith Witek from Tenstorrent.

Hmmm.......that was 2 months ago now.

Time for the next edition?
 
  • Like
  • Love
Reactions: 9 users

IloveLamp

Top 20

Another area our teams worked on was the ability to make Meta Quest 3 more capable of understanding the environment around a user, so that virtual experiences can interact with physical spaces. We packed the Snapdragon XR2 Gen 2 with cutting-edge on-device AI that is eight times more performant* and ultra-low-latency passthrough for smoother and more natural interactions, all in a single chip architecture. This enabled Meta to build a slimmer and more comfortable headset that doesn’t require an external battery pack and enables the freedom for users to interact with their virtual and physical space seamlessly.

Screenshot_20231012_065103_LinkedIn.jpg
 
  • Like
  • Thinking
  • Fire
Reactions: 9 users

IloveLamp

Top 20
"Adapts automatically and instantly to your needs and riding style"

Screenshot_20231012_070338_LinkedIn.jpg
 
  • Like
Reactions: 8 users