BRN Discussion Ongoing

7für7

Top 20
Don't worry, today is the day! It is down 3.8% already, more is to come for sure.

-10% in germanistan… what’s up there?
 
  • Thinking
  • Like
Reactions: 2 users

Guzzi62

Regular
-10% in germanistan… what’s up there?
A company listed on several exchanges has to have roughly the same value on all of them.

If not, you could in theory buy shares on the cheap exchange and sell them on the expensive one.
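That cross-listing check can be sketched in a few lines; the prices, FX rate and fee figure below are hypothetical:

```python
# Hypothetical dual-listing arbitrage check: compare one stock's price
# on two exchanges after converting to a common currency.
def arbitrage_spread(price_a, price_b, fx_b_to_a=1.0, fees=0.0):
    """Return profit per share (in exchange A's currency) from buying on
    the cheaper venue and selling on the dearer one, net of fees."""
    pb = price_b * fx_b_to_a          # convert venue B's quote
    gross = abs(price_a - pb)         # price gap per share
    return gross - fees               # arbitrage only pays if this > 0

# Example: A$0.26 on one venue, EUR 0.15 on another, 1 EUR = 1.65 AUD
spread = arbitrage_spread(0.26, 0.15, fx_b_to_a=1.65, fees=0.005)
print(round(spread, 4))
```

In practice fees, FX spreads and settlement delays usually eat the gap, which is why the dual-listed prices stay close in the first place.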
 
  • Like
Reactions: 1 users

Tothemoon24

Top 20

“This is the future”

ITL ventures into neuromorphic computing​

By Megan Saxton
U.S. ARMY ENGINEER RESEARCH AND DEVELOPMENT CENTER
Published Sept. 23, 2024
Updated: Sept. 23, 2024



In recent years, edge computing has revolutionized the technology landscape for users situated in remote areas or away from primary devices. By bringing computation and data storage closer to the location where it is needed, response times, reliability and performance are greatly improved, latency and bandwidth costs are reduced and privacy and security are enhanced. The U.S. Army Engineer Research and Development Center (ERDC) Information Technology Laboratory (ITL) Edge Computing Lab has long been on the cutting-edge of this field and is now exploring something new: neuromorphic computing.
“Neuromorphic computing is a process in which computers are designed and engineered to mirror the structure and function of the human brain,” said Dr. Raju Namburu, ITL chief technology officer and a senior scientific technical manager. “Using artificial neurons and synapses, neuromorphic computers simulate the way our brains process information, allowing them to solve problems, recognize patterns and make decisions more quickly and efficiently than the traditional high-performance computing systems we use today.”
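The artificial-neuron idea Namburu describes can be sketched as a leaky integrate-and-fire (LIF) model, the textbook unit behind spiking neuromorphic hardware. A minimal illustration only: the `leak` and `threshold` values are arbitrary, and real chips implement this event-driven in silicon rather than in a Python loop.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: integrate inputs into a
# membrane potential, fire a spike when it crosses threshold, then reset.
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train for a stream of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v * leak + i              # leaky integration
        if v >= threshold:
            spikes.append(1)
            v = 0.0                   # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.4, 0.4, 0.4, 0.0, 0.9, 0.3]))
```

Because neurons only communicate when they spike, activity (and therefore power draw) is sparse, which is the efficiency argument the article makes.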
The driving force behind ITL’s research into this emerging technology is the U.S. military’s need to know more, sooner, to allow rapid, decisive action on the multi-domain battlefield. The battlespace has become characterized by highly distributed processing; heterogeneous and mobile assets with limited battery life; communications-dominated but restricted network capacity; and time-critical needs in a rapidly changing hostile environment. Distributed, low-power edge processing is one of the essential technologies for maintaining overmatch in various emerging operational and contested environments, as is the need to take advantage of machine learning (ML) and generative artificial intelligence (AI).
“Overall, neuromorphic chips offer the DoD community a number of potential benefits including improved performance, resilience, cost-efficiency, security, privacy, power-efficiency, signal processing, ML capabilities and more,” said Dr. Ruth Cheng, a computer scientist in ITL’s Supercomputing Research Center. “By keeping an eye on developments in this technology, the DoD community can ensure it remains at the forefront of military and defense innovation.”
“Computations performed at the molecular, atomic, and neuro scales mimicking the human brain are showing tremendous viability,” added Namburu. “We just started this work on next generation advanced computing, which is significantly different from traditional computing systems historically used at ERDC. Neuromorphic computing represents a paradigm shift in computing, promising significant advancements in ML, generative AI, scientific applications and sensor processing compared to traditional computing. Moreover, neuromorphic chips emulate the brain's plasticity, enabling learning and adaptation over time, unlike traditional systems.”
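The plasticity Namburu mentions can be illustrated with a simple Hebbian weight update ("neurons that fire together wire together"). This is a generic textbook rule, not any vendor's on-chip learning algorithm, and the `lr`/`decay` constants are arbitrary:

```python
# Hebbian plasticity sketch: strengthen a synapse when pre- and
# post-synaptic spikes coincide, otherwise let it decay toward zero.
def hebbian_update(w, pre_spikes, post_spikes, lr=0.1, decay=0.01):
    """Return the synaptic weight after one pass over two spike trains."""
    for pre, post in zip(pre_spikes, post_spikes):
        if pre and post:
            w += lr * (1.0 - w)       # potentiate, saturating at 1
        else:
            w -= decay * w            # passive decay
    return w

w = hebbian_update(0.5, [1, 1, 0, 1], [1, 0, 0, 1])
print(0.5 < w < 1.0)  # correlated activity strengthened the synapse
```

Rules of this family are what let a neuromorphic system adapt on-device over time, rather than being retrained offline like a conventional network.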
Ongoing edge computing efforts include agnostic graphics processing unit (GPU) ray tracing development, benchmarking deep neural networks, sensor-data management, ML for underwater invasive plants, railcar inspection, photogrammetry, reservoir frameworks, decentralized edge computing, bi-directional digital twins and algorithms for anomaly detection. ITL is also exploring emerging AI chips for edge computing, including novel algorithms and sustainable software.
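One of the workloads listed, anomaly detection, is cheap enough to sketch end to end: a rolling z-score flags samples that deviate sharply from the recent window, which is well within microcontroller-class budgets. Illustrative only; the window size and threshold are arbitrary choices.

```python
# Rolling z-score anomaly detector over a sensor stream.
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(stream, window=5, z_thresh=3.0):
    """Flag samples deviating from the recent window by > z_thresh sigmas."""
    recent, flags = deque(maxlen=window), []
    for x in stream:
        if len(recent) == window:
            mu, sigma = mean(recent), pstdev(recent)
            flags.append(sigma > 0 and abs(x - mu) / sigma > z_thresh)
        else:
            flags.append(False)       # not enough history yet
        recent.append(x)
    return flags

data = [10, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0, 10.0]
print(detect_anomalies(data))  # only the 25.0 spike is flagged
```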
“Overall, edge computing is helping to enable new use cases and provide better experiences to the users by making applications faster, more reliable and more secure,” said Cheng. “Neuromorphic chips are well-suited for edge computing, which is becoming increasingly important in military and defense applications, and ITL is already aiding in this process that will touch everything from lowering the cost of deployments by eliminating the need for expensive, high-powered servers and data centers to support of mobile and autonomous systems. This is the future.”
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 27 users

Esq.111

Fascinatingly Intuitive.

ITL ventures into neuromorphic computing (article quoted in full above)
Evening Tothemoon24 ,

Good article.

Christ the yanks are slow to get their shite together, now they are talking about bi-directional digital twins.... kinky buggers.


Regards,
Esq.
 
  • Haha
  • Like
Reactions: 9 users
Evening Tothemoon24 ,

Good article.

Christ the yanks are slow to get their shite together, now they are talking about bi-directional digital twins.... kinky buggers.


Regards,
Esq.
I met Bi-twins once. Only problem for me was they were a pigeon pair. Bolted real quick 😆 🤣 😂

SC
 
  • Haha
Reactions: 7 users

manny100

Regular
I really don't know.

We do know that Sony and Prophesee went with SynSense for the lo-fi version (380×380 pixels) - the proverbial low-hanging fruit.

From your June 2022 link:
“We’ve successfully ported the data from Prophesee’s neuromorphic-based camera sensor to process inference on Akida with impressive performance,” said Anil Mankar, Co-Founder and CDO of BrainChip. “This combination of intelligent vision sensors with Akida’s ability to process data with unparalleled efficiency, precision and economy of energy at the point of acquisition truly advances state-of-the-art AI enablement and offers manufacturers a ready-to-implement solution.”

“By combining our Metavision solution with Akida-based IP, we are better able to deliver a complete high-performance and ultra-low power solution to OEMs looking to leverage edge-based visual technologies as part of their product offerings,” said Luca Verre, CEO and co-founder of Prophesee.

Akida IP includes more than the tape-out specs. It would also include, e.g., the copyright-protected software.

To the uninitiated, Anil's comments imply that the Prophesee data was applied to the Akida 1 SoC, whereas Luca's comments can be interpreted as encompassing the incorporation of Akida software, e.g., TeNNs simulation software, into the Prophesee Metavision software.

The patent application for TeNNs was filed 3 days after the article was published, so clearly BRN had been testing it beforehand as software. I think it is probable that TeNNs was used in tests on the Prophesee data, and that combining the Akida-based software IP with Metavision would have been the only available means of testing the Prophesee data against Akida 2, as we still have not seen the SoC. So it is not outside the bounds of possibility that TeNNs/Akida 2 software has been combined with Metavision.
Sean, in an interview/presentation around a year ago, said that if you buy a camera powered by Prophesee you want to know it's got Akida in it.
At the time I thought it was a hint. BRN and Prophesee would have been working together for sure.
 
  • Like
  • Thinking
  • Wow
Reactions: 12 users

Tothemoon24

Top 20
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 16 users

Rskiff

Regular
Great September newsletter just received. Plenty going on.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

TopCat

Regular
  • Like
  • Love
  • Fire
Reactions: 60 users

IloveLamp

Top 20
  • Like
  • Love
Reactions: 25 users

Xray1

Regular
Great September newsletter just received. Plenty going on.
This Monday awaits: 30th September 2024, the last day of the month and the end of the quarterly period. I would like to see some sort of actual ASX announcement, either Price Sensitive or at the very least a Non Price Sensitive Co update, actually lodged with the ASX. I'm sick of Co newsletters and podcasts, which imo only promote the fluffy PR stuff the IR department wants us to hear, so as to make us believe that things are supposedly progressing, even though there have been no substantial revenue streams nor any new IP agreements for some time now, especially during our current CEO's tenure. IMO, there are also way too many unsubstantiated, dot-joining posts floating around on this and other forums. However, imo this is understandable given the Co's seeming "Cone of Silence" stance and its reluctance to formalise information for s/holder dissemination via formal ASX announcement, whether Price Sensitive or Non Price Sensitive, as part of the Co's ongoing obligation to provide full and frank disclosure under its fiduciary duty to all s/holders.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

The Road to embedded world North America: Edge Impulse Empowers Developers to Innovate Edge AI​

By Chad Cox

Production Editor
Embedded Computing Design
September 27, 2024

Sponsored Blog



The Road to embedded world North America: Edge Impulse Empowers Developers to Innovate Edge AI
Image Credit: Edge Impulse

Join Edge Impulse at Embedded World North America to witness how it is transforming edge AI projects by empowering developers and ML engineers to build datasets (including with synthetic data), train models, and optimize libraries to run directly on device, from the smallest microcontrollers to gateways, with the latest neural accelerators (and everything in between).​

Visit booth 1947, where Edge Impulse partner Advantech will exhibit the iCAM540, an NVIDIA Jetson Orin-based camera featuring Edge Impulse software, along with a basic machine learning model for defect detection on a small conveyor belt, demonstrating how simple it is to train a model and simplify hardware integration at the edge.

Edge Impulse: Simplifying Edge AI Development​

Take any data, develop any model, and deploy to any target in three simple steps:

  • Build real-world datasets at scale (or bring your own)
  • Develop custom ML solutions fast
  • Deploy intelligent edge products
Train your models to run on any device, including edge gateways and industrial cameras running NVIDIA GPU hardware, as well as tiny, efficient microcontrollers.

Edge Impulse’s Edge Optimized Neural (EON) Compiler delivers models that run more consistently on device while decreasing RAM and flash usage. It supports a large variety of neural networks trained in TensorFlow or PyTorch, and a large selection of classical ML models trained in scikit-learn, LightGBM, or XGBoost. This ensures rapid execution of neural network models for real-time or near-real-time scenarios, all while maintaining accuracy and:

  • Up to 70% less RAM usage
  • Up to 40% less flash usage
  • Faster inference times
  • Reduced overall power consumption
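A RAM reduction of that order is consistent with plain int8 weight quantization: storing weights as int8 instead of float32 is a 4x cut on its own. The EON Compiler's internals are not described here, so the following is only a from-scratch sketch of symmetric int8 quantization, not Edge Impulse's actual pipeline:

```python
# Symmetric int8 quantization: map float weights onto [-127, 127] with a
# single shared scale, then recover approximate floats by multiplying back.
def quantize_int8(weights):
    """Return (int8 values, scale) for a list of float weights."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.75, -1.27, 0.002, 0.5]
q, s = quantize_int8(w)
restored = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, restored))
print(q, err < s)  # worst-case rounding error stays under one step
```

Production toolchains add per-channel scales, zero points and calibration data, but the storage arithmetic is the same.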
The Edge Impulse EON Tuner platform will enable you to select the edge-optimized NVIDIA TAO model for your use case and deploy it on any efficient, low-cost edge device, including MCUs, CPUs, and accelerators.

Edge Impulse operates within an innovative AI ecosystem featuring partners like NVIDIA, Arm, Nordic Semiconductor, STMicroelectronics, Advantech, BrainChip, Alif, and Synapse/Capgemini, among others, helping accelerate development timelines from years to weeks.

Edge Impulse’s platform enables quick deployment of edge AI in manufacturing (predictive maintenance systems and instant anomaly detection) and healthcare, where its solutions are expanding onto wearable devices that leverage Edge Impulse-developed models for ongoing, real-time health insights.



Please visit Edge Impulse at ew24 NA in Booth #1947, or visit edgeimpulse.com for more information.

Click here to redeem your free ticket to the embedded world Expo Floor. Use voucher code SEBO24.
 
  • Like
  • Love
Reactions: 29 users

Cartagena

Regular
This Monday awaits: 30th September 2024, the last day of the month and the end of the quarterly period. I would like to see some sort of actual ASX announcement, either Price Sensitive or at the very least a Non Price Sensitive Co update, actually lodged with the ASX. I'm sick of Co newsletters and podcasts, which imo only promote the fluffy PR stuff the IR department wants us to hear, so as to make us believe that things are supposedly progressing, even though there have been no substantial revenue streams nor any new IP agreements for some time now, especially during our current CEO's tenure. IMO, there are also way too many unsubstantiated, dot-joining posts floating around on this and other forums. However, imo this is understandable given the Co's seeming "Cone of Silence" stance and its reluctance to formalise information for s/holder dissemination via formal ASX announcement, whether Price Sensitive or Non Price Sensitive, as part of the Co's ongoing obligation to provide full and frank disclosure under its fiduciary duty to all s/holders.
Agree completely. Time to start making some proper progress announcements on engagements, even as non price sensitive ones, as that is their duty to shareholders.
 
  • Like
  • Fire
Reactions: 16 users

 
  • Like
  • Wow
  • Thinking
Reactions: 5 users

Cartagena

Regular
Looks to be some very interesting camera 📸 technology coming up at Vision 2024


LUCID to Unveil Latest GigE Vision Cameras and Advanced Sensing Technologies at VISION 2024​

Richmond, BC, Canada – August 22, 2024 – LUCID Vision Labs, Inc., a leading designer and manufacturer of industrial cameras, will showcase a range of new GigE Vision cameras and advanced sensing technologies at VISION 2024, which takes place from October 8–10, 2024, in Stuttgart, Germany.
LUCID is set to introduce the first member of its intelligent vision camera family, the Triton® Smart camera featuring Sony’s IMX501 intelligent vision sensor with AI processing. The Triton Smart is an easy-to-use, cost-effective intelligent vision camera capable of outputting inference results alongside regular 12.3 MP images for every frame. Its on-sensor AI processing reduces data bandwidth, alleviates processing load on the host PC, and minimizes latency.
Expanding the Triton2 – 2.5GigE camera family, LUCID will showcase two new models for advanced sensing applications. The Triton2 EVS event-based 2.5GigE camera, powered by Sony’s IMX636 or IMX637 vision sensor and our Arena® SDK as well as PROPHESEE’s Metavision® SDK, is designed to deliver enhanced performance, lower power consumption, and greater flexibility for machine vision applications. It is ideal for motion analysis, vibration monitoring, object tracking, optical flow, autonomous driving, and high-speed detection.
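Event-based sensors such as the IMX636 emit sparse per-pixel events (position, timestamp, polarity) rather than whole frames; a common first processing step is accumulating the events from a time slice into a 2D histogram. The sketch below uses synthetic events and is independent of PROPHESEE's actual Metavision API:

```python
# Accumulate (x, y, t, polarity) events within a time window into a 2D
# frame: +1 per positive (brightening) event, -1 per negative one.
def accumulate_events(events, width, height, t_start, t_end):
    """Sum event polarities per pixel for timestamps in [t_start, t_end)."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y][x] += 1 if polarity else -1
    return frame

events = [(0, 0, 5, 1), (1, 0, 8, 1), (0, 0, 9, 0), (1, 1, 20, 1)]
frame = accumulate_events(events, width=2, height=2, t_start=0, t_end=10)
print(frame)  # the t=20 event falls outside the slice
```

Because only changing pixels generate events, this representation is what gives event cameras their low-bandwidth, high-speed advantage for motion analysis and tracking.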
Additionally, the new Triton2 4K line scan camera, equipped with Gpixel’s GL3504 image sensor, will also be unveiled. Featuring 4096 (H) x 2 (V) at 3.5 μm pixels, this camera is ideal for high-speed, high-resolution imaging.
Atlas10 10GigE On-Semi 45mp
The Atlas10 10GigE camera family is welcoming a new high-resolution model featuring the 45-megapixel (8192 x 5460) onsemi XGS45000 CMOS global shutter image sensor, capable of running at 16 fps. This RDMA-enabled camera offers a unique combination of high resolution, high frame rate, and superior image quality, making it well-suited for applications such as flat panel inspection, aerial surveillance, mapping, and electronics inspection.
Helios2 3D ToF Camera with Narrow FoV
LUCID is also expanding its Helios®2 3D Time-of-Flight camera family with the introduction of the Helios2 Narrow Field-of-View (FoV) variant. This model integrates Sony’s DepthSense™ IMX556PLR back-illuminated ToF image sensor. It produces a tighter point cloud, and the narrower illumination area reduces the likelihood of multipath error, making it ideal for applications requiring precise 3D depth measurement in confined spaces.
As part of Industrial VISION Days 2024, organized by VDMA Machine Vision, LUCID’s Director of Product Management, Alexis Teissie, will present “The Benefits of RDMA for 10GigE Cameras and Beyond” on Wednesday, October 9th at 2:40 pm.
Stay tuned for additional product highlights to be unveiled on the show floor. Join LUCID at VISION 2024 from October 8–10, 2024, in Stuttgart, Germany at Booth 10E40.

Brainchip's marketing team has to capitalise on unveiling their much-touted tech too, and it is about time we start to see it. I see many other companies like Prophesee and LUCID getting in front of the world stage, especially at major events like this. Where is Brainchip's marketing?
 
  • Like
  • Thinking
  • Fire
Reactions: 5 users

Justchilln

Regular
  • Thinking
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 47 users
Top Bottom