BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Rebellions and Samsung's Atom chip to be released this year.




Here is what has been said previously about Atom.

 
  • Thinking
  • Like
  • Love
Reactions: 12 users
Hi Fact Finder,

Speaking of the European Space Agency! This ties in with the following posts:
Seeing as Adam Taylor (Adiuvo Engineering) acts as a consultant and is developing the flight electronics and FPGA solutions, no doubt he has come across, or will come across, EDGX too.

Adiuvo are also developing embedded vision and ADAS vision systems, and given Adam Taylor sang our praises in his blog, would it be too much to expect inclusion in these systems also?🤞

Hi Bravo
When I break it all down you are much more than just a pretty foot.

Of course, your taking all the guesswork out of my wild theories that are designed to mislead must be very annoying to some.

Great investigative skills.

Much like AKIDA, you know exactly which bits to ignore and which bits to store away, and where they fit into the unseen patterns that can be used to complete sections of the giant spider web of known and unknown BrainChip partnerships.

My opinion only DYOR
Fact Finder
 
  • Like
  • Love
  • Fire
Reactions: 45 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers,

Shaping up to be the lowest day's volume in well over a year.

The lowest day's volume transacted in the last 12 months was on 12 Jan 2024 with 2,459,449 units; a very limp-wristed effort.

Regards.
Esq.
 
  • Like
  • Haha
  • Fire
Reactions: 24 users

Wags

Regular
Esqi, pretty quiet at the registers today, probably half the trades under 500 shares ($100). WTF.

More popcorn anyone?, choctops?, maltesers??
 
  • Haha
  • Love
  • Fire
Reactions: 8 users

toasty

Regular
Pressure building..........
 
  • Like
  • Haha
Reactions: 4 users

When considering the above posts regarding the Renesas/Hailo product and the Rebellions/Samsung offerings, consider the following evidence of the AKIDA technology advantage. The maths clearly adds up in favour of Science Fiction:

EXHIBIT A:​

Low Power & Low Latency Cloud Cover Detection in Small Satellites Using On-board Neuromorphic Processors​

Publisher: IEEE

Chetan Kadway (Tata Consultancy Services), Sounak Dey (TCS), Arijit Mukherjee (TCS), Arpan Pal (TCS), Gilles Bézard (BrainChip, Paris)

Abstract:
Emergence of small satellites for earth observation missions has opened up new horizons for space research but at the same time posed newer challenges of limited power and compute resource arising out of the size & weight constraints imposed by these satellites. The currently evolving neuromorphic computing paradigm shows promise in terms of energy efficiency and may possibly be exploited here. In this paper, we try to prove the applicability of neuromorphic computing for on-board data processing in satellites by creating a 2-stage hierarchical cloud cover detection application for multi-spectral earth observation images. We design and train a CNN and convert it into SNN using the CNN2SNN conversion toolkit of Brainchip Akida neuromorphic platform. We achieve 95.46% accuracy while power consumption and latency are at least 35x and 3.4x more efficient respectively in stage-1 (and 230x & 7x in stage-2) compared to the equivalent CNN running on Jetson TX2
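For readers wanting to see what the CNN2SNN step in that abstract looks like in code, here is a minimal sketch using BrainChip's MetaTF tooling; the architecture, bit-widths and layer choices are illustrative placeholders rather than the TCS cloud-cover model, and exact layer support and quantization options depend on the MetaTF version installed.

```python
# Minimal sketch of the CNN -> Akida SNN flow described in the abstract,
# using BrainChip's MetaTF tooling (cnn2snn). Architecture and bit-widths
# below are illustrative placeholders only.
import tensorflow as tf
from cnn2snn import quantize, convert

# A small Keras CNN stand-in for the stage-1 cloud / no-cloud classifier.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),                 # cloud / no-cloud logits
])

# Quantize weights/activations to the low bit-widths Akida expects,
# then convert the quantized Keras model into an Akida (event-based) model.
quantized = quantize(cnn, weight_quantization=4, activ_quantization=4)
akida_model = convert(quantized)
akida_model.summary()
```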

EXHIBIT B:​

Robust Classification of Contraband Substances using Longwave Hyperspectral Imaging and Full Precision and Neuromorphic Convolutional Neural Networks​

Kyung Chae Park (a), Jeremy Forest (a), Sudeepto Chakraborty (a), James T. Daly (b), Suhas Chelian (a), Srini Vasan (a)
(a) Quantum Ventura, 1 S. Market St., Suite 1715, San Jose, CA 95125, USA; (b) Bodkin Design and Engineering, 77 Oak St., Suite 201, Newton Upper Falls, MA 02464, USA
Available online 26 November 2022, Version of Record 26 November 2022.

Table 6. Comparison of SWaP-C profiles for CPU/GPU platforms and neuromorphic versions

CPU/GPU platform: NVIDIA A100
  • Size: 26.7 long x 11.2 tall x 3.5 wide cm (10.5 x 4.4 x 1.4 in)
  • Power: 250 W
  • Cost: $30,000 USD (est.)

Neuromorphic: AKIDA USB key form factor
  • Size: 5.1 long x 1.3 tall x 0.6 wide cm (2 x 0.5 x 0.3 in)
  • Power: 1 W
  • Cost: $50.00 USD (est.)

My opinion only DYOR
Fact Finder
 
  • Like
  • Love
  • Fire
Reactions: 72 users

Diogenese

Top 20

"Akida USB key form factor" - is that the bottle opener?
 
  • Haha
  • Like
Reactions: 27 users

jtardif999

Regular
  • Like
Reactions: 9 users
Peter van der Made has a successful track record where cybersecurity is concerned. Very early in AKIDA technology development Peter van der Made emphasised that its capacity to recognise unseen repeating patterns made AKIDA ideal for detecting cybersecurity threats.

In furtherance of this, Peter van der Made flew to Greece and acquired the exclusive rights to cybersecurity software for spiking neural networks from Professor Iliadis at the Democritus University of Thrace.

Professor Iliadis caused a little bit of a stir shortly after by stating in an interview with a student magazine that AKIDA with his software was being used by NASA to navigate in space.

Quantum Ventura partnered with BrainChip to develop a cybersecurity system to protect essential infrastructure under an SBIR grant from the US Department of Energy.

A friend pointed out that the very nature of such a project is likely to require that its successful completion will never be revealed.

So the next best thing would be research that confirms the feasibility of this approach to cybersecurity.

The following researchers unrelated to Quantum Ventura and Brainchip believe so:


“Adaptive Cyber Defense: Leveraging Neuromorphic Computing for Advanced Threat Detection and Response

Aviral Srivastava, Viral Parmar, Samir Patel, Akshat Chaturvedi
2023 International Conference on Sustainable Computing and Smart Systems (ICSCSS), 1557-1562, 2023
As the complexity of the digital landscape evolves, so does the sophistication of cyber threats, necessitating advanced cybersecurity measures. Despite significant strides in threat detection and response using machine learning and deep learning techniques, these systems grapple with high false positive rates, limited adaptability to evolving threats, and computational inefficiency in real-time data processing. This study proposes to delve into the potential of Neuromorphic Computing (NC) to address these challenges. Inspired by the human brain’s principles, NC offers rapid, efficient information processing through Spiking Neural Networks (SNNs) and other brain-inspired architectures. The study hypothesizes that integrating NC into cyber defence could enhance threat detection, response times, and adaptability, thereby bolstering cybersecurity systems’ resilience. However, the implementation of NC in cybersecurity is fraught with challenges, including scalability, compatibility with existing infrastructures, and the creation of secure, robust neuromorphic systems. This study elucidates these challenges, proposes potential solutions, and highlights future research directions in this promising field. With focused research and development, NC could revolutionize cybersecurity, enhancing the defence mechanisms of the digital ecosystems against the relentless onslaught of cyber threats.

The study finds that the incorporation of NC into cybersecurity is not only feasible but also necessary in an increasingly digital world.



Given all of the above, it is hard not to believe that Quantum Ventura will convince the US Department of Energy that the use of AKIDA technology to ensure the cybersecurity of essential infrastructure is NOT ONLY FEASIBLE BUT ALSO NECESSARY.

My opinion only DYOR
Fact Finder
Whether Quantum Ventura will integrate Akida into a USB device for computers used by us mortals is what I would love to know, or isn't this on their radar currently?

 

Tothemoon24

Top 20
Apologies if posted;
Won't be long until it's TATA to the lagging share price:




Date: 1st - 3rd February, 2024
Venue: Booth #C72, Bharat Mandapam, Pragati Maidan, New Delhi


Tata Elxsi will be a proud exhibitor at the Bharat Mobility Global Expo 2024. Our exhibit will spotlight future mobility solutions and services in several key areas like SDVs, Connected Vehicle Platform, Electrification, Autonomous Driving, Customer Experience, Automotive Testing, AR/VR, and Smart Manufacturing.

The automotive industry is undergoing a revolutionary transformation, driven by advanced technology that is reshaping vehicles and the way we think about mobility. As we stand at this exciting juncture, it's clear that while challenges abound, the opportunities are even greater. At the forefront of this evolution is Bharat Mobility Expo, where we're thrilled to present our innovative solutions and services that are defining the future of transportation.

Our experienced team will be available to provide detailed insights into the cutting-edge technology we bring and discuss its potential impact on the future of mobility.

Our Capabilities
Software-Defined Vehicle Solutions | Electric Vehicle Solutions | ADAS & Autonomous Vehicles | Connected Vehicle Platform | Automotive Testing | Advanced Automotive Experiences - HMI and Styling | Future of Digital Manufacturing | Immersive Experiences with VxR

Software-Defined Vehicle Solutions

SDV enables a platform or a mechanism for OEMs to get continuous revenues. Tata Elxsi is involved in programs with OEMs and Tier-1s, contributing to how SDVs will be part of OEMs' roadmaps in the coming years, and we bring the right frameworks for a faster development and deployment ecosystem for in-vehicle and cloud enablement.

Electric Vehicle Solutions

The landscape of EV technology and product development is rapidly changing. Tata Elxsi supports accelerated Electric Vehicle adoption through a sustainable service framework of software, hardware, mechanical, and cloud for various subsystems like BMS, Motor controls, Fuel cells and much more.

ADAS & Autonomous Vehicles

An intelligent system inside the vehicle that assists drivers and protects vehicle occupants drastically reduces casualties. Tata Elxsi supports building a connected environment with integrated Autonomous Driving/ADAS systems powered by AI and advanced perception technologies that propel the automotive industry toward more futuristic ADAS solutions.

Connected Vehicle Platform

Tata Elxsi provides comprehensive connected vehicle solutions, encompassing platform enablement, service customization, and expert-run management for an elevated strategy. Tata Elxsi’s TETHER – connected vehicle platform is an industry-proven platform licensed to OEMs enabling their digitalization roadmap for future mobility.

Automotive Testing

Tata Elxsi offers a complete HILS validation facility for testing and validating various ICE/Electric/HEV systems at both system and component level. Ready-to-deploy test automation frameworks and plant models make this a preferred choice for customers to launch production-quality solutions in the market.

Advanced Automotive Experiences - HMI and Styling

Tata Elxsi will be sharing fresh insights and ideas that will help enhance futuristic automotive and end-user experiences. We bring expertise in developing vehicle concepts through to production vehicles. Our design team is skilled in addressing the complete process, from sketches and renderings to digital models and physical prototypes.

Future of Digital Manufacturing

Our experts will explain how TETHER, Tata Elxsi's hyper-customizable IoT platform, boosts production KPIs by enabling end-to-end visibility on the shop floor. We will also be featuring IRIS, our AI- and ML-based video analytics platform that empowers enterprises to ensure safety and quality in manufacturing.

Immersive Experiences with VxR

Tata Elxsi's digital experience artists will be showcasing live demos of AR, VR and Mixed Reality, bringing immersive and engaging experiences across product design and review, product visualization, sales, marketing & safety, and operations training. The power of the digital world is endless and beyond imagination, bringing new life to automotive life-cycle stages.
 
  • Like
  • Fire
  • Love
Reactions: 44 users

Diogenese

Top 20
I think we covered Hailo back in 2019; their tech is no comparison.

That's true, but it has not stopped them getting another toehold:

"The Hailo-8 AI accelerator and Renesas R-car controller are being used for a passively cooled main controller in production later this year.

The iMotion iDC High Domain Controller combines the Hailo-8 AI accelerator and R-Car V4H SoC and will be deployed in mass production in 2024 by a Chinese OEM. iMotion, based in China, works with OEMs such as Geely, Great Wall Motor, Chery, Dongfeng and Polestar
."

Looks like it runs at 3W.

https://hailo.ai/products/ai-accelerators/hailo-8-ai-accelerator/#hailo8-benchmarks

Hailo, Renesas in 2024 AI domain controller

Business news | January 31, 2024
By Nick Flaherty


The Hailo-8 AI accelerator and Renesas R-car controller are being used for a passively cooled main controller in production later this year.​

The iMotion iDC High Domain Controller combines the Hailo-8 AI accelerator and R-Car V4H SoC and will be deployed in mass production in 2024 by a Chinese OEM. iMotion, based in China, works with OEMs such as Geely, Great Wall Motor, Chery, Dongfeng and Polestar.


The AI-enabled domain controller makes advanced driving, parking safety and comfort applications more affordable for mass market vehicles. The Hailo-8 chip is used for Highway Pilot / Navigate on Autopilot (NoA), Automated Home Parking, and in the future also Urban Pilot / NoA.

The domain controller enables Bird’s Eye View 3D perception for advanced automated driving applications and enhances safety and comfort with a 10V5R sensor configuration. Importantly, the low power consumption of the Hailo-8 accelerator enables passive cooling of the ECU, lowering the cost of the residual bill of materials (BoM) and simplifying the vehicle integration.

Hailo is one of several AI chip suppliers that Renesas works with for different OEM suppliers, and Renesas is also developing its own AI silicon with IP from Brainchip and its own cloud-based AI development toolchain.


“The integration of Hailo’s groundbreaking Hailo-8 AI accelerator and the high-performance Renasas R-Car V4H SoC significantly improves the affordability of iMotion’s AD/ADAS domain controller while enabling best-in-class processing capabilities, an important development in our shared commitment to make advanced automated driving safer and more affordable for all vehicles,” said Orr Danon, Hailo CEO. “Partnering with iMotion represents another key milestone in our quest to make our AI technology foundational in the 21st century global automotive industry that needs high performance, cost-effective, robust and scalable AI solutions for automated driving and parking.”

“We chose to partner with Hailo for high performance AI in ADAS because of the advanced AI capabilities and efficiencies their Hailo-8 AI processor brings to the market for the benefit of all drivers. The Hailo-8 accelerator is unique in its ability to enable power-efficient AI acceleration of state-of-the-art Neural Networks with low energy consumption that greatly advances automotive innovation,” said iMotion CTO Calvin Lu.

Further amplifying that point, Takeshi Fuse, Vice President of Marketing, High Performance Computing Division of Renesas, noted: “The integration of the Renesas R-Car SoC with the Hailo-8 AI accelerator further enables unprecedented capabilities such as Bird’s-Eye-View 3D perception at an affordable cost for mass market vehicles. Together, we are bringing to life a new era in high performance, affordable automated driving that will benefit the Chinese automotive market specifically and the automotive industry at large.”

www.renesas.com; www.hailo.com; www.imotion.ai
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 18 users

Eyes on the Road: Enabling Real-Time Traffic Camera Analysis with BrainChip Akida

EDGE AI
By Nick Bild | Jan 30, 2024

Smart camera systems designed to monitor and optimize traffic flow and enhance public safety are becoming more common components of modern urban infrastructure. These advanced systems leverage cutting-edge technologies, such as artificial intelligence and computer vision to analyze and respond to real-time traffic conditions. By deploying these smart camera systems at strategic locations throughout cities, municipalities can address a variety of challenges and significantly improve overall urban efficiency.

One of the primary issues smart camera systems address is traffic congestion. These systems can monitor traffic patterns, identify bottlenecks, and dynamically adjust traffic signal timings to optimize the flow of vehicles. By intelligently managing traffic signals based on the current demand and congestion levels, these systems can reduce delays, shorten travel times, and minimize fuel consumption, thereby contributing to a more sustainable and eco-friendly transportation system.

In addition to alleviating congestion, smart camera systems also play a crucial role in enhancing public safety. They can be equipped with features such as license plate recognition, facial recognition, and object detection to identify and respond to potential security threats or criminal activities. These capabilities enable law enforcement agencies to quickly and proactively address security concerns, providing a force multiplier for urban safety.

BrainChip’s Akida Development Kit hardware
Despite the benefits that they can offer, the widespread adoption of smart traffic camera systems has been hindered by some nagging technical issues. In order to be effective, real-time processing of video streams is required, which means powerful edge computing hardware is needed on-site. Moreover, each system generally needs multiple views of the area which further taxes the onboard processing resources. Considering that many of these processing units are needed throughout a city, some just an intersection away from one another, problems of scale quickly emerge.

The Challenge: Real-Time Traffic Analysis​

Being quite familiar with the latest in edge computing hardware and current developments at Edge Impulse, engineer Naveen Kumar recently had an idea that could solve this problem. By leveraging BrainChip’s Akida Development Kit with a powerful AKD1000 neuromorphic processor, it is possible to efficiently analyze multiple video streams in real-time. Pairing this hardware with Edge Impulse’s ground-breaking FOMO object detection algorithm that allows complex computer vision applications to run on resource-constrained hardware platforms, Kumar reasoned that a scalable smart traffic camera system could be produced. The low cost, energy efficiency, and computational horsepower of this combination should result in a device that could be practically deployed throughout a city.


Preparing the training data

Implementation​

As a first step, Kumar decided to focus on a critical component of any traffic camera setup — the ability to locate vehicles in real-time. After setting up the hardware for the project, it was time to build the object detection pipeline with Edge Impulse. To train the model, Kumar sought out a dataset captured by cameras at Shibuya Scramble Crossing, a busy intersection in Tokyo, Japan. Still image frames were extracted from videos posted on YouTube, and those images were then uploaded to a project in Edge Impulse using the CLI uploader tool.
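As an aside, the frame-extraction step described here is straightforward to reproduce with OpenCV; a minimal sketch, with the video file name, output folder and sampling rate as placeholders:

```python
# Minimal sketch of extracting still frames from a traffic video with OpenCV.
# "shibuya.mp4", the "frames" folder and the ~1 frame/second sampling are
# placeholders; the saved JPEGs can then be uploaded to an Edge Impulse
# project with the CLI uploader.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("shibuya.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if FPS is unreported

saved, idx = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % int(round(fps)) == 0:            # keep roughly one frame per second
        cv2.imwrite(f"frames/frame_{saved:05d}.jpg", frame)
        saved += 1
    idx += 1

cap.release()
print(f"Saved {saved} frames")
```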

From there, Kumar pivoted into the Labeling Queue tool to draw bounding boxes around objects of interest — in this case, vehicles. The object detection algorithm needs this additional information before it can learn to recognize specific objects. This can be a very tedious task for large training datasets, but the Labeling Queue offers an AI-assisted boost that helps to position the bounding boxes. Typically, one only needs to review the suggestions and make an occasional tweak.

Having squared away the training data, the next step involved designing the impulse. The impulse defines exactly how data is processed, all the way from the time it is produced by the sensor until a prediction is made by the machine learning model. In this case, images were first resized, which is important in reducing the computational complexity of downstream processing steps. Following that, the data was forwarded into a FOMO object detection model that has been optimized for use with BrainChip’s Akida neuromorphic processor. Kumar made a few small adjustments to the model’s hyperparameters to optimize it for use with multiple video streams, then the training process was initiated with the click of a button.

The impulse defines the data processing steps
After a short time, the training was complete and a set of metrics was presented to help in assessing how well the model was performing. Right off the bat, an average accuracy score of 92.6% was observed. This is certainly more than good enough to prove the concept, but it is important to ensure that the model has not been overfit to the training data. For this reason, Kumar also leveraged the Model Testing tool, which utilizes a dataset that was left out of the training process. This tool revealed an average accuracy rate of 94.85% had been achieved, which added to the confidence given by the training results.

BrainChip’s Akida Development Kit is fully supported by Edge Impulse, which made deployment a snap. By selecting the “BrainChip MetaTF Model” option from the Deployment tool, a compressed archive was automatically prepared that was ready to run on the hardware. This code is aware of how to utilize the Akida PCIe card, which allows it to make the most of the board’s specialized hardware.
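For a sense of what that deployment amounts to on the device side, here is a hedged sketch using BrainChip's akida Python package rather than the Linux C++ SDK route described next; the model file name is hypothetical and API details vary by akida version:

```python
# Hedged sketch of the device-side steps: load the exported Akida model,
# map it onto the AKD1000, and run one frame. "fomo_traffic.fbz" is a
# hypothetical file name for the MetaTF/Edge Impulse export.
import numpy as np
from akida import Model, devices

model = Model("fomo_traffic.fbz")         # model archive exported from Edge Impulse / MetaTF

hw = devices()                            # attached Akida devices (e.g. the AKD1000 PCIe card)
if hw:
    model.map(hw[0])                      # configure the neuromorphic hardware

# One 96x96 RGB frame (uint8), the kind of input a FOMO-style model expects.
frame = np.zeros((1, 96, 96, 3), dtype=np.uint8)
heatmap = model.predict(frame)            # coarse per-cell object predictions
print(heatmap.shape)
```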

To wrap things up, Kumar used the Edge Impulse Linux C++ SDK to run the deployed model. This made it simple to start up a web-based application that marks the locations of vehicles in real-time video streams from traffic cameras. As expected from the performance metrics, the system accurately detected vehicles in the video streams. It was also demonstrated that the predictions could be made in just a few milliseconds, and while slowly sipping on power.


Are you interested in building your own multi-camera computer vision application? If so, Kumar has a great write-up that is well worth reading. The insights will help you to get your own idea off the ground quickly. Kumar has also experimented with single-camera setups if that is more what you had in mind.


Palease!!! (hopefully this ends up going way beyond "analysis").

One of my biggest beefs is traffic lights! (might have mentioned this before 🙄..).
Especially, when there is light traffic..

Waiting at the "traffic" lights, when there is no "traffic" and it's safe to go irks me and is irrational beyond belief.

It's the year 2024 and we're still dealing with these antiquated systems, based on series timing and pressure pads, that some drivers seem too "shy" to drive on to??..

If AKIDA can fix this one thing, I think it will command the Love and Respect of hundreds of millions of people.

But maybe, that's just my view 🤔..
 
  • Like
  • Haha
  • Love
Reactions: 39 users

Diogenese

Top 20
Apparently, if you've kept the receipts for all the time you've waited at red lights, you can get a refund.
 
Last edited:
  • Haha
  • Like
Reactions: 18 users

7für7

Top 20
Excuse me sir… I don’t want to sell!
It will be interesting to see what kind of role Japan will play in this game! They are investing a lot of money to get back to their old glory of the 80s/90s. They are not strong in software, but maybe they can fix the hardware problem! They are specialists in this field.
 
  • Like
  • Love
  • Wow
Reactions: 7 users
Hi Diogenese
Reading about Hailo, it seems to be somewhat misleading, as the starter kit comes with heat sinks and the 3 watts is only in respect of the Hailo chip. It does not give details of the power draw of the host GPU/CPU it has to be plugged into, or how and when it needs to utilise this host processor, or have I missed something?

https://hailo.ai/wp-content/uploads/2023/12/hailo8_m.2_starterkit_product_brief_1.23.pdf

My opinion only DYOR
Fact Finder
 
  • Like
  • Love
  • Wow
Reactions: 11 users

7für7

Top 20
Since I'm not sure to what extent BrainChip will expand its contacts in Japan to solidify its market share, I'll classify this as speculation for now, as a potential future market opportunity. I find it intriguing and worth monitoring! The link is in German and quite old… but I translated an interesting part. (I know there was an updated post yesterday.)

Source: Link to the article

Japan is catching up with Taiwan and South Korea

The joint venture aims to develop and produce semiconductors using two-nanometer technology by around 2027. The number describes the width of the circuit lines. Smaller lines result in more energy-efficient and powerful chips. According to the plan, Japan would be on par with the leading manufacturers, Taiwan Semiconductor Manufacturing Company (TSMC) and South Korea's Samsung Electronics. Samsung began mass-producing semiconductors using three-nanometer technology in the summer, and TSMC is close behind. Both aim to achieve production of two-nanometer chips by 2025.
 
  • Like
  • Fire
Reactions: 6 users

Diogenese

Top 20
Hi FF,

Interesting point.

The terminology used is a bit fluid. Even on the Hailo website they refer to Hailo-8 as an accelerator on one page and a processor elsewhere. The datasheet does show runtime software for the CPU, but what part this software plays in the operation of Hailo-8 is not clear.

I remember in LdN's days, there was a point made about Akida being an NN processor, not just an accelerator, but that was when it included an ARM Cortex.

Even Akida 1000 needs the CPU to set up the configuration via MPU/CPU software, but this plays no part in the Akida 1000 runtime. I think there is some (minimal) CPU intervention in the TENNs operation.

The heat sinks are just aluminium fins and would qualify as passive cooling, ie, no fan or pumped coolant.
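On the point about the host CPU only doing configuration, a tiny illustrative sketch of that split with the akida Python package; the model file name is hypothetical and the hw_only flag depends on the package version:

```python
# Illustrative only: with the `akida` Python package, the host's role is a
# one-off configuration step; inference then executes on the Akida fabric.
# hw_only (where supported) asks the mapper to fail rather than fall back to
# software for any layer, making the hardware/host split explicit.
from akida import Model, devices

net = Model("model.fbz")                  # hypothetical model file
dev = devices()[0]                        # e.g. the AKD1000 on a dev kit / PCIe card
net.map(dev, hw_only=True)                # host-side configuration only
net.summary()                             # shows which sequences landed on hardware
```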
 
  • Like
  • Fire
Reactions: 15 users

Esq.111

Fascinatingly Intuitive.
Chippers,

Yet another Mrec tidbit..... reasonably sure not from the Co marketing dept.



Esq
 
  • Like
  • Fire
Reactions: 7 users