BRN Discussion Ongoing

Guzzi62

Regular
Those cars will be launched in the late 2020s, which shows how long these things take.

Let's hope we are part of Renesas's secret sauce!

Jan 8, 2025

Honda and Renesas Sign Agreement to Develop High-Performance SoC for Software-Defined Vehicles

-Delivering Industry-Leading AI Performance and Power Efficiency for the Honda 0 Series in the Late 2020s-



TOKYO, Japan, January 8, 2025 ― Honda Motor Co., Ltd. (TSE: 7267) and Renesas Electronics Corporation (TSE: 6723) announced today that they have signed an agreement to develop a high-performance system-on-chip (SoC) for software-defined vehicles (SDVs). The new SoC is designed to deliver leading-edge AI performance of 2,000 TOPS combined with a world-class power efficiency of 20 TOPS/W, and is slated for use in future models of the “Honda 0 (Zero) Series,” Honda’s new electric vehicle (EV) series, specifically those that will be launched in the late 2020s. The agreement was announced during a Honda press conference held at CES 2025 in Las Vegas, Nevada on January 7.
Honda is developing original SDVs to provide a mobility experience optimized for each individual customer in the Honda 0 Series. The Honda 0 Series will adopt a centralized E/E architecture that combines multiple electronic control units (ECUs) responsible for controlling vehicle functions into a single ECU. The core ECU, which serves as the heart of the SDV, manages essential vehicle functions such as Advanced Driver Assistance Systems (ADAS) and Automated Driving (AD), powertrain control, and comfort features, all on a single ECU. To achieve this, the ECU requires a SoC that provides higher processing performance than traditional systems, while minimizing any increase in power consumption.
Renesas is committed to providing automotive semiconductor solutions that enable automobile OEMs to develop SDVs. Renesas’ R-Car solutions offer higher AI performance with the ability to customize by leveraging multi-die chiplet technology and integrating AI accelerators into its SoC.
To realize the Honda vision for SDVs, Honda and Renesas reached an agreement to develop a high-performance SoC compute solution designed for core ECUs. Using TSMC’s leading-edge 3-nm automotive process technology, this SoC can also achieve a significant reduction in power consumption. Additionally, it realizes a system that utilizes multi-die chiplet technology to combine Renesas’ generic fifth-generation (Gen 5) R-Car X5 SoC series with an AI accelerator optimized for AI software developed independently by Honda. With this combination, the system aims to achieve one of the industry’s top-class combinations of AI performance and power efficiency. The SoC chiplet solution will provide the AI performance required for advanced functions such as AD, while keeping power consumption low. Chiplet technology allows the flexibility to create customized solutions and offers future upgrades for functional and performance improvements.
Honda and Renesas have collaborated closely for many years. This agreement will accelerate the integration of advanced semiconductor and software innovations into the Honda 0 Series, enhancing the mobility experience for customers.

 
Reactions: 6 users

Cirat

Regular

BrainChip Unveils Edge AI Box Partner Ecosystem for Gestures, Cybersecurity, Image Recognition, and Computer Vision​



Laguna Hills, Calif. – January 7th, 2025 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, brain-inspired AI, today announced an all-star lineup of partners developing revolutionary technology demonstrations and products with the Akida™ Edge AI Box, a compact, cost-effective appliance with AI/ML processing power for a wide variety of markets such as manufacturing, warehousing, retail, hospitals, energy, automotive, and aviation. The Akida Edge AI Box is an embedded Linux solution with Ethernet, Bluetooth, and USB interfaces that provides a complete edge AI computing platform based on BrainChip’s Akida AKD1000 IC.

BrainChip developed the Akida Edge AI Box in partnership with VVDN Technologies, a premier electronics innovation and manufacturing company. VVDN produces the BrainChip Akida Edge AI Box Developer Kit and can develop custom versions of the product with OEMs for volume commercial applications. VVDN, based in Gurgaon, India and Fremont, Calif., offers worldwide service with eleven advanced R&D centers and offices in the U.S., Canada, Europe, Vietnam, South Korea, and Japan.

Companies that are delivering edge AI use case demonstrations and/or products executing on BrainChip’s Edge AI Box include:

Edge AI model training and development with Edge Impulse, demonstrating how easy it is to build and deploy custom machine learning models directly on the Akida Edge AI Box.

Gesture recognition by BeEmotion, an innovator at the intersection of AI and human behavior that provides cutting-edge solutions to enhance interaction, prevent accidents, and bring enjoyment to the everyday use of technology.

Climate change solutions to forecast and gain visibility into contributors to climate change such as emissions, developed by AI Labs, a pioneer in AI, robotic process automation, and software development.

Model evaluation to streamline the AI workflow from experts at DeGirum, a leader in AI innovation with comprehensive solutions that simplify and accelerate the adoption of AI at the edge.

Cybersecurity engineered by Quantum Ventura Inc., a provider of advanced R&D to the federal government in fields such as machine learning, hyperspectral imaging, cybersecurity, computer vision, and synthetic aperture radar. The company is demonstrating its CyberNeuro-RT (CNRT) technology, an AI/ML-driven, highly scalable, real-time network defense and threat intelligence tool.

Computer vision analysis by Vedya Labs, an AI accelerator in edge, embedded systems, Gen AI, and SDKs/toolchains, and a developer of AI models, applications, optimizations, libraries, and more. The company will be demonstrating video object detection models.

“The Akida Edge Box is a great platform for running AI in standalone edge environments where footprint, cost and efficiency is critical, while not compromising performance,” said Sean Hehir, BrainChip CEO. “We look forward to announcing more partners developing edge AI for their customers’ specific use cases and more importantly, we look forward to the ideas these companies will bring to life with the Akida Edge AI Box.”

The Akida Edge AI Box puts BrainChip’s event-based neural processing, which closely mimics the learning ability of the human brain, into energy-efficient, portable hardware, to deliver powerful AI computing performance that surpasses market standards for edge AI computing appliances. Dual Akida processors perform on-chip learning, independent of the cloud, for application security, reduced training overhead, and faster learning. The Akida Edge AI Box is so cost-effective it can be utilized in production applications: in every patient’s room to monitor their health and safety; in every store aisle to gauge shopping experience; in every car, truck, boat, or plane in the fleet to manage logistics.

BrainChip’s Akida is an event-based computing platform ideal for early detection, low-latency solutions without massive compute resources for robotics, drones, automotive and traditional sense-detect-classify-track solutions. BrainChip provides a range of software, hardware, and IP products for integration into existing and future designs, with a roadmap for customers to deploy multi-modal AI models at the edge.
For fun, I pasted this article into ChatGPT (free version) and asked it which company listed in the article would most likely be the first to sign a contract with BrainChip.
The answer is below, so let's see how it plays out in the future. Just dreaming and filling in time.

ChatGPT response
"Based on the article, the companies mentioned, and the potential use cases of BrainChip's Akida Edge AI Box, the most likely company to first sign a contract with BrainChip could be Quantum Ventura Inc. Here’s why:


Reasoning:​

  1. Strategic Fit and Industry Need:
    • Quantum Ventura Inc. specializes in cybersecurity and provides advanced R&D in areas like machine learning, computer vision, and synthetic aperture radar to the federal government.
    • Cybersecurity is a critical industry, and the CyberNeuro-RT (CNRT) technology they are demonstrating on the Akida Edge AI Box aligns well with BrainChip’s focus on low-latency, real-time AI solutions.
  2. High Stakes Industry:
    • Cybersecurity solutions that leverage AI are increasingly critical across industries and government applications. A partnership with BrainChip could provide Quantum Ventura a competitive advantage in delivering real-time threat intelligence tools.
  3. Existing Demonstrations and Potential for Immediate Deployment:
    • Quantum Ventura is already demonstrating its CNRT technology on BrainChip's Akida Edge AI Box. This indicates that the two companies are collaborating actively, which could naturally progress into a commercial contract."

Conclusion:​

The most likely first contract signer is Quantum Ventura Inc. due to its active demonstration of cybersecurity solutions, the critical importance of its use case, and the potential for immediate deployment in high-priority industries like defense and government. Other companies like BeEmotion and Vedya Labs could follow closely, depending on the readiness of their solutions and market demand.


Not financial advice, DYOR, GLTA
 
Reactions: 21 users

FJ-215

Regular


Eat meat,

You may not die from plant based cancers....

but a good start.....

Akida can't save you from a poor diet!!!!
 
Reactions: 2 users

cosors

👀
Not totally off topic, but maybe of general interest (see the end):

Pentagon to blacklist China’s largest EV battery and tech firms​

Among the firms banned from doing business with the Defense Department is CATL, the world’s largest EV battery-maker, which supplies Elon Musk’s Tesla.

Updated January 7, 2025 at 12:02 p.m. EST

The Pentagon will blacklist China’s largest EV battery manufacturer and its largest tech firm beginning in June 2026, barring them from Defense Department contracts and sending a powerful signal to American firms about the potential risks of doing business with them.

In a notice in the Federal Register on Monday, the Defense Department published a list of firms that it deems to be operating in the United States for, or on behalf of, the Chinese military or that contribute to China’s military buildup.

The “1260H list,” mandated annually by Congress since 2021, now includes CATL, the world’s largest electric-vehicle battery-maker, which supplies Tesla, the EV manufacturer owned by President-elect Donald Trump’s ally Elon Musk. It also lists the social media giant Tencent, China’s most valuable technology company.

Other firms of note added to the list are China’s top commercial-jet-maker, a DNA-sequencing firm with U.S. ties and two companies at the heart of China’s surveillance-technology boom.

“When the Pentagon affirms you are a Chinese military company, it’s an official scarlet letter that can impact everything from the risk perception of working with a company that might affect its share price to its long-term viability in the U.S. market,” said Eric Sayers, a nonresident fellow at the American Enterprise Institute and a former aide to the top U.S. military commander in the Pacific.
Some of the designated firms are already included on a Treasury Department list of Chinese military-industrial-complex companies, subjecting them to investment restrictions because of their assessed military ties. That includes the AI firm SenseTime Group and subsidiaries of the state-owned military contractor, Aviation Industry Corporation of China.

Others — such as Tencent and CATL — have not faced sanctions but have come under increasing scrutiny from U.S. lawmakers in recent years as they expand their American operations.
The Pentagon blacklisting is among actions taken by Congress, and the Biden and first Trump administrations, to curb China’s aggressive military and tech ambitions and the expansion of its surveillance state. The issue is largely seen as bipartisan, though the political ascendance of tech mogul Musk — who has significant business interests in China — may complicate the policy picture under the incoming administration.
The Trump transition team did not reply to a request for comment.

Pentagon spokeswoman Selena Rodts called the new list “an important step in securing our supply chains and removing entities that support” China’s military-civil fusion strategy.

CATL’s listing is significant, said Craig Singleton, senior China fellow at the Foundation for the Defense of Democracies, because its control over the data collected by EV charging stations and battery management systems might enable Chinese government spying. Chinese law requires CATL to provide the Chinese government with access to any and all of its proprietary and customer data — no questions asked, he said.
Moreover, Singleton said, CATL’s integration into the U.S. electrical grid, through its collaboration with American energy firms — along with what is likely to be increasing U.S. dependence on CATL’s charging infrastructure — creates potential vulnerabilities that could be exploited to disrupt essential services, similar to concerns raised about the deep integration of Chinese tech giant Huawei into global telecom networks.

Trump’s pick for secretary of State, Marco Rubio, has led efforts to block a $3.5 billion EV battery plant partnership between CATL and Ford in Michigan. As the Senate Intelligence Committee’s top Republican, Rubio in 2023 introduced legislation to prevent CATL from receiving U.S. climate subsidies and called for a national security review of the deal, citing concerns over reliance on China.

In a statement to The Washington Post, CATL called the designation a “mistake” and said the company had never engaged in any military-related business. “We will proactively engage with DOD to address the false designation, including legal action if necessary,” it said.
A spokesman for the Chinese Embassy said the list represented an “unreasonable suppression of Chinese companies” that “undermines the confidence of foreign companies investing in the United States.” The spokesman, Liu Pengyu, urged Washington “to immediately correct the above discriminatory practices.”

The Shenzhen-based Tencent oversees the popular social media site WeChat and is a leader in online payments, e-commerce and gaming. It holds a large stake in Epic Games, the North Carolina-based maker of the popular game “Fortnite,” and has invested in U.S. entertainment groups Warner Music and Universal Music Group.

Valued at more than $480 billion, Tencent saw its stock drop by almost 10 percent Monday following the announcement.
“Tencent’s inclusion on this list is clearly a mistake,” the company said in a statement. “We are not a military company or supplier. We will nonetheless work with the Department of Defense to address any misunderstanding.”

Though the ban does not go into effect until 2026, the Pentagon blacklisting could provide momentum to other U.S. agencies conducting investigations into the firms. It probably will also alert U.S. companies and others that sell or provide services to the Defense Department that these firms are deemed to support the Chinese defense-industrial base, which has reputational risk implications, said Ann Kowalewski, a former aide on the Senate Foreign Relations and House Foreign Affairs committees who is now a senior fellow at the Project 2049 Institute.

The list’s publication comes after China’s Commerce Ministry last week added 10 U.S. companies to its “unreliable entities list,” imposing export controls in response to U.S. sanctions on Chinese firms.
The timing is probably coincidental, Kowalewski said, as the Defense Department is required to issue the list annually, and did so last year at the same time. The impact of China’s entities list is largely symbolic, she said: “It’s a way to lodge their complaints about us selling weapons to Taiwan and to our export controls.”

Several firms on the Chinese Commerce Ministry’s list are major U.S. defense contractors that are barred by law anyway from exporting weaponry and defense services to China, analysts said.
A notable addition to the Pentagon list is the Commercial Aircraft Corporation of China Limited, or COMAC, China’s top commercial-aircraft-maker, which launched China’s first passenger jet in 2017.

The list also added ChangXin Memory Technologies, China’s leading memory chip firm, and genomics giant BGI Genomic’s DNA sequencing unit, MGI Tech, which has a San Jose-based subsidiary called Complete Genomics.
It also designated the facial recognition firm Xiamen Meiya Zhongmin Electronic Technology, whose technology, along with SenseTime’s, the U.S. government has said is used by China’s security apparatus to repress domestic ethnic minorities.

“This decision by the U.S. Department of Defense has no factual basis, and we firmly disagree with it. … SenseTime remains firmly committed to working collaboratively with the relevant stakeholders to address this matter,” the company said in a statement provided to The Post.
Though the list published Monday does not include Hesai, a company whose lidar system is used in autonomous vehicles and robotics, a Pentagon official said it will be in the final list issued Tuesday, subject to the same procurement bar in June 2026.
ChangXin Memory Technologies and MGI did not respond to requests for comment. COMAC and Xiamen Meiya Zhongmin Electronic Technology could not be immediately reached for comment.
Link

____________
Pentagon list:
 
Last edited:
Reactions: 5 users

cosors

👀

"BrainChip Boasts of a Strong Ecosystem for Its Akida Edge AI Box​

A post-pre-order price hike to $1,495, though, puts a damper on its claims of cost efficiency for direct deployment.​


Neuromorphic edge artificial intelligence (AI) specialist BrainChip is celebrating a year since it unveiled its Akida Edge AI Box development platform at the Consumer Electronics Show (CES) in Las Vegas — and it's back in Sin City again this year to announce "an all-star lineup" of partners on the project.

"The Akida Edge Box is a great platform for running AI in standalone edge environments where footprint, cost and efficiency is critical, while not compromising performance," claims BrainChip chief executive officer Sean Hehir. "We look forward to announcing more partners developing edge AI for their customers' specific use cases and more importantly, we look forward to the ideas these companies will bring to life with the Akida Edge AI Box."

BrainChip opened orders for the Akida Edge Box, developed in partnership with VVDN, back in February last year, after showcasing the device at CES 2024. The idea: delivering a low-cost single-unit development platform for those looking to experiment with the company's Akida neuromorphic processor, with two AKD1000 chips installed in the compact device alongside an NXP Semiconductors i.MX 8M Plus system-on-chip.

While there's no new Akida Edge Box for CES 2025, BrainChip still has plenty to announce in the form of partnership on the project. The Akida Edge AI Box ecosystem now includes, the company says: support in Edge Impulse for rapid AI model development, training, and deployment; gesture recognition support from BeEmotion; climate forecasting developed by AI Labs; model evaluation from DeGirum; cybersecurity projects from Quantum Ventura; and computer vision analysis from Vedya Labs.

BrainChip is also repositioning the Akida Edge AI Box, which is priced below its previous development kits: "the Akida Edge AI Box is so cost-effective," the company claims, "it can be utilized in production applications: in every patient's room to monitor their health and safety; in every store aisle to gauge shopping experience; in every car, truck, boat, or plane in the fleet to manage logistics."
The device isn't quite as affordable as it used to be, though: pre-orders for the Akida Edge Box launched in February 2024 at just $799, but the company is currently asking for $1,495 on its official web store — with a 10-12 week shipping estimate."


Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin.
 
Reactions: 17 users

cosors

👀
Hey Gang!

I took some time out from my undercover ops - hiding behind pot plants at CES 2025, to do some reading and stumbled upon this NASA article published yesterday.

From the areas I've highlighted in orange, you can see that NASA's recently updated inventory consists of a few AI use cases describing autonomous navigation for the Perseverance Rover on Mars. I hadn't heard the term "Mars2020 Rover" referenced before and so I searched for it on TSEx and sure enough nothing came up.

What I thought of immediately was the 2020 SBIR, which I have posted below for your convenience, which described how AKIDA was to potentially be utilised to make autonomous rovers travel faster. So it occurred to me that this 2020 SBIR which AKIDA was featured in might be part of the whole "Mars2020 Rover" thingamajig.

I had a quick Google search under "Mars2020 Rover" and I found this NASA Fact Sheet from 2019. The second page states "A new autonomous navigation system will allow the rover to drive faster in challenging terrain", which 100% ties into the goals described in the 2020 SBIR!

Oh, and I might as well also add that the whole NASA High Performance Spaceflight Computer (HPSC) that I've been so obsessed about, and which I'm convinced our tech will be incorporated into at some point in time, well... the HPSC runs the software that controls the spacecraft's various subsystems, such as navigation, communication, power management, etc.


The HPSC processor is being built by Microchip and will utilise SiFive's 'Intelligence' X280 core. NASA has stated previously that initial availability would be sometime in 2024 (which obviously didn't occur, so maybe it will be ready this year), and the chip won't just be for space missions but is also expected to be utilised in applications on Earth such as defense, commercial aviation, robotics and medical equipment.



NASA’s AI Use Cases: Advancing Space Exploration with Responsibility​


Kate Halloran​

Jan 07, 2025

NASA's 2024 AI Use Case inventory highlights the agency’s commitment to integrating artificial intelligence in its space missions and operations. The agency’s updated inventory consists of active AI use cases, ranging from AI-driven autonomous space operations, such as navigation for the Perseverance Rover on Mars, to advanced data analysis for scientific discovery.

AI Across NASA​

NASA’s use of AI is diverse and spans several key areas of its missions:

Autonomous Exploration and Navigation

  • AEGIS (Autonomous Exploration for Gathering Increased Science): AI-powered system designed to autonomously collect scientific data during planetary exploration.
  • Enhanced AutoNav for Perseverance Rover: Utilizes advanced autonomous navigation for Mars exploration, enabling real-time decision-making.
  • MLNav (Machine Learning Navigation): AI-driven navigation tools to enhance movement across challenging terrains.
  • Perseverance Rover on Mars – Terrain Relative Navigation: AI technology supporting the rover’s navigation across Mars, improving accuracy in unfamiliar terrain.

Mission Planning and Management

  • ASPEN Mission Planner: AI-assisted tool that helps streamline space mission planning and scheduling, optimizing mission efficiency.
  • AWARE (Autonomous Waiting Room Evaluation): AI system that manages operational delays, improving mission scheduling and resource allocation.
  • CLASP (Coverage Planning & Scheduling): AI tools for resource allocation and scheduling, ensuring mission activities are executed seamlessly.
  • Onboard Planner for Mars2020 Rover: AI system that helps the Perseverance Rover autonomously plan and schedule its tasks during its mission.

Environmental Monitoring and Analysis

  • SensorWeb for Environmental Monitoring: AI-powered system used to monitor environmental factors such as volcanoes, floods, and wildfires on Earth and beyond.
  • Volcano SensorWeb: Similar to SensorWeb, but specifically focused on volcanic activity, leveraging AI to enhance monitoring efforts.
  • Global, Seasonal Mars Frost Maps: AI-generated maps to study seasonal variations in Mars’ atmosphere and surface conditions.

Data Management and Automation

  • NASA OCIO STI Concept Tagging Service: AI tools that organize and tag NASA’s scientific data, making it easier to access and analyze.
  • Purchase Card Management System (PCMS): AI-assisted system for streamlining NASA’s procurement processes and improving financial operations.

Aerospace and Air Traffic Control

  • NextGen Methods for Air Traffic Control: AI tools to optimize air traffic control systems, enhancing efficiency and reducing operational costs.
  • NextGen Data Analytics: Letters of Agreement: AI-driven analysis of agreements within air traffic control systems, improving management and operational decision-making.

Space Exploration

  • Mars2020 Rover (Perseverance): AI systems embedded within the Perseverance Rover to support its mission to explore Mars.
  • SPOC (Soil Property and Object Classification): AI-based classification system used to analyze soil and environmental features, particularly for Mars exploration.

Ethical AI: NASA’s Responsible Approach​

NASA ensures that all AI applications adhere to Responsible AI (RAI) principles outlined by the White House in its Executive Order 13960. This includes ensuring AI systems are transparent, accountable, and ethical. The agency integrates these principles into every phase of development and deployment, ensuring AI technologies used in space exploration are both safe and effective.

Looking Forward: AI’s Expanding Role​

As AI technologies evolve, NASA’s portfolio of AI use cases will continue to grow. With cutting-edge tools currently in development, the agency is poised to further integrate AI into more aspects of space exploration, from deep space missions to sustainable solutions for planetary exploration.
By maintaining a strong commitment to both technological innovation and ethical responsibility, NASA is not only advancing space exploration but also setting an industry standard for the responsible use of artificial intelligence in scientific and space-related endeavors.






Mars2020 Fact Sheet


That would be 555 times as fast with Akida on board.

________
Radiation tolerance is certainly not a problem.

"...Frontgrade Gaisler, a leading provider of radiation-hardened microprocessors for space applications, has licensed its Akida™ IP for incorporation into space-grade, fault-tolerant system-on-chip solutions for hardware AI acceleration. ..."
 
Last edited:
Reactions: 19 users

Frangipani

Regular

BrainChip Brings Neuromorphic Capabilities to M.2 Form Factor

January 08, 2025 12:00 PM Eastern Standard Time
LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced the availability of its Akida™ advanced neural networking processor on the M.2 form factor, enabling a low-cost, high-speed and low-power consumption option for those looking to build their own edge AI boxes.

“BrainChip’s AKD1000 chips and boards are available today for industry evaluation, development, proof of concept and demonstration platforms with the IP available to license for integration into SoCs. Releasing the AKD1000 on the M.2 form factor continues our commitment to aid developers in creating AI solutions with our Akida IP”

BrainChip’s neural processor AI IP is an event-based technology that is inherently lower power when compared to conventional neural network accelerators. BrainChip IP supports incremental learning and high-speed inference in a wide variety of use cases, such as convolutional neural networks with high throughput and unsurpassed performance in low power budgets. The AKD1000-powered boards can be plugged into the M.2 slot – around the size of a stick of gum, with a power budget of about 1 watt – to unlock capabilities for a wide array of edge AI applications where space and power are limited and speed is critical, including industrial, factory service centers, network access devices and more.

BrainChip’s AKD1000 product is available in both B+M Key and E Key configurations of the M.2 2260 form factor. It can be purchased integrated into stand-alone Raspberry Pi or Edge AI box enclosures, or for integration into custom-designed products. Pricing starts at $249. Visit shop.brainchipinc.com or the Buy Now button at www.brainchip.com/.

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in Edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables Edge learning locally to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. Akida Neural processor IP, which can be designed into SoCs on any process technology, has shown substantial benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like Tensorflow/Keras. In enabling effective Edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.
Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc
Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006

Contacts​

Media Contact:
Mark Smith
JPR Communications
818-398-1424
Investor Relations:
Tony Dawe
Director, Global Investor Relations
tdawe@brainchip.com
 
Reactions: 40 users



Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin.
"with a 10-12 week shipping estimate."

Wow, that's a lot, if that's the current wait time..

Up to 3 months..

Is that due to pent up demand?
Lack of chips?
Lengthy time of production and testing?

If it's not due to pent-up demand, then having to wait that long will hurt it; and if it is, it will hurt demand going forward.

Maybe get VVDN, to stop making that Nvidia junk and concentrate on our units, if production is too slow..
 
  • Like
  • Fire
  • Haha
Reactions: 11 users

Frangipani

Regular

BrainChip Brings Neuromorphic Capabilities to M.2 Form Factor

January 08, 2025 12:00 PM Eastern Standard Time
LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced the availability of its Akida™ advanced neural networking processor on the M.2 form factor, enabling a low-cost, high-speed and low-power consumption option for those looking to build their own edge AI boxes.



BrainChip’s neural processor AI IP is an event-based technology that is inherently lower power when compared to conventional neural network accelerators. BrainChip IP supports incremental learning and high-speed inference in a wide variety of use cases, such as convolutional neural networks with high throughput and unsurpassed performance in low power budgets. The AKD1000-powered boards can be plugged into the M.2 slot – around the size of a stick of gum, with a power budget of about 1 watt – to unlock capabilities for a wide array of edge AI applications where space and power are limited and speed is critical, including industrial, factory service centers, network access devices and more.
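The power saving of event-based processing comes from computing only where the input changes. A toy sketch of that idea in plain NumPy (this is an illustration of the principle, not BrainChip's actual Akida API or data format):

```python
import numpy as np

def to_events(prev_frame, frame, threshold=10):
    """Emit (row, col, polarity) events only where brightness changed
    by more than `threshold` -- the core event-based idea."""
    diff = frame.astype(int) - prev_frame.astype(int)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols])
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200  # a single pixel changes between frames

events = to_events(prev, curr)
print(events)  # one event instead of 16 pixel reads: [(1, 2, 1)]
```

A frame-based accelerator would process all 16 pixels every frame; here downstream compute scales with the one event that actually occurred.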

BrainChip’s AKD1000 product is available in both B+M Key and E Key configurations of the M.2 2260 form factor. It can be purchased integrated into stand-alone Raspberry Pi or Edge AI box enclosures, or for integration into custom-designed products. Pricing starts at $249. Visit shop.brainchipinc.com or the Buy Now button at www.brainchip.com/.




A33EDCBC-7D4C-4367-8DC0-F8F0BE3BC27F.jpeg



 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 14 users

charles2

Regular
Unfortunately, so far, the only market-moving news has been the capital raise. Needed, but it predictably sparked a selloff.

Perhaps they are saving the best for last.

Clairvoyants need apply.
 
  • Like
  • Haha
  • Thinking
Reactions: 6 users

cosors

👀
"with a 10-12 week shipping estimate."
[…]
I saw a smart watch advertised at the current CES. It's already sold out on the manufacturer's website. It doesn't say anything about delivery times.
Weren't there similar supply bottlenecks with Nvidia's graphics cards, where prices rose drastically at resellers?
 
Last edited:
  • Wow
  • Like
  • Fire
Reactions: 5 users

"..today announced the availability of its Akida™ advanced neural networking processor on the M.2 form factor, enabling a low-cost, high-speed and low-power consumption option for those looking to build their own edge AI boxes."

"BrainChip’s AKD1000 product is available in both B+M Key and E Key configurations of the M.2 2260 form factor. It can be purchased integrated into stand-alone Raspberry PI or Edge AI box enclosures, or for integration into custom designed products. Pricing starts at $249. Visit shop.brainchipinc.com or the Buy Now button at www.brainchip.com/."


The website hasn't been updated yet?
Because the lowest-priced option there is the PCIe board, at $499...
 
  • Like
  • Thinking
Reactions: 10 users

cosors

👀
"with a 10-12 week shipping estimate."
[…]
Does the higher price perhaps correlate with the delivery time?
So high demand, higher price?
It would be nice if the higher price were a sign of strong demand rather than something that suppresses it.

Unfortunately, I can't think of an analogous example where high demand has raised the OEM's own price. Maybe you can?
 
  • Thinking
  • Like
Reactions: 4 users

Quiltman

Regular
Looks like Rudy is a valuable employee, adding to the value of our investment!

1736362832024.png


1736362905709.png


1736362938195.png
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Baisyet

Regular
Just a thought: looks like we won't see any wow effect from CES25 this time.
 
  • Like
  • Wow
  • Sad
Reactions: 8 users

Frangipani

Regular

View attachment 75504



I bet Nimble AI’s project coordinator Xabier Iturbe, Senior Research Engineer at IKERLAN (Basque Country, Spain), will be very pleased to hear about this new offering by BrainChip and will keep his fingers crossed that the same form factor option will be made available for the AKD1500 soon.

Today’s announcement of AKD1000 now being offered on the M.2 form factor reminded me of a post (whose author sadly made up his mind to leave the forum months ago) I had meant to reply to for ages…

@Frangipani and I have posted about Nimble AI before. I've noticed that their recent content no longer mentions BrainChip and the AKIDA 1500. It appears we've been overshadowed by IMEC, a multi-billion-dollar company and research partner on the Nimble project. IMEC is heavily involved in nearly every EU-sponsored neuromorphic project and has been developing its own SNN for several years. What is new is that in Q1 2025, IMEC plans to do a foundry run of its SNN-based neuromorphic processor called SENeCA (Scalable Energy-efficient Neuromorphic Computer Architecture).

View attachment 66880

View attachment 66881

Some details on SENeCA are in the paper below (a few years old now).


Are they developing the hardware/processor themselves, even though the IP may not be in-house? Hard to tell from the info online around SENeCA. Other aspects that make me wonder about the use of Akida as the IP include references to digital IP, a RISC-V based architecture, and a design targeting GF 22nm.

I thought this was worth mentioning as IMEC could be a customer or a potential rival. If they're doing a foundry run in Q1 2025 and we're involved, I would expect some kind of IP license or arrangement beforehand. That would line up with Sean's comments about deals before the end of 2024.

8AD5935D-3B31-4A4E-B70B-2EE03EF554B4.jpeg


I reached out directly to the project director for Nimble AI and asked has SENeCA replaced the use of Akida 1500, reply below:

View attachment 66909
Reading between the lines, it seems they have been forced to swap out Akida for IMEC’s SENeCA (which does not include our IP) due to their partnership. This means there is another confirmed competitor in SNN processors, with a chip planned for tape-out in January 2025. We need to pick up the pace. What happened to the patent fortress?

90803B0E-9389-414C-B178-F9C1E7B09AC8.jpeg



Hi @AI_Inquirer,

what a shame you decided to leave TSE last August - miss your contributions!
Maybe you still happen to hang around, though, reading in stealth - that’s why I am addressing you anyway.

Thanks for reaching out to Xabier Iturbe, whose reply you seem to have misunderstood at the time: the way I see it, we haven’t been overshadowed or replaced by imec’s SENeCA chip, which was always going to be used alongside either us or the Hailo Edge AI accelerator.

Have a look at the slightly updated illustration and project description of the Nimble AI neuromorphic 3D vision prototype I had posted in May 2024:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424893



C9C3D46A-E409-40D4-ABE4-2F1A53195DB3.jpeg


The Nimble AI researchers were always planning to produce two different neuromorphic 3D vision prototypes based on the Prophesee IMX636 sensor manufactured by Sony, and both of them were going to use imec’s neuromorphic SENeCA chip for early perception: One will additionally have the AKD1500 as a neuromorphic processor to perform 3D perception inference. This will be benchmarked against another prototype utilising a non-neuromorphic Edge AI processor by Hailo (on an M.2 form factor).

This latter prototype has apparently been progressing well (not sure, however, whether Prophesee’s financial difficulties will now delay the 3-year EU-funded project, which started in November 2022), as can be seen on their website (https://www.nimbleai.eu/technology/)…

6CEBFC46-AC74-4B2B-A395-B12847EC146D.jpeg


…as well as in this October 7, 2024 video:





As for the second prototype slated to utilise our technology, the Nimble AI researchers are hoping that BrainChip will ideally be offering the AKD1500 on an M.2 form factor - just like Hailo does and just like BrainChip does now (as of today) for the AKD1000.
I believe that’s what Xabier Iturbe was trying to tell you:

25400C8A-3B32-4B60-AFEC-C5676360DC1E.jpeg


Regards,
Frangipani
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Frangipani

Regular
Not sure if this was posted here today, but did anyone see what Nimble AI are up to with our 1500 and the Hailo-8, courtesy of @Rayz on the other site?

Full credit to Rayz, who is a great poster over there for finding info, like many others over here. If you still frequent over there, it's worth giving him a like and a follow (y)



View attachment 74968

Perceiving a 3D world from a 3D silicon architecture: 100x energy-efficiency improvement, 50x latency reduction, with an energy budget of roughly tens of mW.

Expected outcomes
  • World’s first light-field dynamic vision sensor and SDK for monocular-image-based depth perception.
  • Silicon-proven implementations for use in next-generation commercial neuromorphic chips.
  • EDA tools to advance 3D silicon integration and exceed the pace of Moore’s Law.
  • World’s first event-driven full perception stack that runs industry-standard convolutional neural networks.
  • Prototypic platform and programming tools to test new AI and computer vision algorithms.
  • Applications that showcase the competitive advantage of NimbleAI technology.

World’s first Light-field Dynamic Vision Sensor Prototype

In NimbleAI, we are designing a 3D integrated sensing-processing neuromorphic chip that mimics the efficient way our eyes and brains capture and process visual information. NimbleAI also advances towards new vision modalities not present in humans, such as insect-inspired light-field vision, for instantaneous 3D perception.

The top layer in the architecture senses light and delivers meaningful visual information to processing and inference engines in the interior layers to achieve efficient end-to-end perception. NimbleAI adopts the biological data economy principle systematically across the chip layers, starting in the light-electrical sensing interface. Key features of our chip are:
  • Sense light and depth: ONLY changing light is sensed, inspired by the retina. Depth perception is inspired by the insect compound eye.
  • Ignore or recognise: our chip ONLY processes feature-rich and/or critical sensor regions.
  • Process efficiently: ONLY significant neuron state changes are propagated and processed by other neurons.
  • Adaptive visual pathways: sensing and processing are adjusted at runtime to operate jointly at the optimal temporal and data resolution.
  • 3D integrated silicon: sensing, memory, and processing components are physically fused in a 3D silicon volume to boost the communication bandwidth.

How it works
Sensing:
Sensor pixels generate visual events ONLY if/when significant light changes are detected. Pixels can be dynamically grouped and ungrouped to allocate different resolution levels across sensor regions. This mimics the foveation mechanism in eyes, which allows foveated regions to be seen in greater detail than peripheral regions.
The NimbleAI sensing layer enables depth perception in the sub-ms range by capturing directional information of incoming light by means of light-field micro-lenses by Raytrix. This is the world’s first light-field DVS sensor, which estimates the origin of light rays by triangulating disparities from neighbouring views formed by the micro-lenses. 3D visual scenes are thus encoded in the form of sparse visual event flows.
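Triangulating disparities between neighbouring micro-lens views follows the standard stereo relation depth = focal_length x baseline / disparity. A minimal sketch with made-up numbers (illustrative only; not Raytrix's actual calibration or SDK):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard triangulation: nearer objects shift more between
    neighbouring views, so depth falls as disparity grows."""
    if disparity_px <= 0:
        return float("inf")  # no shift: effectively at infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 500 px focal length, 1 mm lenslet baseline,
# a 2-pixel disparity between neighbouring micro-lens views.
print(depth_from_disparity(500, 0.001, 2.0))  # 0.25 m
```

The same relation explains why tiny lenslet baselines limit depth range: disparity shrinks with distance, so far objects become indistinguishable from infinity.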
Early Perception:
Our always-on early perception engine continuously analyzes the sensed visual events in a spatio-temporal mode to extract the optical flow and identify and select ONLY salient regions of interest (ROIs) for further processing in high resolution (foveated regions). This engine is powered by Spiking Neural Networks (SNNs), which process incoming visual events and adjust foveation settings in the DVS sensor with ultra-low latency and minimal energy consumption.
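The ROI-selection step can be sketched as picking the sensor tiles with the highest event activity. A toy NumPy stand-in for the SNN-based saliency engine (the tile grid and counts are hypothetical):

```python
import numpy as np

def select_rois(event_counts, k=2):
    """Pick the k tiles with the most events as salient ROIs --
    a toy stand-in for the SNN-based saliency engine."""
    flat = np.argsort(event_counts, axis=None)[::-1][:k]
    return [tuple(int(v) for v in np.unravel_index(i, event_counts.shape))
            for i in flat]

# 3x3 grid of per-tile event counts for one time window
counts = np.array([[0, 5, 1],
                   [2, 0, 9],
                   [0, 1, 0]])
print(select_rois(counts))  # the two busiest tiles: [(1, 2), (0, 1)]
```

Only the selected tiles would then be foveated and forwarded for high-resolution inference; the mostly quiet tiles are never processed.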
Processing:
Format and properties of visual event flows from salient regions are adapted in the processing engine to match data structures of user AI models (e.g., Convolutional Neural Networks - CNNs) and to best exploit optimization mechanisms implemented in the inference engine (e.g., sparsity). Processing kernels are tailored to each salient region properties, including size, shape and movement patterns of objects in those regions. The processing engine uses in-memory computing blocks by CEA and a Menta eFPGA fabric, both tightly coupled to a Codasip RISC-V CPU.
Inference with user AI models:
We are exploring the use of event-driven dataflow architectures that exploit sparsity properties of incoming visual data. For practical use in real-world applications, size-limited CNNs can be run on-chip using the NimbleAI processing engine above, while industry standard AI models can be run in mainstream commercial architectures, including GPUs and NPUs.
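Exploiting activation sparsity means skipping the multiply-accumulates for zero inputs altogether. A minimal NumPy illustration of that saving (a sketch of the general technique, not the NimbleAI dataflow itself):

```python
import numpy as np

def sparse_matvec(weights, activations):
    """Multiply-accumulate only over non-zero activations --
    the compute saving that event-driven dataflows exploit."""
    nz = np.nonzero(activations)[0]
    out = weights[:, nz] @ activations[nz]
    return out, len(nz)  # also report how many columns were touched

W = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([0.0, 2.0, 0.0, 1.0])  # 50% sparse input vector

y, touched = sparse_matvec(W, x)
print(y, touched)  # same result as the dense product, half the columns
```

With event-based sensing the activation vectors are sparse by construction, so the fraction of skipped work grows with how static the scene is.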

The functional prototype stack combines:
  • Light-field DVS using the Prophesee IMX 636
  • Foveated DVS testchip
  • Prototyping MPSoC (Xilinx XCZU15EG)
  • HAILO-8 / Akida 1500 (ROI inference)
  • SNN testchip (ROI selection)
  • Digital foveation settings

Harness the biological advantage in your vision pipelines
NimbleAI will deliver a functional prototype of the 3D integrated sensing-processing neuromorphic chip along with the corresponding programming tools and OS drivers (i.e., Linux/ROS) to enable users to run their AI models on it. The prototype will be flexible to accommodate user RTL IP in a Xilinx MPSoC and combines commercial neuromorphic and AI chips (e.g., HAILO, BrainChip, Prophesee) and NimbleAI 2D testchips (e.g., foveated DVS sensor and SNN engine).
Raytrix is advancing its light-field SDK to support event-based inputs, making it easy for researchers and early adopters to seamlessly integrate NimbleAI’s groundbreaking vision modality – 3D perception DVS – and evolve this technology with their projects, prior to deployment on the NimbleAI functional prototype. The NimbleAI light-field SDK by Raytrix will be compatible with Prophesee’s Metavision DVS SDK.
[Diagram: Sensing → Early perception (SNN models) → Processing (user / NimbleAI RTL IP) → Inference (user CNN models), delivered as PCIe / M.2 modules]
Reach out to test combined use of your vision pipelines and NimbleAI technology.

Use cases
  • Hand-held medical imaging device by ULMA
  • Smart monitors with 3D perception for highly automated and autonomous cars by AVL
  • Human attention for worm-inspired neural networks by TU Wien
  • Eye-tracking sensors for smart glasses by Viewpointsystem

Follow our journey! @NimbleAI_EU | nimbleai.eu
NimbleAI coordinator: Xabier Iturbe (xiturbe@ikerlan.es)

The prototype will be flexible to accommodate user RTL IP in a Xilinx MPSoC and combines commercial neuromorphic and AI chips (e.g., HAILO, BrainChip, Prophesee) and NimbleAI 2D testchips (e.g., foveated DVS sensor and SNN engine). Raytrix is advancing its light-field SDK to support event-based inputs, making it easy for researchers and early adopters to seamlessly integrate NimbleAI’s groundbreaking vision modality – 3D perception DVS – and evolve this technology with their projects, prior to deployment on the NimbleAI functional prototype. The NimbleAI light-field SDK by Raytrix will be compatible with Prophesee’s Metavision DVS SDK.


View attachment 74969

Wait a minute, @Fullmoonfever! 🤣
Don’t you remember? 👇🏻

Or was it..... :unsure:

I gotta protect my billable (wish) DD IP hours... I'll happily take any effective SP rise as payment though :ROFLMAO::LOL::ROFLMAO:

Thankfully through our collective DD efforts info is generally found on this site first most of the time.

It shouldn’t come as a surprise to you, then, that this also holds true for info on Nimble AI and our connection to them, which both @AI_Inquirer and I had posted about several times in the past…
We’ve actually known about the Nimble AI researchers’ intention to use AKD1500 for almost a year here on TSE! 🥳

Happy to receive some free shares in lieu of credit, though, in case you don’t have the heart to ask Rayz to return some of the “full credit” you so generously gave him… 🤣
 
  • Like
Reactions: 3 users

manny100

Regular
Thanks Frangipani, looks like there are already some sales for the M.2.
Your reference to Prophesee and its financial woes in connection with Nimble certainly makes our LDA financing decision look very smart.
Could this be the product that really starts to move? Cheap, and it allows others to do their own thing - and that is its beauty.
 
  • Like
  • Fire
  • Wow
Reactions: 8 users
Top Bottom