BRN Discussion Ongoing

Diogenese

Top 20
Hi Dodgy Knees, forgive me if I sound like a complete eejit by asking this question. But is it necessary for a company to have IP describing SNNs in order to incorporate SNNs into their products? I mean, can't they just sprinkle them in there and Bob's your uncle?
Quite rite!

It's just fairy dust like on the wings of a butterfly.
 
  • Haha
  • Like
Reactions: 13 users

Interesting link here also about GM and a tie-up with GlobalFoundries, one of our new partners.

GM securing chip supply through GF.

Within this article there is an embedded link to a similar article on Ford doing the same thing!

Could be coincidence but who knows!
:ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO: too funny, but seriously there are some serious ducks being lined up here around our BRN tech
 
  • Like
  • Haha
  • Fire
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Quite rite!

It's just fairy dust like on the wings of a butterfly.

Speaking of fairy dust, I'm hoping that we can get an exhaust fan to blow a shed load of it on this #49,748.
 
  • Like
  • Fire
  • Love
Reactions: 5 users

Tothemoon24

Top 20




Avnet Unveils the RASynBoard Edge AI Board, with Renesas MCU, Syntiant NDP, and TDK Sensors​

Featuring a microcontroller, edge AI accelerator, Wi-Fi and Bluetooth connectivity, plus on-board sensors, this little board packs a punch.​




Avnet has announced pre-orders for the RASynBoard, a development kit which combines a Renesas microcontroller and a Syntiant Neural Decision Processor (NDP) with on-board TDK sensors to provide a platform for ultra-low-power edge AI projects — with immediate support in Edge Impulse.
“Avnet’s strong relationship with industry leading suppliers such as Renesas, TDK, and our new supplier Syntiant allows us to combine the unique offerings and capabilities of each partner into a complete system-level, production-ready solution," explains Avnet's Jim Beneke of the new board. "Our design customization and manufacturing capabilities also allow us to offer production-ready solutions for customers needing a full-turnkey option."
Avnet's latest development board aims squarely at energy-efficient on-device machine learning at the edge: the RASynBoard. (📷: Avnet)
The RASynBoard Core Board is a compact module with USB Type-C connectivity which includes a Renesas RA6M4 microcontroller with a single Arm Cortex-M33 core running at 200MHz, 256kB of static RAM, and 1MB of flash memory, plus a Syntiant NDP120 Neural Decision Processor (NDP) with Core 2 deep neural network hardware, an Arm Cortex-M0 core, and a Cadence HiFi 3 Digital Signal Processor (DSP). Elsewhere on the board are 2MB of SPI flash, a Renesas DA16600 802.11b/g/n Wi-Fi and Bluetooth 5.1 radio module, a lithium-polymer battery management circuit, and a six-axis inertial measurement unit (IMU) and digital microphone from TDK.
With all of the above in a compact 25×30mm (around 0.98×1.18") footprint, the RASynBoard aims to be a tiny titan for on-device machine learning work — loading models from the SPI flash storage for execution on the NDP120. "Our hardware technology brings advanced multimodal neural processing to Avnet's new RASynBoard with very low power consumption," explains Syntiant's Mallik Moturi. "The NDP120's ability to provide highly accurate, always-on sensor processing with relatively no impact to battery life was a key edge AI requirement in the development of the module, which also makes the device ideal for supporting a wide range of field applications in smart buildings, factories and cities."
For those growing beyond the RASynBoard Core Board's built-in capabilities, two 28-pin board-to-board connectors provide expansion — with an optional IO Board making use of these to offer an on-board debugger and USB-serial interface, a MikroE Click shuttle box header, a Pmod Type-6A socket, a 14-pin microcontroller expansion header, microSD storage, a user-definable button and RGB LED.
A bundled IO Board provides expansion including a Pmod connector, general-purpose input/output (GPIO) pins, and microSD storage. (📷: Avnet)
On the software front, Avnet has announced a partnership with Edge Impulse to provide support for the RASynBoard within the company's popular Studio machine learning platform. "Edge Impulse is excited to collaborate with Avnet on the launch of the RASynBoard, an ideal solution for ultra-low-power machine learning applications thanks to its Renesas MCU, sub-mW Syntiant NDP120 processor, and flexible array of sensors," says Edge Impulse's Raul Vergara. "Customers can quickly develop advanced models in Edge Impulse Studio and deploy them to the board for always-on inferencing in almost any location or environment."
The RASynBoard Core Board and IO Board bundle is now up for pre-order on Avnet's site at $99, with delivery expected to take place late in the second quarter of 2023.
 
  • Like
  • Love
  • Fire
Reactions: 11 users

Deadpool

Did someone say KFC
Teksun's Machine Learning section on their website, again right up our alley, & they do Natural Language Processing amongst others.


MACHINE LEARNING


We assist you in developing and deploying personalized, data-intensive solutions based on Machine Learning Services, to help you counter business challenges.

Instilling Intelligence​


Teksun delivers you the new-age apps empowered with pattern recognition, artificial intelligence, and mathematical predictability, which collectively provide you higher scalability. Our technical developers are experts in optimally utilizing and placing machine learning in anomaly detection, algorithm design, future forecasting, data modeling, spam filtering, predictive analytics, product recommendations, etc.

Get Your First Consultation for FREE

Our Offerings

The offerings that we present here are just a gist of options and alternatives that we have for you inside the box. Catch sight of these to know the scope of our services:

  • Deep Learning
  • Predictive Analytics
  • Image Analytics
  • Video Analytics
  • Natural Language Processing



We also provide Neural Network Development and Machine Learning Solutions. Looking for a better start for your project? Partner with our expert consultants to draft out the premier ways of undertaking it.

Get Started

It’s an apt time to take-off with us!


What makes us unique

What makes us unique is our ability to serve you in a ceaseless manner, with real-time updates at every project phase.

1. We provide Machine Learning Consulting, assisting you all the way from project initiation to deployment.

2. We furnish you with Supervised/Unsupervised ML services on both structured and unstructured data.

3. Our experts apply different algorithms and models to deliver the required service, such as NLP, Decision Trees, etc.

4. The tools and technologies we use are the best in the market, including MongoDB, Cassandra, and so on.

5. Our constantly updated and wide range of AI models impart high performance & scalability to your business.

6. Our experts take a personalized approach while delivering you the finest of Machine Learning Services.




Take a Look at

QA & Project Execution


Hire Developer​

Develop with the industry masters!
It's the selection of technologies that brings out a solution's full potential. Our top developers ensure your Machine Learning solutions use the finest tools for the project and budget needs.


Industry we serve

We offer a broad gamut of services, along with a versatile approach. Hence we are also able to serve a wide range of industries, whether it be Forensic, Financial, Healthcare, Defence, or any other.
  • Consumer Electronics
  • Wearable Devices
  • Industrial Automation / Industry 4.0
  • Biotech Solutions
  • Home Automation
  • Agritech Solutions
  • Security & Surveillance
  • Health Care System Design
  • Drones & Autonomy
  • Automotive

Every project needs a different kind of attention and service. Our highly experienced consultants and technicians arrange tailor-made plans and strategies to manage your varied projects.

Kick-Off Project

Surge on your success journey!
Golly TechGirl, this partnership is looking to be sensational for us.

 
  • Haha
  • Like
  • Love
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
The forecast 12% market share would equate to AUD $229.2B MC / 1.8B SOI = $127.33 SP AUD.

That would mean BRN SP rises from 51c in March 2023 to $127.33 = x249.7 by March 2030.

PLS was 1c SP low in 2013 to $5.66 peak in November 2022 = x566 within 10 years.

BRN was 3.5c SP low in 2020 to $127.33 in 2030 = x3,638 within 10 years.

Appears impossible, however, BRN has breakthrough tech whereas there are lithium mines everywhere.

Anything under $100M MC BRN was like investing in pre-IPO.

I will be very happy with AUD $50B MC by 2030 or about $27.78 SP.

It will require 2.6% market share. Anything above will be a big bonus.

Yeah, but you gotta admit you wouldn't be too disappointed if it actually hit 12% as predicted. 😝
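For anyone who wants to sanity-check the quoted figures, the arithmetic holds up (a quick sketch; the 1.8B shares-on-issue figure is taken from the post above, not independently verified):

```python
# Sanity-check of the quoted market-cap-to-share-price figures (AUD).
def implied_share_price(market_cap_b, shares_on_issue_b):
    """Share price implied by a market cap; both inputs in billions."""
    return market_cap_b / shares_on_issue_b

SOI = 1.8  # billion shares on issue, as stated in the post

print(round(implied_share_price(229.2, SOI), 2))          # 127.33 (12% market share)
print(round(implied_share_price(50.0, SOI), 2))           # 27.78 (~2.6% market share)
print(round(implied_share_price(229.2, SOI) / 0.51, 1))   # ~249.7x from the 51c price
```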
 
  • Like
  • Fire
Reactions: 21 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
USPTO shows the inventor is Heath Robinson:

The way that works on a single processor level is that you have a core and every one of them has a megabyte of SRAM. Packets arrive into buffers in this SRAM, which triggers software to fetch them and run a hardware unpacketization engine – this removes all the packet framing, interprets what it means, and decompresses the packet, so the data stays compressed at all times except when it's being computed on.

“It essentially recreates that little tensor that made the packet. We run a bunch of computations on those tensors and eventually we're ready to send them onward. What happens then is they get repacketized, recompressed, deposited into SRAM, and then from there our network functionality picks them up and forwards them to all the other cores that they need to go to under the direction of the compiler.”
OK. I see how it all works now.🥴😝


 
  • Haha
  • Love
  • Thinking
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
OK. I see how it all works now.🥴😝


View attachment 31572

Meaning I might need to brush up a bit on my technological prowess, as it seems to have absconded without a trace. This is coming from someone who valiantly tried to read, if not understand, Peter Van Der Made's book "Higher Intelligence", but in doing so managed to forget nearly every sentence the second after reading it. Maybe Peter could write his next book using me as a test case, and this time he could call the book "Lower Intelligence".
 
Last edited:
  • Haha
  • Love
  • Like
Reactions: 18 users

Townyj

Ermahgerd
Some big buys going through now - Insto Analysts said BRN is a BUY! Only blue sky from here. DYOR

Over 300,000 shares wanted in this one order.

1:19:59 PM 0.560 366,481 $205,229.36

@Rocket577 was that you finally..?? Sheesh settle it down.
 
  • Haha
  • Like
  • Fire
Reactions: 13 users
Yes @MadMayHam, and I thought it was very interesting that Si-Five specify that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors.




View attachment 31560


View attachment 31564
Yes the logic of this did not escape me either:

“Now tell me Mr. Five might I call you Si, how can you say you have a preference to tightly integrate AKIDA S and/or P with X280 when AKIDA E is cheaper and more power efficient?

Yes sir you may call me Si. That is a really good question Freddie.
Well the answer is very simple we don’t just make these statements on a whim.
We have been officially partnered with Brainchip for some time but prior to that our engineers did extensive testing of Brainchip’s AKIDA technology IP family to firstly determine if X280 and AKIDA were compatible and had something in combination to offer to our customers.
Our Engineers having answered yes we then asked our marketing people what was it that our customers were seeking and which was lacking from our present Product line, not just the Intelligence Series, and they came back with quite a long list actually.
We then went back to our engineers with this list and they applied themselves to the task and after extensive further testing and interaction with Brainchip advised that AKIDA S and AKIDA P were the perfect fit for X280.

Thanks Si for that it sounds like you really do your technology due diligence before jumping into bed with your technology partners.”

or something like this but maybe Si and his mates are from the shoot from the hip school of management and make technology adoption decisions based on a magazine article and a coin toss.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 62 users

Steve10

Regular
Jobs ads for Sales Executives in USA & Europe for Edge Impulse targeting health monitoring market.

Proven experience with identifying and onboarding enterprise customers in the health monitoring arena.


 
  • Like
  • Fire
  • Thinking
Reactions: 19 users
D

Deleted member 118

Guest
  • Fire
  • Like
Reactions: 12 users

White Horse

Regular
https://www.linkedin.com/posts/leddartech_perception-adas-autonomousdriving-ugcPost-7038509033632198656-JMqm?utm_source=share&utm_medium=member_desktop



Rob Telson likes this
LeddarTech- Automotive Software: Low-Level Sensor Fusion & Perception
13,584 followers
1d • Edited •



😀 👍Are you looking to enhance your knowledge of ADAS and Perception systems? If so, then understanding the concepts of Ego-Lane and CIPV is crucial.

Identifying these vehicles, especially when distant, is critical for the implementation of safety ADAS features such as automatic emergency braking and forward collision warning.

To gain a better insight into the concepts of Ego-Lane and CIPV, check out our informative video that explains how these concepts look from a perception system's point-of-view. It is an excellent opportunity to learn more about these crucial aspects of ADAS and Autonomous Driving.

If you're interested in exploring more about the world of Perception and ADAS, check out this interactive tool - https://hubs.li/Q01F72js0

#perception #ADAS #autonomousdriving





 
  • Like
  • Fire
  • Thinking
Reactions: 17 users

Steve10

Regular
I think this product will benefit from Akida.

Avoid obstacles.
GPS navigation.
All-in-one.​

biped is the future of mobility for blind and visually impaired people. A smart harness, worn on shoulders, that uses self-driving technology from Honda Research Institute to guide people with short sounds. Ideal complement to a white cane or to a guide dog.

270 million visually impaired people worldwide walk with a risk of hitting obstacles, such as electric scooters, bicycles, or tree branches.

White canes alone cannot detect all obstacles. But replacing the cane is not the way to go. What if you could walk with your mind free, knowing where potential obstacles are? And get GPS instructions too?

‍It uses wide-angle cameras and AI to generate short sounds to warn you about the position of important obstacles, such as branches, holes, vehicles or pedestrians. It also provides GPS instructions (coming soon). Sounds are played in Bluetooth headphones.

biped is a harness worn on the shoulders, equipped with ultra-wide angle cameras on the left of your chest, a battery behind your neck, and a small computer on the right of your chest. It works just like a self-driving car, for pedestrians. We partnered with Honda Research Institute to bring the best of car research, to biped.

 
  • Like
  • Love
  • Fire
Reactions: 21 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 45 users

MDhere

Top 20
Is the AGM on the 23rd May being held at the boardroom again like last year, or where, before I book my flights and accom?
 
  • Like
Reactions: 1 users

buena suerte :-)

BOB Bank of Brainchip
Certainly looking a lot healthier 🙏

719 buyers for 12,393,513 units

377 sellers for 5,624,129 units
 
  • Like
  • Fire
  • Love
Reactions: 26 users

White Horse

Regular
Just in case some missed the second article from Forbes yesterday, here it is.

BrainChip Readies 2nd Gen Platform For Power-Efficient Edge AI​


Karl Freund
Contributor
Founder and Principal Analyst, Cambrian-AI Research LLC

Mar 6, 2023

The company’s event-based digital Neuromorphic IP can add efficient AI processing to SoCs.

Edge AI is becoming a thing. Instead of using just an embedded microprocessor in edge applications and sending the data to a cloud for AI processing, many edge companies are considering adding AI at the edge itself, and then communicating conclusions about what the edge processor is “seeing” instead of sending the raw sensory data such as an image. To date, this dynamic has been held back by the cost and power requirements of initial implementations. What customers are looking for is proven AI tech that can run under a watt, and that they can add to a microcontroller for on-board processing.

Many startups have entered the field, looking to compete in an area of AI which does not have an 800-pound incumbent to displace (a.k.a., NVIDIA). Many startups have some sort of in-memory or near-memory architecture to reduce data movement, coupled with digital multiply-accumulate (MAC) logic. BrainChip is taking a different approach, applying event-based digital neuromorphic logic with SRAM that the company says is power-efficient, flexible, scalable, and enables on-chip learning. Let’s take a closer look.

The Akida Platform​


Brainchip has a lengthy list of enhancements it has engineered to the second-generation Akida platform. The motivation of these additions has been to enable processing on the modalities customers are increasingly demanding: real-time video, audio, and time-series data such as speech recognition, human action recognition, text translation, and video object detection. For some of these apps, the addition of an optional Vision Transformer (ViT) engine, along with an enhanced neural mesh can deliver up to 50 TOPS (Trillions of Operations Per Second) according to the company.

The company is selling its IP into designs for sensors and SoCs looking to add AI to the edge. While uptake has been slow for BrainChip’s first product, the AKD1000, there have been some high-profile demonstrations of its use by companies like Mercedes in the EQXX concept vehicle and by NASA on a project for autonomy and cognition in small satellites, as well as adoption of its development kits and boards by a number of companies for prototyping purposes.

Now, with the second generation, BrainChip has added support for 8-bit weights and activations, the ViT mentioned above, and hardware support for innovative Temporal Event-Based Neural Nets (TENNs). Akida maintains its ability to process multiple layers at a time, managed by its smart DMA which handles model and data load and store autonomously. This can enable low-power sensors attached to an Akida node without the need for a CPU. The diagram below shows how Akida sensors can be coupled with an SoC for multi-sensor inference processing.




The Akida IP can be used to create sensors and be deployed within more advanced SoCs.
BrainChip


The new Akida platform, expected to be available later this year, is designed to process a variety of popular networks, including CNNs, DNNs, Vision Transformers, and SNNs. The event-based design is particularly good at time series data for problems such as audio processing, video object detection, and vital sign monitoring and prediction.

BrainChip has shown some initial benchmarks that demonstrate orders of magnitude fewer operations and smaller model sizes, which can benefit edge AI implementations. In video object detection, a 16nm implementation can handle 30FPS at 1382x512 resolution in under 75mW. Keyword detection in 28nm can support over 125 inferences/sec, taking less than 2 microjoules per inference. BrainChip has applied for patents in TENN model acceleration.
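Taking the keyword-detection benchmark at face value, the implied power budget is easy to work out (a back-of-envelope sketch; the coin-cell capacity is an assumed typical value, and all other system power is ignored):

```python
# Back-of-envelope from the quoted benchmark: 125 inferences/sec at 2 uJ each.
INFERENCES_PER_SEC = 125
ENERGY_PER_INFERENCE_J = 2e-6  # 2 microjoules

power_w = INFERENCES_PER_SEC * ENERGY_PER_INFERENCE_J
print(f"Sustained power: {power_w * 1e3:.2f} mW")  # 0.25 mW

# Assumed CR2032-class coin cell: 225 mAh at 3.0 V (an assumption, not from the article).
battery_j = 0.225 * 3.0 * 3600  # amp-hours * volts * seconds/hour = joules
days = battery_j / power_w / 86400
print(f"Continuous keyword detection on one cell: ~{days:.0f} days")
```

Even with generous margins for the rest of the system, a quarter of a milliwatt is comfortably inside "always-on" territory, which is the point the article is making.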


The Akida processor and software can implement multi-pass processing as well as on-chip learning.
BrainChip
Akida’s runtime software manages the operation efficiently, including key features like its multi-pass processing, which are handled transparently to the user. Model development and tuning are supported on the TensorFlow framework with MetaTF.

The Akida Architecture and associated Benefits.
BrainChip
BrainChip envisions three ranges of adoption: a basic MCU with 1-4 nodes for always-on CPU-less operation, an MCU-plus-deep-learning-accelerator class, and a high-end MCU with up to 64 nodes and an optional ViT processor. In all cases, on-chip learning is possible by leveraging the trained model as a feature extractor and adding new classes in the final layer while untethered from cloud training resources.


Akida can provide power-efficient AI alternatives across a broad range of solutions that currently implement MCUs and GPUs.
BrainChip

Conclusions​


While BrainChip has been vocal in the past about the partnerships it has forged, such as with MegaChips and Renesas, commercial growth has been slower, possibly a function of the IP model taking longer to ramp. With the inclusion of 8-bit operations, the Vision Transformer, temporal convolutions (TENN models), and the ability to run models like ResNet-50 completely on the Akida Neural processor with minimal CPU dependencies, we believe the company is laying the foundation to turn the corner and land some bigger design wins. A key factor may be the software, which is currently TensorFlow based but will soon support PyTorch as well, an essential addition given the current development landscape.
 
  • Like
  • Fire
  • Love
Reactions: 54 users
Jobs ads for Sales Executives in USA & Europe for Edge Impulse targeting health monitoring market.

Proven experience with identifying and onboarding enterprise customers in the health monitoring arena.


So cool that our partners feel the need to increase their workforce to target or maximise potential from our new enhanced Akida range.

Bit of 'the chicken and the egg' scenario here:



Was Edge Impulse involved in the feedback loop, with their customers already lined up for the change?
Or did this second-generation enhancement trigger the mass market, requiring the new sales FTEs?

Either way, one of OUR ecosystem partners is preparing to manage the increase in demand, which will help Sean execute his last statement from the text above:
"we are focused on executing more IP licenced agreements and generating revenue growth over the coming years."

Our team has support to assist our ubiquitous ambitions.

I am a patient person but I'd be so pleased to see ONE license land before May 23rd.
Will this last BRN announcement be the final key to unlocking the licensing charge? Was it the final proof that Akida can evolve and extend its reach into the future?
 
  • Like
  • Fire
  • Love
Reactions: 38 users

White Horse

Regular
From the fishmonger and baker of fine bread.

BrainChip Introduces Second-Generation Akida Platform





Peter Van Der Made

Chief Technology Officer at BrainChip



March 8, 2023

Introduces Vision Transformers and Spatial-Temporal Convolution for radically fast, hyper-efficient, and secure Edge AIoT products, untethered from the cloud

Laguna Hills, Calif. – March 6, 2023 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, neuromorphic AI IP, today announced the second generation of its Akida™ platform that drives extremely efficient and intelligent edge devices for the Artificial Intelligence of Things (AIoT) solutions and services market that is expected to be $1T+ by 2030. This hyper-efficient yet powerful neural processing system, architected for embedded Edge AI applications, now adds efficient 8-bit processing to go with advanced capabilities such as time domain convolutions and vision transformer acceleration, for an unprecedented level of performance in sub-watt devices, taking them from perception towards cognition.
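For readers wondering what "efficient 8-bit processing" buys, here is a generic symmetric weight-quantization sketch. This illustrates the general idea of 8-bit weights only; it is not BrainChip's actual quantization scheme:

```python
def quantize_int8(weights):
    """Map float weights onto signed 8-bit integers with one shared scale.

    Generic symmetric per-tensor quantization, shown purely to illustrate
    8-bit weights; Akida's real implementation is not public here.
    """
    scale = max(abs(w) for w in weights) / 127.0   # largest weight maps to 127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; error is bounded by the step size."""
    return [v * scale for v in q]

q, s = quantize_int8([-1.0, 0.5, 1.0])
print(q)  # [-127, 64, 127]
```

Each weight then costs one byte instead of four, which is where the memory and bandwidth savings at the edge come from.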

The second generation of Akida now includes Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions that supercharge the processing of raw time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of MRI and CT scans for vital signs prediction, and time series analytics used in forecasting and predictive maintenance. These capabilities are critically needed in industrial, automotive, digital health, smart home and smart city applications. The TENNs allow for radically simpler implementations by consuming raw data directly from sensors, drastically reducing model size and the operations performed while maintaining very high accuracy. This can shrink design cycles and dramatically lower the cost of development.

Another addition to the second generation of Akida is Vision Transformers (ViT) acceleration, a leading edge neural network that has been shown to perform extremely well on various computer vision tasks, such as image classification, object detection, and semantic segmentation. This powerful acceleration, combined with Akida’s ability to process multiple layers simultaneously and hardware support for skip connections, allows it to self-manage the execution of complex networks like RESNET-50 completely in the neural processor without CPU intervention and minimizes system load.

The Akida IP platform has a unique ability to learn on the device for continuous improvement and data-less customization that improves security and privacy. This, combined with the efficiency and performance available, enables very differentiated solutions that until now have not been possible. These include secure, small-form-factor devices like hearables and wearables that take raw audio input, and medical devices for monitoring heart and respiratory rates and other vitals that consume only microwatts of power. This can scale up to HD-resolution vision solutions delivered through high-value, battery-operated or fanless devices, enabling a wide variety of applications, from surveillance systems to factory management and augmented reality, to scale effectively.

“We see an increasing demand for real-time, on-device, intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”

“Advancements in AI require parallel advancements in on-device learning capabilities while simultaneously overcoming the challenges of efficiency, scalability, and latency,” said Richard Wawrzyniak, principal analyst at Semico Research. “BrainChip has demonstrated the ability to create a truly intelligent edge with Akida and moves the needle even more in terms of how Edge AI solutions are developed and deployed. The benefits of on-chip AI from a performance and cost perspective are hard to deny.”

“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”

Akida’s software and tooling further simplifies the development and deployment of solutions and services with these features:


  • An efficient runtime engine that autonomously manages model acceleration, completely transparently to the developer
  • MetaTF™ software that developers can use with their preferred framework, like TensorFlow/Keras, or development platform, like Edge Impulse, to easily develop, tune, and deploy AI solutions.
  • Supports all types of Convolutional Neural Networks (CNN), Deep Learning Networks (DNN), Vision Transformer Networks (ViT) as well as Spiking Neural Networks (SNNs), future-proofing designs as the models get more advanced.



Akida comes with a Models Zoo and a burgeoning ecosystem of software, tools, and model vendors, as well as IP, SoC, foundry and system integrator partners. BrainChip is engaged with early adopters on the second generation IP platform. General availability will follow in Q3 2023.

See what they’re saying:​


“At Prophesee, we are driven by the pursuit of groundbreaking innovation addressing event-based vision solutions. Combining our highly efficient neuromorphic-enabled Metavision sensing approach with Brainchip’s Akida neuromorphic processor holds great potential for developers of high-performance, low-power Edge AI applications. We value our partnership with BrainChip and look forward to getting started with their 2nd generation Akida platform, supporting vision transformers and TENNs,” said Luca Verre, Co-Founder and CEO at Prophesee.

Luca Verre, Co-Founder and CEO, Prophesee

“BrainChip and its unique digital neuromorphic IP have been part of IFS’ Accelerator IP Alliance ecosystem since 2022,” said Suk Lee, Vice President of Design Ecosystem Development at IFS. “We are keen to see how the capabilities in Akida’s latest generation offerings enable more compelling AI use cases at the edge.”

Suk Lee, VP Design Ecosystem Development, Intel Foundry Services

“Edge Impulse is thrilled to collaborate with BrainChip and harness their groundbreaking neuromorphic technology. Akida’s 2nd generation platform adds TENNs and Vision Transformers to a strong neuromorphic foundation. That’s going to accelerate the demand for intelligent solutions. Our growing partnership is a testament to the immense potential of combining Edge Impulse’s advanced machine learning capabilities with BrainChip’s innovative approach to computing. Together, we’re forging a path toward a more intelligent and efficient future,” said Zach Shelby, Co-Founder and CEO at Edge Impulse.

Zach Shelby, Co-Founder and CEO, Edge Impulse

“BrainChip has some exciting upcoming news and developments underway,” said Daniel Mandell, Director at VDC Research. “Their 2nd generation Akida platform provides direct support for the intelligence chip market, which is exploding. IoT market opportunities are driving rapid change in our global technology ecosystem, and BrainChip will help us get there.”

Daniel Mandell, Director, VDC Research

“Integration of AI Accelerators, such as BrainChip’s Akida technology, has application for high-performance RF, including spectrum monitoring, low-latency links, distributed networking, AESA radar, and 5G base stations,” said John Shanton, CEO of Ipsolon Research, a leader in small form factor, low power SDR technology.

John Shanton, CEO, Ipsolon Research

“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E, efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”

Phil Dworsky, Global Head of Strategic Alliances, SiFive

“Ai Labs is excited about the introduction of BrainChip’s 2nd generation Akida neuromorphic IP, which will support vision transformers and TENNs. This will enable high-end vision and multi-sensory capability devices to scale rapidly. Together, Ai Labs and BrainChip will support our customers’ needs to address complex problems,” said Bhasker Rao, Founder of Ai Labs. “This will improve development and deployment for industries such as manufacturing, oil and gas, power generation, and water treatment, preventing costly failures and reducing machine downtime.”

Bhasker Rao, Founder, Ai Labs

“We see an increasing demand for real-time, on-device, intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”

Roger Wendelken, Senior Vice President IoT and Infrastructure Business Unit, Renesas

“We see a growing number of predictive industrial (including HVAC, motor control), automotive (including fleet maintenance), building automation, remote digital health equipment and other AIoT applications using complex models with minimal impact to product BOM and needing faster real-time performance at the Edge,” said Nalin Balan, Head of Business Development at Reality.ai, a Renesas company. “BrainChip’s ability to efficiently handle streaming high frequency signal data, vision, and other advanced models at the edge can radically improve scale and timely delivery of intelligent services.”

Nalin Balan, Head of Business Development, Reality.ai, a Renesas Company

“Advancements in AI require parallel advancements in on-device learning capabilities while simultaneously overcoming the challenges of efficiency, scalability, and latency,” said Richard Wawrzyniak, Principal Analyst at Semico Research. “BrainChip has demonstrated the ability to create a truly intelligent edge with Akida and moves the needle even more in terms of how Edge AI solutions are developed and deployed. The benefits of on-chip AI from a performance and cost perspective are hard to deny.”

Richard Wawrzyniak, Principal Analyst, Semico Research

“BrainChip’s cutting-edge neuromorphic technology is paving the way for the future of artificial intelligence, and Drexel University recognizes its immense potential to revolutionize numerous industries. We have experienced that neuromorphic compute is easy to use and addresses real-world applications today. We are proud to partner with BrainChip in advancing their groundbreaking technology, including TENNs and how it handles time series data, which is the basis for addressing many complex problems, and in unlocking its full potential for the betterment of society,” said Anup Das, Associate Professor, and Nagarajan Kandasamy, Interim Department Head of Electrical and Computer Engineering, Drexel University.

Anup Das, Associate Professor, Drexel University

“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”

Sean Hehir, CEO, BrainChip



About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)

BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables edge learning local to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. Akida neural processor IP, which can be integrated into SoCs on any process technology, has shown substantial benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like TensorFlow/Keras. In enabling effective edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.

Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc

Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006
 