BRN Discussion Ongoing

7für7

Top 20
Maybe he is just a guest of BrainChip… 🤷🏻‍♂️
 
  • Like
  • Love
Reactions: 2 users

IloveLamp

Top 20
Apologies if already posted


"Akida is particularly good at conserving energy and can be used in various applications, from smart cars to industrial IoT, where it excels in tasks like incremental learning and fast decision-making."
1000014948.jpg
 
  • Like
  • Fire
  • Love
Reactions: 49 users

Leevon

Emerged
Go you good thing!
 
  • Like
  • Thinking
  • Wow
Reactions: 7 users

Tothemoon24

Top 20
IMG_8769.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 43 users

Frangipani

Regular
Maybe he is just a guest of BrainChip… 🤷🏻‍♂️

No way! You need to carefully reread Frontgrade Gaisler’s LinkedIn post (which is not about embedded world 2024, by the way, just in case you missed that).
It literally says:

“Dr. Jonathan Tapson from BrainChip visited our booth at Space Symposium to discuss neuromorphic processing for space applications. They have been developing and continuously improving their technology for extremely efficient AI inference and learning.”


F4D90E6B-D45C-42D6-B917-4B5F15DDE5D7.jpeg





Do you really believe our company would send someone who doesn’t work for them (in what capacity remains to be seen) to a Colorado Springs conference they are not involved with, in order to discuss serious business matters with a Swedish company that we know is planning to integrate our IP into their hardware very soon, and possibly also to discuss our disruptive tech with interested third parties visiting the Frontgrade Gaisler booth?!

Just because he is an expert on neuromorphic hardware, happens to have known our CTO for years and conveniently lives in the same US state in which the Space Symposium is currently taking place (I suppose living 300 miles away from the venue counts as “just around the corner” within the Universe scheme of things 😉)?

Think about it: Alf Kuchenbuch, our Munich-based VP of Sales EMEA, could hop on a short flight to Gothenburg any time to talk to Sandi Habinc or someone else from Frontgrade Gaisler in person.

Jonathan Tapson’s lanyard says “Visitor” as he was neither an attendee nor a speaker at the Space Symposium, and BrainChip isn’t an exhibitor either (whereas Frontgrade Gaisler has a booth). I scrolled through the Space Symposium’s Flickr photostream and picked out two pictures to demonstrate what I mean (the conference coincided with the eclipse on April 8).

8CD8BC0B-762E-4F4A-84E3-EBBF84BD7F01.jpeg



AA39E2D7-71F0-422E-B206-04770A3565E9.jpeg



If you ask me, the Frontgrade Gaisler LinkedIn post’s wording and accompanying photo are unambiguous. All that’s missing is some sort of confirmation by BrainChip or Jonathan Tapson himself.
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Frangipani

Regular
Meanwhile at the embedded world 2024 in Nuremberg / Nürnberg:

CFF2D0DF-512A-474F-8827-DBE01E61D37D.jpeg


Sean seems to be pursuing the famous invisible tablecloth strategy now! 😜
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 28 users

Frangipani

Regular
Just came across this awesome article about Brainchip on Medium, written by someone with the moniker NeuroCortex.AI - and a follow-up is already in the works! 👍🏻



BrainChip’s Akida: Neuromorphic Processor Bringing AI to the Edge​

NeuroCortex.AI · 8 min read · 6 hours ago

As our regular readers might recall, we covered the basics of neuromorphic computing in a blog series last year and are now pursuing further research into its implementation. One of the blockers in the real-time implementation of spiking neural networks (SNNs) is the availability of actual neuromorphic chips on which to run SNN algorithms.

Thus we started connecting with various industry professionals involved in developing neuromorphic chips. Soon enough we connected with BrainChip’s US operations team (based in California) and started talking about a potential collaboration. They were kind enough to help us out and agreed to send BrainChip Akida chips our way.

Before we start implementing SNN models on Akida, let us tell you about BrainChip the company, the Akida chipset, and why it’s useful for us.


Akida by BrainChip mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy consumption.

BrainChip is an Australian company that specializes in edge artificial intelligence (AI) on-chip processing and learning. They are the worldwide leader in edge AI on-chip processing and learning, offering solutions that bring common sense to the processing of sensor data, enabling machines to do more with less. BrainChip has a global presence, with engineering teams located in California, Toulouse (France), Hyderabad (India), and Perth (Australia).

BrainChip’s flagship product, Akida™, is a fully digital, event-based AI processor that mimics the human brain, analyzing essential sensor inputs at the point of acquisition with high efficiency, precision, and energy economy. This technology allows for edge learning local to the chip, reducing latency, improving privacy, and enhancing data security. Akida, Greek for ‘spike,’ is a neuromorphic SoC that implements a spiking neural network. In many ways, it’s similar to some of the well-known research projects that were presented over the past several years such as IBM’s TrueNorth, SpiNNaker, and Intel Loihi. With Akida, BrainChip is attempting to seize this early market opportunity with one of the first commercial products. BrainChip is targeting a wide range of markets from the sub-1W edge applications to higher power and performance applications in the data center.

0*wMut4RI474WjcNy-.png


Timeline (BrainChip)
Here’s a breakdown of what Akida is all about:
  • Inspired by the Brain and Benefits of a Brain-Inspired Approach: Unlike traditional processors that rely on complex clock cycles, Akida uses event-based processing, similar to how neurons fire in the brain. It utilizes neuromorphic computing, mimicking the human brain’s structure and function. This means Akida processes information more efficiently, in the way neurons fire and communicate, allowing it to focus on essential information and reduce power consumption.
  • High Performance, Low Power Consumption: BrainChip claims that Akida offers superior performance per watt compared to other solutions. Its event-based processing focuses on essential information, significantly reducing energy use, which makes it well suited to edge AI applications where power efficiency and battery life are constraints. Despite the low power consumption, Akida delivers the performance per watt needed for real-time AI tasks at the network’s edge.
  • On-Chip Learning: Akida can perform some machine learning tasks directly on the chip, reducing reliance on cloud-based training and processing. This improves privacy and reduces latency.
  • Anomaly Detection: Akida can be trained to identify unusual patterns in data, making it ideal for security and fraud detection.
  • Sensor Processing: From analyzing data from cameras and microphones to interpreting readings from industrial sensors, Akida can handle various sensor data streams.
  • Autonomous Systems: Akida’s low power consumption and real-time processing capabilities make it suitable for autonomous systems like drones and robots.
  • Supported Neural Networks: Akida is designed to accelerate various neural networks directly in hardware, including Convolutional Neural Networks (CNNs) commonly used for image recognition, Recurrent Neural Networks (RNNs) for sequence analysis, and even custom Temporal Event-based Nets (TENNs) optimized for processing complex time-series data.
  • Akida Development Environment: BrainChip offers a complete development environment called MetaTF for seamless creation, training, and testing of neural networks specifically designed for the Akida platform. This includes tools for simulating models and integrating them with Python-based machine learning frameworks for easier development (see the sketch right after this list for what such a flow might look like).
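To give a feel for what such a MetaTF flow might look like in Python, here is a minimal sketch: the Keras part is standard TensorFlow, while the `cnn2snn` quantize/convert calls and the Akida `predict` step are assumptions based on the description above (and are therefore left commented out), not a verified copy of BrainChip’s API.

```python
# Minimal sketch of a MetaTF-style flow. The Keras portion is ordinary
# TensorFlow; the cnn2snn/akida calls are assumptions based on the bullet
# above and are left commented out rather than presented as verified API.
from tensorflow import keras

# 1. Build and train a small Keras CNN the usual way.
model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, epochs=1)            # train on your own data

# 2. Quantize and convert for the Akida runtime (hypothetical MetaTF calls).
# from cnn2snn import quantize, convert
# quantized = quantize(model, weight_quantization=4, activ_quantization=4)
# akida_model = convert(quantized)

# 3. Run event-based inference on the simulator or an attached Akida device.
# predictions = akida_model.predict(test_images)

model.summary()
```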
0*BtaLK3VXnbXeo6SD.png


Akida NSoC Architecture
The Akida NSoC neuron fabric comprises cores that are organized in groups of four to create nodes, which are mesh-networked. The cores can be implemented as either convolutional layers or fully connected layers. This flexibility allows users to develop networks with ultra-low-power event-based convolution as well as incremental learning. The nodes can also be used to implement multiple networks on a single device.
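As a rough mental model of that layout (purely illustrative, not BrainChip’s actual implementation), you can picture the fabric as nodes of four individually configurable cores arranged on a mesh:

```python
# Toy mental model of the neuron fabric described above: nodes of four cores
# on a 2-D mesh, each core configured as a convolutional or fully-connected
# layer. Purely illustrative, not the real hardware layout.
from dataclasses import dataclass, field
from typing import List, Literal

CoreType = Literal["conv", "fully_connected"]

@dataclass
class Core:
    core_type: CoreType = "conv"

@dataclass
class Node:
    x: int
    y: int
    cores: List[Core] = field(default_factory=lambda: [Core() for _ in range(4)])

def build_mesh(width, height):
    """Create a width x height mesh of nodes, four cores per node."""
    return [Node(x, y) for y in range(height) for x in range(width)]

mesh = build_mesh(4, 5)                          # 20 nodes -> 80 cores
mesh[0].cores[0].core_type = "fully_connected"   # cores are individually configurable
print(len(mesh), "nodes,", sum(len(n.cores) for n in mesh), "cores")
```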

0*0DSAmuNHRrfU_XMX.png

Akida Development environment
The development environment looks similar to any machine learning framework. Users describe their SNN model, which is stored in the model zoo. The chip will come with three pre-created models (CIFAR, ImageNet and MNIST), or users can create their own architecture. A Python script specifies the data location and model type, and this is shipped off to the Akida execution engine together with the Akida neuron model and training methodology, including the required conversions (from pixels to spikes, etc.). It runs in training mode or inference mode depending on user settings. The Akida NSoC uses a pure CMOS logic process, ensuring high yields and low cost. Spiking neural networks (SNNs) are inherently lower power than traditional convolutional neural networks (CNNs), as they replace the math-intensive convolutions and back-propagation training methods with biologically inspired neuron functions and feed-forward training methodologies.

BrainChip’s claim is that the convolutional approach, which models the neuron as a large filter with weights, yields a power-hungrier chip because of the iterative linear-algebra matrix multiplications on data within an activation layer and the associated memory and MAC units. Instead of this convolutional approach, an SNN models the neuron function with synapses and neurons, with spikes passed between the neurons. The networks learn through reinforcement and inhibition of these spikes (repeated spikes are reinforcement).

The ability to change the firing threshold of the neuron itself, and its sensitivity to those spikes, is a different and more efficient way to train, albeit within complexity limitations. This means far less memory (there are 6MB per neural core) and a more efficient end result. Neurons learn through selective reinforcement or inhibition of synapses. The Akida NSoC has a neuron fabric comprising 1.2 million neurons and 10 billion synapses.
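To make the threshold-and-reinforcement idea concrete, here is a toy integrate-and-fire neuron in plain Python. It is only a conceptual illustration of the learning rule sketched above, not how Akida actually implements it:

```python
# Toy integrate-and-fire neuron: a firing threshold, leaky integration, and
# synapse weights that are reinforced when an input spike contributes to an
# output spike (and slightly inhibited otherwise). Conceptual only.
import random

class SpikingNeuron:
    def __init__(self, n_synapses, threshold=1.0):
        self.weights = [random.uniform(0.0, 0.3) for _ in range(n_synapses)]
        self.threshold = threshold       # raising/lowering this tunes sensitivity
        self.potential = 0.0

    def step(self, input_spikes):
        """Integrate incoming spikes; fire (and learn) if the threshold is crossed."""
        self.potential += sum(w for w, s in zip(self.weights, input_spikes) if s)
        if self.potential >= self.threshold:
            # Reinforce synapses that carried a spike, gently inhibit the rest.
            self.weights = [min(w + 0.05, 1.0) if s else max(w - 0.01, 0.0)
                            for w, s in zip(self.weights, input_spikes)]
            self.potential = 0.0
            return True                  # output spike
        self.potential *= 0.9            # leak toward resting potential
        return False

neuron = SpikingNeuron(n_synapses=8)
pattern = [1, 0, 1, 0, 1, 0, 1, 0]       # a repeating input pattern gets reinforced
spikes = sum(neuron.step(pattern) for _ in range(20))
print("output spikes:", spikes, "weights:", [round(w, 2) for w in neuron.weights])
```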

0*SKlwkLOp6sUFMlNa.png

Akida Neuron Fabric
The “Akida” device has an on-chip processor complex for system and data management, which is also used to tell the neuron fabric (more on that in a moment) to be in training or inference mode; this is a matter of setting the thresholds in the neuron fabric. The real key, however, is the data-to-spike converter, especially in areas like computer vision where pixel data needs to be transformed into spikes. This is not a computationally expensive problem from an efficiency perspective, but it does add some compiler and software footwork. There are audio, pixel, and fintech converters for now, each with its own dedicated place on-chip. The Akida NSoC is designed for use as a stand-alone embedded accelerator or as a co-processor. It includes sensor interfaces for traditional pixel-based imaging, dynamic vision sensors (DVS), lidar, audio, and analog signals. It also has high-speed data interfaces such as PCI-Express, USB, and Ethernet. Embedded in the NSoC are data-to-spike converters designed to optimally convert popular data formats into spikes to train and be processed by the Akida neuron fabric.
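For intuition about what a data-to-spike converter does, here is a simple rate-coding sketch that turns pixel intensities into spike trains. It is a conceptual stand-in for the idea, not BrainChip’s converter hardware:

```python
# Simple rate-coding illustration of pixel-to-spike conversion: brighter
# pixels emit more spikes over a fixed time window. A conceptual stand-in
# for the dedicated data-to-spike converters described above.
import random

def pixels_to_spike_trains(pixels, timesteps=10):
    """pixels: iterable of intensities in [0, 255] -> list of 0/1 spike trains."""
    trains = []
    for p in pixels:
        rate = p / 255.0                              # firing probability per step
        trains.append([1 if random.random() < rate else 0 for _ in range(timesteps)])
    return trains

row = [0, 64, 128, 255]                               # one row of a grayscale image
for p, train in zip(row, pixels_to_spike_trains(row)):
    print(f"pixel {p:3d} -> {train}")
```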

The PCIe links allow for data-center deployments and can scale with the multi-chip expansion port, which is a basic high speed serial interface to send spikes to different neural processing cores — expandable to 1024 devices for very large spiking neural networks. The Akida neuron fabric shown below has its own 6MB on-chip memory and the ability to interface with flash and DDR.

0*Kiag3UHpWQjIETVZ.png

The CIFAR-10 benchmark on which they rate their performance and efficiency
BrainChip’s “Akida” chip is aimed at both the datacenter and the edge, for training and inference. This includes vision systems in particular, but also financial-tech applications where users cannot tolerate intermittent connectivity or latency from the cloud.

BrainChip’s Commitment to Development
BrainChip is actively developing Akida, with the second generation offering improved capabilities for handling complex neural networks. They are also working on a comprehensive development environment called MetaTF, which simplifies the creation and deployment of neural networks specifically designed for Akida.

The Future of AI is Neuromorphic
The Akida neuromorphic processor represents a significant leap forward in AI technology. With its efficient processing, on-chip learning capabilities, and wide range of applications, Akida is poised to revolutionize the way AI is used at the edge. As BrainChip continues to develop Akida, we can expect even more exciting possibilities to emerge in the future of AI.

Conclusion​

In essence, the Akida Neuromorphic Processor is a powerful yet energy-efficient AI processor designed to bring intelligence to the edge of networks by mimicking the human brain’s processing style. Its unique features make it a promising solution for various applications requiring real-time and low-power AI capabilities. Akida is still under development, with BrainChip working on newer generations to address the growing intelligence chip market. Overall, BrainChip is a company at the forefront of neuromorphic computing, aiming to revolutionize AI processing with brain-inspired hardware.

Some good news: we actually received two Akida chips a few days back, thanks to BrainChip. We will soon publish a detailed write-up on how to install and run AI models on top of them. Stay tuned!

1*MysyAOYIR-DB18-uKRKWwQ.jpeg

The two Akida chips we received from BrainChip



I just found out who is behind this excellent article! 👆🏻
F5376BDF-478F-4B8D-B93B-3D11DF53955C.jpeg


999DC05B-E9B8-4173-90D4-D65D844BCD25.jpeg



B6457E52-3F1C-42E9-968C-86B9936C1EB5.jpeg



E9E72775-3331-47B2-974F-0BE5EF0475AC.jpeg


[Can’t open the link to the website https://www.neurodynamic.ai, though - it says the server could not be found… Maybe AGI has taken over already and blocked it from public view… 😂]




The talented blog author, Tamal Acharya, works for both Tata Consultancy Services and NeuroCortex.AI:

EC194E58-4F5D-4B87-8331-DE7FFF08CC72.jpeg

98DC8998-56FD-487A-875A-A77EBF0689B8.jpeg
88828847-41EE-45C0-94AB-5E96A439C31C.jpeg

81D0133D-6786-43ED-92E4-CDC151C4AC9D.jpeg


354B2B42-F8E8-4DE9-A33A-B484F8A85D1D.jpeg
 

Attachments

  • C3311F24-0F51-4DDB-974D-DC54DF4FE5C1.jpeg
    C3311F24-0F51-4DDB-974D-DC54DF4FE5C1.jpeg
    238.4 KB · Views: 47
Last edited:
  • Like
  • Fire
  • Love
Reactions: 43 users

IloveLamp

Top 20
Anil backin it up with a two fa....

View attachment 60572
🤔

"88% lower power per data transfer than previous generations"

1000014960.jpg
 
  • Like
  • Thinking
  • Fire
Reactions: 18 users

IloveLamp

Top 20

1000014964.jpg


Watch the speech in the video, he starts at about 1h 37min, I'm already grinning by 1hr 33min........

1h 27min analysing data better at the edge

Accenture Chief ai officer talks from 1hr 15min


Xeon 6 announced from 1hr 10min

Sierra forest going into production this qtr, 2.5 x better performance / watt.......

1000014967.jpg


Wasted on prem data from 1hr 6min

From 1hr 5min he essentially talks about new architecture types

About 1hr 3min Gelsinger mentions an MX alliance with Nvidia, Qualcomm and Arm, to bring new standardised training and inferencing

1000014968.jpg



Head of tech at SuperMicro 52min

Arizona state uni president from 39min

1000014970.jpg



BOSCH from 35min

1000014971.jpg


Michael Dell talks from 22min

1000014973.jpg



16min GAUDI 3

1000014974.jpg


Closing notes 7min
 

Attachments

  • 1000014964.jpg
    1000014964.jpg
    338.5 KB · Views: 291
Last edited:
  • Like
  • Fire
  • Love
Reactions: 35 users

Esq.111

Fascinatingly Intuitive.
Pat Gunslinger ,

Three Laws of EDGE,

1, Economics....There is not enough physical cash , bullion & loose diamonds globally to purchase BrainChip outright.

2 , Physics... the logistics of physically distributing the above loot to BrainChip holders would take some time.

3, Land.... There are not enough tropical islands globally to satiate every BrainChip holder.

Pat forgot the fourth...

4, Due to the above constraints, companies engaged in the business of Edge should just pay BrainChip for a Licence & Royalties thereafter.

Regards,
Esq.
 
  • Like
  • Haha
  • Love
Reactions: 52 users

IloveLamp

Top 20
Pat Gunslinger ,

Three Laws of EDGE,

1, Economics....There is not enough physical cash , bullion & loose diamonds globally to purchase BrainChip outright.

2 , Physics... the logistics of physically distributing the above loot to BrainChip holders would take some time.

3, Land.... There are not enough tropical islands globally to satiate every BrainChip holder.

Pat forgot the fourth...

4, Due to the above constraints, companies engaged in the business of Edge should just pay BrainChip for a Licence & Royalties thereafter.

Regards,
Esq.
uq8Lggl.gif
 
  • Haha
  • Like
Reactions: 9 users

TECH

Regular
This timeline is closing fast...within the next 14 business days, shipment dates will be confirmed.

It's the feedback from users that will be key moving forward with this side operation, hopefully opening up doors to much bigger things throughout 2025 and beyond; we are way overdue for some "real solid revenue type news".

Still, Brainchip remains firmly in the driver's seat, holding NUMBER 1 POLE POSITION ON THE GRID :ROFLMAO::ROFLMAO::ROFLMAO:(y)



VVDNEdgeBox_ProductImage10_1_300x.jpg

1._Pre-Order_Label.png

Akida™ Edge AI Box​

1._Powered_By_1.png



$799.00
Purchasing greater than 10 Edge boxes?
Contact sales@brainchip.com
Product pre-orders are available. Click here to see the terms.
Shipments are expected to commence by mid-year 2024. BrainChip will communicate specific shipment dates by Apr. 30th 2024.
 
  • Like
  • Fire
  • Love
Reactions: 42 users

7für7

Top 20
No way! You need to carefully reread Frontgrade Gaisler’s LinkedIn post (which is not about embedded world 2024, by the way, just in case you missed that).
It literally says:

“Dr. Jonathan Tapson from BrainChip visited our booth at Space Symposium to discuss neuromorphic processing for space applications. They have been developing and continuously improving their technology for extremely efficient AI inference and learning.”


View attachment 60632




Do you really believe our company would send someone who doesn’t work for them (in what capacity remains to be seen) to a Colorado Springs conference they are not involved with, in order to discuss serious business matters with a Swedish company that we know is planning to integrate our IP into their hardware very soon, and possibly also to discuss our disruptive tech with interested third parties visiting the Frontgrade Gaisler booth?!

Just because he is an expert on neuromorphic hardware, happens to have known our CTO for years and conveniently lives in the same US state in which the Space Symposium is currently taking place (I suppose living 300 miles away from the venue counts as “just around the corner” within the Universe scheme of things 😉)?

Think about it: Alf Kuchenbuch, our Munich-based VP of Sales EMEA, could hop on a short flight to Gothenburg any time to talk to Sandi Habinc or someone else from Frontgrade Gaisler in person.

Jonathan Tapson’s lanyard says “Visitor” as he was neither an attendee nor a speaker at the Space Symposium, and BrainChip isn’t an exhibitor either (whereas Frontgrade Gaisler has a booth). I scrolled through the Space Symposium’s Flickr photostream and picked out two pictures to demonstrate what I mean (the conference coincided with the eclipse on April 8).

View attachment 60633


View attachment 60634


If you ask me, the Frontgrade Gaisler LinkedIn post’s wording and accompanying photo are unambiguous. All that’s missing is some sort of confirmation by BrainChip or Jonathan Tapson himself.
Ok, maybe “Brainchip” is an unknown city somewhere in this world?? Think about it … he said “from” BrainChip, not “working at BrainChip” 🫵

Just kidding, you’re right! It’s possible! I was just quickly checking the thread here and in the German forum yesterday, and I thought the user there had a good point regarding the “BrainChip guest”. Thanks for clarifying! 👍
 
  • Like
Reactions: 2 users

Terroni2105

Founding Member
ISC West is held annually in Las Vegas; it is the largest converged security trade event in the USA.

VVDN is there and their area has a huge video screen running and brainchip is up on there under the heading Platform Experience.

The post is on X (twitter) and LinkedIn.

This is a screenshot I grabbed from the video.

1712788165607.png


 
  • Like
  • Love
  • Fire
Reactions: 71 users

7für7

Top 20
  • Like
  • Wow
  • Fire
Reactions: 12 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers ,

TSMC kicking goals.

TECH

TSMC posts fastest monthly revenue growth since 2022 on AI chip boom​

PUBLISHED WED, APR 10 2024, 2:51 AM EDT · UPDATED 2 HOURS AGO

Arjun Kharpal @ARJUNKHARPAL
KEY POINTS
  • TSMC said March revenue came in at 195.2 billion new Taiwan dollars ($6.1 billion), up 34.3% year-on-year. That’s the fastest pace of growth since November 2022.
  • TSMC is the world’s largest contract semiconductor manufacturer. It makes chips for companies from Apple to Nvidia.
  • The company is currently riding the AI boom. Semiconductors, such as those designed by Nvidia, have been underpinning the development of AI applications.

TSMC displayed on a phone screen and a microchip are seen in this illustration photo taken in Krakow, Poland, on July 19, 2023.
Jakub Porzycki | Nurphoto | Getty Images
Taiwan Semiconductor Manufacturing Co. (TSMC) posted a surge in monthly revenue in March, as it cashed in on a continuing artificial intelligence boom powered by high-end chips.
TSMC said March revenue came in at 195.2 billion new Taiwan dollars ($6.1 billion), up 34.3% year-on-year — marking the fastest pace of growth since November 2022.

The company’s first-quarter revenue totaled 592.6 billion new Taiwan dollars, up 16.5% year-on-year.
TSMC is the world’s largest contract semiconductor manufacturer, which makes chips for companies from Apple to Nvidia.

The company is currently riding the AI boom. Semiconductors, such as those designed by Nvidia, have underpinned the development of AI applications.
Competition has been rising in the market. AMD launched a rival chip to Nvidia last year, while Intel on Tuesday took the wraps off its latest AI offering.
A number of startups are also developing AI chips, which TSMC manufactures for some companies.

TSMC shares are up just under 40% this year to date, as investors bet on the continued demand for AI chips.
In January, the company said that its AI revenue is growing 50% on an annual basis. Analysts expect TSMC to post a 23.7% rise in total revenue this year, according to LSEG consensus estimates, after a decline in 2023.




Esq.
 
  • Like
  • Love
  • Thinking
Reactions: 24 users

Taproot

Regular
ISC West is held annually in Las Vegas; it is the largest converged security trade event in the USA.

VVDN is there and their area has a huge video screen running and brainchip is up on there under the heading Platform Experience.

The post is on X (twitter) and LinkedIn.

This is a screenshot I grabbed from the video.

View attachment 60654

Looks like the world's first neuromorphic edge box sitting on the desk.
 

Attachments

  • VVDN.png
    VVDN.png
    309.2 KB · Views: 175
  • Like
  • Love
  • Fire
Reactions: 41 users

skutza

Regular
Two ASX companies at Embedded world, good to see.

1712798162250.png


1712798242690.png
 
  • Like
  • Haha
  • Fire
Reactions: 31 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 14 users
Top Bottom