Boab
I wish I could paint like Vincent
Anyone come across DeGirum offering very low power, which seems to imply less than us at low power?
Now wondering if VVDN has this many options?
Those of you who have taken a closer look at the global neuromorphic research community will likely have come across the annual Telluride Neuromorphic Cognition Engineering Workshop, a three-week project-based meeting in the eponymous Telluride, a charming former Victorian mining town in the Rocky Mountain high country of southwestern Colorado. Nestled in a deep glacial valley, Telluride sits at an elevation of 8750 ft (2667 m) and is surrounded by majestic rugged peaks. Truly a scenic location for a workshop.
The National Science Foundation (NSF), which has continuously supported the Telluride Workshop since its beginnings in the 1990s, described it in a 2023 announcement as follows: It “will bring together an interdisciplinary group of researchers from academia and industry, including engineers, computer scientists, neuroscientists, behavioral and cognitive scientists (…) The annual three-week hands-on, project-based meeting is organized around specific topic areas to explore organizing principles of neural cognition that can inspire implementation in artificial systems. Each topic area is guided by a group of experts who will provide tutorials, lectures and hands-on project guidance.”
https://new.nsf.gov/funding/opportu...ng-augmented-intelligence/announcements/95341
Telluride 2024 (sites.google.com)
The workshop took place over three weeks as a project-based meeting organized around specific topic areas to bring the organizing principles of neural cognition into artificial intelligence, and to use AI to understand how brains work.
The topic areas for the 2024 Telluride Neuromorphic Workshop are now online. As every year, the list of topic leaders and invited speakers includes the crème de la crème of neuromorphic researchers from all over the world. While no one from Brainchip has made the invited speakers’ list (at least not to date), I was extremely pleased to notice that Akida will be featured nevertheless! It has taken the academic neuromorphic community ages to take Brainchip seriously (cf my previous post on Open Neuromorphic: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-404235), but here we are, finally getting acknowledged alongside the usual suspects:
Some readers will now presumably shrug their shoulders and consider this mention of Brainchip in a workshop programme insignificant compared with those coveted commercial announcements. To me, however, the inclusion of Brainchip at Telluride marks a milestone.
Also keep in mind what NSF Program Director Soo-Siang Lim said about Telluride (see link above): “This workshop has a long and successful track-record of advancing and integrating our understanding of biological and artificial systems of learning. Many collaborations catalyzed by the workshop have led to significant technology innovations, and the training of future industry and academic leaders.”
I’d just love to know which of the four topic leaders and/or co-organisers suggested including Brainchip in their hands-on project “Processing space-based data using neuromorphic computing hardware” (and whether this was readily agreed to or not):
Was it one of the two colleagues from Western Sydney University’s International Centre for Neuromorphic Systems (ICNS)? Gregory Cohen (who is responsible for Astrosite, WSU’s containerised neuromorphic inspired mobile telescope observatory as well as for the modification of the two neuromorphic cameras on the ISS as part of the USAFA Falcon Neuro project) or Alexandre Marcireau?
Or was it Gregor Lenz, who left Synsense in mid-2023 to co-found Neurobus (“At Neurobus we’re harnessing the power of neuromorphic computing to transform space technology”) and is also one of the co-founders of the Open Neuromorphic community? He was one of the few live viewers of Cristian Axenie’s January 15 online presentation on the TinyML Vision Zero San Jose Competition (where his TH Nürnberg team, utilising Akida for their event-based visual motion detection and tracking of pedestrians, had come runner-up), and asked a number of intriguing questions about Akida during the live broadcast.
Or was it possibly Jens Egholm Pedersen, the Danish doctoral student at Stockholm’s KTH Royal Institute of Technology, Sweden’s largest technical university, who hosted said presentation by Cristian Axenie on the Open Neuromorphic YouTube channel and appeared to be genuinely impressed by Akida (and the Edge Impulse platform), too?
Oh, and last, but not least:
Our CTO Anthony M Lewis aka Tony Lewis has been to Telluride numerous times: the workshop website lists him as one of the early participants back in 1996 (when he was with UCLA’s Computer Science Department). Tony Lewis is subsequently listed as a guest speaker for the 1999, 2000, 2001, 2002, 2003 and 2004 workshops (in his then capacity as the founder of Iguana Robotics) - information on the participants between 2006 and 2009 as well as for the year 2011 is marked as “lost”. In 2019, Tony Lewis was once again invited as either topic leader or guest speaker, but according to the website was unable to attend.
So I guess there is a good chance we will see him return to Telluride one day, this time as CTO of Brainchip, catching up with a lot of old friends and acquaintances, many of whom he also keeps in touch with via his extensive LinkedIn network, so they’d definitely know what he’s been up to.
As I said in another post six weeks ago:
Please let me know when it recovers closer to its high, well in the $30 range. Refer attachment.
Appen Launches Solution for Enterprises to Customize LLMs
KIRKLAND, Wash., March 28, 2024 -- Appen Limited, a leading provider of high-quality data for the AI lifecycle, announced the launch of a new platform. (www.datanami.com)
Is this relevant to Brainchip?
By the way, the Appen SP is recovering from a breach of an NDA about a month ago.
- Post #80,282 Mar 14, 2024
Great find Frangipani. When did Jonathan Tapson start working for BrainChip?! That’s big news, I’d say! No official announcement, though?
Well, I guess it would depend on the seniority of the role hey?
He appears pretty well credentialed, and I suspect he's not just, say, an ML researcher employee (no disrespect)... so what would his role be if he is FTE?
An announcement because a new employee has been hired? Srsly?
Great find Frangipani.
Quite odd, he's still very much part of the team on the Iona Tech website.
I'm guessing Jonathan could now be some sort of advisor for Brainchip, as well as still with Iona.
Jonathan Tapson is (/ was in 2020) the Chief Scientific Officer of GrAI Matter Labs. Prior to this, he was the Director of the MARCS Institute for Brain, Behaviour and Development at the University of Western Sydney, and has held positions at Dean and Head of Department levels in multiple universities. His research covers neuromorphic engineering and bio-inspired sensors, and he has authored over 160 papers and a dozen patents.
Source: https://forums.tinyml.org/t/two-tin...le-tiny-ml-by-jon-tapson-grai-matter-labs/231
Maybe he is just a guest of BrainChip…
Just came across this awesome article about Brainchip on Medium, written by someone with the moniker NeuroCortex.AI - and a follow-up is already in the works!
BrainChip’s Akida: Neuromorphic Processor Bringing AI to the Edge
NeuroCortex.AI · 8 min read
As our regular readers might recall, we covered the basics of neuromorphic computing in a blog series last year and are now pursuing further research into its implementation. One of the blockers in the real-time implementation of spiking neural networks (SNNs) is the availability of actual neuromorphic chips to run SNN algorithms on.
We therefore began reaching out to industry professionals involved in developing neuromorphic chips. Soon enough we connected with BrainChip's US operations team (based in California) and started talking about a potential collaboration. They were kind enough to help us out and agreed to send BrainChip Akida chips our way.
Before we start implementing SNN models on Akida, let us tell you about BrainChip the company, the Akida chipset, and why it is useful for us.
Akida by BrainChip mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy consumption.
BrainChip is an Australian company that specializes in edge artificial intelligence (AI) on-chip processing and learning. They are the worldwide leader in this field, offering solutions that bring common sense to the processing of sensor data and enable machines to do more with less. BrainChip has a global presence, with engineering teams located in California; Toulouse, France; Hyderabad, India; and Perth, Australia.
BrainChip’s flagship product, Akida™, is a fully digital, event-based AI processor that mimics the human brain, analyzing essential sensor inputs at the point of acquisition with high efficiency, precision, and energy economy. This technology allows for edge learning local to the chip, reducing latency, improving privacy, and enhancing data security. Akida, Greek for ‘spike,’ is a neuromorphic SoC that implements a spiking neural network. In many ways, it’s similar to some of the well-known research projects that were presented over the past several years such as IBM’s TrueNorth, SpiNNaker, and Intel Loihi. With Akida, BrainChip is attempting to seize this early market opportunity with one of the first commercial products. BrainChip is targeting a wide range of markets from the sub-1W edge applications to higher power and performance applications in the data center.
Timeline (BrainChip)
Here’s a breakdown of what Akida is all about:
- Inspired by the Brain: Unlike traditional processors that rely on complex clock cycles, Akida uses event-based processing, similar to how neurons fire and communicate in the brain. By mimicking the brain's structure and function, it can focus on essential information and reduce power consumption.
- High Performance, Low Power Consumption: BrainChip claims that Akida offers superior performance per watt compared to other solutions. Its event-based processing focuses on essential information and significantly reduces energy use, making it well suited to edge AI applications where battery life is a constraint, while still delivering the performance needed for real-time AI tasks at the network's edge.
- On-Chip Learning: Akida can perform some machine learning tasks directly on the chip, reducing reliance on cloud-based training and processing. This improves privacy and reduces latency.
- Anomaly Detection: Akida can be trained to identify unusual patterns in data, making it ideal for security and fraud detection.
- Sensor Processing: From analyzing data from cameras and microphones to interpreting readings from industrial sensors, Akida can handle various sensor data streams.
- Autonomous Systems: Akida’s low power consumption and real-time processing capabilities make it suitable for autonomous systems like drones and robots.
- Supported Neural Networks: Akida is designed to accelerate various neural networks directly in hardware, including Convolutional Neural Networks (CNNs) commonly used for image recognition, Recurrent Neural Networks (RNNs) for sequence analysis, and even custom Temporal Event-based Nets (TENNs) optimized for processing complex time-series data.
- Akida Development Environment: BrainChip offers a complete development environment called MetaTF for seamless creation, training, and testing of neural networks specifically designed for the Akida platform. This includes tools for simulating models and integrating them with Python-based machine learning frameworks for easier development.
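For readers who want a feel for that flow, here is a minimal sketch (in Python) of quantizing and converting a small Keras network with MetaTF's cnn2snn package and running it in the software simulator. The layer choices and the quantize() keyword arguments follow older MetaTF releases (newer releases move quantization into the quantizeml package), so treat the exact calls as assumptions and check BrainChip's documentation.

```python
# Minimal MetaTF sketch (assumes the cnn2snn and akida Python packages are installed).
# The quantize() signature matches older MetaTF releases; newer releases use the
# quantizeml package instead, so adapt to whichever version you have.
import numpy as np
from tensorflow import keras
from cnn2snn import quantize, convert

# A tiny stand-in CNN; a real project would train this first.
keras_model = keras.Sequential([
    keras.layers.Input((32, 32, 3)),
    keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(10),
])

# Quantize weights/activations to low bit-widths, then convert to an Akida model.
quantized = quantize(keras_model, weight_quantization=4, activ_quantization=4)
akida_model = convert(quantized)

# With no Akida hardware attached, predict() runs in the software simulator.
dummy_batch = np.random.randint(0, 255, (1, 32, 32, 3), dtype=np.uint8)
print(akida_model.predict(dummy_batch).shape)
```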
Akida NSoC Architecture
The Akida NSoC neuron fabric is comprised of cores that are organized in groups of four to create nodes, which are mesh networked. The cores can be implemented for either convolutional layers or fully-connected layers. This flexibility allows users to develop networks with ultra-low power Event-Based Convolution as well as Incremental Learning. The nodes also can be used to implement multiple networks on a single device.
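To picture the layout described above, here is a purely conceptual Python sketch (not BrainChip's API) of a fabric in which cores are grouped four to a node and each core is configured as either a convolutional or a fully-connected engine.

```python
# Conceptual model of the described fabric layout (illustration only, not BrainChip's API).
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Core:
    kind: Literal["conv", "fully_connected"] = "conv"   # each core runs one layer type

@dataclass
class Node:
    # Four cores per node, as described for the Akida neuron fabric.
    cores: List[Core] = field(default_factory=lambda: [Core() for _ in range(4)])

def build_fabric(num_nodes: int) -> List[Node]:
    """Build a toy collection of nodes; the real device mesh-networks them."""
    return [Node() for _ in range(num_nodes)]

fabric = build_fabric(20)                      # e.g. 20 nodes -> 80 cores
fabric[0].cores[3].kind = "fully_connected"    # repurpose one core for a fully-connected layer
print(len(fabric), sum(len(node.cores) for node in fabric))
```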
Akida Development environment
The development environment looks similar to any machine learning framework. Users describe their SNN model, which is stored in the model zoo; the chip will come with three pre-created models (CIFAR, ImageNet and MNIST), or users can create their own architecture. A Python script specifies the data location and model type, and this is shipped off to the Akida execution engine together with the Akida neuron model, the training methodology, and the necessary conversions (from pixels to spikes, etc.). The engine then runs in training mode or inference mode depending on user settings. The Akida NSoC uses a pure CMOS logic process, ensuring high yields and low cost. Spiking neural networks (SNNs) are inherently lower power than traditional convolutional neural networks (CNNs), as they replace the math-intensive convolutions and back-propagation training methods with biologically inspired neuron functions and feed-forward training methodologies.
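As a rough illustration of that workflow, the sketch below pulls a pretrained keyword-spotting model from the MetaTF model zoo, converts it, and runs it in inference mode, mapping it onto hardware only if a device is present. The helper names (ds_cnn_kws_pretrained, akida.devices) are my reading of the public MetaTF documentation and may differ between releases.

```python
# Rough sketch of the load-model / run-inference workflow described above.
# Helper names are assumed from public MetaTF docs and may vary by release.
import numpy as np
import akida
from akida_models import ds_cnn_kws_pretrained  # pretrained model from the model zoo
from cnn2snn import convert

keras_model = ds_cnn_kws_pretrained()   # quantized Keras keyword-spotting model
akida_model = convert(keras_model)      # hand it to the Akida execution engine

# Map onto real hardware if a device (e.g. a PCIe board) is attached;
# otherwise inference simply runs in the software simulator.
devices = akida.devices()
if devices:
    akida_model.map(devices[0])

# Inference mode: feed 8-bit inputs shaped like the model's input layer.
sample = np.random.randint(0, 255, (1, *akida_model.input_shape), dtype=np.uint8)
print(akida_model.predict(sample).shape)
```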
Brainchip’s claim is that while a convolutional approach models the neuron as a large filter with weights, the iterative linear-algebra matrix multiplications on data within an activation layer, together with the associated memory and MAC units, yield a more power-hungry chip. Instead of this convolutional approach, an SNN models the neuron function with synapses and neurons, with spikes between the neurons. The networks learn through reinforcement and inhibition of these spikes (repeating spikes are reinforcement).
The ability to change the firing threshold of the neuron itself and its sensitivity to those spikes is a different and more efficient way to train, albeit within complexity limitations. This means far less memory (there are 6MB per neural core) and a more efficient end result. Neurons learn through selective reinforcement or inhibition of synapses. The Akida NSoC has a neuron fabric comprised of 1.2 million neurons and 10 billion synapses.
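To make the threshold-and-spike idea concrete, here is a deliberately simplified integrate-and-fire neuron in plain Python. It is a generic conceptual toy, not a description of Akida's actual neuron circuit or learning rule.

```python
# Toy integrate-and-fire neuron: accumulate weighted input events and emit a spike
# whenever the membrane potential crosses the firing threshold.
# (Generic illustration only -- not Akida's actual neuron model.)
def run_neuron(input_events, weights, threshold=1.0, leak=0.05):
    potential = 0.0
    spike_times = []
    for t, active_synapses in enumerate(input_events):   # synapse indices firing at step t
        potential += sum(weights[i] for i in active_synapses)
        potential = max(0.0, potential - leak)            # simple leak each time step
        if potential >= threshold:                        # firing threshold reached ...
            spike_times.append(t)                         # ... emit a spike
            potential = 0.0                               # ... and reset the potential
    return spike_times

weights = [0.4, 0.3, 0.5]
events = [[0], [1, 2], [], [0, 2], [1]]
print(run_neuron(events, weights))   # time steps at which the neuron spiked
```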
Akida Neuron Fabric
The “Akida” device has an on-chip processor complex for system and data management and is also used to tell the neuron fabric (more on that in a moment) to be in training or inference modes. This is a matter of setting the thresholds in the neuron fabric. The real key is the data to spike converter, however, especially in areas like computer vision where pixel data needs to be transformed into spikes. This is not a computationally expensive problem from an efficiency perspective, but it does add some compiler and software footwork. There are audio, pixel, and fintech converters for now with their own dedicated place on-chip. The Akida NSoC is designed for use as a stand-alone embedded accelerator or as a co-processor. It includes sensor interfaces for traditional pixel-based imaging, dynamic vision sensors (DVS), Lidar, audio, and analog signals. It also has high-speed data interfaces such as PCI-Express, USB, and Ethernet. Embedded in the NSoC are data-to-spike converters designed to optimally convert popular data formats into spikes to train and be processed by the Akida Neuron Fabric.
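For a rough picture of what a pixel-to-spike conversion can look like, here is a generic rate-coding sketch in Python, where brighter pixels fire more often across time steps. The real Akida converters are dedicated hardware blocks and almost certainly work differently, so treat this purely as an illustration of the idea.

```python
# Generic rate-coding illustration of the pixel-to-spike idea (not Akida's converter).
import numpy as np

def rate_code(image, num_steps=16, seed=0):
    """Brighter pixels fire with higher probability on each of num_steps time steps."""
    rng = np.random.default_rng(seed)
    firing_probability = image.astype(np.float32) / 255.0      # intensity -> probability
    # One boolean spike map per time step; True means "this pixel fired at this step".
    return rng.random((num_steps, *image.shape)) < firing_probability

image = np.random.randint(0, 256, (28, 28), dtype=np.uint8)     # dummy grayscale frame
spike_train = rate_code(image)
print(spike_train.shape, spike_train.mean())                    # (16, 28, 28), average firing rate
```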
The PCIe links allow for data-center deployments and can scale with the multi-chip expansion port, which is a basic high speed serial interface to send spikes to different neural processing cores — expandable to 1024 devices for very large spiking neural networks. The Akida neuron fabric shown below has its own 6MB on-chip memory and the ability to interface with flash and DDR.
The CIFAR-10 benchmark on which they rate their performance and efficiency
Brainchip’s “Akida” chip is aimed at both training and inference, in the datacenter as well as at the edge. This includes vision systems in particular, but also financial tech applications where users cannot tolerate intermittent connectivity or latency from the cloud.
BrainChip’s Commitment to Development
BrainChip is actively developing Akida, with the second generation offering improved capabilities for handling complex neural networks. They are also working on a comprehensive development environment called MetaTF, which simplifies the creation and deployment of neural networks specifically designed for Akida.
The Future of AI is Neuromorphic
The Akida neuromorphic processor represents a significant leap forward in AI technology. With its efficient processing, on-chip learning capabilities, and wide range of applications, Akida is poised to revolutionize the way AI is used at the edge. As BrainChip continues to develop Akida, we can expect even more exciting possibilities to emerge in the future of AI.
Conclusion
In essence, the Akida Neuromorphic Processor is a powerful yet energy-efficient AI processor designed to bring intelligence to the edge of networks by mimicking the human brain’s processing style. Its unique features make it a promising solution for various applications requiring real-time and low-power AI capabilities. Akida is still under development, with BrainChip working on newer generations to address the growing intelligence chip market. Overall, BrainChip is a company at the forefront of neuromorphic computing, aiming to revolutionize AI processing with brain-inspired hardware.
Some good news: we actually received 2x Akida chips a few days back, thanks to BrainChip. Soon we will be publishing a detailed write-up on how to install them and run AI models on top of them. Stay tuned!!
The two Akida chips we received from BrainChip
Anil backin it up with a two fa....
Qualcomm Acquires foundries.io, Announces Products For IoT, Edge
The announcements come at a time when Qualcomm plans to expand the portfolio of its current IoT offerings. (www.electronicsb2b.com)
Pat Gunslinger,
Three Laws of EDGE:
1, Economics.... There is not enough physical cash, bullion & loose diamonds globally to purchase BrainChip outright.
2, Physics... the logistics of physically distributing the above loot to BrainChip holders would take some time.
3, Land.... There are not enough tropical islands globally to satiate every BrainChip holder.
Pat forgot the fourth...
4, Due to the above constraints, companies engaged in the business of Edge should just pay BrainChip for a Licence & Royalties thereafter.
Regards,
Esq.