BRN Discussion Ongoing

Diogenese

Top 20
Well Dickleboro is giving it another crack today. I wish he would come to the next AGM in Sydney on 21 May 2024. Refer to the attached image, as I didn't want him to get paid if anyone clicked the link. lol

I messaged Magnus Osterberg on LinkedIn last week about his neuromorphic post:

Thank you Magnus for this description of neuromorphic computing.

The recent promotion of the CLA concept car highlighted a centrally mounted liquid-cooled processor as a feature. Is this cooling needed because of the neuromorphic processor, and will the production version include a liquid-cooled processor?

Got this response under Magnus' moniker just now:

"Stay tuned!"
 
  • Like
  • Love
  • Fire
Reactions: 40 users

Diogenese

Top 20
It is painfully obvious that the individual who wrote this article does not understand the difference between the Blackwell processors NVIDIA announced, the Akida processor/IP, and the markets they serve.

I believe that Blackwell was designed to improve performance for creating models with trillions of parameters and inferencing on these devices using these large models too. One of the systems they announced, GB200 NVL72, is a liquid-cooled rack system using 36 GB200 Blackwell chips. It is not meant for the average consumer.

I imagine that Blackwell's intent is mainly to create large models and provide data center server machines for inferencing. At the same time, Akida will still dominate the Edge market for unconnected, low-power, yet high-performance inferencing with a much broader target audience in consumer devices. While Akida can create "models" through its one-shot learning mechanism, it's not designed for building models from large datasets for general-purpose inferencing, but instead augmenting pre-trained models in the field as it learns and classifies new data.

I don't see them as competing devices, at least not in any substantive way that would suggest Blackwell's announcement is what caused Akida's SP to go down, unless it was a psychological knee-jerk reaction from investors who did not understand the target use and market for either technology.

On the figures Jensen quoted for training, Blackwell uses about 2 kW.
 
  • Like
  • Fire
Reactions: 13 users

7für7

Top 20
  • Like
Reactions: 2 users

Diogenese

Top 20
What!!

A Blackwell GPU will cost at least US$30K and is not competing with the Akida chips at all.
Completely different uses for them as far as I know: Akida on the edge and Blackwell in the center (if you can call it that).

Those MF writers are really funny guys, idiots actually.

Synopsis. Nvidia highlighted that Blackwell chips enable organisations to develop and operate real-time generative AI on trillion-parameter large language models while reducing costs and energy consumption by up to 25 times compared to its predecessor.

More info here:
Jensen said the first one cost $10 billion ...
 
  • Haha
  • Wow
  • Like
Reactions: 14 users

Diogenese

Top 20
:ROFLMAO::ROFLMAO:...On a side note @Bravo ..Can you please go for a run tomorrow and Friday and all of next week >>>>>>:ROFLMAO:🙏🤜🤛

Amazon Love GIF by Regal
@Bravo left the tap running in the hot tub ...
 
  • Haha
Reactions: 7 users

Terroni2105

Founding Member
  • Like
  • Fire
  • Love
Reactions: 40 users

7für7

Top 20
  • Like
  • Love
Reactions: 2 users

JB49

Regular


An Ericsson staff member explains the benefits of neuromorphic computing from about 14:30, and talks about how it could be used in 6G. But it seems they are still a long way off and just investigating.
 
  • Like
  • Love
  • Thinking
Reactions: 14 users

IloveLamp

Top 20
I messaged Magnus Osterberg on LinkedIn last week about his neuromorphic post:

Thank you Magnus for this description of neuromorphic computing.

The recent promotion of the CLA concept car highlighted a centrally mounted liquid-cooled processor as a feature. Is this cooling needed because of the neuromorphic processor, and will the production version include a liquid-cooled processor?

Got this response under Magnus' moniker just now:

"Stay tuned!"
@chapman89 are you @Diogenese too 🤔
 
  • Haha
Reactions: 11 users

IloveLamp

Top 20
  • Haha
Reactions: 5 users

7für7

Top 20
  • Like
  • Fire
Reactions: 6 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 20 users

Frangipani

Regular
Just came across this awesome article about Brainchip on Medium, written by someone with the moniker NeuroCortex.AI - and a follow-up is already in the works! 👍🏻



BrainChip’s Akida: Neuromorphic Processor Bringing AI to the Edge​

NeuroCortex.AI · 8 min read · 6 hours ago

As our regular readers might recall, we covered the basics of neuromorphic computing in a blog series last year and are now pursuing further research into its implementation. One of the blockers in the real-time implementation of spiking neural networks (SNNs) is the availability of an actual neuromorphic chip on which to run SNN algorithms.

Thus we started connecting with various industry professionals involved in developing neuromorphic chips. Soon enough we connected with BrainChip's US operations team (based in California) and started talking about potential collaboration. They were kind enough to help us out and agreed to send BrainChip Akida chips our way.

Before we start implementing SNN models on Akida, let us tell you about BrainChip the company, the Akida chipset, and why it's useful for us.


Akida by BrainChip mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy consumption.

BrainChip is an Australian company that specializes in edge artificial intelligence (AI) on-chip processing and learning. A worldwide leader in this space, it offers solutions that bring common sense to the processing of sensor data, enabling machines to do more with less. BrainChip has a global presence, with engineering teams located in California; Toulouse, France; Hyderabad, India; and Perth, Australia.

BrainChip’s flagship product, Akida™, is a fully digital, event-based AI processor that mimics the human brain, analyzing essential sensor inputs at the point of acquisition with high efficiency, precision, and energy economy. This technology allows for edge learning local to the chip, reducing latency, improving privacy, and enhancing data security. Akida, Greek for ‘spike,’ is a neuromorphic SoC that implements a spiking neural network. In many ways, it’s similar to some of the well-known research projects that were presented over the past several years such as IBM’s TrueNorth, SpiNNaker, and Intel Loihi. With Akida, BrainChip is attempting to seize this early market opportunity with one of the first commercial products. BrainChip is targeting a wide range of markets from the sub-1W edge applications to higher power and performance applications in the data center.



Timeline (BrainChip)
Here’s a breakdown of what Akida is all about:
  • Inspired by the Brain: Unlike traditional processors that rely on clock-driven computation, Akida uses event-based processing, similar to how neurons fire and communicate in the brain. This neuromorphic approach, mimicking the human brain's structure and function, lets the chip focus on essential information and reduce power consumption.
  • High Performance, Low Power Consumption: BrainChip claims that Akida offers superior performance per watt compared to other solutions. Its event-based processing concentrates on essential information, significantly reducing energy use, which makes it well suited to edge AI applications where battery life is a constraint while still delivering real-time AI performance at the network's edge.
  • On-Chip Learning: Akida can perform some machine learning tasks directly on the device, reducing reliance on cloud-based training and processing. This improves privacy and reduces latency.
  • Anomaly Detection: Akida can be trained to identify unusual patterns in data, making it ideal for security and fraud detection.
  • Sensor Processing: From analyzing data from cameras and microphones to interpreting readings from industrial sensors, Akida can handle various sensor data streams.
  • Autonomous Systems: Akida’s low power consumption and real-time processing capabilities make it suitable for autonomous systems like drones and robots.
  • Supported Neural Networks: Akida is designed to accelerate various neural networks directly in hardware, including Convolutional Neural Networks (CNNs) commonly used for image recognition, Recurrent Neural Networks (RNNs) for sequence analysis, and even custom Temporal Event-based Nets (TENNs) optimized for processing complex time-series data.
  • Akida Development Environment: BrainChip offers a complete development environment called MetaTF for seamless creation, training, and testing of neural networks specifically designed for the Akida platform. This includes tools for simulating models and integrating them with Python-based machine learning frameworks for easier development.
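The event-based processing described in the bullets above can be made concrete with a toy leaky integrate-and-fire (LIF) neuron in plain Python. This is purely a conceptual sketch (the class, parameter values, and inputs are invented for illustration, not BrainChip's implementation), but it shows the core idea: work only happens when spikes arrive, and the neuron fires once its potential crosses a tunable threshold.

```python
# Toy leaky integrate-and-fire (LIF) neuron, illustrating event-based
# processing: the membrane potential decays each step, accumulates
# weighted input spikes, and the neuron emits a spike when it crosses
# a firing threshold. Illustrative only -- not BrainChip's design.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # firing threshold (tunable, as described above)
        self.leak = leak            # per-step decay of membrane potential
        self.potential = 0.0

    def step(self, input_spike_weight=0.0):
        """Advance one time step; return True if the neuron fires."""
        self.potential = self.potential * self.leak + input_spike_weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

neuron = LIFNeuron(threshold=1.0, leak=0.9)
# Sparse input stream: mostly silence, occasional weighted spikes.
inputs = [0.0, 0.6, 0.0, 0.6, 0.0, 0.0, 0.0]
spikes = [neuron.step(w) for w in inputs]
print(spikes)
```

Note that a single 0.6 spike never fires the neuron on its own; only the second spike, arriving before the first has fully leaked away, pushes the potential over threshold. That temporal sensitivity is what makes spiking models naturally sparse and power-frugal.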


Akida NSoC Architecture
The Akida NSoC neuron fabric is comprised of cores that are organized in groups of four to create nodes, which are mesh networked. The cores can be implemented for either convolutional layers or fully-connected layers. This flexibility allows users to develop networks with ultra-low power Event-Based Convolution as well as Incremental Learning. The nodes also can be used to implement multiple networks on a single device.


Akida Development environment
The development environment looks similar to any machine learning framework. Users describe their SNN model, which is stored in the model zoo. The chip will come with three pre-created models (CIFAR, ImageNet, and MNIST), or users can create their own architecture. A Python script specifies the data location and model type, and this is shipped off to the Akida execution engine along with the Akida neuron model and training methodology, including conversions (from pixels to spikes, etc.). It goes into training mode or inference mode depending on user settings. The Akida NSoC uses a pure CMOS logic process, ensuring high yields and low cost. Spiking neural networks (SNNs) are inherently lower power than traditional convolutional neural networks (CNNs), as they replace the math-intensive convolutions and back-propagation training methods with biologically inspired neuron functions and feed-forward training methodologies.
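The pixel-to-spike conversion step mentioned above can be sketched with simple rate coding, where brighter pixels produce more spikes within a fixed time window. Akida's actual data-to-spike converters are not public, so the function below is a hypothetical illustration of the general technique only.

```python
# Rate-coding sketch of a data-to-spike converter: each pixel's
# intensity (0-255) determines how many of `steps` time slots carry
# a spike. Deterministic for the demo; real converters are more
# sophisticated. Hypothetical code, not Akida's converter.

def pixels_to_spike_trains(pixels, steps=8):
    """Map 0-255 pixel intensities to spike trains of length `steps`."""
    trains = []
    for p in pixels:
        n_spikes = round(p / 255 * steps)            # brighter -> more spikes
        train = [1 if i < n_spikes else 0 for i in range(steps)]
        trains.append(train)
    return trains

# A dark, a mid-grey, and a fully bright pixel:
trains = pixels_to_spike_trains([0, 128, 255], steps=8)
for p, t in zip([0, 128, 255], trains):
    print(p, t)
```

The efficiency win comes from sparsity: the dark pixel generates no spikes at all, so downstream neurons never do any work for it, which is the same principle that lets event-based hardware skip most of a mostly-static scene.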

BrainChip's claim is that while a convolutional approach models the neuron as a large filter with weights, the iterative linear-algebra matrix multiplication on data within an activation layer, together with the associated memory and MAC units, yields a more power-hungry chip. Instead of this convolutional approach, an SNN models the neuron function with synapses and neurons that pass spikes between them. The networks learn through reinforcement and inhibition of these spikes (repeated spikes are reinforcing).

The ability to change the firing threshold of the neuron itself, and its sensitivity to those spikes, is a different and more efficient way to train, albeit within complexity limitations. This means far less memory (there is 6MB per neural core) and a more efficient end result. Neurons learn through selective reinforcement or inhibition of synapses. The Akida NSoC has a neuron fabric comprising 1.2 million neurons and 10 billion synapses.
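A toy version of the reinforcement/inhibition idea above might look like the Hebbian-flavoured sketch below. The rule, names, and constants are invented for illustration and are not Akida's actual learning algorithm; the point is only that a synapse strengthens when its input spike coincides with the neuron firing, and weakens slightly otherwise.

```python
# Toy Hebbian-style update in the spirit of the text above: coincident
# input spike + output firing -> reinforce the synapse; input spike
# without output firing -> mild inhibition. Weights are clamped to
# [0, 1]. Hypothetical rule, not BrainChip's learning algorithm.

def update_weights(weights, input_spikes, neuron_fired,
                   reinforce=0.1, inhibit=0.02):
    new = []
    for w, s in zip(weights, input_spikes):
        if neuron_fired and s:
            w = min(1.0, w + reinforce)   # repeated coincidence -> reinforcement
        elif s:
            w = max(0.0, w - inhibit)     # spike without output -> inhibition
        new.append(w)
    return new

w = [0.5, 0.5]
w = update_weights(w, input_spikes=[1, 0], neuron_fired=True)
print(w)  # only the synapse that actually spiked is reinforced
```

Because updates are local (each synapse needs only its own spike and the neuron's output), no global back-propagation pass or large gradient buffers are required, which is the memory saving the paragraph above alludes to.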


Akida Neuron Fabric
The “Akida” device has an on-chip processor complex for system and data management and is also used to tell the neuron fabric (more on that in a moment) to be in training or inference modes. This is a matter of setting the thresholds in the neuron fabric. The real key is the data to spike converter, however, especially in areas like computer vision where pixel data needs to be transformed into spikes. This is not a computationally expensive problem from an efficiency perspective, but it does add some compiler and software footwork. There are audio, pixel, and fintech converters for now with their own dedicated place on-chip. The Akida NSoC is designed for use as a stand-alone embedded accelerator or as a co-processor. It includes sensor interfaces for traditional pixel-based imaging, dynamic vision sensors (DVS), Lidar, audio, and analog signals. It also has high-speed data interfaces such as PCI-Express, USB, and Ethernet. Embedded in the NSoC are data-to-spike converters designed to optimally convert popular data formats into spikes to train and be processed by the Akida Neuron Fabric.

The PCIe links allow for data-center deployments and can scale with the multi-chip expansion port, which is a basic high speed serial interface to send spikes to different neural processing cores — expandable to 1024 devices for very large spiking neural networks. The Akida neuron fabric shown below has its own 6MB on-chip memory and the ability to interface with flash and DDR.


The CIFAR-10 benchmark on which they rate their performance and efficiency
BrainChip's "Akida" chip is aimed at both the datacenter and the edge, for training and inference alike. This includes vision systems in particular, but also financial-tech applications where users cannot tolerate intermittent connectivity or latency from the cloud.

BrainChip’s Commitment to Development
BrainChip is actively developing Akida, with the second generation offering improved capabilities for handling complex neural networks. They also provide a comprehensive development environment, MetaTF, which simplifies the creation and deployment of neural networks designed for Akida.

The Future of AI is Neuromorphic
The Akida neuromorphic processor represents a significant leap forward in AI technology. With its efficient processing, on-chip learning capabilities, and wide range of applications, Akida is poised to revolutionize the way AI is used at the edge. As BrainChip continues to develop Akida, we can expect even more exciting possibilities to emerge in the future of AI.

Conclusion​

In essence, the Akida Neuromorphic Processor is a powerful yet energy-efficient AI processor designed to bring intelligence to the edge of networks by mimicking the human brain’s processing style. Its unique features make it a promising solution for various applications requiring real-time and low-power AI capabilities. Akida is still under development, with BrainChip working on newer generations to address the growing intelligence chip market. Overall, BrainChip is a company at the forefront of neuromorphic computing, aiming to revolutionize AI processing with brain-inspired hardware.

Good news: we actually received two Akida chips a few days back, thanks to BrainChip. We will soon publish a detailed write-up on how to install them and run AI models on top of them. Stay tuned!


The two Akida chips we received from BrainChip

References​

[1] https://brainchip.com/akida-neural-processor-soc/
[2] https://brainchip.com/akida-generations/
[3] https://brainchip.com/what-is-the-akida-event-domain-neural-processor-2/
[4] https://www.design-reuse.com/news/54941/brainchip-akida-platform-ai.html
[5]
[6] https://www.sharesinvalue.com.au/brainchip-an-unrivalled-neural-network-processing-technology/
[7] https://www.edge-ai-vision.com/resources/technologies/processors/
[8] https://brainchip.com/technology/
 
  • Like
  • Love
  • Fire
Reactions: 65 users

Tothemoon24

Top 20

IMG_8652.jpeg

Neuromorphic Computing: Making Space Smart​


Published Mar 19, 2024
Eric Gallo, Sr. Principal, Future Technologies R&D, Accenture Labs
Space technology is advancing, and companies that never considered themselves space companies are now relying on satellite data and other space services to operate and grow their business. In fact, the immense global interest in satellite data and images has seen a growing number of new companies emerge, several of which we’ve invested in or aligned with through our Accenture Ventures “Project Spotlight” initiative, such as Open Cosmos, Pixxel, Planet, and SpiderOak.
To optimize these operations, reduce costs, and improve effectiveness, there is a growing demand to make space equipment, such as satellites, ‘smart’ by leveraging AI technologies. However, implementing AI in space faces challenges such as limits on the size, weight, and power (SWaP) of computing hardware. There has been a major research and engineering focus on developing low-power AI hardware at the edge to provide onboard intelligence under SWaP constrained conditions. While custom hardened electronics are still preferred for deep space missions, many are adopting commercial off-the-shelf components for shorter missions, offering space technologies the ability to implement these Edge AI capabilities.
Small satellites face bandwidth limitations, but technologies like edge processing and cognitive radio can optimize these limited communication resources. The amount of data collected in space is increasing rapidly. According to NSR, space data traffic is expected to reach 566 exabytes over the next decade, with satellite communications making up 530 EB of that amount. The increasing congestion in space and the substantial data flow to ground stations necessitate improvements in efficiency. Edge AI can segment data in-situ into usable/unusable data to filter the information sent back to Earth, reducing bandwidth and data center processing.
The rise of neuromorphic computing, processors and sensors modeled after the neurons in our brains, are uniquely suited to provide edge processing in space. Neuromorphic technologies are low latency, extremely low power and even capable of learning while in use. Neuromorphic computing can be added to a satellite or other equipment with minimal impacts on system power requirements or mechanical design, making them an ideal low SWaP, adaptable solution for space applications, with the potential to revolutionize satellites and beyond.
Edge computing in space
Until recently, edge processing in space was uncommon. In 2017, the HPE Spaceborne Computer was successfully deployed on the ISS to address data processing bottlenecks. However, its size and power requirements make it unsuitable for smaller satellites or instruments. In 2020, the Phi-Sat became the first satellite to demonstrate edge AI in Earth observation, reducing transmitted data by 30% by processing images prior to transmission to Earth. The satellite used an Intel Movidius Myriad processor designed for low-power edge intelligence, as used in drones, smart surveillance cameras and other energy-constrained applications on Earth. Now there are several vendors offering edge processing electronics for space applications, but the main challenge has been limited physical space, power, and cooling resources. Neuromorphic technologies offer a cost-effective solution by adding intelligence at extremely low power and financial costs, increasing satellite capabilities without adding weight or size.
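The in-situ filtering idea above (transmitting only usable frames and discarding the rest on board) can be sketched as a simple change-detection gate. Everything below is a hypothetical toy: real missions such as Phi-Sat used trained models rather than a pixel-difference threshold, but the bandwidth-saving principle is the same.

```python
# Sketch of edge filtering on a satellite: transmit a frame only when
# it differs enough from the last transmitted frame, cutting downlink
# bandwidth. Frames are flat lists of pixel values; the threshold and
# frame format are hypothetical.

def frames_to_transmit(frames, threshold=10.0):
    """Return indices of frames whose mean absolute difference from
    the previously kept frame exceeds `threshold`."""
    kept = []
    last = None
    for i, frame in enumerate(frames):
        if last is None:
            kept.append(i)          # always transmit the first frame
            last = frame
            continue
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            kept.append(i)
            last = frame

    return kept

# Three 4-pixel "frames": the second is near-identical to the first
# (discarded on board), the third changes substantially (transmitted).
frames = [[0, 0, 0, 0], [1, 0, 1, 0], [50, 60, 50, 60]]
print(frames_to_transmit(frames))
```

Dropping the redundant middle frame here is exactly the kind of in-situ triage that let Phi-Sat cut transmitted data by 30%: the expensive downlink is reserved for frames that actually carry new information.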
Why is neuromorphic computing the solution to making space technology smarter?
Neuromorphic computing emulates neurons and synapses in the brain to process information, offering low power, low latency, and data sparsity benefits for edge computation. Accenture Labs demonstrated a 3-5x reduction in system power using BrainChip's Akida neuromorphic accelerator for audio recognition tasks compared to CPU and GPU. Other companies, including BrainChip, Intel, and SynSense, have developed digital neuromorphic silicon chips, with BrainChip and SynSense offering commercial processors. These chips are still in v1 and v2, with increasing performance expected as they mature. Startups like EDGX and Neurobus are exploring neuromorphic computing hardware and software designs for space. Intel's Loihi processor has already been validated in space, and neuromorphic cameras are operating on the ISS as part of the Falcon Neuro project. BrainChip's Akida recently launched on a SpaceX Falcon 9. These efforts are laying the groundwork for widespread implementation of neuromorphic technologies in space applications.
A few potential neuromorphic applications have already been identified and are being investigated by universities, space agencies, start-ups and Accenture Labs in collaboration with Accenture Space Innovation:
1. Optimization of satellite communications using cognitive radios that adapt to changing conditions, ensuring efficient, robust and reliable data transmission.
2. Enabling real-time decision making on satellites, facilitating rapid signaling in emergencies and natural disasters, supporting latency-sensitive applications, and enhancing satellite health analysis and space situational awareness.
3. Filtering and sorting collected images before transmission to earth and enabling real-time focus on areas affected by flooding, deforestation, algal blooms, or other environmental disasters.
4. Enabling longer range missions by reducing reliance on earthbound communications, enhancing data quality, and mitigating potential dependencies.
5. Implementing energy-efficient robotic controls with rapid learning and adaptation to the environment.
6. Integrating real time health monitoring for satellites and for astronauts.
7. Increasing efficiency and effectiveness of experiments in space by directing resources, identifying optimal conditions, and controlling input, thereby optimizing results as companies explore the effects of zero gravity on materials and devices.
8. Learning and adapting to a satellite’s behavior which can then be used for anomaly detection as the hardware ages, allowing for early identification of potential issues or malfunctions.
9. Mitigating damage to hardware during severe space weather, detecting and shutting down components to prevent radiation damage.
10. Improving space situational awareness using neuromorphic cameras and processors to detect objects and identify threats rapidly and efficiently.
11. Enhancing specific satcom applications such as imaging radar (synthetic aperture radar), interference detection, congestion forecasting, jamming classification.
Neuromorphic technologies offer additional functional benefits, such as the ability to load and unload networks onto a processor for multiple tasks on the same hardware, enabling a satellite to switch between scanning for forest fires over land and investigating currents over the ocean. Open-source designs allow developers to create applications specific to their use case and deploy them on neuromorphic hardware, facilitating the use of a single CubeSat by multiple groups.
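The load/unload workflow described above, where one piece of hardware switches between scanning for forest fires over land and investigating currents over the ocean, could be sketched like this. The class and method names are purely hypothetical; a real deployment would load trained network binaries onto the accelerator rather than the string stand-ins used here.

```python
# Sketch of swapping task-specific networks on a single edge
# accelerator, as described above. The "accelerator" is a plain
# Python object and the "models" are string stand-ins; all names
# are hypothetical.

class EdgeAccelerator:
    def __init__(self):
        self.loaded = None   # name of the currently loaded network
        self.library = {}    # task name -> model payload (stand-in)

    def register(self, task, model):
        """Store a model in the on-board library for later loading."""
        self.library[task] = model

    def load(self, task):
        """Unload the current network and load the one for `task`."""
        self.loaded = task
        return self.library[task]

    def infer(self, data):
        """Run the currently loaded network on `data`."""
        if self.loaded is None:
            raise RuntimeError("no network loaded")
        return (self.loaded, data)

acc = EdgeAccelerator()
acc.register("wildfire", "fire-model-weights")
acc.register("ocean_currents", "current-model-weights")
acc.load("wildfire")        # over land: scan for fires
result = acc.infer("land-tile")
acc.load("ocean_currents")  # over water: switch tasks on the same chip
```

The design point is that the hardware is shared while the networks are cheap to swap, which is also what makes a single CubeSat usable by multiple groups, as noted above.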
The technology also excels at leveraging multi-modal input for decision-making and can adapt and change its data processing approach in real-time. Accenture is actively exploring the applications of neuromorphic computing in space and collaborating with academic researchers and commercial neuromorphic companies to evaluate and demonstrate its potential for enhancing edge intelligence and creating new products and applications. In collaboration with BrainChip, we are evaluating their Akida platform, the first commercially available neuromorphic chips, to identify industries and applications that can benefit from this technology.
BrainChip’s Akida has demonstrated its power in various applications, including automotive safety, industrial maintenance, vision solutions, and healthcare devices. It is an ideal choice for investigating and designing edge processing in space, providing sufficient computing power for image streams while consuming less than 1 W of energy. At Accenture, we are using the Akida and other platforms to validate these benefits and create proofs of concept for space tech innovations.
Neuromorphic computing offers crucial benefits for the space industry, including low power consumption, minimal latency processing, and efficient edge intelligence. It has the potential to revolutionize space technology, leading to advancements in exploration, communication, and situational awareness. As our activities in space expand, the use of neuromorphic computing enables faster and more efficient operations, making it a crucial component in the journey of space exploration.
 
  • Like
  • Love
  • Fire
Reactions: 73 users

mototrans

Regular
Well, the Fool does have a point.. first comparing a "Graphics Processing Unit" with an Event-Based AI Edge Chip... they've doubled down to reveal further weakness in the Brainchip proposal... I'm starting to get worried guys.. a warning for those offended by satire..

This from the Fool.

“Investors have been hitting the sell button today after The Schools P & C organisation announced its new Bike Rack acquisition, Grazed Kneez. The P&C spent an estimated $124.50 on the research and development budget for the bike rack. It's possible now that some investors are finally realising that Brainchip has almost zero chance of competing as a bike rack.

With just over 9,600 schools in Australia alone, all requiring at least two bike racks, this is an astonishing $2.3 trillion dollar industry which will invariably be awarded to our Hot Stock Limited Edition Super Charged Top Pick available to Platinum members of the Fool.

To see our Hot Stock Limited Edition Guaranteed winner, and the view from our Manhattan holiday home, subscribe to our all you can eat Exclusive Executive High Flyers Lounge for average IQ.

Buster Elbow, a ginger hair freckle faced Year 3 kid, commented: "Grazed Kneez offers massive storage during school hours, and will accelerate our ability to deliver bunny hops, wheelies and fully sick skids.. We're excited to continue working with the deputy principal toward getting off detention and back to the ramps." "Whats a nerdomorphic kid anyway?"
 
  • Haha
  • Like
  • Love
Reactions: 36 users

IloveLamp

Top 20
  • Like
  • Wow
Reactions: 3 users

IloveLamp

Top 20
View attachment 59567
So ARM is using "Poseidon"

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-415447

NVIDIA is bringing Jetson "THOR"

AMD
is using "SPARTAN"

And MICROSOFT is bringing out "ATHENA"


then we have .........AKIDA.........

Anyone else see a pattern developing??
 
  • Like
  • Fire
  • Haha
Reactions: 34 users