BRN Discussion Ongoing

equanimous

Norse clairvoyant shapeshifter goddess
So which former leading chip maker needs an AI accelerator for their CPU?
What about Google, Dio?

 
  • Haha
  • Like
Reactions: 7 users

VictorG

Member
No direct mention of neuromorphic or neural processing… but I found this white paper.

A Generational Shift in AI
Palantir Edge AI
Technical Whitepaper — 2022

Interesting….

Many of the most valuable possible actions — whether in combat or in manufacturing — are time dependent. Shuttling data back to the cloud for processing and analysis means a high-priority target may have changed location — or an operational failure may have occurred. The competitive advantage will accrue to those organizations who can make decisions on edge devices in seconds and in potentially disconnected environments.


…autonomous decision-making across edge devices and environments. Designed for situations where time and efficiency matter, it operates in low-bandwidth, low-power conditions — including on drones, aircraft, ships, robots, buildings, and satellites.

Extremely lightweight and power-efficient to deploy, Palantir Edge AI minimizes the data that needs to be stored and transmitted — enabling low-latency, real-time decision-making at the device level if necessary. It runs on cloud infrastructure, on-premise GPU servers, or Size, Weight, and Power (SWaP) optimized hardware.

Critically, users can hot swap models in real time without breaking the flow of sensor data through the system. This also means that if a model crashes, it does not impact downstream users who rely on that sensor output.

The architecture is implemented through utilizing standard, open-protocol interfaces that facilitate communication between Palantir Edge AI and processors. The solution handles a variety of sensor input formats, such as RTSP/RTP, NITF, GeoTIFF, and MPEG-TS. Finally, it supports outputs in open standard formats — such as Parquet, CoT, MISB 0601/0903 KLV, MPEG-TS, and GeoJSON — which enables data and insights to be sent downstream to other subsystems with little to no integration work required.


Palantir Edge AI supports multi-sensor models for customers who need to fuse data across diverse payloads. For example, if a customer uses RF and EO collection, they can field AI models with Palantir, combining both modalities to achieve higher fidelity detections of entities of interest (e.g., military equipment).
These fusion models can be deployed to Palantir Edge AI and run on edge devices, such as spacecraft. Additionally, separate Palantir Edge AI instances can communicate with each other on a mesh network, enabling sensor fusion and teaming across edge equipment.
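To make the open-formats point concrete, here is a hypothetical sketch of what a single detection might look like serialized as GeoJSON. The property names are illustrative only; the whitepaper does not publish a schema.

```python
import json

# Hypothetical detection record from an edge model. The property names
# are made up for illustration -- the whitepaper specifies no schema.
detection = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        # GeoJSON coordinate order is [longitude, latitude].
        "coordinates": [44.3661, 33.3152],
    },
    "properties": {
        "entity": "military_equipment",
        "confidence": 0.91,
        "sensor": "EO",
        "timestamp": "2022-06-01T12:00:00Z",
    },
}

# GeoJSON is plain JSON, so any downstream subsystem that speaks the
# open standard can consume this with little or no integration work.
print(json.dumps(detection, indent=2))
```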
It is a novel approach and worth digging into. It seems to function like an event-based processor, but I'm still not sure if it has Akida DNA.
 
  • Like
Reactions: 2 users

Diogenese

Top 20
Competitor?






SynSense Launches Speck, Xylo Neuromorphic Development Kits for Edge AI Vision and Audio Work​

Offering milliwatt and microwatt power budgets for vision and audio processing respectively, these new kits mimic the human brain.​


ghalfacree
3 months ago • Machine Learning & AI / HW101

Neuromorphic computing specialist SynSense has announced the launch of hardware development kits for ultra-low-power vision and audio processing, featuring on-device neural network processing and tiny power requirements — thanks to their inspiration: the human brain.
"We are building a user community around our cutting-edge technologies, not just targeting commercial applications but including research users as well," claims SynSense's Dylan Muir, PhD, of the company's development kit launches. "We are working with universities and research institutions to support teaching, scientific experiments, and algorithmic research. At present, more than 100 industry customers, universities and research institutes are using SynSense neuromorphic boards and software."
SynSense has launched two development boards for its neuromorphic processing technology, including the pictured Speck computer vision board. (📷: SynSense)
The two new development kits will, the company hopes, boost those numbers. The first is the Speck, an edge AI computer vision board, which features a system-on-chip dedicated to low-power smart vision processing. A 320,000-neuron processor combines with event-based image sensing technology to, the company claims, offer real-time vision processing "at milliwatt power consumption" — and with the ability to train and deploy convolutional neural networks (CNNs) up to nine layers deep on-chip.
The Xylo-Audio board, meanwhile, focuses on audio processing in a "microwatt energy budget." The device offers more than keyword detection, the company claims, with the ability to detect "almost any audio feature." An open-source Python library, Rockpool, is provided to speed development.
"SynSense releases new development tools to help researchers and engineers further explore neuromorphic intelligence," says company founder and chief executive Qiao Ning, PhD. "The development boards and open-source software are made to strengthen the basic environment for developers. Allowing them to quickly develop, train, test, and deploy applications using spiking neural networks. We expect more developers to join the neuromorphic community and make breakthroughs."
The Xylo-Audio offers ultra-low-power edge audio processing, including detection of "almost any audio feature." (📷: SynSense)
"Before SynSense existed, designing, building and deploying an application to neuromorphic SNN [Spiking Neural Network] hardware required a PhD level of expertise, and a PhDs amount of time — 3–4 years," claims Muir. "Now we have interns joining the company and deploying their first applications to SNN hardware after only 1-2 months. This is a huge leap forward for commercialisation, and a huge reward for the hard work of the company."
The two development kits are compatible with Ubuntu 18.04 and 20.04, with the Xylo-Audio board also supporting macOS 10.15, and require a host PC with a USB 3.0 port and at least 4GB of RAM. More information is available on Hackster.io, an Avnet Community.
SynSense were/are Prophesee's pre-Akida partner. The inference is that Akida is better than SynSense.

SynSense use separate development kits for audio and video, so apparently it is not as versatile as Akida, which has implications for manufacturing costs.

With Akida it is only necessary to change the model library data and reconfigure the nodes/NPUs electronically to handle different types of inputs.
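A rough sketch of what that reconfiguration could look like in software, assuming the akida Python package's Model and map APIs. The .fbz file names here are made up, and an attached Akida device is assumed.

```python
import akida

# Hypothetical pre-built model files -- the names are illustrative only.
VISION_MODEL = "vision_classifier.fbz"
AUDIO_MODEL = "keyword_spotter.fbz"

device = akida.devices()[0]  # assumes an Akida device is attached

# Load the vision network and map it onto the device's nodes/NPUs.
model = akida.Model(VISION_MODEL)
model.map(device)
# ... run vision inference here ...

# Switching to audio is a software step: load a different model library
# and remap the same silicon. No separate audio board is needed.
model = akida.Model(AUDIO_MODEL)
model.map(device)
# ... run keyword spotting here ...
```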
 
Last edited:
  • Like
  • Fire
Reactions: 31 users

Tothemoon24

Top 20
SynSense were/are Prophesee's pre-Akida partner. The inference is that Akida is better than SynSense.
Thank you Dio, I'll put my post in the bin.
 
  • Like
  • Sad
Reactions: 2 users

Diogenese

Top 20
  • Like
  • Fire
Reactions: 9 users

Diogenese

Top 20
  • Like
Reactions: 5 users

Diogenese

Top 20
Well at least your post is still included in my response ...
Over at the other place, I was once engaged in an "animated" discussion with another poster (possibly porcine aviator?) but all his posts were moderated, leaving my posts detached, so I looked like a Tourette's case.
 
  • Haha
  • Like
  • Love
Reactions: 36 users

Tothemoon24

Top 20
This is from Tata;
Just a matter of time $
Nice little mention of "Akida"





Advancing edge computing capabilities with neuromorphic platforms​


ARIJIT MUKHERJEE​

Principal Scientist, TCS Research​


SOUNAK DEY (see photo below)

Senior Scientist, TCS Research​


VEDVYAS KRISHNAMOORTHY​

Business Development Manager, TCS​



HIGHLIGHTS
  • Evolving neuromorphic processors, which are designed to replicate the human brain, is critical for enabling intelligence at the edge and processing sparse events.
  • With neuromorphic computing, low latency real-time operations can be performed with significant reduction in energy costs.
  • Advances in neuromorphic computing promise to ease the energy concerns associated with Dennard Scaling, Moore’s Law, and the von Neumann bottleneck.
NEUROMORPHIC COMPUTING BRINGS AI TO THE EDGE
Connected devices driven by 5G and the internet of things (IoT) are everywhere, from autonomous vehicles, smart homes, and healthcare to space exploration. Devices are becoming more intelligent. Massive amounts of data from multiple sources need to be processed quickly, securely, and in real time, with low latency. Cloud-based architecture may not fulfil these needs of futuristic AI-based systems that require intelligence at the edge and the ability to process sparse events. As research continues, neuromorphic processors will advance edge computing capabilities and bring AI closer to the edge.
The world is becoming increasingly connected, from autonomous vehicles, smart homes, and personal robotics to space exploration. These AI-powered applications rely on fast, autonomous, near real-time analysis of diverse data from multiple sources. They are pushing the boundaries of computing – taking it closer to the edge, the point where data is collected and analysed. Neuromorphic computing is expected to play an important role in advancing edge computing capabilities by mimicking the human brain and its cognitive functions, such as interpretation and autonomous adaptation. It is a high-performance, ultra-low-power alternative to the von Neumann architecture that is based on traditional bus-connected CPU-memory peripherals. Due to the time and energy required to send information back and forth between the memory and the CPU, von Neumann machines cannot support the increasing computational power required by AI applications. At the same time, physical limitations on the size of transistor-based processor circuits impact energy efficiency.
Evolving neuromorphic processors, which are designed to replicate the human brain, eliminate the von Neumann bottleneck. Inspired by the brain’s adaptability and ability to support parallel computation, neuromorphic devices integrate processing and memory, offering higher speed, greater complexity, and better energy efficiency. This is critical for enabling intelligence at the edge and processing sparse events. This whitepaper highlights the evolution and importance of neuromorphic processing for enabling edge AI and showcases applications that can change the landscape of edge computing. It discusses how a spiking neural network (SNN) model, deployed on neuromorphic hardware, can learn from minimal data and offer real-time responses in an energy-efficient manner (an estimated 0.001 times the energy of conventional computing), especially for perception-cognition tasks.
EMBEDDING INTELLIGENCE AT THE EDGE
Connected devices rely on sensors that continuously gather data from the surrounding environment and infrastructure. This necessitates intelligent processing of data for tasks, such as optimizing asset usage, monitoring health and safety, disaster management, surveillance, timely field inspection, remote sensing, etc. Intelligence needs to be embedded in systems closer to the sensors, i.e., on devices that are at the far edge of the network – such as drones, robots, wearables, small satellites (or nanosats), and autonomous/guided vehicle controllers. AI at the edge can enhance the performance of connected devices.
A 2020 study identified four major drivers for bringing AI to the edge: low latency, cost of bandwidth, reliability in critical operations, and privacy/security of sensor data. For example, autonomous cars rely on processing large volumes of data from within the vehicle and outside, such as weather, road conditions, and other vehicles. To improve safety, enhance efficiency, and reduce accidents, data needs to be processed securely in real time for immediate action and reaction. Autonomous cars could require data transfer offload ranging from 383 GB an hour to 5.17 TB an hour.
Shifting from a cloud-based architecture to the edge will be vital to address latency issues and achieve the vision of a truly intelligent and autonomous vehicle. However, the in-situ processing of sensor data within edge devices comes with its own challenges, primarily related to the reduction in battery life due to additional processing requirements. With neuromorphic computing, since the processing is done locally, low-latency real-time operations can be performed with a significant reduction in energy costs.


IMPORTANCE OF SPIKING NEURAL NETWORKS
Advances in neuromorphic computing promise to ease the energy concerns associated with Dennard Scaling, Moore’s Law, and the von Neumann bottleneck. The term ‘neuromorphic computing’, originally coined by Carver Mead in 1990, refers to very large-scale integration (VLSI) systems with analog components mimicking biological neurons. In recent years, however, the term has been rechristened to encompass an evolving genre of novel bio-inspired processors that are architected as connections of millions of neurons and synapses. These highly connected and parallel processors, coupled with the event-based spiking neural network (SNN) models that power them, have shown promise in terms of energy consumption, real-time response, and capability to learn from sparse data.
The fundamental idea behind neuromorphic computing is built on the sensory perception capabilities of mammalian brains, where an input from a sensory organ triggers a series of electro-chemical reactions within the neuronal path. This results in the flow of a spike train, a chain of electrical impulses, through the connected neurons. This propagation of the stimuli alters the synaptic bonds among the connected neurons. The neural network is said to learn or forget an event based on whether the bond strengthens or weakens, which in Hebbian learning concepts is popularly interpreted as ‘neurons that fire together, wire together.’
As opposed to traditional artificial neural networks, SNNs use mathematical models of bio-plausible neurons, such as Izhikevich or Leaky Integrate and Fire (LIF). These, together with models of learning rules such as spike-timing-dependent plasticity (STDP) and spike-driven synaptic plasticity (SDSP), address different levels of application requirements. As shown in Figure 1, inputs to SNNs are spikes instead of real-valued data, where a spike or event can be viewed as the simplest possible temporal message whose timing is critical for understanding the event. Typically, SNN models tend to use few spikes with high information content, such as the output of advanced dynamic vision sensors (DVS cameras), where only the pixelwise change in luminosity is recorded in an asynchronous manner, resulting in a series of sparse events that can be processed by an SNN.
Figure 1: Schematic diagram of spike processing using spiking neural networks
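For readers who want the mechanics behind the spike-train description, here is a minimal NumPy sketch of a leaky integrate-and-fire (LIF) neuron driven by a sparse spike train. The parameter values are arbitrary, chosen purely for illustration.

```python
import numpy as np

def lif_neuron(spikes, tau=20.0, v_thresh=1.0, v_reset=0.0, w=0.4, dt=1.0):
    """Leaky integrate-and-fire: the membrane potential decays with time
    constant tau, jumps by weight w on each input spike, and emits an
    output spike (then resets) when it crosses v_thresh."""
    v = 0.0
    out = []
    for s in spikes:
        v += dt * (-v / tau) + w * s  # leak plus weighted input event
        if v >= v_thresh:
            out.append(1)
            v = v_reset
        else:
            out.append(0)
    return out

# Sparse input: the neuron only does meaningful work when events arrive,
# which is where the energy savings of SNN hardware come from.
rng = np.random.default_rng(0)
in_spikes = (rng.random(100) < 0.1).astype(int)  # ~10% event density
out_spikes = lif_neuron(in_spikes)
print(f"{in_spikes.sum()} input spikes -> {sum(out_spikes)} output spikes")
```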

Though neuromorphic platforms are still under development, research reveals that SNN models possess computational capabilities comparable to those of regular artificial neural networks (ANNs) and tend to converge towards the appropriate solution faster – implying fewer computational steps. These factors, along with processing based on sparse data, result in the improved power efficiency of neuromorphic systems shown in Figures 2 and 3 (obtained from a benchmarking exercise). Keyword spotting was executed on different hardware including a CPU, a GPU, the NVIDIA Jetson Nano, the Intel Movidius Neural Compute Stick (NCS) and Intel Loihi, which is a neuromorphic processor. The graphs show the actual power consumption (measured in watts) and the comparative efficiency (measured as energy consumed in joules per inference task), respectively, for the same task on each hardware. The results conclusively prove the power advantage (power consumed per inference operation) of the neuromorphic platform.

Figure 2: Power consumption on different hardware

Figure 3: Power efficiency of different hardware
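The two plots are linked by a simple identity: energy per inference equals average power multiplied by time per inference. A quick sketch with made-up numbers (not the benchmark's actual measurements) shows why a low-power chip can win on joules per inference even if it is slower per task.

```python
# Hypothetical numbers for illustration only -- the benchmark's actual
# measurements are the ones plotted in Figures 2 and 3.
def energy_per_inference(power_watts: float, latency_s: float) -> float:
    return power_watts * latency_s  # joules per inference

gpu = energy_per_inference(power_watts=30.0, latency_s=0.002)
loihi = energy_per_inference(power_watts=0.11, latency_s=0.005)

print(f"GPU:   {gpu * 1e3:.2f} mJ/inference")   # 60.00 mJ
print(f"Loihi: {loihi * 1e3:.2f} mJ/inference")  # 0.55 mJ
# Slightly slower per inference, but two orders of magnitude less energy.
```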

ADVANCING EDGE AI CAPABILITIES WITH NEUROMORPHIC COMPUTING
The roadmap for neuromorphic processors is still evolving, and a snapshot of this landscape is shown in Figure 4. Like all nascent explorations, there are diverse platforms, such as SpiNNaker and BrainScaleS, which are large-scale systems that aim to enable high-end brain-inspired supercomputing. These are, however, not available for commercial usage and do not fit into the scope of intelligent edge platforms. Intel views the Loihi processor as a possible candidate for adoption at the low-powered edge, as well as in server infrastructure within data centers. Other processors that are likely to come out within the next couple of years, such as Zeroth and Akida, cater to edge applications.
Figure 4: Landscape of upcoming neuromorphic processors

The global neuromorphic computing market size is expected to reach $8.58 billion by 2030 from $0.26 billion in 2020, growing at a CAGR of 79% from 2021 to 2030. Target applications for neuromorphic computing are those where in-situ processing of sensed data is a prime requirement, especially when there are device constraints. For example, it can enable visual recognition of gestures/actions and objects. Neuromorphic computing also supports simultaneous localization and mapping (SLAM) for modelling the surrounding environments by mobile robots from various sensory inputs (image, video, micro-doppler radar, sonar). Such functionalities will be crucial in domains such as manufacturing, mining, energy extraction, disaster management, and elderly care, where the low-power neuromorphic approach reduces latency without compromising accuracy and real-time responses.
In space technology, large monolithic satellites are being replaced by smaller, low-cost satellites for commercial low earth orbit (LEO) missions that can deliver high temporal resolution for earth observation. The satellite constellations can be assigned to precision agriculture, weather monitoring, disaster monitoring, etc. In the conventional approach, data is sent back to the base station, which can lead to latency and communication issues. Orbital edge computing (OEC), which supports onboard processing of data captured by the satellites, is emerging as a possible alternative. In addition, the smaller satellites, popularly known as CubeSats, are heavily constrained in terms of energy. Neuromorphic processors are more energy efficient and can generate real-time alerts.
Prototype neuromorphic edge applications are also being used for perception sensing – action/gesture recognition, object identification (vision applications) and keyword spotting (audio applications). Time-series processing is another area where neuromorphic computing can prove to be a game changer, especially in healthcare. For wearables and implantables, apart from low-latency real-time requirements, in-situ processing is vital for preserving data privacy. It ensures that only encrypted alerts are sent over the open internet, instead of raw physiological signals. Moreover, additional processing based on the conventional approach can drain device batteries quickly. An SNN-based approach for processing vital body signals on the device itself helps resolve this issue, as shown in Figure 5.
Figure 5: Outline of a neuromorphic-powered wearable application

PUSHING THE BOUNDARIES OF EDGE AI
AI edge computing has immense potential to overcome cloud-related challenges and is likely to garner more interest with the expanding 5G network footprint. The global AI edge computing industry is expected to generate $59.63 billion by 2030, growing at a CAGR of 21.2% from 2021 to 2030. However, there are challenges ahead, especially with respect to the fabrication mechanisms and materials necessary for large-scale commercial deployment of neuromorphic hardware. Scientists across the world are experimenting with various materials, including phase change and valence change memory, resistive RAM, ferroelectric devices, spintronics, and memristors, for the creation of commercially viable neuromorphic devices. Such materials also hold the key to achieving synaptic plasticity, which is the capability of lifelong learning (formation of newer neuronal circuits) unique to biological brains. Novel research in SNNs has also increased considerably, especially in complex areas such as adapting the back-propagation approach, optimal encoding of real-valued data into spikes, and establishing new learning paradigms to suit newer applications. Such cutting-edge research in neuromorphic computing is going to pave the way for a variety of new and valuable services, applications, and use cases for edge AI.

ABOUT THE AUTHORS​

https://www.tcs.com/insights/authors/arijit-mukherjee.html

Arijit Mukherjee

Arijit Mukherjee is a Principal Scientist at TCS Research working on intelligent edge and neuromorphic systems.
https://www.tcs.com/insights/authors/sounak-dey.html

Sounak Dey ⭐⭐⭐ (see photo below)

Sounak Dey is a Senior Scientist at TCS Research working on intelligent edge and neuromorphic systems.
https://www.tcs.com/insights/authors/vedvyas-krishnamoorthy.html

Vedvyas Krishnamoorthy

Vedvyas Krishnamoorthy is a Business Development Manager in the Technology Business Unit of TCS responsible for engineering sales for customers in the semiconductor, networking, and electronic systems and platforms segments.





 

Attachments

  • FA601B03-37DB-425D-8049-E5824864211B.jpeg
    FA601B03-37DB-425D-8049-E5824864211B.jpeg
    830.8 KB · Views: 73
Last edited:
  • Like
  • Fire
  • Love
Reactions: 29 users

BaconLover

Founding Member
20230225_131617~2.jpg
 
  • Like
  • Fire
  • Love
Reactions: 25 users

alwaysgreen

Top 20
We have a CNN to SNN conversion module so can’t rule us out on that premise alone
It's being used this weekend at Mardi Gras. Unless they magically integrated it faster than MegaChips and Renesas, it's not Akida.
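For anyone unfamiliar with the conversion module being referred to, here is a minimal sketch of the usual flow, assuming BrainChip's cnn2snn package with its quantize and convert entry points. The argument names follow older MetaTF releases and may differ by version.

```python
from tensorflow.keras.applications import MobileNet
from cnn2snn import quantize, convert  # BrainChip MetaTF tooling

# Start from an ordinary Keras CNN (untrained here, for brevity).
keras_model = MobileNet(input_shape=(224, 224, 3), weights=None)

# Quantize weights/activations to Akida-compatible bit widths. These
# keyword arguments follow older cnn2snn releases and may vary.
quantized = quantize(
    keras_model,
    input_weight_quantization=8,
    weight_quantization=4,
    activ_quantization=4,
)

# Convert the quantized CNN into an event-based Akida model.
akida_model = convert(quantized)
akida_model.summary()
```

The point being that an existing CNN is retargeted rather than rewritten, which is exactly why integration timelines still matter.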
 
  • Like
Reactions: 4 users
Looks like another clean up may be required before the weekend is out. But just be warned, it’ll come with bans for those who break the rules. Check the rules in the terms page…
A demerit or strike system might be a good idea, depending on the severity of the activity?

Somebody making an out-of-character post because they're having a bad day or something, and getting permanently banned, could mean losing a valuable contributor.

Maybe even/and a suspension system? Again, depending on the severity.
 
  • Like
  • Love
Reactions: 10 users

zeeb0t

Administrator
Staff member
A demerit or strike system might be a good idea, depending on the severity of the activity?

Somebody making an out-of-character post because they're having a bad day or something, and getting permanently banned, could mean losing a valuable contributor.

Maybe even/and a suspension system? Again, depending on the severity.
Yes, there is a demerit system already, and the bans are only temporary, unless someone is a nuisance, in which case I take manual action.
 
  • Like
  • Fire
Reactions: 17 users

Diogenese

Top 20
So which former leading chip maker needs an AI accelerator for their CPU?
It has been sticking out like a sore thumb since the SiFive/Akida get together that all CPU/GPU makers will need to have an AI coprocessor.
 
  • Like
  • Fire
  • Love
Reactions: 42 users

Tothemoon24

Top 20
It has been sticking out like a sore thumb since the SiFive/Akida get together that all CPU/GPU makers will need to have an AI coprocessor.
Ouch !!!

 
  • Like
  • Fire
Reactions: 9 users
Ouch !!!


Interesting article, thanks @Tothemoon24

And how good is it that MB chose Brainchip to supply the edge AI whilst they are still working with NVIDIA at the same time. It speaks volumes to the fact we have something special!

How good is it that we are in bed with NVIDIA working on the same MB project with them!

NVIDIA will obviously recognise what Brainchip has to offer and can use that where it needs to get an even larger market share… and drag Brainchip along for the ride! No complaints from me.

Intel and SiFive will also be utilising Brainchip where needed to compete.

Win Win situation!
 
  • Like
  • Fire
  • Love
Reactions: 62 users

Tothemoon24

Top 20
Incoming avalanche, Stable G
 
  • Like
  • Fire
  • Love
Reactions: 16 users
Incoming avalanche, Stable G

As discussed many times, @Tothemoon24, we don’t need the entire market. :)

A small percentage will be sufficient:





With the ecosystem being diligently set up by our fantastic management team it’s just a question of 1%, 2%, or even 5%.

My head would explode thinking of anything larger than that! 😂
 
  • Like
  • Fire
  • Love
Reactions: 43 users

AusEire

Founding Member. It's ok to say No to Dot Joining
As discussed many times, @Tothemoon24, we don’t need the entire market. :)

A small percentage will be sufficient:




With the ecosystem being diligently set up by our fantastic management team it’s just a question of 1%, 2%, or even 5%.

My head would explode thinking of anything larger than that! 😂

This Rocket won't stop once it lifts off 🔥🔥
 
  • Like
  • Fire
  • Love
Reactions: 22 users

jtardif999

Regular
As discussed many times, @Tothemoon24, we don’t need the entire market. :)

A small percentage will be sufficient:




With the ecosystem being diligently set up by our fantastic management team it’s just a question of 1%, 2%, or even 5%.

My head would explode thinking of anything larger than that! 😂
But we are differentiated, and will likely extend that with an eventually unbeatable lead, so I think we will have more than just a small percentage share.
 
  • Like
  • Fire
Reactions: 12 users