Found these guys over the weekend.
Their take on neuromorphic computing, plus a list of startups and background on the neuromorphic space. I'll post that separately, as a single post may hit the word limit.
conscium.com
About us
Bringing together leaders in the fields of AI, deep learning, theory of mind, neuroscience, and evolutionary computation.
We are the world’s first applied AI research organisation.
Our aim is to deepen our understanding of consciousness to pioneer efficient, intelligent, and safe AI that builds a better future for humanity.
What is neuromorphic computing?
07.06.24
4 mins read
Neuromorphic
The word “neuromorphic” comes from two Greek words, “neuro” meaning nerves, and “morphic” meaning shape. A system is neuromorphic if it is modelled on a biological nervous system such as the human brain.
Neuromorphic Computing
Neuromorphic computing means computer hardware and software that processes information in ways similar to a biological brain. As well as enabling machines to become more powerful and more useful, the development of neuromorphic computing should teach us a great deal about how our brains work.
Neuromorphic Hardware
In the 1980s, Carver Mead, the pioneering American computer scientist who coined the term “Moore’s Law”, began thinking about how to develop AIs based on architectures similar to mammalian brains. Initially, neuromorphic hardware used analogue circuits rather than digital ones, but today most of the leading neuromorphic hardware is digital.
The most important characteristic of neuromorphic computers is their sparsity of computation. In traditional artificial neural networks, every neuron makes a calculation many times each second, and stores the results as numbers, usually high-precision floating-point values. In neuromorphic computing, only a minority of neurons perform a calculation at any one time.
In mammalian brains, neurons only “spike”, or “fire”, when stimulated appropriately. This is replicated in neuromorphic systems, and it makes them far less energy intensive, like mammalian brains. Training a Large Language Model (LLM) like GPT-4 reportedly takes enough electricity to power New York City, while your brain consumes about the same power as a single light bulb.
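The sparse, spike-driven behaviour described above can be sketched with a toy leaky integrate-and-fire neuron. All names, constants, and thresholds below are illustrative, not taken from any particular neuromorphic platform:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic
# unit in many spiking systems. Constants are illustrative.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate input current each step; emit a spike (1) only when the
    membrane potential crosses the threshold, then reset. Most steps
    produce no spike -- the 'sparsity' described above."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak constant input makes the neuron fire only occasionally:
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note that no computation happens downstream except on the two time steps that actually carry a spike; that is where the energy saving comes from.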
Neuromorphic hardware is an alternative to the main established computing paradigm, which is the von Neumann architecture.
Von Neumann architecture
John von Neumann was a Hungarian-American polymath who played an important role in the development of the first computers from the end of World War Two onwards.
Von Neumann proposed a system with a central processing unit (CPU), a memory unit, and input and output devices. Inside the CPU is a control unit, which manages the traffic of information and instructions, and an arithmetic logic unit (ALU), which performs calculations. The memory unit houses both data and programmes.
The architecture is simpler than its early rivals, but there are bottlenecks because data needs to be shuffled backwards and forwards between the CPU and the memory, and because data and programmes cannot be shuffled at the same time. As computers have become faster, these bottlenecks have become more problematic.
Neuromorphic architectures often address this bottleneck by combining memory and processing chips, or locating them close to each other.
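The bottleneck can be illustrated with a toy model (the bus and counters below are hypothetical, not a model of real hardware): every operation must shuttle data between memory and the CPU over a single channel, so even a simple array addition racks up transfers:

```python
# Toy illustration of the von Neumann bottleneck: every operand crosses
# a single bus between memory and the CPU. Purely illustrative.

class Bus:
    def __init__(self):
        self.transfers = 0

    def read(self, memory, addr):
        self.transfers += 1
        return memory[addr]

    def write(self, memory, addr, value):
        self.transfers += 1
        memory[addr] = value

def add_arrays(a, b):
    """CPU adds two arrays held in 'memory'; each element costs three
    bus transfers (two reads and one write)."""
    memory = {("a", i): v for i, v in enumerate(a)}
    memory.update({("b", i): v for i, v in enumerate(b)})
    bus = Bus()
    out = []
    for i in range(len(a)):
        x = bus.read(memory, ("a", i))
        y = bus.read(memory, ("b", i))
        bus.write(memory, ("out", i), x + y)
        out.append(memory[("out", i)])
    return out, bus.transfers

result, transfers = add_arrays([1, 2, 3], [4, 5, 6])
print(result, transfers)  # → [5, 7, 9] 9
```

Co-locating memory and compute, as neuromorphic designs do, removes most of those transfers rather than making each one faster.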
Neuromorphic Software
Neuromorphic software consists of algorithms and models that operate more like biological brains than traditional computer systems, which are based on the von Neumann paradigm. One example of neuromorphic software is spiking neural networks (see below).
Neuromorphic AI
Neuromorphic AI systems use neuromorphic hardware and/or software to handle cognitive tasks like information retrieval, pattern recognition, sensory processing, and decision making. Their developers argue that they are more adaptive, scalable, and efficient than traditional AIs.
Spiking neural networks
Spiking neural networks (SNNs) are an attempt to get closer to the brain’s architecture, and thus create more efficient and more robust AIs.
In biological brains, neurons transmit signals down fibres called axons. If the signal is strong enough, and if the internal states of the neurons are appropriate, then the signal will cross a gap called a synapse into a second neuron. The signal then travels down a slightly different type of fibre called a dendrite towards the main body, or soma, of the second neuron. The second neuron may get excited enough by the incoming signal to send a new signal out along its axons. And so on – the signal propagates across the brain.
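That chain (axon, synapse, dendrite, soma) can be sketched in miniature: below, a spike train from one neuron arrives at a second neuron through a weighted synapse, and the second neuron fires only when enough spikes arrive close together. The weight, leak, and threshold are illustrative:

```python
# Sketch of spike propagation across a synapse, loosely mirroring the
# axon -> synapse -> dendrite -> soma chain described above.

def propagate(input_spikes, weight=0.6, threshold=1.0):
    """Feed a binary spike train into a downstream neuron through a
    synapse of the given weight; the neuron integrates with leak and
    fires only when incoming spikes arrive close enough together."""
    potential = 0.0
    out = []
    for s in input_spikes:
        potential = potential * 0.8 + s * weight  # leaky integration
        if potential >= threshold:
            out.append(1)       # spike travels on down the axon
            potential = 0.0     # reset after firing
        else:
            out.append(0)
    return out

# A lone spike decays away; a rapid burst makes the neuron fire:
print(propagate([1, 0, 0, 0, 1, 1, 1, 0]))  # → [0, 0, 0, 0, 0, 1, 0, 0]
```

The timing-dependence is the point: unlike a conventional layer that fires on every pass, the downstream neuron responds only to sufficiently dense bursts.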
Hoped-for benefits of neuromorphic computing
- Efficiency: significantly lower power consumption
- Speed: much less latency as data does not need to be transmitted to and from centralized data centres
- Adaptability: new data can be integrated, enabling more robust performance in dynamic environments
- Scalability: modular neuromorphic systems should scale efficiently as more computational units (neurons) are added
- Robustness: better at reacting to unexpected circumstances
Neuromorphic AI and machine consciousness
Some argue that neuromorphic AI systems are more likely to become conscious than other types of AI. It may also be easier to detect the early signs of consciousness in them.
Neuroevolution
Neuroevolution is not a form of neuromorphic computing, but it is a related concept, so we are summarising it here.
Neuroevolution uses evolutionary algorithms to generate artificial neural networks, and to simulate the behaviour of agents in artificial life, video games, and robotics. It requires only a measure of a network’s performance at a task, whereas supervised learning algorithms must be trained on a corpus of correct input-output pairs. In this respect it resembles reinforcement learning, although neuroevolution searches over whole networks directly rather than learning step by step from reward signals.
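A minimal neuroevolution sketch, assuming a toy task (fitting the behaviour y = 2x + 1 with a one-neuron "network") and hypothetical hyper-parameters: weights are mutated and selected purely on a scalar fitness score, with no gradient or labelled training step:

```python
import random

# Toy neuroevolution: evolve (weight, bias) of a one-neuron network
# guided only by a fitness score. Task and settings are illustrative.

def fitness(weights, data):
    """Negative squared error against the target behaviour; this scalar
    is the only signal evolution receives."""
    w, b = weights
    return -sum((w * x + b - y) ** 2 for x, y in data)

def evolve(data, generations=200, pop_size=20, sigma=0.3, seed=0):
    rng = random.Random(seed)
    population = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, data), reverse=True)
        parents = population[: pop_size // 4]     # keep the fittest
        population = parents + [
            (p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))
            for p in parents for _ in range(3)    # mutated offspring
        ]
    return max(population, key=lambda w: fitness(w, data))

# Target behaviour: y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-3, 4)]
w, b = evolve(data)
print(round(w, 1), round(b, 1))  # converges near 2.0 and 1.0
```

Swapping the fitness function for, say, a game score or a robot's distance travelled is what makes this approach attractive where correct input-output pairs don't exist.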
conscium.com
Neuromorphic startups
18.10.24
7 mins read
- Verses
- Rain.AI
- Opteran
- SpiNNcloud
- Liquid AI
- BrainChip
- Prophesee
- GrAI Matter Labs
- SynSense
- Innatera
- General Vision
————–
Verses Technologies Inc was founded in 2018 by Gabriel René, Steven Swanson, Capm Petersen, and Dan Mapes, and is headquartered in Los Angeles, California. Karl Friston, a Conscium adviser, is the company’s Chief Scientist. Verses’ mission is to enhance human potential through intelligent tools that radically improve mutual understanding, fostering a smarter world where technology and humans coexist harmoniously.
Verses has developed KOSM, which it describes as the world’s first network operating system designed to enable distributed intelligence. KOSM generates a shared world model comprised of contextualized data, policies, simulations, and workflows, aiming to make information inter-operable, available, accessible, and trustworthy.
The company’s other products include Genius, an intelligent software system that transforms disparate data into coherent knowledge models. This platform allows intelligent agents to learn, reason, and adapt across various contexts, driven by the philosophy of mimicking the ‘wisdom and genius of nature’ to create safe and trustworthy interactions between humans, machines, and AI.
Verses is a public company trading under the ticker symbol VERS. It has a market capitalization of $157.6 million, with annual revenues of approximately $1.97 million.
————–
Rain AI was founded in 2017 by Gordon Wilson, Jack Kendall, and Juan Nino, and is located in Redwood City, California. The company designs neuromorphic chips to perform AI tasks more efficiently by mimicking how human brain synapses work, reducing the energy consumption associated with traditional AI hardware.
A potential challenger to Nvidia in the field of AI model training and inference, the company is backed by prominent investors, including Sam Altman, Daniel Gross, and Airbus Ventures, and has raised over $76 million in funding. It is expected to release its first production chips by 2025.
—————
Opteran was spun out from the University of Sheffield in 2020 by Professor James Marshall and Dr. Alex Cope. It aims to revolutionize autonomous machine intelligence with a technology it calls “Natural Intelligence,” which replicates insect decision-making and navigation systems. This approach enables machines to perform tasks like seeing, sensing, obstacle avoidance, and dynamic navigation without relying on large datasets or extensive pre-training. The company has raised over £12 million in funding.
Opteran’s core product is the Opteran Development Kit (ODK), which enables its lightweight, efficient silicon “brains” to be incorporated into a range of autonomous machines, including drones, robots, and vehicles. Operating with minimal energy consumption, its algorithms make autonomy possible in GPS-denied environments, as demonstrated in trials controlling a sub-250g drone using just 10,000 pixels from a single low-resolution camera.
———–
SpiNNcloud Systems was founded by Christian Mayr, Christian Eichhorn, Matthias Lohrmann, and Hector Gonzalez, and emerged from the SpiNNaker2 research project in 2020. Its mission is to transfer cutting-edge neuromorphic computing research into practical applications, developing brain-inspired supercomputers with enhanced AI capabilities and dramatically reduced energy demands.
The company’s flagship product is the SpiNNaker2 supercomputer, a hybrid AI system that combines neuromorphic models with conventional AI approaches. It features 152 ARM cores per chip, and supports various neural networks, including both deep learning and spiking models. It is primarily designed to handle real-time applications, such as robotics, smart cities, and autonomous vehicles.
SpiNNcloud is working with Sandia National Laboratories and the Technical University of Munich, and is on track to make a beta version of its technology available in 2024, with full production systems available in 2025.
——————
Liquid AI was spun out from MIT in 2023 by Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus. Its “Liquid Neural Networks” are inspired by biological neurons, and offer more flexibility and adaptability than traditional AI models. They can adjust dynamically to new data and environmental changes, making them particularly suited for real-time applications like autonomous driving and robotics.
Liquid AI’s Liquid Foundation Models (LFMs) are claimed to outperform traditional transformer-based models while using significantly less memory and compute power. The company raised $37.5 million in seed funding at a valuation of $303 million from investors including OSS Capital, PagsGroup, and angel investors such as GitHub co-founder Tom Preston-Werner and Shopify’s Tobias Lütke.
—————-
BrainChip was founded in 2004 by Peter Van Der Made and Anil Mankar, and is a pioneer in neuromorphic computing, based in part on technology developed by Simon Thorpe at CNRS in Toulouse.
BrainChip’s flagship product is the Akida neuromorphic processor, which is optimized for ultra-low power AI inference and learning at the edge. The processor uses spiking neural networks to process inputs with high efficiency, making it ideal for applications in areas like autonomous vehicles, industrial IoT, and smart consumer devices.
In 2024, BrainChip launched its Akida technology into low Earth orbit for space applications, demonstrating its ability to operate in extreme environments.
—————–
Prophesee was founded in Paris in 2014 by Luca Verre, Christoph Posch, Daniel Matolin, Ryad Benosman, and Bernard Gilly. Its patented “event-based vision technology” captures changes in the visual field at high speed, and with significantly lower data and power requirements than traditional cameras.
Prophesee’s flagship product is the Metavision sensor, which excels in challenging conditions such as low light and fast motion. The GenX320 version is claimed to be the world’s smallest and most power-efficient event-based sensor, designed for edge AI devices like AR/VR headsets and IoT applications.
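The event-based idea can be sketched in a few lines: rather than transmitting whole frames, an event sensor emits an event only where a pixel's brightness changes beyond a threshold. This is an illustrative toy, not Prophesee's actual pipeline, and the frame data and threshold are made up:

```python
# Toy event-camera model: compare two brightness frames and emit sparse
# (x, y, polarity) events only where pixels changed enough.

def frame_to_events(prev, curr, threshold=10):
    """Return (x, y, polarity) events for pixels whose brightness
    changed by at least `threshold` between the two frames."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if c - p >= threshold:
                events.append((x, y, 1))    # brightness increased
            elif p - c >= threshold:
                events.append((x, y, -1))   # brightness decreased
    return events

prev = [[100, 100], [100, 100]]
curr = [[100, 130], [60, 100]]
print(frame_to_events(prev, curr))  # → [(1, 0, 1), (0, 1, -1)]
```

A static scene produces no events at all, which is why this style of sensor needs so little data bandwidth and power compared with a frame camera.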
Prophesee has raised €126 million in funding, including a €50 million Series C round in 2023. The company has established partnerships with major tech players such as Sony, and a community of over 10,000 developers utilizing its technology.
————
GrAI Matter Labs was founded in 2016 by Ryad Benosman, Bernard Gilly, Giacomo Indiveri, and Atul Sinha. It is based in Paris, with offices in Eindhoven and Silicon Valley. It creates brain-inspired chips that mimic human neural processing to improve power efficiency and speed in edge AI applications.
Its main product, the GrAI Vision Inference Processor (VIP), was introduced in 2021. It is an AI system-on-a-chip for industries such as robotics, industrial automation, AR/VR, and smart surveillance. It is claimed to deliver 100x better inference latency than conventional solutions, thanks to the company’s proprietary NeuronFlow technology.
GrAI Matter Labs has undertaken several funding rounds, including a $15 million raise in 2021.
—————–
SynSense was founded in 2017 as aiCTX, by researchers from the Institute of Neuroinformatics at ETH Zurich and the University of Zurich. It is based in Chengdu, China, with an office in Zurich. Its mission is to develop ultra-low-power, event-based neuromorphic processors that mimic the way the brain processes information, enabling efficient, real-time sensory processing on edge devices.
The company’s flagship product is DYNAP-CNN, a neuromorphic vision processor for tasks such as object detection, tracking, and sensory data processing for smart homes, autonomous driving, and healthcare devices. SynSense’s technology has also been put to use in robotics, smart toys, and security systems.
The company has raised significant funding, and has partnerships with several major companies and academic institutions.
————-
Innatera was spun out of the Delft University of Technology in the Netherlands in 2018 by Sumeet Kumar, Amir Zjajo, Rene van Leuken, and Uma Mahesh. Its mission is to enable real-time intelligence on edge devices through ultra-low-power neuromorphic processors.
Innatera’s flagship product is the Spiking Neural Processor (SNP) T1, which processes sensory data with extremely low latency, often under a millisecond, consuming only milliwatts of power.
It raised $21 million in an oversubscribed Series A round in 2024, and aims to scale the production of its processors and integrate their intelligence into a billion sensors by 2030.
————-
General Vision was founded in 1987 by Anne Menendez and Guy Paillet. The company uses neuromorphic computing to create energy-efficient systems capable of real-time learning and recognition in dynamic environments.
The company’s flagship products include NeuroMem, a neural network chip that offers learning and recognition capabilities in robotics, IoT, and industrial automation. It also offers CogniSight, a visual recognition engine for applications like image processing and autonomous systems, operating on a massively parallel neural network architecture.