BRN Discussion Ongoing

Diogenese

Top 20
Hi @Diogenese, so does that mean we are not a neuromorphic chip? I don't have much knowledge in this technical field. Having invested much of my savings, this is my second comment on BRN. Were we duped so far? :-( Thanks
Hi Baisyet,

Not at all. For the purists who insist on a spike, Akida still does 1-bit weights and activations for low-power, high-speed applications which are not too demanding on accuracy.

Possibly the other point of distinction, for the purists who cut their teeth on analog, is that analog is much closer to a neuron: it sums voltages until they reach a threshold voltage, as a neuron does, whereas Akida counts the number of binary input bits until a threshold count is reached. On reflection, this is the more likely differentiator.

Many of the academic researchers have been toiling away at analog for decades, and perhaps they take a proprietary interest in the term "neuromorphic".

The problem with analog is manufacturing variability: the dimensions cannot be strictly controlled, so the electrical characteristics of the individual capacitors vary, which is significant when many thousands or millions of capacitors are involved ...

So it is odd, as Jason said he did not want to emulate the neuron so much as draw inspiration from it.
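
To make the distinction concrete, here is a toy Python sketch of the two flavours described above: an analog-style unit that sums voltages towards a threshold voltage, and a digital unit that counts binary input events towards a threshold count. Purely illustrative, not Akida's actual circuitry or anyone's real implementation.

Code:
# Illustrative only: the digital-vs-analog idea, not any vendor's logic.

def analog_style_neuron(input_voltages, threshold_voltage, leak=0.0):
    """Analog flavour: accumulate a continuous membrane potential and
    fire when it crosses a threshold voltage."""
    potential = 0.0
    spikes = []
    for v in input_voltages:
        potential = potential * (1.0 - leak) + v   # sum (leaky) voltages
        if potential >= threshold_voltage:
            spikes.append(1)
            potential = 0.0                        # reset after firing
        else:
            spikes.append(0)
    return spikes


def digital_event_neuron(input_events, threshold_count):
    """Digital flavour: count binary input events and fire when an
    integer threshold count is reached."""
    count = 0
    spikes = []
    for event in input_events:                     # each event is 0 or 1
        count += event
        if count >= threshold_count:
            spikes.append(1)
            count = 0                              # reset after firing
        else:
            spikes.append(0)
    return spikes


print(analog_style_neuron([0.3, 0.4, 0.5], threshold_voltage=1.0))  # [0, 0, 1]
print(digital_event_neuron([1, 0, 1, 1], threshold_count=3))        # [0, 0, 0, 1]

Both fire once enough input has arrived; the digital version just does it with integer counts, which is what makes it exactly repeatable in silicon.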
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 31 users

Diogenese

Top 20
So really, Akida is the result of thinking outside the box.
 
  • Like
  • Love
  • Fire
Reactions: 30 users

Diogenese

Top 20
Digital Spiking Neural Network (DSNN)? ... or is that already taken?
 
  • Like
  • Fire
  • Love
Reactions: 23 users

TECH

Regular
When Dave Whatshisname deprecated Akida on LinkedIn, Tony Lewis replied that he thought he had made it clear that it was "event-based".

So maybe the distinction is between the 1-bit Akida (still an available mode) and the 2- and 4-bit modes, which are more accurate/definitive (able to make finer distinctions).
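
As a toy illustration of the quoted 1-bit vs 2/4-bit point, here is a bare uniform quantiser in Python showing how many distinct levels each bit-width gives an activation. This is a generic quantisation sketch, not BrainChip's actual scheme.

Code:
# Toy uniform quantiser: illustrative only, not BrainChip's method.
def quantise(x, bits, x_max=1.0):
    """Map a real activation in [0, x_max] onto 2**bits discrete levels."""
    levels = 2 ** bits - 1
    step = x_max / levels
    q = round(min(max(x, 0.0), x_max) / step)
    return q * step

for bits in (1, 2, 4):
    print(bits, [round(quantise(x, bits), 3) for x in (0.1, 0.4, 0.7, 0.9)])
# 1 bit keeps only "off/on"; 4 bits resolve much finer differences.

With 1 bit an activation is effectively just "spike or no spike"; 2 and 4 bits buy the finer distinctions at a modest cost in power and memory.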

I haven't listened to the podcast with Jason yet, but we all know we are neuromorphic, so possibly his comments have been misunderstood. He joined our SAB as a contributing scientist, with Peter and Andre, less than 3 months ago, so I would suggest that Peter would speak with him if he feels his comments were in any way inappropriate or not aligned with our company's objectives.
Integrity resides at BrainChip... we are 100% neuromorphic and always have been.
Very happy to be corrected by the man himself... no regerts 🤣🤣🤣
Tech x
Let's go BrainChip, time to bust open that dam.
 
  • Like
  • Love
  • Haha
Reactions: 16 users

Terroni2105

Founding Member
If they are now confused about their own identity, then I don't think it is a good look after all these years of telling us they are leaders in neuromorphic.
I have looked at news releases from BrainChip and they still refer to their "neuromorphic AI", and still refer to being "the world's first commercial producer of neuromorphic AI" on their website, so either I'm not understanding things or I kind of feel Jason K. Eshraghian, who is on our Scientific Advisory Board, is undermining BrainChip. Maybe I just don't understand the technicalities of whatever Jason is saying, but if he is correct then I feel like it has been a massive miscommunication on the part of BrainChip all these years, because there shouldn't be confusion when trying to market your product.
 
  • Like
  • Fire
Reactions: 14 users


SERA2g

Founding Member
He says about BrainChip, "I wouldn't call them neuromorphic". I don't know why.
He's on the Scientific Advisory Board lol... and here he is, only 4 days ago, not being 100% sure of the name of the new Pico chip and saying BrainChip isn't a spiking neural network. What the fuck. Someone needs to tune him up.
 
  • Like
  • Fire
  • Love
Reactions: 21 users
Found these guys over the weekend.

Their take on neuromorphic, as well as a list of "start-ups" and background in the neuromorphic space. I'll post that separately, as this post may hit the word limit.


About us​

Bringing together leaders in the fields of AI, deep learning, theory of mind, neuroscience, and evolutionary computation.​

We are the world’s first applied AI research organisation.​

Our aim is to deepen our understanding of consciousness to pioneer efficient, intelligent, and safe AI that builds a better future for humanity.


What is neuromorphic computing?​

07.06.24
4 mins read

Neuromorphic​

The word “neuromorphic” comes from two Greek words, “neuro” meaning nerves, and “morphic” meaning shape. A system is neuromorphic if it is modelled on a biological nervous system such as the human brain.

Neuromorphic Computing​

Neuromorphic computing means computer hardware and software that processes information in ways similar to a biological brain. As well as enabling machines to become more powerful and more useful, the development of neuromorphic computing should teach us a great deal about how our brains work.

Neuromorphic Hardware​

In the 1980s, Carver Mead, a pioneering American computer scientist who coined the term "Moore's Law", began thinking about how to develop AIs based on architectures similar to mammalian brains. Initially, neuromorphic hardware used analogue circuits rather than digital ones, but today most of the leading neuromorphic hardware is digital.
The most important characteristic of neuromorphic computers is their sparsity of computation. In traditional artificial neural networks, every neuron makes a calculation many times each second and stores the results as numbers, usually floating-point numbers. In neuromorphic computing, only a minority of neurons perform a calculation at any one time.
In mammalian brains, neurons only “spike”, or “fire” when stimulated appropriately. This is replicated in neuromorphic systems, and it makes them much less energy intensive, like mammalian brains. It takes enough electricity to power New York City to train a Large Language Model (LLM) like GPT-4, while your brain consumes the same power as a single light bulb.
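A rough sketch of what that sparsity buys, with made-up numbers (only about 5% of inputs "spike" in this time step); purely illustrative, not modelled on any particular chip:

Code:
# Toy comparison of dense vs event-driven accumulation. Illustrative only.
import random

random.seed(0)
n = 1000
weights = [random.uniform(-1, 1) for _ in range(n)]
spikes = [1 if random.random() < 0.05 else 0 for _ in range(n)]  # ~5% active

# Dense style: multiply-accumulate over every input, active or not.
dense_ops = 0
dense_sum = 0.0
for w, s in zip(weights, spikes):
    dense_sum += w * s
    dense_ops += 1

# Event-driven style: only touch the inputs that actually spiked.
event_ops = 0
event_sum = 0.0
for i, s in enumerate(spikes):
    if s:
        event_sum += weights[i]
        event_ops += 1

print(dense_ops, event_ops)               # 1000 vs roughly 50 operations
print(abs(dense_sum - event_sum) < 1e-9)  # True: same answer, far less work

The result is identical; the event-driven version simply skips the inputs that carry no information, which is where the power saving comes from.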
Neuromorphic hardware is an alternative to the main established computing paradigm, which is the von Neumann architecture.

Von Neumann architecture​

John von Neumann was a Hungarian-American polymath who played an important role in the development of the first computers, from the end of World War Two onwards.
Von Neumann proposed a system with a central processing unit (CPU), a memory unit, and input and output devices. Inside the CPU is a control unit, which manages the traffic of information and instructions, and an arithmetic logic unit (ALU), which performs calculations. The memory unit houses both data and programmes.
The architecture is simpler than its early rivals, but there are bottlenecks because data needs to be shuffled backwards and forwards between the CPU and the memory, and because data and programmes cannot be shuffled at the same time. As computers have become faster, these bottlenecks have become more problematic.
  • Neuromorphic architectures often address this bottleneck by combining memory chips with processing chips, or locating them close to each other.

Neuromorphic Software​

Neuromorphic software consists of algorithms and models that operate more like biological brains than traditional computer systems, which are based on the von Neumann paradigm. One example of neuromorphic software is spiking neural networks (see below).

Neuromorphic AI​

Neuromorphic AI systems use neuromorphic hardware and / or software to handle cognitive tasks like information retrieval, pattern recognition, sensory processing, and decision making. Their developers argue that they are more adaptive, scalable, and efficient than traditional AIs.

Spiking neural networks​

Spiking neural networks (SNNs) are an attempt to get closer to the brain’s architecture, and thus create more efficient and more robust AIs.
In biological brains, neurons transmit signals down fibres called axons. If the signal is strong enough, and if the internal states of the neurons are appropriate, then the signal will cross a gap called a synapse into a second neuron. The signal then travels down a slightly different type of fibre called a dendrite towards the main body, or soma, of the second neuron. The second neuron may get excited enough by the incoming signal to send a new signal out along its axons. And so on – the signal propagates across the brain.
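A minimal sketch of that propagation in code, using the textbook leaky integrate-and-fire abstraction with two neurons in a chain; the constants are arbitrary and this is not a model of any particular system.

Code:
# Two leaky integrate-and-fire "neurons" in a chain. Textbook abstraction only.

def step(potential, incoming, threshold=1.0, leak=0.1):
    """Advance one neuron by one time step; return (new_potential, spiked)."""
    potential = potential * (1.0 - leak) + incoming
    if potential >= threshold:
        return 0.0, True        # fire and reset
    return potential, False

v_a = v_b = 0.0
w_ab = 0.8                      # synaptic weight from neuron A to neuron B
input_current = [0.4, 0.4, 0.4, 0.0, 0.4, 0.4, 0.4, 0.0]

for t, i_in in enumerate(input_current):
    v_a, a_fired = step(v_a, i_in)
    # A's spike crosses the "synapse" and drives B on the same time step.
    v_b, b_fired = step(v_b, w_ab if a_fired else 0.0)
    print(t, a_fired, b_fired)

Neuron A fires only after enough input has accumulated, and neuron B fires only after enough spikes from A have crossed the synapse; nothing downstream computes until something upstream actually fires.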

Hoped-for benefits of neuromorphic computing​

  • Efficiency: significantly lower power consumption
  • Speed: much less latency as data does not need to be transmitted to and from centralized data centres
  • Adaptability: new data can be integrated, enabling more robust performance in dynamic environments
  • Scalability: modular neuromorphic systems should scale efficiently as more computational units (neurons) are added
  • Robustness: better at reacting to unexpected circumstances

Neuromorphic AI and machine consciousness​

Some argue that neuromorphic AI systems are more likely to become conscious than other types of AI. It may also be easier to detect the early signs of consciousness in them.

Neuroevolution​

Neuroevolution is not a form of neuromorphic computing, but it is a related concept, so we are summarising it here.
Neuroevolution uses evolutionary algorithms to generate artificial neural networks, and to simulate the behaviour of agents in artificial life, video games, and robotics. It requires only a measure of a network's performance at a task, whereas supervised learning algorithms must be trained on a corpus of correct input-output pairs. Neuroevolution is often used as an alternative to reinforcement learning for such tasks.
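A bare-bones sketch of the idea in Python: randomly mutate a tiny network's weights and keep the mutant whenever a single fitness score improves. Real neuroevolution systems (NEAT, evolution strategies and so on) are far more sophisticated; this only shows that no labelled input-output pairs or gradients are needed.

Code:
# Bare-bones neuroevolution: mutate weights, keep the mutant if fitness improves.
# Toy example only, not NEAT or any production system.
import math
import random

random.seed(1)

def forward(weights, x):
    """A 1-input, 1-hidden-unit, 1-output 'network'."""
    w1, b1, w2, b2 = weights
    return w2 * math.tanh(w1 * x + b1) + b2

def fitness(weights):
    """Only an overall performance score is needed: here, negative squared
    error at approximating y = 2x on a few sample points."""
    xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
    return -sum((forward(weights, x) - 2 * x) ** 2 for x in xs)

best = [random.uniform(-1, 1) for _ in range(4)]
best_fit = fitness(best)

for generation in range(2000):
    mutant = [w + random.gauss(0, 0.1) for w in best]   # mutate
    f = fitness(mutant)
    if f > best_fit:                                    # select
        best, best_fit = mutant, f

print(round(best_fit, 4), [round(w, 2) for w in best])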
 
  • Like
  • Thinking
  • Fire
Reactions: 18 users



Neuromorphic startups​

18.10.24
7 mins read

  1. Verses
  2. Rain.AI
  3. Opteran
  4. SpiNNcloud
  5. Liquid AI
  6. BrainChip
  7. Prophesee
  8. GrAI Matter Labs
  9. SynSense
  10. Innatera
  11. General Vision
————–
Verses Technologies Inc was founded in 2018 by Gabriel René, Steven Swanson, Capm Petersen, and Dan Mapes, and is headquartered in Los Angeles, California. Karl Friston, a Conscium adviser, is the company’s Chief Scientist. Verses’ mission is to enhance human potential through intelligent tools that radically improve mutual understanding, fostering a smarter world where technology and humans coexist harmoniously.
Verses has developed KOSM, which it describes as the world’s first network operating system designed to enable distributed intelligence. KOSM generates a shared world model comprised of contextualized data, policies, simulations, and workflows, aiming to make information inter-operable, available, accessible, and trustworthy.
The company’s other products include Genius, an intelligent software system that transforms disparate data into coherent knowledge models. This platform allows intelligent agents to learn, reason, and adapt across various contexts, driven by the philosophy of mimicking the ‘wisdom and genius of nature’ to create safe and trustworthy interactions between humans, machines, and AI.
Verses is a public company trading under the ticker symbol VERS. It has a market capitalization of $157.6 million, with annual revenues of approximately $1.97 million.
————–
Rain AI was founded in 2017 by Gordon Wilson, Jack Kendall, and Juan Nino, and is located in Redwood City, California. The company designs neuromorphic chips that perform AI tasks more efficiently by mimicking how human brain synapses work, reducing the energy consumption associated with traditional AI hardware.
A potential challenger to Nvidia in the field of AI model training and inference, the company is backed by prominent investors, including Sam Altman, Daniel Gross, and Airbus Ventures, and has raised over $76 million in funding. It is expected to release its first production chips by 2025.
—————
Opteran was spun out from the University of Sheffield in 2020 by Professor James Marshall and Dr. Alex Cope. It aims to revolutionize autonomous machine intelligence with a technology it calls “Natural Intelligence,” which replicates insect decision-making and navigation systems. This approach enables machines to perform tasks like seeing, sensing, obstacle avoidance, and dynamic navigation without relying on large datasets or extensive pre-training. The company has raised over £12 million in funding.
Opteran’s core product is the Opteran Development Kit (ODK), which enables its lightweight, efficient silicon “brains” to be incorporated into a range of autonomous machines, including drones, robots, and vehicles. Operating with minimal energy consumption, its algorithms make autonomy possible in GPS-denied environments, as demonstrated in trials controlling a sub-250g drone using just 10,000 pixels from a single low-resolution camera.
———–
SpiNNcloud Systems was founded by Christian Mayr, Christian Eichhorn, Matthias Lohrmann, and Hector Gonzalez, and emerged from the SpiNNaker2 research project in 2020. Its mission is to transfer cutting-edge neuromorphic computing research into practical applications, developing brain-inspired supercomputers with enhanced AI capabilities and dramatically reduced energy demands.
The company’s flagship product is the SpiNNaker2 supercomputer, a hybrid AI system that combines neuromorphic models with conventional AI approaches. It features 152 ARM cores per chip, and supports various neural networks, including both deep learning and spiking models. It is primarily designed to handle real-time applications, such as robotics, smart cities, and autonomous vehicles.
SpiNNcloud is working with Sandia National Laboratories and the Technical University of Munich, and is on track to make a beta version of its technology available in 2024, with full production systems available in 2025.
——————
Liquid AI was spun out from MIT in 2023 by Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus. Its “Liquid Neural Networks” are inspired by biological neurons, and offer more flexibility and adaptability than traditional AI models. They can adjust dynamically to new data and environmental changes, making them particularly suited for real-time applications like autonomous driving and robotics.
Liquid AI’s Liquid Foundation Models (LFMs) are claimed to out-perform traditional transformer-based models while using significantly less memory and compute power. The company raised $37.5 million in seed funding at a valuation of $303m from investors including OSS Capital, PagsGroup, and angel investors like GitHub co-founder Tom Preston Werner, and Shopify’s Tobias Lütke.
—————-
BrainChip was founded in 2004 by Peter Van Der Made and Anil Mankar, and is a pioneer in neuromorphic computing, based in part on technology developed by Simon Thorpe at CNRS in Toulouse.
BrainChip’s flagship product is the Akida neuromorphic processor, which is optimized for ultra-low power AI inference and learning at the edge. The processor uses spiking neural networks to process inputs with high efficiency, making it ideal for applications in areas like autonomous vehicles, industrial IoT, and smart consumer devices.
In 2024, BrainChip launched its Akida technology into low Earth orbit for space applications, demonstrating its ability to operate in extreme environments.
—————–
Prophesee was founded in Paris in 2014 by Luca Verre, Christoph Posch, Daniel Matolin, Ryad Benosman, and Bernard Gilly. Its patented “event-based vision technology” captures changes in the visual field at high speed, and with significantly lower data and power requirements than traditional cameras.
Prophesee’s flagship product is the Metavision sensor, which excels in challenging conditions such as low light and fast motion. The GenX320 version is claimed to be the world’s smallest and most power-efficient event-based sensor, designed for edge AI devices like AR/VR headsets and IoT applications.
Prophesee has raised €126 million in funding, including a €50 million Series C round in 2023. The company has established partnerships with major tech players such as Sony, and a community of over 10,000 developers utilizing its technology.
————
GrAI Matter Labs was founded in 2016 by Ryad Benosman, Bernard Gilly, Giacomo Indiveri, and Atul Sinha. It is based in Paris, with offices in Eindhoven and Silicon Valley. It creates brain-inspired chips that mimic human neural processing to improve power efficiency and speed in edge AI applications.
Its main product, the GrAI Vision Inference Processor (VIP), was introduced in 2021. It is an AI system-on-a-chip for industries such as robotics, industrial automation, AR/VR, and smart surveillance. It delivers 100x better inference latency than conventional solutions, thanks to the company’s proprietary NeuronFlow technology.
GrAI Matter Labs has undertaken several funding rounds, including a $15 million raise in 2021.
—————–
SynSense was founded in 2017 as aiCTX, by researchers from the Institute of Neuroinformatics at ETH Zurich and the University of Zurich. It is based in Chengdu, China, with an office in Zurich. Its mission is to develop ultra-low-power, event-based neuromorphic processors that mimic the way the brain processes information, enabling efficient, real-time sensory processing on edge devices.
The company’s flagship product is DYNAP-CNN, a neuromorphic vision processor for tasks such as object detection, tracking, and sensory data processing for smart homes, autonomous driving, and healthcare devices. SynSense’s technology has also been put to use in robotics, smart toys, and security systems.
The company has raised significant funding, and has partnerships with several major companies and academic institutions.
————-
Innatera was spun out of the Delft University of Technology in the Netherlands in 2018 by Sumeet Kumar, Amir Zjajo, Rene van Leuken, and Uma Mahesh. Its mission is to enable real-time intelligence on edge devices through ultra-low-power neuromorphic processors.
Innatera’s flagship product is the Spiking Neural Processor (SNP) T1, which processes sensory data with extremely low latency, often under a millisecond, consuming only milliwatts of power.
It raised $21 million in an oversubscribed Series A round in 2024, and aims to scale the production of its processors and integrate their intelligence into a billion sensors by 2030.
————-
General Vision was founded in 1987 by Anne Menendez and Guy Paillet. The company uses neuromorphic computing to create energy-efficient systems capable of real-time learning and recognition in dynamic environments.
The company’s flagship products include NeuroMem, a neural network chip that offers learning and recognition capabilities in robotics, IoT, and industrial automation. It also offers CogniSight, a visual recognition engine for applications like image processing and autonomous systems, operating on a massively parallel neural network architecture.
 
  • Like
  • Fire
Reactions: 23 users

Tothemoon24

Top 20
IMG_9841.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Tothemoon24

Top 20
IMG_9842.jpeg


Neuromorphic computing: No, the tech industry is not planning on harvesting Spock's brain to drive computing technology. However, the idea of neuromorphic computing is that neuromorphic systems process many tasks in parallel rather than using sequential steps, which is much closer to how the human brain works. This could be helpful in AI and in processing inputs from thousands of sensors.

Novel accelerators: This is another buzzword to describe the special-purpose processing units that have become popular as ways to augment traditional CPUs. The best known of these, of course, is the GPU. Initially popular as a way for gamers to get higher-quality graphics, GPUs have proven to be amazingly capable in crypto and AI calculations. Other custom accelerators, like Tensor Processing Units (Google's machine-learning engine), are also proving popular.
 
  • Like
  • Thinking
  • Fire
Reactions: 7 users

CHIPS

Regular
SP +7.83% in Germany at 7 p.m. today.
I hope this signals a positive 4C coming tomorrow. Maybe some people know more and have already bought more shares? *hope*

Emoji Increasing GIF by emoji® - The Iconic Brand
 
  • Like
  • Fire
  • Love
Reactions: 20 users

Baisyet

Regular
Thank you very much @Diogenese, much appreciated. You put it in a way that's so easy to understand.
 
  • Like
Reactions: 9 users

FiveBucks

Regular

1730149509708.png


:p

Love your work Tech x
 
  • Haha
  • Like
Reactions: 9 users

Sosimple

Regular
Can anyone tell me what a RAG system is?
 

Attachments

  • Screenshot_2024-10-29-08-31-57-20_cbf47468f7ecfbd8ebcc46bf9cc626da.jpg
  • Thinking
  • Like
Reactions: 2 users

DK6161

Regular
Just a reminder: this time last year, our total cash inflow from customers was US$30,000 for the quarter ending 30 June.
 
  • Like
Reactions: 2 users

DK6161

Regular
The judgement on Sean won't be final until the first 4C is out next year, so he has still got time.
Might ask TD if Sean is taking leave over Xmas. That would be a good indication of whether he is comfortable with his 2024 performance, i.e. whether we got some deals secured as promised.
 
  • Like
Reactions: 4 users

jtardif999

Regular
@Diogenese explained the differences between analog and digital neuromorphic beautifully. Thank you @Diogenese 🤓 BrainChip have always claimed to be digital and neuromorphic. Whether Akida's neurons reach a threshold one way and analog-architected neurons reach it another is semantics: both approaches mimic the brain and can truly learn as a result. BrainChip have the surer way of getting a result, though. Digital is always repeatable, whereas analog could potentially be unreliable. Both are neuromorphic in that they mimic the learning process of the brain, and BrainChip have their way patented. AIMO.
 
  • Like
  • Fire
  • Love
Reactions: 21 users