Short term, no; long term, yes. Depends on how long you plan on holding.
Maybe a good direction for the company to take, but sorry, it does nothing for us shareholders and our investment.
Probably what I wanted to say, but I said mine with a lot fewer words lol
In 5 years, these university kids will be in engineering roles in companies that may have had no exposure to SNNs. These kids will have experience with Akida that the old dinosaurs in the company may never have been exposed to and may implement Akida in their products.
It isn't going to send the share price rocketing today but the more exposure people have to our product, the better.
I am very grateful for this post from @equanimous. I searched Michael Pfeiffer on LinkedIn and am now trying to connect with him.
Just 'cause ya posted about Michael, refer linked prev post:
Post in thread 'The Wall + Links' https://thestockexchange.com.au/threads/the-wall-links.11187/post-29194
learn&buy
That also describes me. Five months ago I was a WANCA. I had a nice little parcel of shares from November. Then, in a very short time, I learned about something completely unknown to me. I knew a lot about tech, but not IT. I bought, and am fascinated daily.
?
short enough
for me
Edge Impulse up for an award for “real time object detection” View attachment 14403
In BrainChip’s news release announcing their University AI Accelerator Program, they state that "The Program successfully completed a pilot session at Carnegie Mellon University (CMU) this past spring semester and will be officially launching with Arizona State University in September," with a further five universities expected to participate.
I had never heard of Carnegie Mellon University, so I took to Google as any fine researcher does these days, and I’ve come away very impressed.
CMU is a private research university.
If you search for top colleges / schools / universities for teaching computer science, Carnegie Mellon University repeatedly appears in the top 3 in America.
… the college is perhaps best known for its School of Computer Science. CMU is currently tied for second place on the US News and World Report’s ranked list of top schools for computer science in the country.
More specifically, the college’s programs for artificial intelligence and programming language are known as the very best in the country…
CMU is also known for its engineering program, which is currently tied for fourth in the national college rankings.
Best Artificial Intelligence Programs Being taught? Yep, CMU.
https://www.usnews.com/best-graduate-schools/top-science-schools/artificial-intelligence-rankings
But it doesn’t stop in the US! Additionally, CMU ranks 6th in the world for computer science according to https://www.timeshighereducation.com/world-university-rankings/2022/subject-ranking/computer-science
Pretty impressive work by BrainChip to hook up with Carnegie Mellon University.
I look forward to some recognition of BrainChip’s engagement in their computing school on the CMU website in the, hopefully near, future.
Great work T,
I too think this is a big step for our Company.
It's impressive that CMU has already successfully completed a pilot session this year.
From our press release
"The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year."
BrainChip Empowers Next Generation of Technology Innovators with Launch of the University AI Accelerator Program
Empower your tech innovation journey with BrainChip's University AI Accelerator Program. Unleash the future here! (brainchip.com)
You inspired me last night to do a bit of digging so I thought I would look into the Professor John Paul Shen, who said this
"We have incorporated experimentation with BrainChip’s Akida development boards in our new graduate-level course, “Neuromorphic Computer Architecture and Processor Design” at Carnegie Mellon University during the Spring 2022 semester,” said John Paul Shen, Professor, Electrical and Computer Engineering Department at Carnegie Mellon. “Our students had a great experience in using the Akida development environment and analyzing results from the Akida hardware. We look forward to running and expanding this program in 2023"
It didn't take much digging for my socks to be blown off. Impressive man.
John Shen
Professor, Electrical and Computer Engineering
Contact
Bio
John Paul Shen was a Nokia Fellow and the founding director of Nokia Research Center - North America Lab. NRC-NAL had research teams pursuing a wide range of research projects in mobile Internet and mobile computing. In six years (2007-2012), NRC-NAL filed over 100 patents, published over 200 papers, hosted about 100 Ph.D. interns, and collaborated with a dozen universities. Prior to joining Nokia in late 2006, John was the Director of the Microarchitecture Research Lab at Intel. MRL had research teams in Santa Clara, Portland, and Austin, pursuing research on aggressive ILP and TLP microarchitectures for IA32 and IA64 processors. Prior to joining Intel in 2000, John was a tenured Full Professor in the ECE Department at CMU, where he supervised a total of 17 Ph.D. students and dozens of M.S. students, received multiple teaching awards, and published two books and more than 100 research papers. One of his books, “Modern Processor Design: Fundamentals of Superscalar Processors” was used in the EE382A Advanced Processor Architecture course at Stanford, where he co-taught the EE382A course. After spending 15 years in the industry, all in the Silicon Valley, he returned to CMU in the fall of 2015 as a tenured Full Professor in the ECE Department, and is based at the Carnegie Mellon Silicon Valley campus.
Education
Ph.D.
Electrical Engineering
University of Southern California
M.S.
Electrical Engineering
University of Southern California
B.S.
Electrical Engineering
University of Michigan
Research
Modern Processor Design and Evaluation
With the emergence of superscalar processors, phenomenal performance increases are being achieved via the exploitation of instruction-level parallelism (ILP). Software tools for aiding the design and validation of complex superscalar processors are being developed. These tools, such as VMW (Visualization-Based Microarchitecture Workbench), facilitate the rigorous specification and validation of microarchitectures.
Architecture and Compilation for Instruction-Level Parallelism
Microarchitecture and code transformation techniques for effective exploitation of ILP are being studied. Synergistic combinations of static (compile-time software) and dynamic (run-time hardware) mechanisms are being explored. Going beyond a single instruction stream is necessary to achieve effective use of wide superscalar machines, as well as tightly coupled small-scale multiprocessors.
Dependable and Fault-Tolerant Computing
Techniques are being developed to exploit the idling machine resources of ILP machines for concurrent error checking. As ILP machines get wider, the utilization of the machine resources will decrease. The idling resources can potentially be used for enhancing system dependability via compile-time transformation techniques.
Keywords
- Wearable, mobile, and cloud computing
- Ultra energy-efficient computing for sensor processing
- Real-time data analytics
- Mobile-user behavior modelling and deep learning
John Shen - Electrical and Computer Engineering - College of Engineering - Carnegie Mellon University
Distinguished Service Professor of Electrical and Computer Engineering at Carnegie Mellon University. (www.ece.cmu.edu)
I also checked out his LinkedIn page (see screenshot); I especially like the sentence at the bottom.
View attachment 14406
Home | CMU-NCA Lab
www.ncal.sv.cmu.edu
NCAL: Neuromorphic Computer Architecture Lab
"Energy-Efficient, Edge-Native, Sensory Processing Units
with Online Continuous Learning Capability"
The Neuromorphic Computer Architecture Lab (NCAL) is a new research group in the Electrical and Computer Engineering Department at Carnegie Mellon University, led by Prof. John Paul Shen and Prof. James E. Smith.
RESEARCH GOAL: New processor architecture and design that captures the capabilities and efficiencies of the brain's neocortex for energy-efficient, edge-native, on-line, sensory processing in mobile and edge devices.
- Capabilities: strong adherence to biological plausibility and Spike Timing Dependent Plasticity (STDP) in order to enable continuous, unsupervised, and emergent learning.
- Efficiencies: can achieve several orders of magnitude improvements on system complexity and energy efficiency as compared to existing DNN computation infrastructures for edge-native sensory processing.
RESEARCH STRATEGY:
- Targeted Applications: Edge-Native Sensory Processing
- Computational Model: Space-Time Algebra (STA)
- Processor Architecture: Temporal Neural Networks (TNN)
- Processor Design Style: Space-Time Logic Design
- Hardware Implementation: Off-the-Shelf Digital CMOS
1. Targeted Applications: Edge-Native Sensory Processing
Targeted application domain: edge-native on-line sensory processing that mimics the human neocortex. The focus of this research is on temporal neural networks that can achieve brain-like capabilities with brain-like efficiency and can be implemented using standard CMOS technology. This effort can enable a whole new family of accelerators, or sensory processing units, that can be deployed in mobile and edge devices for performing edge-native, on-line, always-on, sensory processing with the capability for real-time inference and continuous learning, while consuming only a few mWatts.
2. Computational Model: Space-Time Algebra (STA)
A new Space-Time Computing (STC) Model has been developed for computing that communicates and processes information encoded as transient events in time -- action potentials or voltage spikes in the case of neurons. Consequently, the flow of time becomes a freely available, no-cost computational resource. The theoretical basis for the STC model is the "Space-Time Algebra" (STA), with primitives that model points in time and functional operations that are consistent with the flow of Newtonian time. [STC/STA was developed by Jim Smith]
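Reading that in concrete terms (my own toy sketch, not NCAL's code): if a value is a point in time, the natural primitives are "earliest arrival", "latest arrival", and "delay", with infinity standing in for a spike that never happens.

```python
import math

# Space-time-algebra-flavoured sketch (an illustration, not NCAL code).
# A value is a point in time: the moment a spike occurs. math.inf = "never spikes".
NEVER = math.inf

def earliest(*spikes):
    """First arrival among several spike times (temporal analogue of min)."""
    return min(spikes)

def latest(*spikes):
    """Last arrival (temporal analogue of max); a NEVER input dominates."""
    return max(spikes)

def delay(spike, d):
    """Propagate a spike through a delay line of d time units."""
    return spike + d

# Two input spikes at t=3 and t=5, the second delayed by 4 units:
a, b = 3, delay(5, 4)       # b arrives at t=9
print(earliest(a, b))       # -> 3
print(latest(a, b))         # -> 9
print(latest(a, NEVER))     # -> inf: waiting on a spike that never comes
```

The point of the model is that these operations come for free from the passage of time itself, which is why the page calls time a "no-cost computational resource".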
3. Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks, for implementing a class of functions based on the space time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.
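As a caricature of the STDP idea described above (my own toy rule, not the lab's): a synapse whose input spike arrives at or before the neuron's output spike is strengthened, and one whose input arrives after, or not at all, is weakened; no labels or global error signal are required, which is what makes the learning unsupervised.

```python
def stdp_update(in_spike_times, out_spike_time, weights, lr=0.1, w_max=1.0):
    """Toy unsupervised STDP step (illustrative only): pre-before-post
    strengthens a synapse, post-before-pre or a silent input (None) weakens
    it. Weights are clamped to [0, w_max]."""
    new_weights = []
    for t_in, w in zip(in_spike_times, weights):
        if t_in is not None and t_in <= out_spike_time:
            w = min(w_max, w + lr)   # causal input: potentiate
        else:
            w = max(0.0, w - lr)     # non-causal or silent input: depress
        new_weights.append(w)
    return new_weights

# Inputs at t=2 and t=7 (third synapse silent), output spike at t=5:
# only the first synapse helped cause the output, so only it is strengthened.
print(stdp_update([2, 7, None], out_spike_time=5, weights=[0.5, 0.5, 0.5]))
```

Real STDP rules weight the update by the size of the time gap between pre- and post-synaptic spikes; this sketch keeps only the sign of that relationship.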
4. Processor Design Style: Space Time Logic Design
Conventional CMOS logic gates based on Boolean algebra can be re-purposed to implement STA based temporal operations and functions. Temporal values can be encoded using voltage edges or pulses. We have developed a TNN architecture based on two key building blocks: neurons and columns of neurons. We have implemented the excitatory neuron model with its input synaptic weights as well as a column of such neurons with winner-take-all (WTA) lateral inhibition, all using the space time logic design approach and standard digital CMOS design tools.
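A plain-Python toy of that neuron-and-column structure (my own sketch, nothing to do with the actual CMOS implementation): each excitatory neuron's potential ramps up after each weighted input spike arrives, and winner-take-all lateral inhibition simply keeps the neuron whose output spike comes first.

```python
import math

def neuron_fire_time(in_spikes, weights, threshold, horizon=100):
    """Ramp-no-leak toy neuron: after an input spike at time s arrives, that
    synapse contributes weight * (t - s) to the potential. Returns the first
    integer time the potential crosses threshold, or math.inf if it never does."""
    for t in range(horizon):
        potential = sum(w * max(0, t - s) for s, w in zip(in_spikes, weights))
        if potential >= threshold:
            return t
    return math.inf

def wta_column(in_spikes, weight_rows, threshold):
    """Winner-take-all column: every neuron races on the same inputs; lateral
    inhibition keeps only the earliest output spike. Returns (winner, time)."""
    times = [neuron_fire_time(in_spikes, w, threshold) for w in weight_rows]
    winner = min(range(len(times)), key=lambda i: times[i])
    return winner, times[winner]

# Two neurons, three input spikes at t=1, 2, 4. Neuron 1 weights the first
# input heavily, so it reaches threshold sooner and wins the race.
winner, t = wta_column([1, 2, 4], [[1, 1, 1], [3, 0, 0]], threshold=10)
print(winner, t)   # -> 1 5
```

In the lab's design the same race is carried by voltage edges through CMOS gates rather than a software loop; the sketch is only meant to show why "first spike wins" gives the column its selective behaviour.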
5. Hardware Implementation: Standard Digital CMOS Technology
Based on the STA theoretical foundation and the ST logic design approach, we can design a new type of special-purpose TNN-based "Neuromorphic Sensory Processing Units" (NSPU) for incorporation in mobile SoCs targeting mobile and edge devices. NSPUs can be a new core type for SoCs already with heterogeneous cores. Other than using off-the-shelf CMOS design and synthesis tools, there is the potential for creating a new custom standard cell library and design optimizations for supporting the design of TNN-based NSPUs for sensory processing.
And the Course Syllabus - Spring 2022 18-743: “Neuromorphic Computer Architecture & Processor Design”
BrainChip is listed on the course
View attachment 14407
View attachment 14408
It's great to be a shareholder.
De facto standard.
Considering he is based at the Silicon Valley campus, it is possible that Sean made the introduction.
P.S. I love the look of the course calendar with Akida on there.
Hi cosors, I think that's very important. Young people need to get to know Akida better. I still haven't really calmed down; I had deleted my post. I listened to a recent webinar (almost 2 h) in German. Topic:
Artificial intelligence is pushing computers as we know them today to their limits. Do we need new computers inspired by the brain?
AI Hardware of the Future: What is Neuromorphic Computing?
Presentation of the topic and our guests
00:03:05 What is Neuromorphic Computing?
00:09:20 Short message from our sponsor BWI
00:10:00 What do neural networks and neuromorphic computing have in common?
00:16:02 Emulation vs. architecture in neuromorphic computing
00:18:00 Analogue vs. digital computing methods
00:21:15 Differentiation to quantum computing
00:25:36 The history of neuromorphic computing
00:32:16 How has deep learning changed neuromorphic computing?
00:35:55 What does neuromorphic computing mean for AI development?
00:39:50 How does deep learning work on neuromorphic hardware?
00:44:15 Sparse and spiking in neural networks - what's the difference?
00:52:00 The advantage of spiking neural networks
00:59:24 What is the biggest challenge in neuromorphic computing right now?
01:05:07 As an AI developer, can I fully rely on neuromorphic computing?
01:07:25 Will we have a neuromorphic chip in the iPhone in five years?
01:08:00 Would moving to neuromorphic computing require new manufacturing processes?
01:11:33 What could a neuromorphic chip in the iPhone do better?
Test tube vs. chip factory: Could bio-computers also be bred?
01:31:02 Is biology even a good model for computing?
01:39:16 Max' Philo Solo: Where is the line between emulation and reproduction?
01:45:20 Where to learn more about Neuromorphic Computing?
01:47:52 How to get into neuromorphic computing?
They (including the University of Bern in Switzerland, home of Bonseyes) managed to talk for two hours about the state of affairs and what is not yet possible today. At least 20 times I thought: hey, do you as a researcher really not know Akida? The webinar was full of ignorance about the state of things, whether deliberate or not. I had even prepared a letter but didn't send it; I don't think it makes any sense. All you have to do is type "neuromorphic" and "spiking neural" into Google and you'll inevitably come up with BrainChip. Anyway, I think it is very important that Akida be distributed to universities. I still shake my head. For anyone who wants to hear it from my German compatriots:
https://mixed.de/vom-gehirn-inspirierte-ki-was-ist-neuromorphic-computing-deep-minds-7/?amp=1
Great to see the lecturers for lectures 7 & 8 will have hands-on experience with Akida.
I don't think they are there yet.
New neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today's computing platforms
An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of AI applications, all at a fraction of the energy consumed by computing platforms for general-purpose AI computing. (techxplore.com)
This looks like genuine competition