BRN Discussion Ongoing

D

Deleted member 118

Guest
Whoop whoop, I bought IBAT (Battle Infinity) on presale and it went up x1000% on opening. Going to cash some out later to buy more BRN.




Maybe this was that feeling I was getting on Monday
 
Last edited by a moderator:
  • Like
  • Fire
  • Wow
Reactions: 18 users

Sirod69

bavarian girl ;-)
I'm trying to get involved here as best I can, so ignore it if it's not always something earth-shattering; I just didn't know any better.
 
  • Like
  • Love
Reactions: 17 users
Gartner knows
(screenshot attachment)
 
  • Like
  • Fire
  • Love
Reactions: 18 users

alwaysgreen

Top 20
Maybe a good direction for the company to take, but sorry, it does nothing for us shareholders and our investment.
Short term, no but long term, yes. Depends on how long you plan on holding for.

In 5 years, these university kids will be in engineering roles in companies that may have had no exposure to SNNs. These kids will have experience with Akida that the old dinosaurs in the company may never have been exposed to and may implement Akida in their products.

It isn't going to send the share price rocketing today but the more exposure people have to our product, the better.
 
  • Like
  • Fire
  • Love
Reactions: 43 users
D

Deleted member 118

Guest
(quoting alwaysgreen's reply above)
Probably what I wanted to say, but I said mine in a lot fewer words lol
 
  • Like
  • Haha
Reactions: 7 users

Sirod69

bavarian girl ;-)
  • Like
  • Fire
Reactions: 11 users

cosors

👀
  • Fire
  • Haha
  • Like
Reactions: 4 users

cosors

👀
learn&buy
?
short enough
for me
That also describes me. Five months ago I was a WANCA. I had a nice little amount of shares from November. Then, in a very short time, I learned about something completely unknown to me; I knew a lot about tech, but not IT. I bought, and am fascinated daily.
___
There is probably no translation for this: a bi-turbo-charged jump start ("Kaltstart", a cold start)
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 14 users

stuart888

Regular
Arm CEO on Bloomberg Tech TV with Emily Chang soon.

Guess who else loves BrainChip = Arm!

(screenshot attachment)
 
  • Like
  • Fire
  • Love
Reactions: 35 users

chapman89

Founding Member
Edge Impulse is up for an award for “real-time object detection”.
(screenshot attachment)
 
  • Like
  • Fire
  • Love
Reactions: 57 users

cosors

👀
Your morning session still has a long way to run, while for us the day's trading is already done by the time you're up. Some entertainment: enjoy the coffee and have a good start to the day! I'm off to bed...


Here are also our neighbors the French - I love Hungry Music 🤗
 
Last edited:
  • Like
  • Love
Reactions: 8 users

TechGirl

Founding Member
In BrainChip’s news release announcing their University AI Accelerator Program, they state that the Program successfully completed a pilot session at Carnegie Mellon University (CMU) this past spring semester and will be officially launching with Arizona State University in September … with a further 5 expected to participate.

I had never heard of Carnegie Mellon University, so I took to Google as any fine researcher does these days, and I’ve come away very impressed.

CMU is a private research university.

If you search for top colleges / schools / universities for teaching computer science, Carnegie Mellon University repeatedly appears in the top 3 in America.

… the college is perhaps best known for its School of Computer Science. CMU is currently tied for second place on the US News and World Report’s ranked list of top schools for computer science in the country.

More specifically, the college’s programs for artificial intelligence and programming language are known as the very best in the country…

CMU is also known for its engineering program, which is currently tied for fourth in the national college rankings.

Best Artificial Intelligence Programs Being taught? Yep, CMU.
https://www.usnews.com/best-graduate-schools/top-science-schools/artificial-intelligence-rankings


But it doesn’t stop in the US! Additionally CMU ranks as 6th in the world for computer science according to https://www.timeshighereducation.com/world-university-rankings/2022/subject-ranking/computer-science


Pretty impressive work by BrainChip to hook up with Carnegie Mellon University. (y)
I look forward to some recognition of BrainChip’s engagement in their computing school on the CMU website in the, hopefully near, future.

Great work T,

I too think this is a big step for our Company.

It's impressive that CMU has already successfully completed a pilot session this year.

From our press release

"The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year."


You inspired me last night to do a bit of digging, so I thought I would look into Professor John Paul Shen, who said this:

"We have incorporated experimentation with BrainChip’s Akida development boards in our new graduate-level course, “Neuromorphic Computer Architecture and Processor Design” at Carnegie Mellon University during the Spring 2022 semester,” said John Paul Shen, Professor, Electrical and Computer Engineering Department at Carnegie Mellon. “Our students had a great experience in using the Akida development environment and analyzing results from the Akida hardware. We look forward to running and expanding this program in 2023"

It didn't take much digging for my socks to be blown off. Impressive man (y)

John Shen


John Shen​

Professor, Electrical and Computer Engineering​

Contact

Bio​

John Paul Shen was a Nokia Fellow and the founding director of Nokia Research Center - North America Lab. NRC-NAL had research teams pursuing a wide range of research projects in mobile Internet and mobile computing. In six years (2007-2012), NRC-NAL filed over 100 patents, published over 200 papers, hosted about 100 Ph.D. interns, and collaborated with a dozen universities. Prior to joining Nokia in late 2006, John was the Director of the Microarchitecture Research Lab at Intel. MRL had research teams in Santa Clara, Portland, and Austin, pursuing research on aggressive ILP and TLP microarchitectures for IA32 and IA64 processors. Prior to joining Intel in 2000, John was a tenured Full Professor in the ECE Department at CMU, where he supervised a total of 17 Ph.D. students and dozens of M.S. students, received multiple teaching awards, and published two books and more than 100 research papers. One of his books, “Modern Processor Design: Fundamentals of Superscalar Processors” was used in the EE382A Advanced Processor Architecture course at Stanford, where he co-taught the EE382A course. After spending 15 years in the industry, all in the Silicon Valley, he returned to CMU in the fall of 2015 as a tenured Full Professor in the ECE Department, and is based at the Carnegie Mellon Silicon Valley campus.

Education​

Ph.D.
Electrical Engineering
University of Southern California

M.S.
Electrical Engineering
University of Southern California

B.S.
Electrical Engineering
University of Michigan

Research​

Modern Processor Design and Evaluation​

With the emergence of superscalar processors, phenomenal performance increases are being achieved via the exploitation of instruction-level parallelism (ILP). Software tools for aiding the design and validation of complex superscalar processors are being developed. These tools, such as VMW (Visualization-Based Microarchitecture Workbench), facilitate the rigorous specification and validation of microarchitectures.

Architecture and Compilation for Instruction-Level Parallelism​

Microarchitecture and code transformation techniques for effective exploitation of ILP are being studied. Synergistic combinations of static (compile-time software) and dynamic (run-time hardware) mechanisms are being explored. Going beyond a single instruction stream is necessary to achieve effective use of wide superscalar machines, as well as tightly coupled small-scale multiprocessors.

Dependable and Fault-Tolerant Computing​

Techniques are being developed to exploit the idling machine resources of ILP machines for concurrent error checking. As ILP machines get wider, the utilization of the machine resources will decrease. The idling resources can potentially be used for enhancing system dependability via compile-time transformation techniques.

Keywords​

  • Wearable, mobile, and cloud computing
  • Ultra energy-efficient computing for sensor processing
  • Real-time data analytics
  • Mobile-user behavior modelling and deep learning



I also checked out his Linkedin page see screenshot, I especially like the sentence at the bottom

(LinkedIn screenshot attachment)




NCAL: Neuromorphic Computer Architecture Lab






"Energy-Efficient, Edge-Native, Sensory Processing Units
with Online Continuous Learning Capability"​


The Neuromorphic Computer Architecture Lab (NCAL) is a new research group in the Electrical and Computer Engineering Department at Carnegie Mellon University, led by Prof. John Paul Shen and Prof. James E. Smith.

RESEARCH GOAL: New processor architecture and design that captures the capabilities and efficiencies of brain's neocortex for energy-efficient, edge-native, on-line, sensory processing in mobile and edge devices.
  • Capabilities: strong adherence to biological plausibility and Spike Timing Dependent Plasticity (STDP) in order to enable continuous, unsupervised, and emergent learning.
  • Efficiencies: can achieve several orders of magnitude improvements on system complexity and energy efficiency as compared to existing DNN computation infrastructures for edge-native sensory processing.
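For readers new to STDP, the unsupervised learning mentioned in the capabilities bullet can be sketched with the classic pair-based rule: a synapse strengthens when the presynaptic spike precedes the postsynaptic spike, and weakens otherwise. This is a generic textbook form with made-up constants, not NCAL's or BrainChip's actual rule:

```python
# Illustrative pair-based spike-timing-dependent plasticity (STDP) update.
# Constants (a_plus, a_minus, tau) are arbitrary, purely for illustration.
import math

def stdp_delta(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:           # pre fires before post -> potentiate
        return a_plus * math.exp(-dt / tau)
    if dt < 0:           # pre fires after post -> depress
        return -a_minus * math.exp(dt / tau)
    return 0.0

w = 0.5
w += stdp_delta(t_pre=10.0, t_post=15.0)   # causal pair: weight goes up
w += stdp_delta(t_pre=30.0, t_post=22.0)   # anti-causal pair: weight goes down
```

No labels or error signals appear anywhere in the update, which is what makes this style of learning unsupervised and continuous.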

RESEARCH STRATEGY:
  1. Targeted Applications: Edge-Native Sensory Processing
  2. Computational Model: Space-Time Algebra (STA)
  3. Processor Architecture: Temporal Neural Networks (TNN)
  4. Processor Design Style: Space-Time Logic Design
  5. Hardware Implementation: Off-the-Shelf Digital CMOS

1. Targeted Applications: Edge-Native Sensory Processing
Targeted application domain: edge-native on-line sensory processing that mimics the human neocortex. The focus of this research is on temporal neural networks that can achieve brain-like capabilities with brain-like efficiency and can be implemented using standard CMOS technology. This effort can enable a whole new family of accelerators, or sensory processing units, that can be deployed in mobile and edge devices for performing edge-native, on-line, always-on, sensory processing with the capability for real-time inference and continuous learning, while consuming only a few mWatts.

2. Computational Model: Space-Time Algebra (STA)
A new Space-Time Computing (STC) Model has been developed for computing that communicates and processes information encoded as transient events in time -- action potentials or voltage spikes in the case of neurons. Consequently, the flow of time becomes a freely available, no-cost computational resource. The theoretical basis for the STC model is the "Space-Time Algebra“ (STA) with primitives that model points in time and functional operations that are consistent with the flow of Newtonian time. [STC/STA was developed by Jim Smith]
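The STA idea above can be sketched in a few lines. This is my own toy model under the common presentation of Smith's space-time algebra (a value is a spike time, "no spike" is infinity, and every operation respects the forward flow of time); it is not code from NCAL:

```python
# Toy model of space-time algebra (STA) values and operations.
import math

NO_SPike = None  # placeholder removed below
NO_SPIKE = math.inf   # "no spike ever arrives"

def earliest(*times):   # analogous to OR on arrival times
    return min(times)

def latest(*times):     # analogous to AND on arrival times
    return max(times)

def delay(t, d):        # a fixed delay; time only moves forward
    assert d >= 0, "delays cannot move a spike backwards in time"
    return t + d

# A spike at t=3 delayed by 2 arrives at t=5; a missing spike stays missing.
print(delay(3, 2))              # 5
print(earliest(4, NO_SPIKE))    # 4
print(latest(4, NO_SPIKE))      # inf
```

Note how infinity composes cleanly: delaying a non-spike still yields a non-spike, which is one reason this encoding is convenient for hardware built from edges and pulses.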

3. Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks, for implementing a class of functions based on the space time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.
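The "time as a computing resource" point can be made concrete with a toy neuron. The ramp-no-leak response used here is one style that appears in the TNN literature; the function, parameters, and time horizon are my illustrative assumptions, not the lab's exact model:

```python
# Sketch of a temporal neuron: after input i spikes at time t_i, it
# contributes weight w_i to the potential at every later step, and the
# neuron emits its output spike when the potential first crosses the
# threshold. Earlier output spike = stronger response.
import math

def tnn_neuron(spike_times, weights, threshold, horizon=64):
    """Return the output spike time, or math.inf if it never fires."""
    for t in range(horizon):
        potential = sum(w * max(0, t - ti)      # ramp since each input spike
                        for ti, w in zip(spike_times, weights))
        if potential >= threshold:
            return t
    return math.inf

# Two inputs: the same spike pattern, shifted later, fires later by the
# same shift -- the information is carried in relative spike times.
print(tnn_neuron([0, 1], [2, 1], threshold=6))   # 3
print(tnn_neuron([5, 6], [2, 1], threshold=6))   # 8
```

The output of one neuron is itself a spike time, so layers compose naturally, which is what lets the whole network stay in the temporal domain.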

4. Processor Design Style: Space Time Logic Design
Conventional CMOS logic gates based on Boolean algebra can be re-purposed to implement STA based temporal operations and functions. Temporal values can be encoded using voltage edges or pulses. We have developed a TNN architecture based on two key building blocks: neurons and columns of neurons. We have implemented the excitatory neuron model with its input synaptic weights as well as a column of such neurons with winner-take-all (WTA) lateral inhibition, all using the space time logic design approach and standard digital CMOS design tools.
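The winner-take-all lateral inhibition described above can be sketched in its simplest 1-WTA form, assuming the usual temporal-coding convention that the earliest spike wins (my illustration, with first-index tie-breaking, not the lab's design):

```python
# 1-WTA lateral inhibition over a column of neurons: only the neuron that
# spikes first keeps its output spike; all others are inhibited.
# Times are spike times; math.inf means "no spike".
import math

def wta_column(spike_times):
    """Suppress every spike except the earliest (first index wins ties)."""
    if not spike_times or min(spike_times) == math.inf:
        return list(spike_times)            # no winner if nobody fires
    winner = spike_times.index(min(spike_times))
    return [t if i == winner else math.inf
            for i, t in enumerate(spike_times)]

print(wta_column([7, 3, 9, math.inf]))      # [inf, 3, inf, inf]
```

In hardware terms this is just a race: the first edge to arrive inhibits the rest, so the column's output is again a single spike time.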

5. Hardware Implementation: Standard Digital CMOS Technology
Based on the STA theoretical foundation and the ST logic design approach, we can design a new type of special-purpose TNN-based "Neuromorphic Sensory Processing Units" (NSPU) for incorporation in mobile SoCs targeting mobile and edge devices. NSPUs can be a new core type for SoCs already with heterogeneous cores. Other than using off-the-shelf CMOS design and synthesis tools, there is the potential for creating a new custom standard cell library and design optimizations for supporting the design of TNN-based NSPUs for sensory processing.



And the Course Syllabus - Spring 2022 18-743: “Neuromorphic Computer Architecture & Processor Design”

BrainChip is listed on the course

(course syllabus screenshots)


It's great to be a shareholder :)
 
  • Like
  • Fire
  • Love
Reactions: 92 users

Terroni2105

Founding Member
(quoting the post above in full)

Considering he is based at the Silicon Valley campus, it is possible that Sean made the introduction 🤔

PS: I love the look of the course calendar with Akida on there.
 
  • Like
  • Love
  • Fire
Reactions: 24 users

equanimous

Norse clairvoyant shapeshifter goddess
Great work T,

I too think this is a big step for our Company.

It's impressive that CMU has already successfully completed a pilot session this year.

From our press release

"The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year."


You inspired me last night to do a bit of digging so I thought I would look into the Professor John Paul Shen, who said this

"We have incorporated experimentation with BrainChip’s Akida development boards in our new graduate-level course, “Neuromorphic Computer Architecture and Processor Design” at Carnegie Mellon University during the Spring 2022 semester,” said John Paul Shen, Professor, Electrical and Computer Engineering Department at Carnegie Mellon. “Our students had a great experience in using the Akida development environment and analyzing results from the Akida hardware. We look forward to running and expanding this program in 2023"

It didn't take much digging for my socks to be blown off, impressive man (y)

John Shen


John Shen​

Professor, Electrical and Computer Engineering​

Contact

Bio​

John Paul Shen was a Nokia Fellow and the founding director of Nokia Research Center - North America Lab. NRC-NAL had research teams pursuing a wide range of research projects in mobile Internet and mobile computing. In six years (2007-2012), NRC-NAL filed over 100 patents, published over 200 papers, hosted about 100 Ph.D. interns, and collaborated with a dozen universities. Prior to joining Nokia in late 2006, John was the Director of the Microarchitecture Research Lab at Intel. MRL had research teams in Santa Clara, Portland, and Austin, pursuing research on aggressive ILP and TLP microarchitectures for IA32 and IA64 processors. Prior to joining Intel in 2000, John was a tenured Full Professor in the ECE Department at CMU, where he supervised a total of 17 Ph.D. students and dozens of M.S. students, received multiple teaching awards, and published two books and more than 100 research papers. One of his books, “Modern Processor Design: Fundamentals of Superscalar Processors” was used in the EE382A Advanced Processor Architecture course at Stanford, where he co-taught the EE382A course. After spending 15 years in the industry, all in the Silicon Valley, he returned to CMU in the fall of 2015 as a tenured Full Professor in the ECE Department, and is based at the Carnegie Mellon Silicon Valley campus.

Education​

Ph.D.
Electrical Engineering
University of Southern California

M.S.
Electrical Engineering
University of Southern California

B.S.
Electrical Engineering
University of Michigan

Research​

Modern Processor Design and Evaluation​

With the emergence of superscalar processors, phenomenal performance increases are being achieved via the exploitation of instruction-level parallelism (ILP). Software tools for aiding the design and validation of complex superscalar processors are being developed. These tools, such as VMW (Visualization-Based Microarchitecture Workbench), facilitate the rigorous specification and validation of microarchitectures.

Architecture and Compilation for Instruction-Level Parallelism​

Microarchitecture and code transformation techniques for effective exploitation of ILP are being studied. Synergistic combinations of static (compile-time software) and dynamic (run-time hardware) mechanisms are being explored. Going beyond a single instruction stream is necessary to achieve effective use of wide superscalar machines, as well as tightly coupled small-scale multiprocessors.

Dependable and Fault-Tolerant Computing​

Techniques are being developed to exploit the idling machine resources of ILP machines for concurrent error checking. As ILP machines get wider, the utilization of the machine resources will decrease. The idling resources can potentially be used for enhancing system dependability via compile-time transformation techniques.

Keywords​

  • Wearable, mobile, and cloud computing
  • Ultra energy-efficient computing for sensor processing
  • Real-time data analytics
  • Mobile-user behavior modelling and deep learning



I also checked out his Linkedin page see screenshot, I especially like the sentence at the bottom

View attachment 14406



NCAL: Neuromorphic Computer Architecture Lab






"Energy-Efficient, Edge-Native, Sensory Processing Units
with Online Continuous Learning Capability"​


The Neuromorphic Computer Architecture Lab (NCAL) is a new research group in the Electrical and Computer Engineering Department at Carnegie Mellon University, led by Prof. John Paul Shen and Prof. James E. Smith.

RESEARCH GOAL: New processor architecture and design that captures the capabilities and efficiencies of brain's neocortex for energy-efficient, edge-native, on-line, sensory processing in mobile and edge devices.
  • Capabilities: strong adherence to biological plausibility and Spike Timing Dependent Plasticity (STDP) in order to enable continuous, unsupervised, and emergent learning.
  • Efficiencies: can achieve several orders of magnitude improvements on system complexity and energy efficiency as compared to existing DNN computation infrastructures for edge-native sensory processing.

RESEARCH STRATEGY:
  1. Targeted Applications: Edge-Native Sensory Processing
  2. Computational Model: Space-Time Algebra (STA)
  3. Processor Architecture: Temporal Neural Networks (TNN)
  4. Processor Design Style: Space-Time Logic Design
  5. Hardware Implementation: Off-the-Shelf Digital CMOS

1. Targeted Applications: Edge-Native Sensory Processing
Targeted application domain: edge-native on-line sensory processing that mimics the human neocortex. The focus of this research is on temporal neural networks that can achieve brain-like capabilities with brain-like efficiency and can be implemented using standard CMOS technology. This effort can enable a whole new family of accelerators, or sensory processing units, that can be deployed in mobile and edge devices for performing edge-native, on-line, always-on, sensory processing with the capability for real-time inference and continuous learning, while consuming only a few mWatts.

2. Computational Model: Space-Time Algebra (STA)
A new Space-Time Computing (STC) Model has been developed for computing that communicates and processes information encoded as transient events in time -- action potentials or voltage spikes in the case of neurons. Consequently, the flow of time becomes a freely available, no-cost computational resource. The theoretical basis for the STC model is the "Space-Time Algebra“ (STA) with primitives that model points in time and functional operations that are consistent with the flow of Newtonian time. [STC/STA was developed by Jim Smith]

3. Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks, for implementing a class of functions based on the space time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.

4. Processor Design Style: Space Time Logic Design
Conventional CMOS logic gates based on Boolean algebra can be re-purposed to implement STA based temporal operations and functions. Temporal values can be encoded using voltage edges or pulses. We have developed a TNN architecture based on two key building blocks: neurons and columns of neurons. We have implemented the excitatory neuron model with its input synaptic weights as well as a column of such neurons with winner-take-all (WTA) lateral inhibition, all using the space time logic design approach and standard digital CMOS design tools.

5. Hardware Implementation: Standard Digital CMOS Technology
Based on the STA theoretical foundation and the ST logic design approach, we can design a new type of special-purpose TNN-based "Neuromorphic Sensory Processing Units" (NSPU) for incorporation in mobile SoCs targeting mobile and edge devices. NSPUs can be a new core type for SoCs already with heterogeneous cores. Other than using off-the-shelf CMOS design and synthesis tools, there is the potential for creating a new custom standard cell library and design optimizations for supporting the design of TNN-based NSPUs for sensory processing.



And the Course Syllabus - Spring 2022 18-743: “Neuromorphic Computer Architecture & Processor Design”

BrainChip is listed on the course

View attachment 14407

View attachment 14408

It's great to be a shareholder :)
De facto standard
 
  • Like
  • Love
  • Fire
Reactions: 19 users

TechGirl

Founding Member
Considering he is based at the Silicon Valley campus, it is possible that Sean made the introduction 🤔

ps. I love the look of the course calendar with Akida on there

It's possible, could have been anyone of our Superstars, we are certainly engaging with great minds & I like it (y)

Bill Hader Popcorn GIF by Saturday Night Live
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Diogenese

Top 20
I think that's very important. Young people need to get to know Akida better. I still haven't really calmed down; I had deleted my post. I listened to a recent webinar (almost 2 h) in German. Topic:

Artificial intelligence is pushing computers as we know them today to their limits. Do we need new computers inspired by the brain?

AI Hardware of the Future: What is Neuromorphic Computing?
Presentation of the topic and our guests
00:03:05 What is Neuromorphic Computing?
00:09:20 Short message from our sponsor BWI
00:10:00 What do neural networks and neuromorphic computing have in common?
00:16:02 Emulation vs. architecture in neuromorphic computing
00:18:00 Analogue vs. digital computing methods
00:21:15 Differentiation to quantum computing
00:25:36 The history of neuromorphic computing
00:32:16 How has deep learning changed neuromorphic computing?
00:35:55 What does neuromorphic computing mean for AI development?
00:39:50 How does deep learning work on neuromorphic hardware?
00:44:15 Sparse and spiking in neural networks - what's the difference?
00:52:00 The advantage of spiking neural networks
00:59:24 What is the biggest challenge in neuromorphic computing right now?
01:05:07 As an AI developer, can I fully rely on neuromorphic computing?
01:07:25 Will we have a neuromorphic chip in the iPhone in five years?
01:08:00 Would moving to neuromorphic computing require new manufacturing processes?
01:11:33 What could a neuromorphic chip in the iPhone do better?
Test tube vs. chip factory: Could bio-computers also be bred?
01:31:02 Is biology even a good model for computing?
01:39:16 Max' Philo Solo: Where is the line between emulation and reproduction?
01:45:20 Where to learn more about Neuromorphic Computing?
01:47:52 How to get into neuromorphic computing?


They (including the University of Bern in Switzerland, home of Bonseyes) managed to talk for two hours about the state of the field and what is not yet possible today. At least 20 times I thought: hey, do you as researchers really not know Akida? The webinar was full of ignorance about the state of things, whether deliberate or not. I even prepared a letter but didn't send it; I don't think it would make any difference. All you have to do is type neuromorphic and spiking neural into Google and you'll inevitably come across BrainChip. Anyway, I think it is very important that Akida be distributed to universities. I still shake my head. For anyone who wants to hear it from my German compatriots:
https://mixed.de/vom-gehirn-inspirierte-ki-was-ist-neuromorphic-computing-deep-minds-7/?amp=1
Hi cosors,

This is something which I have noted before - academics confine their research to peer-reviewed publications because anything that is not peer reviewed is not "proven" scientifically.

In fact, finding Akida in peer-reviewed papers may be a benefit of the Carnegie Mellon University project, as the students and academics will be experimenting with Akida and producing peer-reviewed papers.
 
  • Like
  • Love
  • Fire
Reactions: 41 users

TasTroy77

Founding Member
  • Like
  • Thinking
  • Sad
Reactions: 6 users

Diogenese

Top 20
Great work T,

I too think this is a big step for our Company.

It's impressive that CMU has already successfully completed a pilot session this year.

From our press release

"The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year."


You inspired me last night to do a bit of digging so I thought I would look into the Professor John Paul Shen, who said this

"We have incorporated experimentation with BrainChip’s Akida development boards in our new graduate-level course, “Neuromorphic Computer Architecture and Processor Design” at Carnegie Mellon University during the Spring 2022 semester,” said John Paul Shen, Professor, Electrical and Computer Engineering Department at Carnegie Mellon. “Our students had a great experience in using the Akida development environment and analyzing results from the Akida hardware. We look forward to running and expanding this program in 2023"

It didn't take much digging for my socks to be blown off, impressive man (y)

John Shen


John Shen​

Professor, Electrical and Computer Engineering​


Bio​

John Paul Shen was a Nokia Fellow and the founding director of Nokia Research Center - North America Lab. NRC-NAL had research teams pursuing a wide range of research projects in mobile Internet and mobile computing. In six years (2007-2012), NRC-NAL filed over 100 patents, published over 200 papers, hosted about 100 Ph.D. interns, and collaborated with a dozen universities. Prior to joining Nokia in late 2006, John was the Director of the Microarchitecture Research Lab at Intel. MRL had research teams in Santa Clara, Portland, and Austin, pursuing research on aggressive ILP and TLP microarchitectures for IA32 and IA64 processors. Prior to joining Intel in 2000, John was a tenured Full Professor in the ECE Department at CMU, where he supervised a total of 17 Ph.D. students and dozens of M.S. students, received multiple teaching awards, and published two books and more than 100 research papers. One of his books, “Modern Processor Design: Fundamentals of Superscalar Processors” was used in the EE382A Advanced Processor Architecture course at Stanford, where he co-taught the EE382A course. After spending 15 years in the industry, all in the Silicon Valley, he returned to CMU in the fall of 2015 as a tenured Full Professor in the ECE Department, and is based at the Carnegie Mellon Silicon Valley campus.

Education​

Ph.D.
Electrical Engineering
University of Southern California

M.S.
Electrical Engineering
University of Southern California

B.S.
Electrical Engineering
University of Michigan

Research​

Modern Processor Design and Evaluation​

With the emergence of superscalar processors, phenomenal performance increases are being achieved via the exploitation of instruction-level parallelism (ILP). Software tools for aiding the design and validation of complex superscalar processors are being developed. These tools, such as VMW (Visualization-Based Microarchitecture Workbench), facilitate the rigorous specification and validation of microarchitectures.

Architecture and Compilation for Instruction-Level Parallelism​

Microarchitecture and code transformation techniques for effective exploitation of ILP are being studied. Synergistic combinations of static (compile-time software) and dynamic (run-time hardware) mechanisms are being explored. Going beyond a single instruction stream is necessary to achieve effective use of wide superscalar machines, as well as tightly coupled small-scale multiprocessors.

Dependable and Fault-Tolerant Computing​

Techniques are being developed to exploit the idling machine resources of ILP machines for concurrent error checking. As ILP machines get wider, the utilization of the machine resources will decrease. The idling resources can potentially be used for enhancing system dependability via compile-time transformation techniques.

Keywords​

  • Wearable, mobile, and cloud computing
  • Ultra energy-efficient computing for sensor processing
  • Real-time data analytics
  • Mobile-user behavior modelling and deep learning



I also checked out his Linkedin page see screenshot, I especially like the sentence at the bottom

View attachment 14406



NCAL: Neuromorphic Computer Architecture Lab






"Energy-Efficient, Edge-Native, Sensory Processing Units​

with Online Continuous Learning Capability"​


The Neuromorphic Computer Architecture Lab (NCAL) is a new research group in the Electrical and Computer Engineering Department at Carnegie Mellon University, led by Prof. John Paul Shen and Prof. James E. Smith.

RESEARCH GOAL: New processor architecture and design that captures the capabilities and efficiencies of the brain's neocortex for energy-efficient, edge-native, on-line sensory processing in mobile and edge devices.
  • Capabilities: strong adherence to biological plausibility and Spike Timing Dependent Plasticity (STDP) in order to enable continuous, unsupervised, and emergent learning.
  • Efficiencies: can achieve several orders of magnitude improvements on system complexity and energy efficiency as compared to existing DNN computation infrastructures for edge-native sensory processing.

RESEARCH STRATEGY:
  1. Targeted Applications: Edge-Native Sensory Processing
  2. Computational Model: Space-Time Algebra (STA)
  3. Processor Architecture: Temporal Neural Networks (TNN)
  4. Processor Design Style: Space-Time Logic Design
  5. Hardware Implementation: Off-the-Shelf Digital CMOS

1. Targeted Applications: Edge-Native Sensory Processing
Targeted application domain: edge-native on-line sensory processing that mimics the human neocortex. The focus of this research is on temporal neural networks that can achieve brain-like capabilities with brain-like efficiency and can be implemented using standard CMOS technology. This effort can enable a whole new family of accelerators, or sensory processing units, that can be deployed in mobile and edge devices for performing edge-native, on-line, always-on, sensory processing with the capability for real-time inference and continuous learning, while consuming only a few mWatts.

2. Computational Model: Space-Time Algebra (STA)
A new Space-Time Computing (STC) Model has been developed for computing that communicates and processes information encoded as transient events in time -- action potentials or voltage spikes in the case of neurons. Consequently, the flow of time becomes a freely available, no-cost computational resource. The theoretical basis for the STC model is the "Space-Time Algebra" (STA) with primitives that model points in time and functional operations that are consistent with the flow of Newtonian time. [STC/STA was developed by Jim Smith]

3. Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks for implementing functions based on the space-time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.

4. Processor Design Style: Space Time Logic Design
Conventional CMOS logic gates based on Boolean algebra can be re-purposed to implement STA based temporal operations and functions. Temporal values can be encoded using voltage edges or pulses. We have developed a TNN architecture based on two key building blocks: neurons and columns of neurons. We have implemented the excitatory neuron model with its input synaptic weights as well as a column of such neurons with winner-take-all (WTA) lateral inhibition, all using the space time logic design approach and standard digital CMOS design tools.

5. Hardware Implementation: Standard Digital CMOS Technology
Based on the STA theoretical foundation and the ST logic design approach, we can design a new type of special-purpose TNN-based "Neuromorphic Sensory Processing Units" (NSPU) for incorporation in mobile SoCs targeting mobile and edge devices. NSPUs can be a new core type for SoCs already with heterogeneous cores. Other than using off-the-shelf CMOS design and synthesis tools, there is the potential for creating a new custom standard cell library and design optimizations for supporting the design of TNN-based NSPUs for sensory processing.



And the Course Syllabus - Spring 2022 18-743: “Neuromorphic Computer Architecture & Processor Design”

BrainChip is listed on the course

View attachment 14407

View attachment 14408

It's great to be a shareholder :)
Great to see the lecturers for lectures 7 & 8 will have hands-on experience with Akida.
 
  • Like
  • Fire
  • Love
Reactions: 13 users