BRN Discussion Ongoing

cosors

👀
Big deal here, way to go @Fullmoonfever - the education focus should be front and center. Edge Impulse already seems to have mastered that focus. What a huge event to help in the education of future Akida SNN publishers.

Getting the SNN IP into the workflow of all the steps on the SoC is a big deal. Let's recruit the entire college community!

View attachment 14311



PS. We need a better Wow icon! Wow is better than the Fire or Heart icons. Or is the Thinking icon better - you touched my brain deeply. Icon review!

View attachment 14309
That's where we get our emojis from. They have different packages @zeeb0t
star_struck.gif

 
Last edited:
  • Like
Reactions: 4 users

cosors

👀
In BrainChip’s news release announcing their University AI Accelerator Program they state that “The Program successfully completed a pilot session at Carnegie Mellon University (CMU) this past spring semester and will be officially launching with Arizona State University in September” … with a further 5 expected to participate.

I had never heard of Carnegie Mellon University, so I took to Google, as any fine researcher does these days, and I’ve come away very impressed.

CMU is a private research university.

If you search for top colleges / schools / universities for teaching computer science, Carnegie Mellon University repeatedly appears in the top 3 in America.

… the college is perhaps best known for its School of Computer Science. CMU is currently tied for second place on the US News and World Report’s ranked list of top schools for computer science in the country.

More specifically, the college’s programs for artificial intelligence and programming language are known as the very best in the country…

CMU is also known for its engineering program, which is currently tied for fourth in the national college rankings.

Best artificial intelligence programs being taught? Yep, CMU.
https://www.usnews.com/best-graduate-schools/top-science-schools/artificial-intelligence-rankings


But it doesn’t stop in the US! CMU also ranks 6th in the world for computer science according to https://www.timeshighereducation.com/world-university-rankings/2022/subject-ranking/computer-science


Pretty impressive work by BrainChip to hook up with Carnegie Mellon University. (y)
I look forward to some recognition of BrainChip’s engagement with their computing school appearing on the CMU website in the (hopefully near) future.

Your message did get me to write to someone from the podcast - see my post above.
I have sent them our new program. Thank you! We can't leave students ignorant. 🤭
 
Last edited:
  • Like
  • Fire
Reactions: 10 users

cosors

👀
Phew! I have now translated the Japanese page - first into German for myself, and now into English for all of you. Hope you appreciate it
🥰😘
Cortex-A55 Core Board | Renesas G2L industrial grade multi-core MPU

Based on the Renesas Cortex-A55 RZ/G2L series high-performance processor, it integrates a Cortex-M33 real-time core and supports 2-channel Gigabit Ethernet, 2-channel CAN FD, an HD display interface, a camera interface, 3D graphics, H.264 hardware video codec, USB, multi-channel serial interfaces, PWM, ADC, etc. It is suitable for the rapid development of many of the most innovative applications, such as display control terminals, Industry 4.0, medical analysis instruments, vehicle terminals and edge computing equipment.
View attachment 14345

10-year supply | High performance | Extensive interfaces

The RZ/G2L series processor is one of the general-purpose MPUs with the most comprehensive set of interfaces. It has a supply life of over 10 years, and the chip operates from -40℃ to +85℃. It is suitable for electric power, medical, railway transportation, industrial automation, environmental protection, heavy industry and other sectors.

View attachment 14346

Detailed supporting materials for development | Renesas official full technical support


Wuhan Vientiane Aoke is Renesas Electronics' first preferred partner in China. Vientiane Aoke combines the performance advantages of Renesas' high-end processor chips with factory-original supply and technical support to provide users with comprehensive technical services, and can offer personalized, industry-specific customization.

Thanks for your work, I really appreciate it, as I often do this myself - from Swedish, sometimes from Hebrew, and rarely from Japanese. The most work I've had was with a newspaper article from Israel: I had to vectorize it screenshot by screenshot, then translate it first into German to get a readable text, and then into English. I can't do much with the info, but still - I appreciate your tireless work!
 
Last edited:
  • Like
  • Love
Reactions: 17 users
D

Deleted member 118

Guest
Whoop whoop, I bought IBAT (Battle Infinity) on presale and it went up 1,000% on opening. Going to cash some out later to buy more BRN.




Maybe this was that feeling I was getting on Monday
 
Last edited by a moderator:
  • Like
  • Fire
  • Wow
Reactions: 18 users

Sirod69

bavarian girl ;-)
I'm trying to get involved here as best I can, so ignore it if it's not always something earth-shattering - I just don't know any better
 
  • Like
  • Love
Reactions: 17 users
Gartner knows
2C049B3C-87F3-4920-8B06-B8E083DC8E86.png
 
  • Like
  • Fire
  • Love
Reactions: 18 users

alwaysgreen

Top 20
Maybe a good direction for the company to take, but sorry, it does nothing for us shareholders and our investment.
Short term, no but long term, yes. Depends on how long you plan on holding for.

In 5 years, these university kids will be in engineering roles in companies that may have had no exposure to SNNs. These kids will have experience with Akida that the old dinosaurs in the company may never have been exposed to and may implement Akida in their products.

It isn't going to send the share price rocketing today but the more exposure people have to our product, the better.
 
  • Like
  • Fire
  • Love
Reactions: 43 users
D

Deleted member 118

Guest
Short term, no but long term, yes. Depends on how long you plan on holding for. …
[quote trimmed - full post above]
Probably what I wanted to say, but I said mine with a lot less words lol
 
  • Like
  • Haha
Reactions: 7 users

Sirod69

bavarian girl ;-)
  • Like
  • Fire
Reactions: 11 users

cosors

👀
Probably what I wanted to say, but I said mine with a lot less words lol
learn&buy
?
short enough
for me
 
  • Fire
  • Haha
  • Like
Reactions: 4 users

cosors

👀
learn&buy
?
short enough
for me
That also describes me. Five months ago, I was a WANCA. I had a nice little parcel of shares from November. Then, in a very short time, I learned about something completely unknown to me. I knew a lot about tech but not IT. I bought, and am fascinated daily.
___
There is probably no translation for this: bi-turbo-charged jump start - "Kaltstart" (cold start)
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 14 users

stuart888

Regular
Arm CEO on Bloomberg Technology TV with Emily Chang soon.

Guess who else loves BrainChip? Arm!

1660770553398.png
 
  • Like
  • Fire
  • Love
Reactions: 35 users

chapman89

Founding Member
Edge Impulse is up for an award for “real-time object detection”
046AB440-6C4A-4031-9D1D-7580F59D3575.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 57 users

cosors

👀
Your morning session still has a long way to run, while for us trading is already done by the time you're up. Some entertainment - enjoy the coffee and have a good start to the day! I'm off to bed...


Here are our neighbors the French as well - I love Hungry Music 🤗
 
Last edited:
  • Like
  • Love
Reactions: 8 users

TechGirl

Founding Member
In BrainChip’s news release announcing their University AI Accelerator Program they state that “The Program successfully completed a pilot session at Carnegie Mellon University (CMU) this past spring semester…” …
[quote trimmed - same post as quoted above]

Great work T,

I too think this is a big step for our Company.

It's impressive that CMU has already successfully completed a pilot session this year.

From our press release

"The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year."


You inspired me last night to do a bit of digging, so I thought I would look into Professor John Paul Shen, who said this:

"We have incorporated experimentation with BrainChip’s Akida development boards in our new graduate-level course, “Neuromorphic Computer Architecture and Processor Design” at Carnegie Mellon University during the Spring 2022 semester,” said John Paul Shen, Professor, Electrical and Computer Engineering Department at Carnegie Mellon. “Our students had a great experience in using the Akida development environment and analyzing results from the Akida hardware. We look forward to running and expanding this program in 2023"

It didn't take much digging for my socks to be blown off, impressive man (y)

John Shen


John Shen​

Professor, Electrical and Computer Engineering​

Contact

Bio​

John Paul Shen was a Nokia Fellow and the founding director of Nokia Research Center - North America Lab. NRC-NAL had research teams pursuing a wide range of research projects in mobile Internet and mobile computing. In six years (2007-2012), NRC-NAL filed over 100 patents, published over 200 papers, hosted about 100 Ph.D. interns, and collaborated with a dozen universities. Prior to joining Nokia in late 2006, John was the Director of the Microarchitecture Research Lab at Intel. MRL had research teams in Santa Clara, Portland, and Austin, pursuing research on aggressive ILP and TLP microarchitectures for IA32 and IA64 processors. Prior to joining Intel in 2000, John was a tenured Full Professor in the ECE Department at CMU, where he supervised a total of 17 Ph.D. students and dozens of M.S. students, received multiple teaching awards, and published two books and more than 100 research papers. One of his books, “Modern Processor Design: Fundamentals of Superscalar Processors” was used in the EE382A Advanced Processor Architecture course at Stanford, where he co-taught the EE382A course. After spending 15 years in the industry, all in the Silicon Valley, he returned to CMU in the fall of 2015 as a tenured Full Professor in the ECE Department, and is based at the Carnegie Mellon Silicon Valley campus.

Education​

Ph.D.
Electrical Engineering
University of Southern California

M.S.
Electrical Engineering
University of Southern California

B.S.
Electrical Engineering
University of Michigan

Research​

Modern Processor Design and Evaluation​

With the emergence of superscalar processors, phenomenal performance increases are being achieved via the exploitation of instruction-level parallelism (ILP). Software tools for aiding the design and validation of complex superscalar processors are being developed. These tools, such as VMW (Visualization-Based Microarchitecture Workbench), facilitate the rigorous specification and validation of microarchitectures.

Architecture and Compilation for Instruction-Level Parallelism​

Microarchitecture and code transformation techniques for effective exploitation of ILP are being studied. Synergistic combinations of static (compile-time software) and dynamic (run-time hardware) mechanisms are being explored. Going beyond a single instruction stream is necessary to achieve effective use of wide superscalar machines, as well as tightly coupled small-scale multiprocessors.

Dependable and Fault-Tolerant Computing​

Techniques are being developed to exploit the idling machine resources of ILP machines for concurrent error checking. As ILP machines get wider, the utilization of the machine resources will decrease. The idling resources can potentially be used for enhancing system dependability via compile-time transformation techniques.

Keywords​

  • Wearable, mobile, and cloud computing
  • Ultra energy-efficient computing for sensor processing
  • Real-time data analytics
  • Mobile-user behavior modelling and deep learning



I also checked out his LinkedIn page (see screenshot) - I especially like the sentence at the bottom

zzz.jpg




NCAL: Neuromorphic Computer Architecture Lab






"Energy-Efficient, Edge-Native, Sensory Processing Units
with Online Continuous Learning Capability"​


The Neuromorphic Computer Architecture Lab (NCAL) is a new research group in the Electrical and Computer Engineering Department at Carnegie Mellon University, led by Prof. John Paul Shen and Prof. James E. Smith.

RESEARCH GOAL: New processor architecture and design that captures the capabilities and efficiencies of the brain's neocortex for energy-efficient, edge-native, on-line, sensory processing in mobile and edge devices.
  • Capabilities: strong adherence to biological plausibility and Spike Timing Dependent Plasticity (STDP) in order to enable continuous, unsupervised, and emergent learning.
  • Efficiencies: can achieve several orders of magnitude improvements on system complexity and energy efficiency as compared to existing DNN computation infrastructures for edge-native sensory processing.

RESEARCH STRATEGY:
  1. Targeted Applications: Edge-Native Sensory Processing
  2. Computational Model: Space-Time Algebra (STA)
  3. Processor Architecture: Temporal Neural Networks (TNN)
  4. Processor Design Style: Space-Time Logic Design
  5. Hardware Implementation: Off-the-Shelf Digital CMOS

1. Targeted Applications: Edge-Native Sensory Processing
Targeted application domain: edge-native on-line sensory processing that mimics the human neocortex. The focus of this research is on temporal neural networks that can achieve brain-like capabilities with brain-like efficiency and can be implemented using standard CMOS technology. This effort can enable a whole new family of accelerators, or sensory processing units, that can be deployed in mobile and edge devices for performing edge-native, on-line, always-on, sensory processing with the capability for real-time inference and continuous learning, while consuming only a few mWatts.

2. Computational Model: Space-Time Algebra (STA)
A new Space-Time Computing (STC) Model has been developed for computing that communicates and processes information encoded as transient events in time -- action potentials or voltage spikes in the case of neurons. Consequently, the flow of time becomes a freely available, no-cost computational resource. The theoretical basis for the STC model is the "Space-Time Algebra" (STA), with primitives that model points in time and functional operations that are consistent with the flow of Newtonian time. [STC/STA was developed by Jim Smith]
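
To make that a bit more concrete, here is a minimal Python sketch of STA-style primitives - my own illustration, not NCAL's or Jim Smith's actual code. It assumes the common formulation where a value is a point in time (a spike time) and "no spike" is represented as infinity:

import math

NO_SPIKE = math.inf  # a spike that never arrives

def delay(t, d):
    # Shift a spike later in time by a fixed delay d >= 0.
    return t + d

def earliest(*times):
    # First arrival: the output fires as soon as any input fires.
    return min(times)

def latest(*times):
    # Last arrival: the output fires once all inputs have fired.
    return max(times)

# Consistent with the flow of time, an output can never precede
# the inputs it depends on:
print(earliest(delay(3.0, 1.0), latest(2.0, 5.0)))  # min(4.0, 5.0) -> 4.0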

3. Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks, for implementing a class of functions based on the space time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.
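
As a toy illustration of the temporal-coding idea above (again my own sketch under stated assumptions - the step-potential neuron model and the STDP constants are invented for the example, not the NCAL or Akida design):

import math

NO_SPIKE = math.inf

def temporal_neuron(spike_times, weights, threshold):
    # Body potential steps up by a synapse's weight at its input spike
    # time; the output spike is the earliest time the threshold is crossed.
    potential = 0.0
    for t, w in sorted((t, w) for t, w in zip(spike_times, weights) if t < NO_SPIKE):
        potential += w
        if potential >= threshold:
            return t
    return NO_SPIKE  # threshold never reached: no output spike

def stdp_update(weights, spike_times, t_out, lr=0.1, w_max=1.0):
    # Simplified STDP: inputs that fired at or before the output spike
    # are potentiated; later (or silent) inputs are depressed.
    return [min(w + lr, w_max) if t <= t_out else max(w - lr, 0.0)
            for w, t in zip(weights, spike_times)]

times, w = [1.0, 2.0, NO_SPIKE, 4.0], [0.5, 0.5, 0.5, 0.5]
t_out = temporal_neuron(times, w, threshold=1.0)   # fires at t = 2.0
print(t_out, stdp_update(w, times, t_out))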

4. Processor Design Style: Space Time Logic Design
Conventional CMOS logic gates based on Boolean algebra can be re-purposed to implement STA based temporal operations and functions. Temporal values can be encoded using voltage edges or pulses. We have developed a TNN architecture based on two key building blocks: neurons and columns of neurons. We have implemented the excitatory neuron model with its input synaptic weights as well as a column of such neurons with winner-take-all (WTA) lateral inhibition, all using the space time logic design approach and standard digital CMOS design tools.
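
And a sketch of the column-level winner-take-all step just described - here a simple 1-WTA with ties broken by index, which is an assumption on my part; the lab's actual lateral-inhibition scheme may differ:

import math

NO_SPIKE = math.inf

def wta_column(output_times):
    # Lateral inhibition: only the earliest output spike in the column
    # survives; every other neuron's spike is suppressed.
    t_win = min(output_times)
    if t_win == NO_SPIKE:
        return list(output_times)  # no neuron fired
    winner = output_times.index(t_win)
    return [t if i == winner else NO_SPIKE for i, t in enumerate(output_times)]

print(wta_column([3.0, 1.5, NO_SPIKE, 2.0]))  # only index 1 keeps its spike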

5. Hardware Implementation: Standard Digital CMOS Technology
Based on the STA theoretical foundation and the ST logic design approach, we can design a new type of special-purpose TNN-based "Neuromorphic Sensory Processing Units" (NSPU) for incorporation in mobile SoCs targeting mobile and edge devices. NSPUs can be a new core type for SoCs already with heterogeneous cores. Other than using off-the-shelf CMOS design and synthesis tools, there is the potential for creating a new custom standard cell library and design optimizations for supporting the design of TNN-based NSPUs for sensory processing.



And the Course Syllabus - Spring 2022 18-743: “Neuromorphic Computer Architecture & Processor Design”

BrainChip is listed on the course syllabus

zzzz.jpg


zzzzz.jpg


It's great to be a shareholder :)
 
  • Like
  • Fire
  • Love
Reactions: 92 users

Terroni2105

Founding Member
Great work T,

I too think this is a big step for our Company. …
[quote trimmed - TechGirl’s full post appears above]

Considering he is based at the Silicon Valley campus, it is possible that Sean made the introduction 🤔

ps. I love the look of the course calendar with Akida on there
 
  • Like
  • Love
  • Fire
Reactions: 24 users

equanimous

Norse clairvoyant shapeshifter goddess
Great work T,

I too think this is a big step for our Company. …
[quote trimmed - TechGirl’s full post appears above]
De facto standard
 
  • Like
  • Love
  • Fire
Reactions: 19 users

TechGirl

Founding Member
Considering he is based at the Silicon Valley campus, it is possible that Sean made the introduction 🤔

ps. I love the look of the course calendar with Akida on there

It's possible - it could have been any one of our Superstars. We are certainly engaging with great minds & I like it (y)

Bill Hader Popcorn GIF by Saturday Night Live
 
  • Like
  • Love
  • Fire
Reactions: 19 users
If Anil likes it, I like it.

We’ve heard of the secret sauce before as well. This could be a great partnership if Akida is involved. Certainly references many words associated with Akida.

Really surprised our own verification engineer Jesse Chapman hasn’t liked it yet 😁

Hopefully the link works.

https://www.linkedin.com/posts/bob-...mention-and-activity-6965652323180175360-O7w7
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 16 users