BRN Discussion Ongoing

Diogenese

Top 20
I think that's very important. Young people need to get to know Akida better. I still haven't really calmed down; I had deleted my post. I listened to a recent webinar (almost two hours) in German. The topic:

Artificial intelligence is pushing computers as we know them today to their limits. Do we need new computers inspired by the brain?

AI Hardware of the Future: What is Neuromorphic Computing?
Presentation of the topic and our guests
00:03:05 What is Neuromorphic Computing?
00:09:20 Short message from our sponsor BWI
00:10:00 What do neural networks and neuromorphic computing have in common?
00:16:02 Emulation vs. architecture in neuromorphic computing
00:18:00 Analogue vs. digital computing methods
00:21:15 Differentiation to quantum computing
00:25:36 The history of neuromorphic computing
00:32:16 How has deep learning changed neuromorphic computing?
00:35:55 What does neuromorphic computing mean for AI development?
00:39:50 How does deep learning work on neuromorphic hardware?
00:44:15 Sparse and spiking in neural networks - what's the difference?
00:52:00 The advantage of spiking neural networks
00:59:24 What is the biggest challenge in neuromorphic computing right now?
01:05:07 As an AI developer, can I fully rely on neuromorphic computing?
01:07:25 Will we have a neuromorphic chip in the iPhone in five years?
01:08:00 Would moving to neuromorphic computing require new manufacturing processes?
01:11:33 What could a neuromorphic chip in the iPhone do better?
Test tube vs. chip factory: Could bio-computers also be grown?
01:31:02 Is biology even a good model for computing?
01:39:16 Max' Philo Solo: Where is the line between emulation and reproduction?
01:45:20 Where to learn more about Neuromorphic Computing?
01:47:52 How to get into neuromorphic computing?


They (including the University of Bern in Switzerland, home of Bonseyes) managed to talk for two hours about the state of the art and what is supposedly not yet possible today. Easily twenty times I thought: hey, do you as a researcher really not know Akida? The webinar was full of ignorance about the state of things, whether deliberate or not. I even drafted a letter but didn't send it; I don't think it would make any difference. All you have to do is type "neuromorphic" and "spiking neural" into Google and you'll inevitably come across BrainChip. Anyway, I think it is very important that Akida be distributed to universities. I still shake my head. For anyone who wants to hear it from my German compatriots:
https://mixed.de/vom-gehirn-inspirierte-ki-was-ist-neuromorphic-computing-deep-minds-7/?amp=1
Hi cosors,

This is something which I have noted before - academics confine their research to peer-reviewed publications because anything that is not peer reviewed is not "proven" scientifically.

In fact, finding Akida in peer reviewed papers may be a benefit of the Carnegie Mellon University project, as the students and academics will be experimenting with Akida and producing peer reviewed papers.
 

TasTroy77

Founding Member

Diogenese

Top 20
Great work T,

I too think this is a big step for our Company.

It's impressive that CMU has already successfully completed a pilot session this year.

From our press release

"The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year."


You inspired me last night to do a bit of digging, so I thought I would look into Professor John Paul Shen, who said this:

"We have incorporated experimentation with BrainChip’s Akida development boards in our new graduate-level course, “Neuromorphic Computer Architecture and Processor Design” at Carnegie Mellon University during the Spring 2022 semester,” said John Paul Shen, Professor, Electrical and Computer Engineering Department at Carnegie Mellon. “Our students had a great experience in using the Akida development environment and analyzing results from the Akida hardware. We look forward to running and expanding this program in 2023"

It didn't take much digging for my socks to be blown off, impressive man (y)

John Shen​

Professor, Electrical and Computer Engineering​

Contact

Bio​

John Paul Shen was a Nokia Fellow and the founding director of Nokia Research Center - North America Lab. NRC-NAL had research teams pursuing a wide range of research projects in mobile Internet and mobile computing. In six years (2007-2012), NRC-NAL filed over 100 patents, published over 200 papers, hosted about 100 Ph.D. interns, and collaborated with a dozen universities.

Prior to joining Nokia in late 2006, John was the Director of the Microarchitecture Research Lab at Intel. MRL had research teams in Santa Clara, Portland, and Austin, pursuing research on aggressive ILP and TLP microarchitectures for IA32 and IA64 processors.

Prior to joining Intel in 2000, John was a tenured Full Professor in the ECE Department at CMU, where he supervised a total of 17 Ph.D. students and dozens of M.S. students, received multiple teaching awards, and published two books and more than 100 research papers. One of his books, "Modern Processor Design: Fundamentals of Superscalar Processors", was used in the EE382A Advanced Processor Architecture course at Stanford, where he co-taught the course. After spending 15 years in industry, all in Silicon Valley, he returned to CMU in the fall of 2015 as a tenured Full Professor in the ECE Department, and is based at the Carnegie Mellon Silicon Valley campus.

Education​

Ph.D.
Electrical Engineering
University of Southern California

M.S.
Electrical Engineering
University of Southern California

B.S.
Electrical Engineering
University of Michigan

Research​

Modern Processor Design and Evaluation​

With the emergence of superscalar processors, phenomenal performance increases are being achieved via the exploitation of instruction-level parallelism (ILP). Software tools for aiding the design and validation of complex superscalar processors are being developed. These tools, such as VMW (Visualization-Based Microarchitecture Workbench), facilitate the rigorous specification and validation of microarchitectures.

Architecture and Compilation for Instruction-Level Parallelism​

Microarchitecture and code transformation techniques for effective exploitation of ILP are being studied. Synergistic combinations of static (compile-time software) and dynamic (run-time hardware) mechanisms are being explored. Going beyond a single instruction stream is necessary to achieve effective use of wide superscalar machines, as well as tightly coupled small-scale multiprocessors.

Dependable and Fault-Tolerant Computing​

Techniques are being developed to exploit the idling machine resources of ILP machines for concurrent error checking. As ILP machines get wider, the utilization of the machine resources will decrease. The idling resources can potentially be used for enhancing system dependability via compile-time transformation techniques.

Keywords​

  • Wearable, mobile, and cloud computing
  • Ultra energy-efficient computing for sensor processing
  • Real-time data analytics
  • Mobile-user behavior modelling and deep learning



I also checked out his Linkedin page see screenshot, I especially like the sentence at the bottom

View attachment 14406



NCAL: Neuromorphic Computer Architecture Lab






"Energy-Efficient, Edge-Native, Sensory Processing Units​

with Online Continuous Learning Capability"​


The Neuromorphic Computer Architecture Lab (NCAL) is a new research group in the Electrical and Computer Engineering Department at Carnegie Mellon University, led by Prof. John Paul Shen and Prof. James E. Smith.

RESEARCH GOAL: New processor architecture and design that captures the capabilities and efficiencies of the brain's neocortex for energy-efficient, edge-native, on-line, sensory processing in mobile and edge devices.
  • Capabilities: strong adherence to biological plausibility and Spike Timing Dependent Plasticity (STDP) in order to enable continuous, unsupervised, and emergent learning.
  • Efficiencies: can achieve several orders of magnitude improvements on system complexity and energy efficiency as compared to existing DNN computation infrastructures for edge-native sensory processing.

RESEARCH STRATEGY:
  1. Targeted Applications: Edge-Native Sensory Processing
  2. Computational Model: Space-Time Algebra (STA)
  3. Processor Architecture: Temporal Neural Networks (TNN)
  4. Processor Design Style: Space-Time Logic Design
  5. Hardware Implementation: Off-the-Shelf Digital CMOS

1. Targeted Applications: Edge-Native Sensory Processing
Targeted application domain: edge-native on-line sensory processing that mimics the human neocortex. The focus of this research is on temporal neural networks that can achieve brain-like capabilities with brain-like efficiency and can be implemented using standard CMOS technology. This effort can enable a whole new family of accelerators, or sensory processing units, that can be deployed in mobile and edge devices for performing edge-native, on-line, always-on, sensory processing with the capability for real-time inference and continuous learning, while consuming only a few mWatts.

2. Computational Model: Space-Time Algebra (STA)
A new Space-Time Computing (STC) Model has been developed for computing that communicates and processes information encoded as transient events in time -- action potentials or voltage spikes in the case of neurons. Consequently, the flow of time becomes a freely available, no-cost computational resource. The theoretical basis for the STC model is the "Space-Time Algebra“ (STA) with primitives that model points in time and functional operations that are consistent with the flow of Newtonian time. [STC/STA was developed by Jim Smith]

3. Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks, for implementing a class of functions based on the space time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.

4. Processor Design Style: Space Time Logic Design
Conventional CMOS logic gates based on Boolean algebra can be re-purposed to implement STA based temporal operations and functions. Temporal values can be encoded using voltage edges or pulses. We have developed a TNN architecture based on two key building blocks: neurons and columns of neurons. We have implemented the excitatory neuron model with its input synaptic weights as well as a column of such neurons with winner-take-all (WTA) lateral inhibition, all using the space time logic design approach and standard digital CMOS design tools.

5. Hardware Implementation: Standard Digital CMOS Technology
Based on the STA theoretical foundation and the ST logic design approach, we can design a new type of special-purpose TNN-based "Neuromorphic Sensory Processing Units" (NSPU) for incorporation in mobile SoCs targeting mobile and edge devices. NSPUs can be a new core type for SoCs already with heterogeneous cores. Other than using off-the-shelf CMOS design and synthesis tools, there is the potential for creating a new custom standard cell library and design optimizations for supporting the design of TNN-based NSPUs for sensory processing.



And the Course Syllabus - Spring 2022 18-743: “Neuromorphic Computer Architecture & Processor Design”

BrainChip is listed on the course syllabus

View attachment 14407

View attachment 14408

It's great to be a shareholder :)
Great to see the lecturers for lectures 7 & 8 will have hands-on experience with Akida.
 

Boab

I wish I could paint like Vincent

This looks like genuine competition
I don't think they are there yet.

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.
 

TasTroy77

Founding Member
Screenshot_20220818-103140_Chrome.jpg
 

VictorG

Member

This looks like genuine competition
I'm no tech head, but I can't see NeuRRAM being in the same century as Akida, much less the same market. My understanding is that it uses convolution processing (analogue to digital), they are yet to tackle spiking architecture, and on power efficiency it's something like 30 to 40 times behind Akida. Furthermore, the chip achieved only 87% accuracy on image classification; imagine using it in self-driving cars, it would make Tesla look good.
 

Dozzaman1977

Regular

uiux

Regular

This looks like genuine competition

Looks like a good chip

Gert Cauwenberghs was a member of the scientific advisory board at one stage and had positive things to say about BrainChip technology.



Screenshot_20220818-114232.png
 

Slade

Top 20

uiux

Regular
Any breach of our patents, I wonder?

Doubtful

Gert is another genius
 

uiux

Regular
There exists a video of Jeff Krichmar, Gert Cauwenberghs and Nicholas Spitzer talking up BrainChip



Maybe the really long long termers remember it and can source it?
 

Slade

Top 20
There exists a video of Jeff Krichmar, Gert Cauwenberghs and Nicholas Spitzer talking up BrainChip



Maybe the really long long termers remember it and can source it?
They compliment BrainChip while secretly working on developing their own neuromorphic chip to compete with Akida. Have I got this right?
 

uiux

Regular
They compliment BrainChip while secretly working on developing their own neuromorphic chip to compete with Akida. Have I got this right?

Secretly? No

You can read Gert's published works and patents publicly; they span most of his career


Maybe you don't understand that they were on the scientific advisory board BECAUSE they are experts in the field
 

Rskiff

Regular
There exists a video of Jeff Krichmar, Gert Cauwenberghs and Nicholas Spitzer talking up BrainChip



Maybe the really long long termers remember it and can source it?
This one?
 

uiux

Regular

Slade

Top 20
Secretly? No

You can read Gert's published works and patents publicly; they span most of his career


Maybe you don't understand that they were on the scientific advisory board BECAUSE they are experts in the field
Funny because today is the first day that I have read about the development of the NeuRRAM chip. When did you first hear about it?

What do you mean by maybe I don’t understand that they were on the scientific advisory board BECAUSE they are experts in the field?
 

uiux

Regular
Funny because today is the first day that I have read about the development of the NeuRRAM chip. When did you first hear about it?

What do you mean by maybe I don’t understand that they were on the scientific advisory board BECAUSE they are experts in the field?

I've been following Gert's work for five years or so. Sure, the chip is news, but the underlying innovations have been trickling out for ages.

 

Slade

Top 20
I've been following Gert's work for five years or so. Sure, the chip is news, but the underlying innovations have been trickling out for ages.

So the chip was kept a secret. Interesting.
 

uiux

Regular

VictorG

Member
Am I missing something here?!
Why is NeuRRAM a good chip, and how does it compare to Akida?
 