BRN Discussion Ongoing

Frangipani

Regular

View attachment 67696

This is the paper I linked in my previous post, co-authored by Lars Niedermeier, a Zurich-based IT consultant, and the above-mentioned Jeff Krichmar from UC Irvine.


View attachment 67703

The two of them co-authored three papers in recent years, including one in 2022 with another UC Irvine professor and member of the CARL team, Nikil Dutt (https://ics.uci.edu/~dutt/) as well as Anup Das from Drexel University, whose endorsement of Akida is quoted on the BrainChip website:

View attachment 67702


View attachment 67700




View attachment 67701

Lars Niedermeier’s and Jeff Krichmar’s April 2024 publication on CARLsim++ (which does not mention Akida) ends with the following conclusion and an acknowledgement that their work was supported by the Air Force Office of Scientific Research (funding that has been ongoing since at least 2022) and by a UCI Beall Applied Innovation Proof of Product Award (https://innovation.uci.edu/pop/); they also thank the regional NSF I-Corps (= Innovation Corps) for valuable insights.

View attachment 67699



View attachment 67704


Their use of an E-Puck robot (https://en.m.wikipedia.org/wiki/E-puck_mobile_robot) for their work reminded me of our CTO’s address at the AGM in May, during which he envisioned the following device (from the 22:44 min mark):

“Imagine a compact device similar in size to a hockey puck that combines speech recognition, LLMs and an intelligent agent capable of controlling your home’s lighting, assisting with home repairs and much more. All without needing constant connectivity or having to worry about privacy and security concerns, a major barrier to adaptation, particularly in industrial settings.”

Possibly something in the works here?

The version the two authors were envisioning in their April 2024 paper is, however, conceptualised as being available as a cloud service:

“We plan a hybrid approach to large language models available as cloud service for processing of voice and text to speech.”


The authors gave a tutorial on CARLsim++ at NICE 2024, where our CTO Tony Lewis was also presenting. Maybe they had a fruitful discussion at that conference in La Jolla, which resulted in UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL) team experimenting with AKD1000, as evidenced in the video uploaded a couple of hours ago that I shared in my previous post?





View attachment 67705



View attachment 67716


While I was out running errands just now, I recalled that we had some sort of connection to UCI through one of our research scientists - and bingo!

Kristofor Carlson was a postdoc at Jeff Krichmar’s Cognitive Robotics Lab a decade ago and co-authored a number of research papers with both Jeff Krichmar and Nikil Dutt over the years, the last one published in 2019:

(screenshots attached)
 
  • Like
  • Love
  • Fire
Reactions: 33 users

charles2

Regular
  • Like
  • Love
  • Fire
Reactions: 49 users

charles2

Regular
And huge capitulation on NASDAQ for BRCHF.

Over 700k on offer and sizable share dumps as low as 10 cents (US)

Usually capitulation is a good sign.....the weak hands give up at any price.

(He says wistfully).
 
  • Like
  • Wow
  • Fire
Reactions: 12 users

Diogenese

Top 20

Intel Foundry Achieves Major Milestones​

Intel 18A powered on and healthy, on track for next-gen client and server chip production next year.

View attachment 67663

View attachment 67664


BrainChip is an IP Partner of IFS. Worth reading the second link as well.

View attachment 67665



Hi Evermont,

Interesting development.

Could it be that the tapeout of Akida 2 has been delayed so it can be adapted for Intel's 18A process?


https://www.intel.com/content/www/u...ndry-achieves-major-milestones.html#gs.dahsjf

What’s New: Intel today announced that its lead products on Intel 18A, Panther Lake (AI PC client processor) and Clearwater Forest (server processor), are out of the fab and have powered-on and booted operating systems. These milestones were achieved less than two quarters after tape-out, with both products on track to start production in 2025. The company also announced that the first external customer is expected to tape out on Intel 18A in the first half of next year.
...
More on Intel 18A: In July, Intel released the 18A Process Design Kit (PDK) 1.0, design tools that enable foundry customers to harness the capabilities of RibbonFET gate-all-around transistor architecture and PowerVia backside power delivery in their designs on Intel 18A. Electronic design automation (EDA) and intellectual property (IP) partners are updating their offerings to enable customers to begin their final production designs.
...
How Customers are Involved: In gaining access to the Intel 18A PDK 1.0 last month, the company’s EDA and IP partners are updating their tools and design flows to enable external foundry customers to begin their Intel 18A chip designs. This is a critical enabling milestone for Intel’s foundry business.

The "A" in 18A is Angstrom, a measurement unit = 0.1 nm, so 18A = 1.8 nm. At these distances you're starting to get close to where parasitic quantum effects can influence the operation of the transistors, and the impedance of the connecting "wires" becomes a significant source of power loss. I would think that this will need a whole new design system. Intel are using gate-all-around transistors which are very different from our planar CMOS technology - no wonder Anil is retiring!

Akida's SNN sparsity would help overcome the connector wire impedance loss by sending electrical impulses less frequently than MACs.
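
To put a rough number on that, here's a back-of-the-envelope Python sketch comparing what a dense MAC layer has to do with what an event-driven layer does when most activations are silent. The layer sizes and the 90% sparsity figure are illustrative assumptions only, not a model of Akida's internals:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 1024, 256
sparsity = 0.9                                   # assume 90% of activations are silent

x = rng.random(n_in) * (rng.random(n_in) > sparsity)   # mostly-zero activation vector
W = rng.standard_normal((n_in, n_out))

# Dense MAC layer: every weight is touched whether or not the input is zero.
y_dense = x @ W
dense_ops = n_in * n_out

# Event-driven layer: only rows whose input actually "fires" contribute work.
active = np.flatnonzero(x)
y_event = (x[active, None] * W[active]).sum(axis=0)
event_ops = active.size * n_out

assert np.allclose(y_dense, y_event)             # same answer...
print(f"dense MACs: {dense_ops}, event ops: {event_ops} "
      f"(~{event_ops / dense_ops:.0%} of the dense count)")   # ...far fewer operations
```

Same answer both ways, but the event-driven side only pays for the inputs that actually fire, which is where the switching-activity (and wire-loss) saving comes from.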
 
  • Like
  • Fire
  • Love
Reactions: 37 users

CHIPS

Regular
There is new hope on the horizon!




BrainChip Appoints New CMO, Enhances Scientific Advisory Board



Laguna Hills, Calif. – August 7th, 2024 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that it has hired Steven Brightfield as its new chief marketing officer and has re-envisioned its Scientific Advisory Board (SAB) by bringing on company founder Peter van der Made, Dr. Jason K. Eshraghian and Dr. André van Schaik.

Brightfield has a depth of tech industry knowledge and experience within the AI semiconductor industry. He previously led marketing at several AI-focused technology companies, such as SiMa.ai, X-Silicon and Wave Computing, combined with deep experience within the semiconductor sector, including executive leadership positions at LSI Logic, Qualcomm, Zoran and others. One of Brightfield’s first priorities at BrainChip will be to oversee the development of a marketing strategy for the new TENNs product, an advanced, ultra-efficient neural network architecture, and to integrate it into the Akida technology platform.

The Scientific Advisory Board provides independent advice and expert consultation for the executive staff of BrainChip to guide the scientific and technical aspects of the company’s short- and long-term goals. The SAB also reviews and evaluates the research and development programs of BrainChip with respect to quality and scope. The re-envisioned SAB provides new perspectives from key industry leaders in AI with increasing focus under the leadership of Dr. Tony Lewis.

van der Made has been at the forefront of computer innovation for 45 years. One of the founders of BrainChip, he designed the first generations of digital neuromorphic devices on which the Akida™ chip was based. van der Made previously served as chief technology officer for BrainChip until his retirement last year. He remains a member of the company’s board of directors.

Eshraghian is an Assistant Professor with the Department of Electrical and Computer Engineering, University of California, Santa Cruz. He serves as the Secretary of the Neural Systems and Applications Technical Committee. His research interests are in large-scale neuromorphic computing. Dr. Eshraghian is the developer of snnTorch, a widely used Python library with more than 150,000 downloads used to train and model spiking neural networks, and his lab developed several high-profile language models, including SpikeGPT and the MatMul-Free LLM.

Van Schaik is a pioneer in the field of neuromorphic engineering. He is a professor of electrical engineering at Western Sydney University and director of the International Centre for Neuromorphic Systems, also in Australia. His research focuses on neuromorphic engineering and computational neuroscience. Dr. Van Schaik has authored more than 300 publications, is an inventor on more than 35 patents and is a founder of four start-up companies: VAST Audio, Personal Audio, Heard Systems and Optera.

“I am pleased to add new team members with the skills, experience and credentials to advance BrainChip’s adoption in the market,” said Sean Hehir, BrainChip’s CEO. “Leveraging Steve’s expertise as a technology marketer and expanding our Scientific Advisory Board with some of the keenest minds in the industry better positions us to achieve our goals. I am eager to work closely with each of them.”
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Diogenese

Top 20
Changes are being made.

Big ones


Hi Charles,

I think our new CMO should invest in a quality pair of garters because he's about to have, or probably already has had, his socks blown off. He will be doing a crash course in digital SNNs.

His earlier foray into AI marketing at Wave Computing* was based on MACs, and one of his muses is Yann LeCun, who is not a fan of SNNs.

US11227030B2 Matrix multiplication engine using pipelining 20190401

(patent figure attached)



a matrix multiplication engine using pipelining are disclosed. A first and a second matrix are obtained for matrix multiplication. A first matrix multiply-accumulate (MAC) unit is configured, where a first matrix element and a second matrix element are presented to the MAC unit on a first cycle. A second MAC unit is configured in pipelined fashion, where the first element of the first matrix and a second element of the second matrix are presented to the second MAC unit on a second cycle, and where a second element of the first matrix and the first element of the second matrix are presented to the first MAC unit on the second cycle. Additional MAC units are further configured within the processor in pipelined fashion. Multiply-accumulate operations are executed in pipelined fashion on each of n MAC units over additional k sets of m cycles.
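
To unpack the patentese a little, here is a toy, cycle-by-cycle Python sketch of that kind of staggered MAC pipeline. It is purely illustrative (the function name, the single-row scope and the one-cycle stagger are my own assumptions), not the patented engine:

```python
import numpy as np

def pipelined_row_of_macs(a_row, B):
    """Compute one row of A @ B with one MAC unit per output column.

    Unit j starts one cycle after unit j-1, so every element of a_row
    is broadcast down the pipeline and reused by each unit in turn.
    """
    k_dim, n = B.shape
    acc = np.zeros(n)                    # one accumulator per MAC unit
    for cycle in range(k_dim + n - 1):   # pipeline fill + drain
        for j in range(n):               # MAC unit j
            k = cycle - j                # element pair unit j sees this cycle
            if 0 <= k < k_dim:
                acc[j] += a_row[k] * B[k, j]   # one multiply-accumulate
    return acc

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(pipelined_row_of_macs(A[0], B), (A @ B)[0])
```

The point is simply that once the pipeline is full, every MAC unit does useful work every cycle while each matrix element is reused as it marches down the line.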


*WC recently emerged from Chapter 11 bankruptcy.
 
  • Like
  • Love
  • Fire
Reactions: 20 users

Frangipani

Regular
Changes are being made.

Big ones


… and small ones, too:

Merci beaucoup et au revoir (many thanks and goodbye), Sébastien Crouzet…

(screenshots attached)



… and welcome to another University of Washington summer intern - Justin-Pierre Tremblay!


(screenshots attached)


FNU Sidharth, a Graduate Student Researcher from the University of Washington in Seattle, will be spending the summer as a Machine Learning Engineering Research Intern at BrainChip:




View attachment 66341

View attachment 66342

View attachment 66344


👆🏻Bingo! 😊 👇🏻

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-425543

View attachment 66345
View attachment 66346
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Kachoo

Regular
  • Like
Reactions: 2 users

The Pope

Regular
Changes are being made.

Big ones

Read the story; it also mentions Dr. Van Schaik’s link to Western Sydney University. About two years ago, others on TSE suggested that my comments about there being a solid connection between PVDM and WSU lecturers etc. were generally silly. One notable exception was FF, who liked my post. I suggest this WSU connection wasn’t hard to find back then, including the guy mentioned above, and I recall WSU lecturers even denying knowledge of BRN tech to another TSE member while attending a presentation at WSU.
Now that the connection between WSU and BRN is officially re-established, let’s see what others on TSE find.
Like I said back then, I saw a demonstration (by chance) from WSU that appeared to use BRN tech (2 years ago), and when I asked questions about the tech being used, the presenters went quiet and, I suggest, started playing dumb.
I’ll leave it at that; it is my opinion that WSU have known about and experimented with BRN tech for a while (I suggest at least 3 years or more, given the WSU demo I witnessed).

Have a good day, and hopefully Sean and Co can market and sell the TENNs technology into products ASAP.

Cheers
The Pope
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 32 users

IloveLamp

Top 20
  • Like
  • Love
  • Thinking
Reactions: 17 users

DK6161

Regular
Good morning fellow chippers!
What great news to wake up to.
New CMO. Looks like big changes are being made! Great move, Sean and co!
OMG how exciting! This will surely work for us👍

Can't help but think that, if I recall correctly, our previous CMO Nandan had a lot of experience. He was ex-ARM and came from Amazon. He was touted as the one we needed to drive our product marketing globally through his connections. That was 2 years ago and got a lot of shareholders excited. Then he left quietly.

I'd give this new guy 2-3 years tops.

GLTAH
Not advice
 
  • Like
Reactions: 1 users

Evermont

Stealth Mode
Hi Evermont,

Interesting development.

Could it be that the tapeout of Akida 2 has been delayed so it can be adapted for Intel's 18A process?


https://www.intel.com/content/www/u...ndry-achieves-major-milestones.html#gs.dahsjf

What’s New: Intel today announced that its lead products on Intel 18A, Panther Lake (AI PC client processor) and Clearwater Forest (server processor), are out of the fab and have powered-on and booted operating systems. These milestones were achieved less than two quarters after tape-out, with both products on track to start production in 2025. The company also announced that the first external customer is expected to tape out on Intel 18A in the first half of next year.
...
More on Intel 18A: In July, Intel released the 18A Process Design Kit (PDK) 1.0, design tools that enable foundry customers to harness the capabilities of RibbonFET gate-all-around transistor architecture and PowerVia backside power delivery in their designs on Intel 18A. Electronic design automation (EDA) and intellectual property (IP) partners are updating their offerings to enable customers to begin their final production designs.
...
How Customers are Involved: In gaining access to the Intel 18A PDK 1.0 last month, the company’s EDA and IP partners are updating their tools and design flows to enable external foundry customers to begin their Intel 18A chip designs. This is a critical enabling milestone for Intel’s foundry business.

The "A" in 18A is Angstrom, a measurement unit = 0.1 nm, so 18A = 1.8 nm. At these distances you're starting to get close to where parasitic quantum effects can influence the operation of the transistors, and the impedance of the connecting "wires" becomes a significant source of power loss. I would think that this will need a whole new design system. Intel are using gate-all-around transistors which are very different from our planar CMOS technology - no wonder Anil is retiring!

Akida's SNN sparsity would help overcome the connector wire impedance loss by sending electrical impulses less frequently than MACs.

Innovation seems to be a key theme, Dio.

Efficiency and power handling are what we do well.
 
  • Like
Reactions: 5 users

TECH

Regular
Researchers at UC Irvine’s Cognitive Anteater Robotics Laboratory (CARL), led by Jeffrey Krichmar, have been experimenting with AKD1000:








View attachment 67694


View attachment 67690


View attachment 67691

View attachment 67692


View attachment 67693



View attachment 67695




Nice post.....AKD 1000 "yet again" doing us all proud!

Without Peter's initial brilliance in creating SNAP 64, none of this would have ever been possible.

AKD 1000 "too narrow"? I say yes, if you are referring to opening our technology offering to a wider customer base in an attempt to potentially capture a larger market share at the edge, BUT AKD 1000 was company-defining when its first wafer run turned out to be more successful than both Peter and Anil had hoped for.....

LET'S NOT FORGET THAT FACT.......God Bless our Founders 💘 Tech (Perth)
 
  • Like
  • Fire
  • Love
Reactions: 23 users
  • Like
  • Fire
  • Love
Reactions: 9 users

Frangipani

Regular
The two of them co-authored three papers in recent years, including one in 2022 with another UC Irvine professor and member of the CARL team, Nikil Dutt (https://ics.uci.edu/~dutt/) as well as Anup Das from Drexel University, whose endorsement of Akida is quoted on the BrainChip website:

(screenshot attached)


Speaking of Anup Das (and also of Eric Gallo at Accenture Labs):


(screenshots attached)
 
  • Like
  • Love
  • Fire
Reactions: 17 users

7für7

Top 20
I thought it’s maybe important, so I decided to post this to be sure everyone will read it… I hope others will follow my example and do the same.

 
  • Like
  • Sad
Reactions: 6 users
Good morning fellow chippers!
What great news to wake up to.
New CMO. Looks like big changes are being made! Great move, Sean and co!
OMG how exciting! This will surely work for us👍

Can't help but think that, if I recall correctly, our previous CMO Nandan had a lot of experience. He was ex-ARM and came from Amazon. He was touted as the one we needed to drive our product marketing globally through his connections. That was 2 years ago and got a lot of shareholders excited. Then he left quietly.

I'd give this new guy 2-3 years tops.

GLTAH
Not advice
(gif attached)
 
  • Like
  • Haha
Reactions: 5 users
  • Haha
Reactions: 1 users
Nice that we're still getting mentioned out there in recent articles/blogs as a leading company, with Akida linked back to the BRN site.





Neuromorphic Computing: Revolutionizing the Future of Artificial Intelligence​

(Source: CyberPro Magazine)


Imagine a world where computers can think, learn, and adapt just like the human brain. This is not a futuristic dream but a rapidly approaching reality, thanks to neuromorphic computing. As the boundaries of artificial intelligence (AI) and machine learning continue to expand, it emerges as a groundbreaking approach that promises to revolutionize how we process information. By mimicking the neural architecture of the human brain, this innovative technology aims to create more efficient, adaptive, and intelligent systems. In this article, we will explore the fascinating world of neuromorphic computing, uncovering its principles, applications, and the profound impact it is set to have on various industries.

What is Neuromorphic Computing?​

Definition and Overview​

Neuromorphic computing is an innovative approach to designing computer systems that mimic the human brain’s architecture and functioning. Unlike traditional computing systems, which rely on binary logic and the von Neumann architecture, it uses artificial neurons and synapses to process information more organically and efficiently.

History and Development​

The concept of neuromorphic computing dates back to the 1980s when Carver Mead first introduced it. Over the years, significant advancements in neuroscience and materials science have propelled the development of neuromorphic systems, bringing us closer to creating machines that think and learn like humans.

The Science Behind Neuromorphic Computing​

1. Biological Inspiration

Neuromorphic computing draws heavy inspiration from the structure and functioning of the human brain. The brain’s neural networks, consisting of neurons and synapses, process information in parallel, allowing for remarkable efficiency and adaptability.

2. Key Principles and Concepts​

Key principles include the use of spiking neural networks (SNNs), which emulate the brain’s way of transmitting information through electrical spikes. This method not only enhances processing speed but also significantly reduces power consumption.

Neuromorphic Hardware​

1. Neuromorphic Chips​

At the heart of neuromorphic computing are neuromorphic chips. These specialized processors are designed to replicate the brain’s neural networks, enabling efficient and real-time data processing. Leading examples include IBM’s TrueNorth and Intel’s Loihi chips.

2. Spiking Neural Networks (SNNs)​

SNNs are a crucial component of neuromorphic computing. Unlike traditional neural networks, SNNs use spikes, or bursts of electrical activity, to transmit information. This approach closely mirrors how biological neurons communicate, leading to more efficient and realistic processing.
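
To make the spiking idea concrete, here is a minimal leaky integrate-and-fire neuron in plain Python/NumPy; the parameter values (decay factor, threshold) are arbitrary illustrations rather than any particular chip's model:

```python
import numpy as np

def lif_neuron(input_current, beta=0.9, threshold=1.0):
    """Return the spike train (0s and 1s) produced by a stream of input current."""
    mem, spikes = 0.0, []
    for i_t in input_current:
        mem = beta * mem + i_t          # leaky integration: decay plus new input
        if mem >= threshold:            # fire only when the membrane crosses threshold
            spikes.append(1)
            mem = 0.0                   # reset after the spike
        else:
            spikes.append(0)            # silent -> nothing is sent downstream
    return spikes

current = np.concatenate([np.zeros(5), 0.4 * np.ones(10), np.zeros(5)])
print(lif_neuron(current))              # spikes appear only while input is present
```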

Advantages​

1. Energy Efficiency

One of the most significant advantages is its energy efficiency. By mimicking the brain’s low-power consumption mechanisms, neuromorphic systems can operate with significantly less energy than conventional computers.

2. Real-time Processing​

Neuromorphic systems excel in real-time data processing, making them ideal for applications that require immediate responses, such as robotics and autonomous vehicles.

Applications​

1. Robotics

Neuromorphic computing has the potential to revolutionize robotics by enabling machines to process sensory information and make decisions more like humans. This can lead to more adaptive and intelligent robots capable of performing complex tasks.

2. Healthcare​

In healthcare, neuromorphic systems can enhance medical imaging, diagnostics, and personalized treatment plans. Their ability to process vast amounts of data in real-time can lead to more accurate and timely medical interventions.

3. Autonomous Vehicles​

For autonomous vehicles, neuromorphic systems offer faster and more efficient data processing, improving decision-making and safety. These systems can handle complex sensory inputs, such as visual and auditory data, more effectively than traditional processors.

Internet of Things (IoT)​

The integration of neuromorphic computing into IoT devices can lead to smarter and more responsive environments. From smart homes to industrial automation, the possibilities are endless with neuromorphic-enhanced IoT systems.

Challenges​

1. Technical Hurdles​

Despite its potential, neuromorphic computing faces several technical challenges. These include the complexity of designing neuromorphic chips and the need for new programming paradigms to leverage their capabilities fully.

2. Adoption Barriers​

Adoption barriers also exist, such as the lack of standardization and the need for specialized knowledge to develop and implement neuromorphic systems. Overcoming these barriers will be crucial for widespread adoption.

Comparison with Traditional Computing​

1. Speed and Efficiency​


Neuromorphic systems offer superior speed and efficiency compared to traditional computers. Their parallel processing capabilities and low-power consumption make them ideal for tasks that require real-time responses.

2. Scalability​

Scalability is another area where neuromorphic computing shines. Unlike traditional systems that struggle with scaling, neuromorphic architectures can easily expand to accommodate larger and more complex tasks.

Future Prospects of Neuromorphic Computing​

1. Research and Development​

Ongoing research and development in neuromorphic computing are paving the way for even more advanced and efficient systems. Collaboration between neuroscientists, computer scientists, and engineers is driving innovation in this field.

2. Potential Impact on AI​

The potential impact of neuromorphic computing on AI is immense. By creating systems that learn and adapt like the human brain, we can develop more intelligent and capable AI applications that can solve complex problems in ways that traditional AI cannot.

Leading Companies​

1. IBM​

IBM is a pioneer in neuromorphic computing, with its TrueNorth chip leading the way. This chip features millions of artificial neurons and synapses, enabling advanced cognitive computing capabilities.

2. Intel​

Intel’s Loihi chip is another significant player in the neuromorphic space. Loihi is designed to emulate the brain’s natural learning processes, making it ideal for applications that require adaptive and intelligent systems.

3. BrainChip Holdings

BrainChip Holdings is known for its Akida neuromorphic system, which offers real-time learning and inference capabilities. Akida is designed for edge AI applications, providing efficient and low-power solutions for various industries.

Case Studies and Real-world Examples​

Case studies and real-world examples highlight the practical applications of neuromorphic computing. From improving medical diagnostics to enhancing autonomous vehicle navigation, the impact of neuromorphic systems is being felt across various sectors.

Neuromorphic Computing and Ethics​

1. Privacy Concerns​

As with any advanced technology, neuromorphic computing raises privacy concerns. The ability to process and analyze vast amounts of data in real-time necessitates robust privacy protections to ensure user data is safeguarded.

2. Ethical Implications​

Ethical implications also arise from the use of neuromorphic systems. Questions about the potential for misuse and the need for ethical guidelines to govern their development and deployment are critical considerations.

Neuromorphic Computing in Education​

1. Training and Development Programs​

To foster growth, education and training programs are essential. Universities and institutions are beginning to offer specialized courses and programs to equip the next generation of scientists and engineers with the skills needed to advance this field.

How to Get Started?​

Resources and Learning Paths​

For those interested in neuromorphic computing, various resources and learning paths are available. Online courses, workshops, and research papers provide valuable insights and knowledge to help you get started in this exciting field.

FAQs​

1. What is neuromorphic computing?​

Neuromorphic computing is a field of computing that aims to mimic the neural architecture and functioning of the human brain to create more efficient and intelligent systems.

2. How does it differ from traditional computing?​

Unlike traditional computing, which relies on binary logic and von Neumann architecture, it uses artificial neurons and synapses to process information in a way that closely resembles the human brain.

3. What are the advantages of neuromorphic computing?

It offers several advantages, including energy efficiency, real-time processing, and scalability, making it ideal for applications that require immediate and adaptive responses.

4. Which industries can benefit from neuromorphic computing?​

Industries such as healthcare, robotics, autonomous vehicles, and the Internet of Things (IoT) can benefit significantly from the advancements in neuromorphic computing.

5. What are the challenges facing neuromorphic computing?​

Challenges include technical hurdles in designing neuromorphic chips, adoption barriers, and the need for new programming paradigms to fully leverage the capabilities of neuromorphic systems.

Conclusion​

Neuromorphic computing represents a groundbreaking shift in how we approach computing and AI. By emulating the human brain’s architecture and functioning, neuromorphic systems offer unparalleled efficiency, real-time processing, and adaptability. As research and development continue to advance, the potential applications of this computing are vast, promising to revolutionize industries ranging from healthcare to autonomous vehicles. Embracing this technology and overcoming its challenges will pave the way for a smarter, more efficient future.
 
  • Like
  • Fire
  • Love
Reactions: 59 users
Top Bottom