There's a familiar face amongst the speakers!
How to Build Open-Source Neuromorphic Hardware and Algorithms
The brain is the perfect place to look for inspiration to develop more efficient neural networks. While the computational cost of deep learning exceeds millions of dollars to train large-scale models, our brains are somehow equipped to process an abundance of signals from our sensory periphery within a power budget of approximately 10-20 watts. The brain’s incredible efficiency can be attributed to how biological neurons encode data in the time domain as spiking action potentials.
This tutorial will take a hands-on approach to learning how to train spiking neural networks (SNNs) and designing neuromorphic accelerators that can process these models. With the advent of open-source neuromorphic training libraries and electronic design automation tools, we will conduct hands-on coding sessions to train SNNs, and attendees will subsequently design a lightweight neuromorphic accelerator in the SKY130 process. Participants will be equipped with practical skills that apply principles of neuroscience to deep learning and hardware acceleration in building the next generation of machine intelligence.
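The tutorial's own materials aren't reproduced here, but the basic unit it teaches — a spiking neuron that encodes information in the time domain — can be sketched in a few lines of plain Python. This is a hypothetical toy model for illustration only, not code from the tutorial or from any particular SNN library:

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- the basic unit of an SNN.
# The membrane potential decays by `beta` each step, integrates input
# current, and emits a spike (then soft-resets) when it crosses `threshold`.

def lif_step(mem, current, beta=0.9, threshold=1.0):
    """One discrete time step of a LIF neuron.

    Returns (spike, new_membrane_potential).
    """
    mem = beta * mem + current          # leaky integration
    spike = 1 if mem >= threshold else 0
    if spike:
        mem -= threshold                # soft reset after firing
    return spike, mem

# Drive the neuron with a constant input and record its spike train.
mem, spikes = 0.0, []
for _ in range(20):
    spk, mem = lif_step(mem, current=0.3)
    spikes.append(spk)

print(spikes)  # fires on every 4th step at this constant drive
```

The spike train (not the raw membrane value) is what downstream neurons see, which is why SNN hardware can stay idle — and save power — whenever no spikes arrive.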
Jason Eshraghian, UC Santa Cruz, USA
Delft University of Technology, Netherlands
Learning🏖
Fair enough. Cheers FF
That is a very easy question to answer, because it is relevant to Brainchip, whereas random opinions and swearing in abbreviated form are not.
To express an opinion it helps to have done your own research. Someone who has done their own research or simply read all the research done by others and posted here would understand all the dots that come together by having knowledge of this Brainchip employee.
As you don’t understand you probably need to DYOR or go back and read the research generously shared here by those that do.
My opinion only DYOR
FF
AKIDA BALLISTA
Hang in there mate. It can be a tough road at times, and people have all sorts of strife occurring from time to time.
Fair enough. Cheers FF
We've seen rumors that Mercedes is planning to switch to Luminar's foveated LiDAR, and, Mercedes' expressed preference for component standardization aside, we do not have any proof that Luminar will use Akida, so this GM award to Valeo for ADAS is very encouraging.
It's interesting that GM is awarding Valeo for service delivery in the ADAS arena:
Valeo on LinkedIn: "Have you heard the news? We won GM's Supplier of the Year award in Advanced Driver Assistance Systems (ADAS)!" (www.linkedin.com)
Yes, this caught me out at first, then I remembered 'cortical' is to do with the eye. Prophesee states they have taken inspiration from the eye in developing their vision sensor.
I might be slow, but I thought the cortical side of the tech is still being evaluated or researched?
2023 Shortlist - Global Business Tech Awards
Congratulations to our 2023 Global Business Tech Awards finalists. (globalbusinesstechawards.com)
Hi @Learning
Not related to Brainchip, but a good read regarding AI chips from Synopsys.
According to Allied Market Research, the global artificial intelligence (AI) chip market is projected to reach $263.6 billion by 2031. The AI chip market is vast and can be segmented in a variety of different ways, including chip type, processing type, technology, application, industry vertical, and more. However, the two main areas where AI chips are being used are at the edge (such as the chips that power your phone and smartwatch) and in data centers (for deep learning inference and training).
No matter the application, however, all AI chips can be defined as integrated circuits (ICs) that have been engineered to run machine learning workloads and may consist of FPGAs, GPUs, or custom-built ASIC AI accelerators. They work very much like how our human brains operate and process decisions and tasks in our complicated and fast-moving world. The true differentiator between a traditional chip and an AI chip is how much and what type of data it can process and how many calculations it can do at the same time. At the same time, new software AI algorithmic breakthroughs are driving new AI chip architectures to enable efficient deep learning computation.
Read on to learn more about the unique demands of AI, the many benefits of an AI chip architecture, and finally the applications and future of the AI chip architecture.
The Distinct Requirements of AI Chips
The AI workload is so strenuous and demanding that the industry couldn’t efficiently and cost-effectively design AI chips before the 2010s due to the compute power it required—orders of magnitude more than traditional workloads. AI requires massive parallelism of multiply-accumulate functions such as dot product functions. Traditional GPUs were able to do parallelism in a similar way for graphics, so they were re-used for AI applications.
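As a rough illustration (not from the article), the dot product at the heart of these workloads is just a chain of multiply-accumulate operations; an accelerator's advantage comes from executing thousands of these MACs in parallel rather than one at a time as in this sequential sketch:

```python
# A dot product is a chain of multiply-accumulate (MAC) operations --
# the primitive that AI accelerators replicate in parallel arrays.

def dot(a, b):
    acc = 0
    for x, w in zip(a, b):
        acc += x * w        # one MAC: multiply, then accumulate
    return acc

# A dense layer is a matrix-vector product: one dot product per output.
def dense(weights, activations):
    return [dot(row, activations) for row in weights]

W = [[1, 2, 3],
     [4, 5, 6]]          # 2 outputs x 3 inputs => 6 MACs total
x = [1, 0, -1]

print(dense(W, x))       # -> [-2, -2]
```

A GPU or AI accelerator evaluates many such rows (and many such layers' worth of MACs) simultaneously, which is exactly the reuse of graphics-style parallelism the article describes.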
The optimization we’ve seen in the last decade is drastic. AI requires a chip architecture with the right processors, arrays of memories, robust security, and reliable real-time data connectivity between sensors. Ultimately, the best AI chip architecture is the one that condenses the most compute elements and memory into a single chip. Today, we’re moving into multiple chip systems for AI as well since we are reaching the limits of what we can do on one chip.
Chip designers need to take into account parameters called weights and activations as they design for the maximum size of the activation value. Looking ahead, being able to take into account both software and hardware design for AI is extremely important in order to optimize AI chip architecture for greater efficiency.
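To make the weights-and-activations point concrete, here is a hypothetical back-of-the-envelope helper — the function name and the int8 figures are illustrative assumptions, not from the article — for sizing a MAC accumulator so the largest possible accumulated value still fits:

```python
import math

# Why "maximum size of the activation value" matters: an int8 x int8
# multiply produces up to a 16-bit product, and accumulating K of them
# needs extra headroom bits, or the accumulator can overflow.

def accumulator_bits(weight_bits, act_bits, k_terms):
    """Worst-case bits needed to sum k_terms products without overflow."""
    product_bits = weight_bits + act_bits            # e.g. 8 + 8 = 16
    return product_bits + math.ceil(math.log2(k_terms))

# A 1024-input int8 dot product needs roughly a 26-bit accumulator.
print(accumulator_bits(8, 8, 1024))   # -> 26
```

Undersize this and results silently wrap; oversize it and every MAC unit on the chip pays the area and power cost, which is why designers budget for the maximum activation values the model can actually produce.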
The Benefits of AI Chip Architecture
There’s no doubt that we are in the renaissance of AI. Now that we are overcoming the obstacles of designing chips that can handle the AI workload, there are many innovative companies that are experts in the field and designing better AI chips to do things that would have seemed very much out of reach a decade ago.
As you move down process nodes, AI chip designs can result in 15 to 20% less clocking speed and 15 to 30% more density, which allows designers to fit more compute elements on a chip. They also increase memory components that allow AI technology to be trained in minutes vs. hours, which translates into substantial savings. This is especially true when companies are renting space from an online data center to design AI chips, but even those using in-house resources can benefit by conducting trial and error much more effectively.
We are now at the point where AI itself is being used to design new AI chip architectures and calculate new optimization paths to optimize power, performance, and area (PPA) based on big data from many different industries and applications.
AI Chip Architecture Applications and the Future Ahead
AI is all around us quite literally. AI processors are being put into almost every type of chip, from the smallest IoT chips to the largest servers, data centers, and graphic accelerators. The industries that require higher performance will of course utilize AI chip architecture more, but as AI chips become cheaper to produce, we will begin to see AI chip architecture in places like IoT to optimize power and other types of optimizations that we may not even know are possible yet.
It’s an exciting time for AI chip architecture. Synopsys predicts that we’ll continue to see next-generation process nodes adopted aggressively because of the performance needs. Additionally, there’s already much exploration around different types of memory as well as different types of processor technologies and the software components that go along with each of these.
In terms of memory, chip designers are beginning to put memory right next to or even within the actual computing elements of the hardware to make processing time much faster. Additionally, software is driving the hardware, meaning that software AI models such as new neural networks are requiring new AI chip architectures. Proven, real-time interfaces deliver the data connectivity required with high speed and low latency, while security protects the overall systems and their data.
Finally, we’ll see photonics and multi-die systems come more into play for new AI chip architectures to overcome some of the AI chip bottlenecks. Photonics provides a much more power-efficient way to do computing and multi-die systems (which involve the heterogeneous integration of dies, often with memory stacked directly on top of compute boards) can also improve performance as the possible connection speed between different processing elements and between processing and memory units increases.
One thing is for sure: Innovations in AI chip architecture will continue to abound, and Synopsys will have a front-row seat and a hand in them as we help our customers design next-generation AI chips in an array of industries.
AI Chip Architecture Explained | Hardware, Processors & Memory | Synopsys Blog
Explore AI chip architecture and learn how AI's requirements and applications shape AI-optimized hardware design across processors, memory chips, and more. (blogs.synopsys.com)
Learning 🏖
Sorry, I forgot to mention that just ONE tiny little percent of that market would be US$2.636 billion.
Hi @Learning
I read this at the optometrist waiting for my six-monthly check-up. When I was called in, Harold said, "You seem unusually chirpy today."
I am pretty sure the first line had something to do with it:
According to Allied Market Research, the global artificial intelligence (AI) chip market is projected to reach $263.6 billion by 2031.
My opinion only DYOR
FF
AKIDA BALLISTA
The way I see it is: if all these super-qualified technicians are being laid off but Brainchip is hiring said legends, then something good must be going on at BrainChip, as opposed to the tech companies who are shedding workers. My logic and opinion only. Cheers. Onward.
That is a very easy question to answer, because it is relevant to Brainchip, whereas random opinions and swearing in abbreviated form are not.
To express an opinion it helps to have done your own research. Someone who has done their own research or simply read all the research done by others and posted here would understand all the dots that come together by having knowledge of this Brainchip employee.
As you don’t understand you probably need to DYOR or go back and read the research generously shared here by those that do.
My opinion only DYOR
FF
AKIDA BALLISTA
Good Afternoon Chippers,
Not much to report my end...
Presently having fun watching the below company.
Relating to a completely different company, industry...
LTR : Liontown Resources Ltd
Their Board of directors rejected a buyout offer...
One can only wonder what will unfold for us , BRN , once serious deals are signed , announced on ASX & royalty streams start to flow.
Regards,
Esq.
Funny story @Esq.111 ........................ NOT
Neuromorphic appears to be the elephant in the room in this article.
Not related to Brainchip, but a good read regarding AI chips from Synopsys.
Afternoon Steve10,
What do we all think current FV is for BRN?
I would say at least USD $1B MC & up to USD $2B MC = AUD $1.5-3B MC or 85c to $1.70 SP.
I recently bought a small percentage into Liontown, though now wish I bought more. I feel your pain: I rode PLS from $0.50 down to $0.20 and sold at $1.20; it recently went over $5.
Funny story @Esq.111 ........................ NOT
I bought one million shares of LTR @ $0.03, so $30,000.00, back in the day when they first came across their spodumene deposit.
Thinking I was clever, I sold one million shares in LTR @ $0.08, so made $50,000.00 profit .............. woohoo
The rest, as they say, is history.
Agreed, ............. what will unfold for us? ....................... I sure as hell ain't selling out @ $2.50
AKIDA BALLISTA
Not sure how to value Brainchip now because, as the cortical column question pointed out, there is much too much I do not know about just in the technology space, which includes the filed patents we have absolutely no knowledge of, other than that there are around 25 lodged and awaiting determination. I know our engineer @Diogenese is hanging out excitedly for at least one of these, as it will be revolutionary. So, leaving aside everything else, it would be impossible for us on the outside to value Brainchip's intellectual property at this point in time.
What do we all think current FV is for BRN?
I would say at least USD $1B MC & up to USD $2B MC = AUD $1.5-3B MC or 85c to $1.70 SP.
There's no way PVDM will let his first-born go for <US$20bn. He doesn't need the money; this is his life's work, his higher purpose, and anyone attempting a buyout would have to court him very well to gain his vote, and the 1000s that would likely follow his lead. So let's start there.
Afternoon Steve10,
I'll have a stab...
Bargain price....
AU$4.7325 = AU$8,828,150,300.00 Market Cap.
Converted to US currency...
US$3.1653874 = US$5,907,930,659.00 Market Cap.
* Total Shares, Options etc as of 2nd Feb 2023 is
1,856,430,614 .
* Present AU$ to US$ = 0.669215. As of 28/3/2022.
At the above prices, one would have to think that the world's top ten companies' CEOs & boards would be displaying gross negligence to their shareholders by not attempting a cheeky buyout offer for Brainchip.
Regards,
Esq.