BRN Discussion Ongoing

Adam82

Member
So why the hell are people posting stuff about appointments from a year ago, FFS? Seriously.
And that's why it's important to do your own research and have a plan that suits you… 🙄
 
  • Like
Reactions: 17 users

Learning

Learning to the Top 🕵‍♂️
How to Build Open-Source Neuromorphic Hardware and Algorithms
The brain is the perfect place to look for inspiration to develop more efficient neural networks. While the computational cost of deep learning exceeds millions of dollars to train large-scale models, our brains are somehow equipped to process an abundance of signals from our sensory periphery within a power budget of approximately 10-20 watts. The brain’s incredible efficiency can be attributed to how biological neurons encode data in the time domain as spiking action potentials.

This tutorial will take a hands-on approach to learning how to train spiking neural networks (SNNs), and designing neuromorphic accelerators that can process these models. With the advent of open-sourced neuromorphic training libraries and electronic design automation tools, we will conduct hands-on coding sessions to train SNNs, and attendees will subsequently design a lightweight neuromorphic accelerator in the SKY130 process. Participants will be equipped with practical skills that apply principles of neuroscience to deep learning and hardware acceleration in building the next generation of machine intelligence.
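For anyone curious what "encoding data in the time domain as spiking action potentials" actually looks like in code, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python. This is an illustrative sketch only, not taken from the tutorial materials or the snnTorch API; the function name and constants are all assumed.

```python
def lif_step(mem, current, beta=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron.

    mem       -- membrane potential carried over from the previous step
    current   -- input current arriving at this timestep
    beta      -- leak factor (fraction of potential retained per step)
    threshold -- potential at which the neuron emits a spike
    """
    mem = beta * mem + current           # leaky integration
    spike = 1 if mem >= threshold else 0
    if spike:
        mem -= threshold                 # soft reset after firing
    return spike, mem

# Drive the neuron with a constant input and record the resulting spike train:
# information is carried by *when* spikes occur, not by analog magnitudes.
mem, spikes = 0.0, []
for _ in range(10):
    spk, mem = lif_step(mem, 0.4)
    spikes.append(spk)
# spikes == [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

A stronger input current would make the neuron fire more often, which is the basic rate/timing code that SNN training libraries such as snnTorch build on.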

[Photo: Jason Eshraghian]
UC Santa Cruz, USA

[Photo: photo_CF]
Delft University of Technology, Netherlands


Learning🏖
 
  • Like
  • Fire
  • Love
Reactions: 14 users
Info

[Image attachments]
 
  • Like
  • Love
  • Fire
Reactions: 51 users

Boab

I wish I could paint like Vincent
How to Build Open-Source Neuromorphic Hardware and Algorithms
…

Learning🏖
There's a familiar face amongst the speakers
[Photo: Jason Eshraghian]

Biography
Jason K. Eshraghian is an Assistant Professor at the Department of Electrical and Computer Engineering at UC Santa Cruz, CA, USA. Prior to that, he was a Post-Doctoral Researcher at the Department of Electrical Engineering and Computer Science, University of Michigan in Ann Arbor. He received the Bachelor of Engineering (Electrical and Electronic) and the Bachelor of Laws degrees from The University of Western Australia, WA, Australia in 2016, where he also completed his Ph.D. degree.


He is the developer of snnTorch, a high-profile Python library used to train and model brain-inspired spiking neural networks, which has amassed over 60,000 downloads since its release. It has been used at Meta and at the Space Communications and Navigation project arm of NASA, and has been integrated for native acceleration with Graphcore's Intelligence Processing Units.

Professor Eshraghian was awarded the 2019 IEEE VLSI Best Paper Award, the Best Paper Award at the 2019 IEEE Artificial Intelligence CAS Conference, and the Best Live Demonstration Award at 2020 IEEE ICECS for his work on neuromorphic vision and in-memory computing using RRAM. He currently serves as the secretary-elect of the IEEE Neural Systems and Applications Committee, and was a recipient of the Fulbright Future Fellowship (Australian-American Fulbright Commission), the Forrest Research Fellowship (Forrest Research Foundation), and the Endeavour Fellowship (Australian Government).
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Learning

Learning to the Top 🕵‍♂️
Not related to Brainchip, but a good read regarding AI chips from Synopsys.

[Screenshot]

According to Allied Market Research, the global artificial intelligence (AI) chip market is projected to reach $263.6 billion by 2031. The AI chip market is vast and can be segmented in a variety of different ways, including chip type, processing type, technology, application, industry vertical, and more. However, the two main areas where AI chips are being used are at the edge (such as the chips that power your phone and smartwatch) and in data centers (for deep learning inference and training).

No matter the application, however, all AI chips can be defined as integrated circuits (ICs) that have been engineered to run machine learning workloads and may consist of FPGAs, GPUs, or custom-built ASIC AI accelerators. They work very much like how our human brains operate and process decisions and tasks in our complicated and fast-moving world. The true differentiator between a traditional chip and an AI chip is how much and what type of data it can process and how many calculations it can do at the same time. At the same time, new software AI algorithmic breakthroughs are driving new AI chip architectures to enable efficient deep learning computation.

Read on to learn more about the unique demands of AI, the many benefits of an AI chip architecture, and finally the applications and future of the AI chip architecture.

The Distinct Requirements of AI Chips
The AI workload is so strenuous and demanding that the industry couldn’t efficiently and cost-effectively design AI chips before the 2010s due to the compute power it required—orders of magnitude more than traditional workloads. AI requires massive parallelism of multiply-accumulate functions such as dot product functions. Traditional GPUs were able to do parallelism in a similar way for graphics, so they were re-used for AI applications.
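The multiply-accumulate (MAC) operations mentioned above are just paired multiplies summed into an accumulator; a neural-network layer is many independent dot products, which is what GPUs and AI accelerators parallelize. A toy sketch in plain Python, with made-up weights and inputs:

```python
def dot(weights, inputs):
    """Multiply-accumulate (MAC) loop: the core primitive of AI hardware."""
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc += w * x                     # one MAC per weight/input pair
    return acc

# A tiny "layer": each row of weights yields one output neuron, and every
# row's dot product is independent, so hardware can compute them in parallel.
weights = [[0.5, -1.0, 2.0],
           [1.0, 1.0, 1.0]]
inputs = [2.0, 3.0, 0.5]
outputs = [dot(row, inputs) for row in weights]
# outputs == [-1.0, 5.5]
```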

The optimization we’ve seen in the last decade is drastic. AI requires a chip architecture with the right processors, arrays of memories, robust security, and reliable real-time data connectivity between sensors. Ultimately, the best AI chip architecture is the one that condenses the most compute elements and memory into a single chip. Today, we’re moving into multiple chip systems for AI as well since we are reaching the limits of what we can do on one chip.

Chip designers need to take into account parameters called weights and activations as they design for the maximum size of the activation value. Looking ahead, being able to take into account both software and hardware design for AI is extremely important in order to optimize AI chip architecture for greater efficiency.
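As a concrete (and entirely hypothetical) illustration of "designing for the maximum size of the activation value": fixed-point hardware has to choose a scale factor so the largest activation still fits the chosen bit width, e.g. int8. The helper names and numbers below are assumed for illustration.

```python
def int8_scale(activations):
    """Choose a scale that maps the largest |activation| onto the int8 range."""
    max_abs = max(abs(a) for a in activations)
    return max_abs / 127.0               # 127 is the largest positive int8 code

def quantize(activations, scale):
    """Round each activation to its nearest int8 code."""
    return [round(a / scale) for a in activations]

acts = [0.1, -3.2, 6.35, 1.7]
scale = int8_scale(acts)                 # ~0.05: sized by the max activation
codes = quantize(acts, scale)            # [2, -64, 127, 34]
```

If the designer underestimates the maximum activation, values clip; overestimate it, and precision is wasted — which is why the article flags this parameter as a key design input.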

The Benefits of AI Chip Architecture
There’s no doubt that we are in the renaissance of AI. Now that we are overcoming the obstacles of designing chips that can handle the AI workload, there are many innovative companies that are experts in the field and designing better AI chips to do things that would have seemed very much out of reach a decade ago.

As you move down process nodes, AI chip designs can result in 15 to 20% less clocking speed and 15 to 30% more density, which allows designers to fit more compute elements on a chip. They also increase memory components that allow AI technology to be trained in minutes vs. hours, which translates into substantial savings. This is especially true when companies are renting space from an online data center to design AI chips, but even those using in-house resources can benefit by conducting trial and error much more effectively.

We are now at the point where AI itself is being used to design new AI chip architectures and calculate new optimization paths to optimize power, performance, and area (PPA) based on big data from many different industries and applications.

AI Chip Architecture Applications and the Future Ahead​

AI is all around us quite literally. AI processors are being put into almost every type of chip, from the smallest IoT chips to the largest servers, data centers, and graphic accelerators. The industries that require higher performance will of course utilize AI chip architecture more, but as AI chips become cheaper to produce, we will begin to see AI chip architecture in places like IoT to optimize power and other types of optimizations that we may not even know are possible yet.

It’s an exciting time for AI chip architecture. Synopsys predicts that we’ll continue to see next-generation process nodes adopted aggressively because of the performance needs. Additionally, there’s already much exploration around different types of memory as well as different types of processor technologies and the software components that go along with each of these.

In terms of memory, chip designers are beginning to put memory right next to or even within the actual computing elements of the hardware to make processing time much faster. Additionally, software is driving the hardware, meaning that software AI models such as new neural networks are requiring new AI chip architectures. Proven, real-time interfaces deliver the data connectivity required with high speed and low latency, while security protects the overall systems and their data.

Finally, we’ll see photonics and multi-die systems come more into play for new AI chip architectures to overcome some of the AI chip bottlenecks. Photonics provides a much more power-efficient way to do computing and multi-die systems (which involve the heterogeneous integration of dies, often with memory stacked directly on top of compute boards) can also improve performance as the possible connection speed between different processing elements and between processing and memory units increases.

One thing is for sure: Innovations in AI chip architecture will continue to abound, and Synopsys will have a front-row seat and a hand in them as we help our customers design next-generation AI chips in an array of industries.


Learning 🏖
 
  • Like
  • Love
Reactions: 18 users

gex

Regular
[Image attachment]


I might be slow, but I thought the cortical side of the tech was still being evaluated or researched?

 
  • Like
  • Fire
  • Love
Reactions: 25 users

Foxdog

Regular
That is a very easy question to answer because it is relevant to Brainchip whereas random opinions and swearing in abbreviated form are not.

To express an opinion it helps to have done your own research. Someone who has done their own research or simply read all the research done by others and posted here would understand all the dots that come together by having knowledge of this Brainchip employee.

As you don’t understand you probably need to DYOR or go back and read the research generously shared here by those that do.

My opinion only DYOR
FF

AKIDA BALLISTA
Fair enough. Cheers FF
 
  • Like
Reactions: 8 users

HopalongPetrovski

I'm Spartacus!
Fair enough. Cheers FF
Hang in there mate. It can be a tough road at times and people have all sorts of strife occurring from time to time.
We all wish it would happen already/would have already happened, and when pressure is applied it can exacerbate personal situations.
Many fine, clever, deserving people here are rooting for Brainchip, so at least you're in good company.
 
  • Like
  • Love
  • Fire
Reactions: 40 users

Diogenese

Top 20
It's interesting that GM is awarding Valeo for service delivery in the ADAS arena:

[Attachment 33074]


We've seen rumors that Mercedes is planning to switch to Luminar's foveated LiDAR, and, Mercedes' expressed preference for component standardization aside, we do not have any proof that Luminar will use Akida, so this GM award to Valeo for ADAS is very encouraging.

We do know that BrainChip have been working with Valeo in a Joint Development on autonomous vehicles since mid-2020.

https://smallcaps.com.au/brainchip-joint-development-agreement-akida-neuromorphic-chip-valeo/

BrainChip signs joint development agreement for Akida neuromorphic chip with Valeo​

By George Tchetvertakov, June 9, 2020

"Artificial intelligence device company BrainChip Holdings (ASX: BRN) has taken an affirmative step towards integrating its Akida neuromorphic chip into autonomous vehicles after signing a binding joint development agreement with European automotive supplier Valeo Corporation.

The agreement means both companies will collaborate to develop a new wave of tech solutions based on artificial intelligence (AI) and reduced power consumption within the overarching theme of miniaturisation that's taking the tech industry by storm."
...
"This latest agreement between BrainChip and Valeo has been hailed as a validation of the company’s Akida device by a Tier-1 sensor supplier and “considered to be a significant development”, according to BrainChip.

In a statement to the market this morning, BrainChip said Valeo will utilise Akida and collaborate on the development of neural network processing solutions, for integration in autonomous vehicles (AVs).

The terms of the deal stipulate that both companies must reach specific performance milestones, with BrainChip stating it expects to receive payments to cover its expenses, subject to the completion of, as yet, undisclosed milestones."

A JDA is a different beast from a licence agreement. There is no licence fee. Usually the JDA partners split the income in proportion to their contribution to the project, so income is still dependent on the number of units sold, but it has the potential to be significantly greater than a standard royalty fee.
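To make the contrast concrete, here is a toy comparison of the two revenue models, in Python. All figures (unit volumes, prices, rates, margins, shares) are invented for illustration; they are not actual BrainChip or Valeo terms.

```python
def royalty_income(units, unit_price, royalty_rate):
    """Standard licence model: a fixed percentage of every unit sale."""
    return units * unit_price * royalty_rate

def jda_income(units, unit_price, project_margin, contribution_share):
    """Joint development model: no licence fee; project profit is split
    in proportion to each partner's contribution."""
    return units * unit_price * project_margin * contribution_share

units, price = 1_000_000, 20.0
royalty = royalty_income(units, price, 0.03)   # 3% royalty, roughly 600k
jda = jda_income(units, price, 0.25, 0.40)     # 40% of a 25% margin, roughly 2M
```

Both models scale with units sold, but under these assumed numbers the JDA share is several times the plain royalty, which is the "potentially significantly greater" upside described above.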

https://www.valeo.com/en/valeo-scala-lidar/

The Honda Legend, which was the first vehicle in the world to be approved for SAE level 3 automated driving, uses Valeo LiDAR scanners, two frontal cameras and a Valeo data fusion controller. The Mercedes-Benz Class S, the second level 3-certified car, is also equipped with a laser LiDAR technology, Valeo SCALA® Gen2.

Valeo’s third-generation laser LiDAR technology, which is scheduled to hit the market in 2024, will take autonomous driving even further, making it possible to delegate driving to the vehicle in many situations, including at speeds of up to 130 km/h on the highway. Even at high speeds on the highway, autonomous vehicles equipped with this system are able to manage emergency situations autonomously."

https://www.repairerdrivennews.com/2023/03/16/gms-new-adas-boasts-hands-free-technology/

GM’s new ADAS boasts hands-free technology​

By Michelle Thompson on March 16, 2023
Announcements | Market Trends | Technology

General Motors (GM) is rolling out a new advanced driver assistance system (ADAS) that it says will enable hands-free driving 95% of the time.

The automaker shared details about its next-generation system, Ultra Cruise, this week and said it will first be launched on the Cadillac Celestiq, a hand-built electric vehicle expected to begin production in December.

...

The OEM said Ultra Cruise-equipped vehicles will contain more than 20 sensors, with a driver attention system in place to ensure the vehicle’s pilot is alert.

“The destination-to-destination hands-free system will use more than just cameras to ‘see’ the world,” GM said in a press release. “Ultra Cruise uses a blend of cameras, short- and long-range radars, LiDAR behind the windshield, an all-new computing system and a driver attention system to monitor the driver’s head position and/or eyes in relation to the road to help ensure driver attention. These systems work together through ‘sensor fusion’ to provide Ultra Cruise with a confident, 360-degree, three-dimensional representation of the vehicle’s surroundings.”
...
A spokeswoman told Repairer Driven News that Ultra Cruise is a Level 2 system.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 43 users
[Attachment 33090]

I might be slow but i thought the cortical side of the tech is still being evaluated or researched?

Yes, this caught me out at first, then I remembered 'cortical' is to do with the eye. Prophesee states they have taken inspiration from the eye in developing their vision sensor.

So if I read this as AKIDA processing a Prophesee vision sensor better than anyone else in the world ever (my exaggeration), or at least better than anyone Prophesee has trialled or commercially partnered with to date, I think it makes sense.

I feel very confident that if Brainchip were actually beyond the design stage with its cortical column and sending it off to engineering, such an event would at least rate a Tweet; if I were in charge, it would get a specially made ASX announcement illuminated in orange neon.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Esq.111

Fascinatingly Intuitive.
Good Afternoon Chippers,

Not much to report my end...

Presently having fun watching the below company.

Relating to a completely different company, industry...
LTR: Liontown Resources Ltd

Their Board of directors rejected a buyout offer...

One can only wonder what will unfold for us , BRN , once serious deals are signed , announced on ASX & royalty streams start to flow.

Regards,
Esq.
 
  • Like
  • Thinking
  • Fire
Reactions: 33 users
Not related to Brainchip but a good read regarding AI chip from Synopsys.
…

Learning 🏖
Hi @Learning
I read this at the optometrist waiting for my six monthly check up. When I was called in Harold said "You seem unusually chirpy today."🤣😂🤣

I am pretty sure the first line had something to do with it:

According to Allied Market Research, the global artificial intelligence (AI) chip market is projected to reach $263.6 billion by 2031.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 37 users
Hi @Learning
…
AKIDA BALLISTA
Sorry I forgot to mention that just ONE tiny little percent of that market would be $US2.636 billion.

Allowing for Brainchip to squander its 3 to 5 year lead just HALF of ONE percent would be $US1.318 billion.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 39 users

Papacass

Regular
That is a very easy question to answer because it is relevant to Brainchip whereas random opinions and swearing in abbreviated form are not.
…
AKIDA BALLISTA
The way I see it is if all these super qualified technicians are being laid off but Brainchip is hiring said legends then something good must be going on at BrainChip as opposed to tech companies who are shedding workers. My logic and opinion only. Cheers. Onward.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

Steve10

Regular
Good Afternoon Chippers,
…
Regards,
Esq.

What do we all think current FV is for BRN?

I would say at least USD $1B MC & up to USD $2B MC = AUD $1.5-3B MC or 85c to $1.70 SP.
 
  • Like
  • Fire
  • Love
Reactions: 11 users

mrgds

Regular
Good Afternoon Chippers,
…
Regards,
Esq.
Funny story @Esq.111 ........................ NOT :mad:
I bought one million shares of LTR @ $0.03, so $30,000.00, back in the day when they first came across their spodumene deposit.
Thinking I was clever, I sold one million shares in LTR @ $0.08, so made $50,000.00 profit .............. woohoo :mad:
The rest, as they say, is history.

Agreed, ............. what will unfold for us? ....................... I sure as hell ain't selling out @ $2.50

AKIDA BALLISTA
 
  • Like
  • Sad
  • Love
Reactions: 43 users

jtardif999

Regular
Not related to Brainchip but a good read regarding AI chip from Synopsys.
…

Learning 🏖
Neuromorphic appears to be the elephant in the room in this article 😒. Either the mainstream feel threatened by the potential proliferation of neuromorphic architectures and are in denial, or they consider them NOT to be an important development, which is unlikely since many think tanks acknowledge neuromorphic computing as a game changer for AI. Makes me think they have a subtle agenda, but I do like the projected $260 billion market. AIMO.
 
  • Like
Reactions: 7 users

Esq.111

Fascinatingly Intuitive.
What do we all think current FV is for BRN?

I would say at least USD $1B MC & up to USD $2B MC = AUD $1.5-3B MC or 85c to $1.70 SP.
Afternoon Steve10,

I'll have a stab...

Bargain price....

AU$4.7325 = AU$8,828,150,300.00 Market Cap.

Converted to US currency...

US$3.1653874 = US$5,907,930,659.00 Market Cap.


* Total shares, options etc. as of 2nd Feb 2023: 1,856,430,614.

* Present AU$ to US$ = 0.669215, as of 28/3/2023.

At the above prices one would have to think that the world's top ten companies' CEOs and boards would be displaying gross negligence to their shareholders by not attempting a cheeky buyout offer for Brainchip.

Regards,
Esq.
 
  • Like
  • Love
  • Wow
Reactions: 20 users

Lex555

Regular
Funny story @Esq.111 ........................ NOT :mad:
…
AKIDA BALLISTA
I recently bought a small position in Liontown, though now I wish I'd bought more. I feel your pain; I rode PLS from $0.50 down to $0.20, sold at $1.20, and it recently went over $5.

It taught me a good lesson to ride your winners. My plan for BRN is to eventually divest 10% at a number of MC milestones and hold the majority for dividends. I'm not making the same mistake again.
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Slade

Top 20
Another nice day. Feels like things are steady and awaiting the good news that is coming.
 
  • Like
Reactions: 21 users