BRN Discussion Ongoing

Dhm

Regular
It is interesting how a group of investors called shorts are not referred to as FCBs (Future Confirmed Buyers).

Shorts do not own shares in a company after they take their position. A Short, or FCB, after taking a position has a future obligation to buy shares (in this case, Brainchip shares) to return them to the lender. They cannot avoid this obligation, and in most cases they have given security to the lender to guarantee they have the capacity to buy back.

So when a company has FCBs, shareholders know there are guaranteed buyers for their shares if they need to sell.

Yet shareholders worry about FCBs.

The FACT that FCBs exist gives rise to panic buying when an event of unexpected origin incites new retail shareholders' interest.

My opinion only DYOR
FF

AKIDA BALLISTA

PS: Obviously ‘the more to come’ over the next couple of weeks has been missed by most.
One thing I like about a 'short' position is that when the time is appropriate, i.e. a bullish situation, the shorter may well not only buy back the short position but also go long as well. So shorters could well buy back and buy in: 2 shares for the price of one.
 
  • Like
  • Love
  • Fire
Reactions: 10 users

HopalongPetrovski

I'm Spartacus!
Hi Hoppy,

I'm as much in the dark as everyone else.

One application for E (up to 4 nodes) would be on-sensor SoCs, but I would think that, for Prophesee, we'd need at least the S (up to 8 nodes) or possibly the P (up to 128 nodes), although the full P would be for some really heavy lifting, possibly running several NN model libraries in parallel.

The SiFive thing is, in my opinion, very significant as we now have direct compatibility with their 8-bit X280 Intelligence processor, which, as Fmf has intimated, may be moon-bound, as well as saving Intel's bacon.

SiFive have said Akida 2E is compatible with their Efficiency MCU and P & S are compatible with X280 Intelligence so clearly they have had hands-on experience with the Akida Gen 2 simulation software (or possibly a deep cover FPGA?).

Semantic segmentation is moving well beyond wake-word recognition into the realm of speech recognition/NLP.

View attachment 31683

Similarly we are looking at object tracking in silicon, rather than relaying that task to CPU software. So tumbleweeds are in our sights, as are Colt 38 bullets.

View attachment 31684

We are importing into silicon some further time- and power-consuming tasks beyond classification/inference/ML which were previously performed by the CPU.

Hey D.
Thank you very much for your reply.
My problem I guess is that whilst I think I know what these words mean in general terms, I'm probably wildly wrong in how they are applied and just what they make possible in the world of accelerators, CPUs, GPUs and microelectronics.

I understand that in Akida first gen., due to its neuromorphic architecture, we have a better way of performing tasks that gives us much greater efficiency and hence uses less power, and that it has a type or aspect of intelligence that can somehow extrapolate similarities between objects it has previously "learned" and can, without requiring extensive retraining, "learn" and add new objects to its dataset.

Now I just need to get a layman's understanding of what the new attributes embodied by the second generation potentially allow that were not available previously. I think that it may be that whilst gen 1 could handle still images and changes/similarities therein, gen 2 can do something similar with video. In that it now has somewhat more memory and so can comprehend a sequence and somehow intuit or learn/understand a process?
The vision transformer ability seems intriguing given the recent popularity of ChatGPT, but I am still trying to wrap my head around just what this newfound capacity makes possible. Thank you for making much of this somewhat more accessible for those of us without technical expertise. You are very much appreciated around here. 😀
 
  • Like
  • Love
  • Fire
Reactions: 20 users

One IP Platform, Multiple Configurable Products​




The 2nd generation IP platform will support a very wide range of market verticals and will be delivered in three classes of product.​


Akida-E: Extremely energy-efficient, for always-on operation very close to, or at, sensors.
Akida-S: Integration into MCUs or other general-purpose platforms that are used in a broad variety of sensor-related applications.
Akida-P: Mid-range to higher-end configurations with optional vision transformers for groundbreaking yet efficient performance.

View attachment 31682
I'm sticking with my belief that I've broken Peter's secret naming code
If it walks like a duck and quacks like a duck it's a 🦆
🦆🦆🦆🦆🦆
 
  • Haha
  • Love
  • Like
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!
I'm sticking with my belief that I've broken Peter's secret naming code
If it walks like a duck and quacks like a duck it's a 🦆
🦆🦆🦆🦆🦆
Psychic duck

Featured snippet from the web​


Psyduck resembles a yellow duck with a vacant stare. It has a small tuft of black hair at the top of its head. It walks on its hind legs, and has arms rather than wings. Its arms are useful in using its powerful psychic abilities. Its appearance is meant to trick enemies into thinking it is weak.

 
  • Haha
  • Like
  • Love
Reactions: 10 users

Mccabe84

Regular
It has every relevance to us:
- a tech stock in the same industry;
- a similar target audience;
- a pre-revenue tech stock on the ASX;
- facing the same “economic headwinds”;
- a similar market cap;
- a less developed company…

Need I go on?
Would it be fair to say that because WBT have fewer shares on offer, there would be less manipulation?
 
  • Like
  • Fire
Reactions: 8 users

buena suerte :-)

BOB Bank of Brainchip
As usual Hailo also does not provide actual power consumption numbers.

When I first dug into Hailo I could find nothing, so I asked Peter van der Made, and he just dismissed them, saying something about how much power they used compared to AKIDA. I was not completely satisfied and kept digging, and on a Hailo customer's site, on their specifications page, they had the Hailo spec doc. It's a while back, but what I think I remember was 3 to 7 watts for Hailo 8.

My opinion only DYOR
FF

AKIDA BALLISTA
You are right again FF,

I like the way you started that response FF "As usual" :ROFLMAO::ROFLMAO::love::ROFLMAO::ROFLMAO::p
 
  • Like
  • Love
  • Haha
Reactions: 7 users

Labsy

Regular
Prediction:
Intel licence Brainchip IP and roll out to market a Loihi 3 chip on RISC-V architecture, utilising Akida IP. Built in their own foundry.
We will never know, but it will be evident in ridiculous, explosive revenue..
 
  • Like
  • Fire
  • Haha
Reactions: 37 users

Townyj

Ermahgerd
Prediction:
Intel licence Brainchip IP and roll out to market a Loihi 3 chip on RISC-V architecture, utilising Akida IP. Built in their own foundry.
We will never know, but it will be evident in ridiculous, explosive revenue..
Happy The Fresh Prince Of Bel Air GIF


The Dream!
 
  • Haha
  • Like
  • Fire
Reactions: 16 users
As some seem intent on finding comparisons on the ASX to rate Brainchip's performance against, when there are absolutely none, I will remind readers both here and in the background of what Steven Leibson of Tirias Research said in his Forbes Magazine article dated 6 March 2023:

"Brainchip’s bio-inspired Akida platform is certainly an unusual way to tackle AI/ML applications. While most other NPU vendors are figuring out how many MACs they can fit – and power – on the head of a pin, Brainchip is taking an alternative approach that’s been proven by Mother Nature to work over many tens of millions of years.

In Tirias Research’s opinion, it’s not the path taken to the result that’s important, it’s the result that counts. If Brainchip’s Akida event-based platform succeeds, it won’t be the first time that a radical new silicon technology has swept the field. Consider DRAMs (dynamic random access memories), microprocessors, microcontrollers, and FPGAs (field programmable gate arrays), for example. When those devices first appeared, there were many who expressed doubts. No longer. It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations. Time will tell."


Edge Impulse described AKIDA technology as science fiction. A data scientist and PhD candidate described the first-generation AKIDA 1000 as a "beast".

Against this background the WANCAs have continued to ask "what's a neuromorphic chip anyway?", well, at least those who can say 'neuromorphic'.

"It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations." Mark these words as they are only directed at AKIDA 2000 not the overall mission which Brainchip has embarked upon of creating Artificial General Intelligence.

Brainchip, Peter van der Made and Anil Mankar are creating an entirely new computing paradigm well beyond anything that exists.

They are on track with the timetable set out by Peter van der Made and with each step are opening up engineering possibilities that have only existed in unfulfilled patents and dreams.

The first and second generation AKIDA technology will create industries that do not yet exist.

Peter van der Made has stated recently that his vision of Artificial General Intelligence will be fulfilled in about 7 years which puts it at 2030.

The enormity of what Artificial General Intelligence means is found in Bill Gates's quote that the person who invents Artificial General Intelligence will have created a company worth ten times Microsoft. I am not saying that his valuation was or is correct; what I am pointing you to is the significance he attaches to what Brainchip and Peter van der Made are pursuing.

The late Stephen Hawking and the still living Elon Musk have both stated that Artificial General Intelligence could lead to the destruction of mankind. There are of course many others who are saying and who have said the same thing.


Now, in the overly optimistic hope that the above will be sufficient to shut down the mindless comparisons: can anyone here point to a technology company on the ASX that could, just by its mere existence, lead to the destruction of mankind in seven years' time?

Of course not. This is why Brainchip is not understood by normal retail investors.

Why it is not understood by WANCA commentators who cannot even say 'neuromorphic'.

Why it is extremely obvious that anyone who persists down this road of comparison is either ignorant of just what Brainchip is doing or has some other motive, innocent or otherwise.

My opinion only DYOR
FF

AKIDA BALLISTA




 
  • Like
  • Love
  • Fire
Reactions: 85 users

buena suerte :-)

BOB Bank of Brainchip
As some seem intent on finding comparisons on the ASX to rate Brainchip's performance against, when there are absolutely none, I will remind readers both here and in the background of what Steven Leibson of Tirias Research said in his Forbes Magazine article dated 6 March 2023:

"Brainchip’s bio-inspired Akida platform is certainly an unusual way to tackle AI/ML applications. While most other NPU vendors are figuring out how many MACs they can fit – and power – on the head of a pin, Brainchip is taking an alternative approach that’s been proven by Mother Nature to work over many tens of millions of years.

In Tirias Research’s opinion, it’s not the path taken to the result that’s important, it’s the result that counts. If Brainchip’s Akida event-based platform succeeds, it won’t be the first time that a radical new silicon technology has swept the field. Consider DRAMs (dynamic random access memories), microprocessors, microcontrollers, and FPGAs (field programmable gate arrays), for example. When those devices first appeared, there were many who expressed doubts. No longer. It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations. Time will tell."


Edge Impulse described AKIDA technology as science fiction. A data scientist and PhD candidate described the first-generation AKIDA 1000 as a "beast".

Against this background the WANCAs have continued to ask "what's a neuromorphic chip anyway?", well, at least those who can say 'neuromorphic'.

"It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations." Mark these words as they are only directed at AKIDA 2000 not the overall mission which Brainchip has embarked upon of creating Artificial General Intelligence.

Brainchip, Peter van der Made and Anil Mankar are creating an entirely new computing paradigm well beyond anything that exists.

They are on track with the timetable set out by Peter van der Made and with each step are opening up engineering possibilities that have only existed in unfulfilled patents and dreams.

The first and second generation AKIDA technology will create industries that do not yet exist.

Peter van der Made has stated recently that his vision of Artificial General Intelligence will be fulfilled in about 7 years which puts it at 2030.

The enormity of what Artificial General Intelligence means is found in Bill Gates's quote that the person who invents Artificial General Intelligence will have created a company worth ten times Microsoft. I am not saying that his valuation was or is correct; what I am pointing you to is the significance he attaches to what Brainchip and Peter van der Made are pursuing.

The late Stephen Hawking and the still living Elon Musk have both stated that Artificial General Intelligence could lead to the destruction of mankind. There are of course many others who are saying and who have said the same thing.


Now, in the overly optimistic hope that the above will be sufficient to shut down the mindless comparisons: can anyone here point to a technology company on the ASX that could, just by its mere existence, lead to the destruction of mankind in seven years' time?

Of course not. This is why Brainchip is not understood by normal retail investors.

Why it is not understood by WANCA commentators who cannot even say 'neuromorphic'.

Why it is extremely obvious that anyone who persists down this road of comparison is either ignorant of just what Brainchip is doing or has some other motive, innocent or otherwise.

My opinion only DYOR
FF

AKIDA BALLISTA




Seth Meyers Clapping GIF by Late Night with Seth Meyers
 
  • Like
  • Love
  • Haha
Reactions: 17 users

Diogenese

Top 20
Hey D.
Thank you very much for your reply.
My problem I guess is that whilst I think I know what these words mean in general terms, I'm probably wildly wrong in how they are applied and just what they make possible in the world of accelerators, CPUs, GPUs and microelectronics.

I understand that in Akida first gen., due to its neuromorphic architecture, we have a better way of performing tasks that gives us much greater efficiency and hence uses less power, and that it has a type or aspect of intelligence that can somehow extrapolate similarities between objects it has previously "learned" and can, without requiring extensive retraining, "learn" and add new objects to its dataset.

Now I just need to get a layman's understanding of what the new attributes embodied by the second generation potentially allow that were not available previously. I think that it may be that whilst gen 1 could handle still images and changes/similarities therein, gen 2 can do something similar with video. In that it now has somewhat more memory and so can comprehend a sequence and somehow intuit or learn/understand a process?
The vision transformer ability seems intriguing given the recent popularity of ChatGPT, but I am still trying to wrap my head around just what this newfound capacity makes possible. Thank you for making much of this somewhat more accessible for those of us without technical expertise. You are very much appreciated around here. 😀
Hi Hoppy,

AKD1 can identify objects in individual frames of video with their position.

It passes this information to the CPU which can then track the movement of the object.

AKD2 has additional storage to store a sequence of frames with the tracked images and locations so it can track movement.

Similarly with words/sentences/paragraphs.

Doing it in silicon is much faster and less energy intensive.

Until I see the Gen 2 patent, I'm guessing how it does this, but I have a vague idea which I won't disclose here because:
A. It's probably wrong;
B. It may be patentable.
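To make the division of labour concrete (a detector finds objects per frame; something downstream links them across frames to form tracks), here is a toy nearest-neighbour tracker of my own. It is purely illustrative, not BrainChip code and not the actual AKD2 mechanism, which is undisclosed:

```python
# Toy illustration: a detector emits per-frame object positions; a tracker
# links detections across frames by nearest neighbour. This is the kind of
# per-frame bookkeeping a CPU does in an AKD1-style pipeline, and which
# on-chip sequence storage could absorb. Greedy and single-pass: a real
# tracker would also handle occlusion, track death, and shared matches.

def link_tracks(frames, max_dist=5.0):
    """frames: list of lists of (x, y) detections, one list per video frame.
    Returns a dict: track_id -> list of (frame_index, (x, y))."""
    tracks = {}    # track_id -> list of (frame_index, position)
    last_pos = {}  # track_id -> most recent position of that track
    next_id = 0
    for t, detections in enumerate(frames):
        for (x, y) in detections:
            # Find the nearest live track within max_dist.
            best, best_d = None, max_dist
            for tid, (px, py) in last_pos.items():
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = tid, d
            if best is None:  # no track close enough: start a new one
                best = next_id
                next_id += 1
                tracks[best] = []
            tracks[best].append((t, (x, y)))
            last_pos[best] = (x, y)
    return tracks
```

The point of moving a loop like this "into silicon" is that the distance matching and track bookkeeping no longer round-trip through the CPU on every frame.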
 
  • Like
  • Haha
  • Fire
Reactions: 34 users

ndefries

Regular
My understanding of AGI is that the technology is able to interact with an environment, encounter obstacles and problems, and solve them while potentially storing the skill set. I assume this means literally any problem. We know Akida can sense and now make sense of a sequence of events that are stored and actioned on the chip. Its ability to seek answers and then carry out tasks and keep that loop going would make me think OpenAI and Microsoft and robotics companies would be great partners. If this thing were low-power and solar-powered it would be an ever-living beast. Science fiction almost.
 
  • Like
  • Love
Reactions: 11 users

rgupta

Regular
It is interesting how a group of investors called shorts are not referred to as FCBs (Future Confirmed Buyers).

Shorts do not own shares in a company after they take their position. A Short, or FCB, after taking a position has a future obligation to buy shares (in this case, Brainchip shares) to return them to the lender. They cannot avoid this obligation, and in most cases they have given security to the lender to guarantee they have the capacity to buy back.

So when a company has FCBs, shareholders know there are guaranteed buyers for their shares if they need to sell.

Yet shareholders worry about FCBs.

The FACT that FCBs exist gives rise to panic buying when an event of unexpected origin incites new retail shareholders' interest.

My opinion only DYOR
FF

AKIDA BALLISTA

PS: Obviously ‘the more to come’ over the next couple of weeks has been missed by most.
I am 100% there with you, and that is why I assume the shorts have given us an opportunity to top up.
There is a chance investors may not be patient, sell low, and give shorters the benefit of covering cheaply.
But on the other side, good news can roast the shorters in the oven. And looking at BRN history, it will take a lot of money to buy back those 130 million shorts quickly.
 
  • Like
  • Love
  • Fire
Reactions: 9 users

TopCat

Regular

Language generators such as ChatGPT are gaining attention for their ability to reshape how we use search engines and change the way we interact with artificial intelligence. But these algorithms are both computationally expensive to run and depend on maintenance from just a few companies to avoid outages.

But Eshraghian is powering a language model with an alternative algorithm called a spiking neural network (SNN). He and two students have recently released the open-source code for the largest language-generating SNN ever, named SpikeGPT, which uses 22 times less energy than a similar model using typical deep learning. Using SNNs for language generation can have huge implications for accessibility, data security, and green computing and energy efficiency within this field.
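For anyone wondering what a "spiking" neuron actually computes, here is a minimal leaky integrate-and-fire (LIF) model in plain Python. This is my own toy sketch, not SpikeGPT's code; the claimed energy saving comes from sparsity, since downstream work only happens on the time steps that emit a spike:

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the basic building block
# of spiking neural networks. The membrane potential leaks each step,
# integrates the input current, and emits a binary spike when it crosses
# the threshold. Illustrative only; real SNNs (e.g. SpikeGPT) stack many
# such units and train them with surrogate gradients.

def lif_run(inputs, beta=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.
    beta: membrane leak factor per step; threshold: firing level.
    Returns a list of output spikes (0 or 1) per time step."""
    mem = 0.0
    spikes = []
    for current in inputs:
        mem = beta * mem + current  # leaky integration of input
        if mem >= threshold:        # fire, then subtract the threshold
            spikes.append(1)
            mem -= threshold
        else:
            spikes.append(0)
    return spikes
```

A weak constant input (e.g. `lif_run([0.3] * 10)`) produces only occasional spikes, so most time steps cost a downstream system nothing, which is the intuition behind the energy figures quoted above.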
 
  • Like
  • Fire
  • Wow
Reactions: 20 users

Language generators such as ChatGPT are gaining attention for their ability to reshape how we use search engines and change the way we interact with artificial intelligence. But these algorithms are both computationally expensive to run and depend on maintenance from just a few companies to avoid outages.

But Eshraghian is powering a language model with an alternative algorithm called a spiking neural network (SNN). He and two students have recently released the open-source code for the largest language-generating SNN ever, named SpikeGPT, which uses 22 times less energy than a similar model using typical deep learning. Using SNNs for language generation can have huge implications for accessibility, data security, and green computing and energy efficiency within this field.
We need to get them a development board.
 
  • Like
  • Fire
  • Haha
Reactions: 13 users

HopalongPetrovski

I'm Spartacus!
Hi Hoppy,

AKD1 can identify objects in individual frames of video with their position.

It passes this information to the CPU which can then track the movement of the object.

AKD2 has additional storage to store a sequence of frames with the tracked images and locations so it can track movement.

Similarly with words/sentences/paragraphs.

Doing it in silicon is much faster and less energy intensive.

Until I see the Gen 2 patent, I'm guessing how it does this, but I have a vague idea which I won't disclose here because:
A. It's probably wrong;
B. It may be patentable.
Thank you again.
Hope it's B. You manage to patent it and make a billion dollars. 🤣

 
  • Like
  • Fire
  • Haha
Reactions: 7 users

TopCat

Regular
  • Like
  • Love
Reactions: 6 users

Language generators such as ChatGPT are gaining attention for their ability to reshape how we use search engines and change the way we interact with artificial intelligence. But these algorithms are both computationally expensive to run and depend on maintenance from just a few companies to avoid outages.

But Eshraghian is powering a language model with an alternative algorithm called a spiking neural network (SNN). He and two students have recently released the open-source code for the largest language-generating SNN ever, named SpikeGPT, which uses 22 times less energy than a similar model using typical deep learning. Using SNNs for language generation can have huge implications for accessibility, data security, and green computing and energy efficiency within this field.

Seems to be an increasing consensus.


BLOG POST​

TECH & SOURCING @ MORGAN LEWIS

TECHNOLOGY, OUTSOURCING, AND COMMERCIAL TRANSACTIONS
NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

Neuromorphic Computing: A More Efficient Technology for Activities That Require a Human Touch

March 01, 2023

ChatGPT and subsequent artificial intelligence (AI) programs have been in the headlines recently. Not as common is the discussion of the cost associated with developing and operating such AI tools or if such AI is right for every job.

It is estimated that ChatGPT can cost millions of dollars per day to operate. Given the potentially large price tag, consumers may ask how users can harness the benefit of AI without the high operating cost and what the best technology is in applications where precise decision-making and outcomes are desired. Some believe that the answer to both of these questions is neuromorphic computing.

What Is Neuromorphic Computing?


Neuromorphic computing is designed to mimic the human brain, operating in a manner that allows the technology to solve problems in ways our brains would.

The chips that power neuromorphic computing are specially designed with the same structure as our brain's neurons and synapses, allowing them to make decisions and judgments in a way that typical computer systems or algorithms cannot.

Neuromorphic computing is intended to be more efficient, powerful, and cost-effective than other AI technologies.

Although still in development and not widely deployed, it is being evaluated in various settings, including cloud computing, robotics, and autonomous vehicle technology.

The End of the Algorithm in AI?


Rather than processing all the data to follow an algorithm to an answer, the goal of neuromorphic computing is to decipher the necessary information to determine the correct solution. Leveraging this would allow companies and consumers to implement technology into everyday life wherever a human touch is required—rather than utilizing answers based solely on an algorithm.

AI is effective at providing large amounts of computing power, responding to queries that may take a human or even a standard computer a long time to answer.

Neuromorphic computing, on the other hand, takes a more active approach, giving the correct response or action to a scenario.

Key Takeaway


As technology and society integrate on a deeper level, there will be an increased demand on our computers and technology to interact with us as a human would with speech, movement, and reason. Neuromorphic computing’s deployment is no easy feat, and we will be on the lookout for how companies bring humanity into future computers and technologies.
 
  • Like
  • Love
  • Fire
Reactions: 23 users
Worrying about the SP every day must be truly exhausting for those who find it hard to handle the psychology of the 'red' days of recent times. It's only painful if you NEED to sell now or don't believe in Brainchip anymore. These SP fluctuations are surely going to seem trivial when Brainchip start closing deals (no matter whether these deals are announced or we see it via revenue in quarterlies). Getting excited for Brainchip's promising future seems a much wiser use of time, given the announcement on 6th March.

The company recently released a fantastic announcement that has strengthened and increased the lead of Brainchip's position in neuromorphic edge AI. Some of our partners are quoted gushing about us. Brainchip has implemented new features in its second-generation Akida platform that customers asked for to meet their needs. Game-changing new features. I don't know about you, but no major company I have ever worked for spends massive $$$ to improve functionality on a whim, without being confident that said changes will secure deals.

Lead adopters are engaging with the second-gen Akida NOW. We will be in a Renesas chip this year. How anyone could be anything except excited is beyond me.

DYOR, all in my excited opinion.
 
  • Like
  • Love
  • Fire
Reactions: 60 users
And for holders like me who have to think about how a lot of things connect, this article was right up my alley.

Especially liked the couple points I highlighted near the end.



The Different Types of AI Accelerators

The Different Types of AI Accelerators​

Posted on February 15, 2023 by Kristoffer Bonheur

AI accelerators are specialized hardware accelerators and coprocessors that process artificial intelligence algorithms or AI-related tasks. The expansion of AI applications and use cases has made these hardware components advantageous in modern consumer electronic devices such as computers, smartphones, and other smart devices. These hardware components are also found in autonomous or self-driving vehicles, robotics systems, and automated machines used in different industrial applications.

Remember that AI accelerators are coprocessors. Their purpose is to offload AI-related workloads from the central processor to improve the efficiency of the overall system. Examples of these workloads include machine learning, deep learning, image processing, and natural language processing, among others. Their purpose and advantages also rest on the purpose and advantages of hardware accelerators. Of course, to understand their purpose better, it is important to understand the different types of AI accelerators.

A GUIDE TO THE DIFFERENT TYPES OF AI ACCELERATORS

1. Graphics Processing Unit
A graphics accelerator or a graphics processing unit or GPU is a hardware accelerator or specialized coprocessor designed to handle image rendering. These components are essential in modern computer systems with a graphical user interface.

Most personal computers and portable devices such as smartphones and tablets have integrated graphics processors that form part of their respective system-on-chips. Use cases that require intensive image rendering such as in the case of video gaming and high-resolution graphics design and video editing require discrete graphics processors.

Nevertheless, beyond rendering images or processing graphics, GPUs are flexible. Note that these components have been used in blockchain applications that require proof-of-work validation to mine cryptocurrencies or validate blockchain entries.

GPUs were also the first coprocessors to be retrofitted as AI accelerators. They have been used for AI-related processing because the mathematical bases of image manipulation and of artificial neural networks and relevant deep learning models are similar. Data centers used for training AI models with huge datasets use discrete GPUs.

2. Field-Programmable Gate Arrays
A field-programmable gate array or FPGA is an integrated circuit that can be programmed in the field after it has been manufactured. Traditional microprocessors have fixed architectures while FPGAs can be programmed to a particular architecture.

FPGAs can be described as a blank canvas. They can be configured to perform a wide range of digital logic functions. The configuration is generally specified using a hardware description language. They are often used in applications that require high-performance and low-latency processing. These include video processing, simulations, and cryptography.

There have been attempts to configure FPGAs as dedicated AI accelerators. They have been used for handling machine learning and deep learning tasks that require complex mathematical computations. They are also used for implementing neural networks, providing real-time video processing, and can be employed in computer vision applications.

Remember that FPGAs are beneficial for use cases that require low latency and high performance. The search engine Bing uses these chips for its search algorithm while Microsoft uses them in its Project Catapult which is aimed at augmenting cloud computing.

3. Application-Specific Integrated Circuit
Running in contrast with FPGAs are application-specific integrated circuits or ASICs. They are integrated chips designed and deployed for a particular use. Remember that FPGAs are fundamentally blank chips manufactured without a particular use case in mind. ASICs are manufactured for a specific application.

ASICs are essentially custom-built, designed and optimized for a specific task or set of tasks. They can be a standalone hardware component or part of a larger integrated chip, such as in the case of a system-on-a-chip composed of different processors and coprocessors.

Nevertheless, in the field of artificial intelligence, ASICs are useful in AI-related tasks where high computational power is required. Examples include deep learning algorithms such as using large language models or real-time video and image processing such as in the case of computational photography features of smartphones.

There are several examples of ASICs. Google began using its Tensor Processing Unit or TPU in 2015 and made it available to the public in 2018. A TPU is used for neural network machine learning to perform matrix computations.

Another example is the Neural Engine of Apple. It is an AI accelerator built within the A series and M series system-on-chips used in iPhones, iPads, and Mac computers. The AI Engine of Qualcomm is another example that is based on its proprietary Qualcomm Hexagon digital signal processor and a licensed Tensor accelerator.

4. Massively Multicore Scalar Processors
A particular massively multicore scalar processor is essentially a multi-core processor. Modern CPUs and GPUs are multi-core processors. However, based on its name alone, a massively multicore scalar processor has massive amounts of simple processing cores.

Scalar processing is at the heart of this hardware accelerator. A scalar processor processes a single data item at a time.

However, considering its multiple cores, a massively multicore scalar processor can execute more than one data item or multiple instructions in a single clock cycle by distributing them in its redundant cores.

Massively multicore scalar processors are also called superscalar processors. One of the advantages of these hardware components is that they use simple arithmetic units that can be combined in various ways to execute different types of algorithms.

Another advantage of massively multicore scalar processors is that they are highly scalable. This enables them to handle complex computations effectively and efficiently, which also makes them suitable for AI applications where large amounts of data need to be processed as quickly and as power-efficiently as possible.

5. Neuromorphic Hardware Components
The emerging field of neuromorphic computing is dedicated to developing approaches to computing patterned on the structure and function of the human brain or biological neural systems. Neuromorphic hardware is one of its applications.

Neuromorphic hardware is designed to mimic the features of the human brain. For example, considering how neurons and synapses interact in a biological neural system, this hardware has structures and components that allow the simulation of neurological electrical activity. This can also be called a physical artificial neural network.

There are different benefits to using this hardware. It is designed to effectively and efficiently process and handle complex and high-dimensional data. This makes it suitable for use in AI applications such as natural language processing and computer vision.

Using neuromorphic hardware in a computer system would make it an AI accelerator and a coprocessor because it will be dedicated to handling AI-specific applications. This same hardware can also become a next-generation main processor or central processing unit of a computer system based on neuromorphic computing.
 
  • Like
  • Love
  • Fire
Reactions: 33 users