BRN Discussion Ongoing

The other day someone suggested Brainchip only had one product.

I think this is an inappropriate way to look at AKIDA technology.

AKIDA technology is, at its heart, a neural processing engine (NPE). AKD1000/AKIDA 1.0, whatever name you want to give it, is made up of 80 of these Neural Processing Units (NPUs).

AKD1000 is designed to allow 64 AKD1000s to be ganged together, and this would then be a semiconductor system of 5,120 NPUs.

Brainchip has said they could, if needed, gang 1,024 AKD1000 chips together, making a total of 81,920 NPUs.

Renesas licensed just two nodes, or eight NPEs.

Anil Mankar stated at the 2021 AI Field Day that you could do keyword spotting with just one node if you did not have latency concerns, but that 1.5 to 2 nodes would likely be desirable, and he referenced Renesas.

MegaChips licensed an 80 NPU AKD1000 IP.

Edge Impulse is comparing AKD1000 with a GPU.

So my point is that Brainchip does not have just one semiconductor. On its IP shelf it has 81,920 variants of that semiconductor IP, which can be sold for anything from an ultra-low-powered single-purpose MCU right through to a computing task requiring the computing power of 81,920 NPUs, or 1,024 AKD1000 chips.

1,024 AKD1000 chips would give you the computing power of 1,024 GPUs for a total cost of $25,600 (1,024 × $25.00).
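The ganging arithmetic above can be sanity-checked in a few lines. This is a quick sketch only; the 80-NPUs-per-chip figure and the $25 unit price are taken from this post, not from official BrainChip specifications:

```python
# Quick check of the ganging arithmetic quoted above.
NPUS_PER_AKD1000 = 80     # NPUs in one AKD1000, per the post
UNIT_PRICE_USD = 25.00    # assumed per-chip price, per the post

def total_npus(chips: int) -> int:
    """NPUs available when `chips` AKD1000 devices are ganged together."""
    return chips * NPUS_PER_AKD1000

def total_cost(chips: int) -> float:
    """Total outlay at the assumed unit price."""
    return chips * UNIT_PRICE_USD

print(total_npus(64))     # 5120  (the 64-chip configuration)
print(total_npus(1024))   # 81920 (the claimed 1,024-chip maximum)
print(total_cost(1024))   # 25600.0
```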

As Peter van der Made said in his Pitt Street Research interview in 2021 ‘we are only just getting started.’

Something further occurred to me which if you doubt my proposition may convince you.

If you look up Intel’s Loihi 1 & 2 you will see the actual manifestation of what I am alluding to: they start with two Loihi chips to make their base USB product, then gang them together in different numbers to create further Loihi variants with different Hawaiian names, right up to their attempt at a full-blown neuromorphic server, and they claim each one as a separate, distinct product.

My opinion only DYOR
FF

AKIDA BALLISTA

I have been sort of discussing nodes, NPUs and NPEs in a private chat with @Diogenese. Well, mainly I have been listening, and as a result I have amended the above post to reflect a better understanding of nodes, NPEs and NPUs.

Well at least I think I have until @Diogenese checks my work.

The numbers do not change, but using "node" where I should say "NPU" or "NPE" is important.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Easytiger

Regular
For anyone worried about Brainchip's share price consider the following headline:

The Daily Digest: "Elon Musk becomes world's first person to lose $200 billion"


And that is US dollars by the way.

My opinion only DYOR
FF

AKIDA BALLISTA
A small company at the beginning of its journey holding its share price well, in the same climate in which a large, well-established company cannot hold its share price, is a good point I think.

Brainchip are managing their company better than Elon Musk is managing his, in my opinion.
EM just needs to stop talking silly; BRN needs to start communicating smart (see Imugene’s ASX announcement today).
 

mrgds

Regular
My friend asked me the other day how he could make a million dollars in the stock market. I said easy, start with $2 million.
Nice, so ur 50% up, .................... but really the same would apply with any amount.
Your friend is blessed to have $2mil to invest.

yes, i did get what you were trying to say, .......................... funny, haha, ........................ just needed to be worded better

AKIDA BALLISTA
 
Not sure I see your point FF. A multibillionaire losing billions and still having multiple billions in net worth, has nothing to do with a company 2 years into commercialisation, with a SP that is struggling to gain traction. It certainly doesn’t put my mind at ease anyway.

Many here have significant portions of their net worth invested in Brainchip, and although the potential seems fantastic, the SP clearly isn’t reflecting this. The past year has been an economic disaster on a global scale, and I know for a fact that many here expected more, judging by the thread about SP expectations at the end of 2022.

Revenue, some more commercialisation announcements, some actual ASX development announcements, and an increase in SP to reflect this are what will put my mind at ease.

That being said, there’s no reward without risk and I think that most understand the potential, so this may be a good opportunity to accumulate.
Well I cannot help the fact that you do not get the point.

I would have thought it stuck out like the proverbial.

So, just for you, what is the point?

These are unusual times.

Even billionaires are making record losses.

From which you should be able to extrapolate that looking at the share price is not the way to value any company at this point in time.

If you are going to stay in the market (and some investors are not, and are sitting on cash), you cannot be here fussing about the share price. You need to understand the intrinsic value of the company, and that is not going to be found by looking on Commsec or some other share price platform.

In my opinion, if you are fussing about the share price in the present market you should, like others, be in cash, and come back when it’s all nice and safe and profits are guaranteed.

What’s that you say, Blind Freddie? Profits are never guaranteed in the markets.

Well when would they come back to the markets?

Don’t walk away I asked a question.

Don’t tell me to do my own research; that’s not an answer. Or is it?

My opinion only DYOR
FF

AKIDA BALLISTA
 

stuart888

Regular
This CNET presenter does a nice, entertaining tour of the LG CES highlights. She goes through all the various display locations.

Seems like the AI smarts went into the Washer/Dryer, rather than the Refrigerator. Nicely done product overview of all sorts of new gadgets.


 

stuart888

Regular
John Deere overview, with plenty of AI smarts!

 

Pappagallo

Regular
Well I cannot help the fact that you do not get the point. […]

I’d go even further and say that the share price is NEVER the way to value a company.

It’s all sentiment. When the sentiment inevitably flips, the share price will still not reflect the value of the company, but in this hopefully not-too-distant future no one will be complaining about it, haha.
 

Cardpro

Regular
There isn’t much in it, to be honest. Tesla is down about 72% from last year’s ATH, whereas we are down about 69%. The difference is that Tesla was already commercialised.

I’m a fan, I’m continuing to buy, and I have hopes; let’s just not sugar-coat it.
Imo risk of investing in BrainChip has dropped significantly especially at this price point.

Although we don't have massive revenues or known commercial contracts with multiple tech giants, we have established ourselves in, and successfully joined, multiple key ecosystems, and we hold multiple registered and protected IPs.

IMO the above is already worth more than $1 billion USD...

I think we are truly ready to hit the sky, and the sky won’t be our limit :) (hi NASA & SiFive)

I really hope we hear back on the relationship with Valeo soon... I will be going crazy 🤪
 

Wags

Regular
I hope this doesn't jeopardize his use of AKIDA 😉
Gidday Fox,
Just between SpaceX and Tesla, Using Akida could be one thing that saves him.
 
Just going to keep this short and sour. Thanks for the concern for my situation yesterday. My Father is ok.
Had very very low blood sugar almost basically lifeless half off the bed.
And some braindead stupid fucking moron, idiot, dumb fucking imbecile
Decided to give him a shot of insulin.
Which made him start to gurgle and other fucked up movements.
He's 86
 

Murphy

Life is not a dress rehearsal!
Just going to keep this short and sour. […]
Glad he got sorted and is doing better mate. Does make you realize what a tenuous grip on life we all have. Family and health really are everything in my oversimplified world.


If you don't have dreams, you can't have dreams come true!
 

robsmark

Regular
Just going to keep this short and sour. […]
Glad to hear he’s doing better Rise. Not the start of the year you or he needed.
 

Steve10

Regular
AMD’s new Ryzen 7040 series has a dedicated on-chip AI engine, similar to Apple’s Neural Engine, with similar battery life and on-device AI inference.

  • PUBLISHED ON JANUARY 5, 2023
  • IN OPINIONS

AMD’s Big Bet: Will Edge AI Finally Become a Reality?

AMD just released their M2 competitor. Will it catapult edge AI into the mainstream?

Apple’s M1 chip made waves in the tech world when it was released in 2020, mainly because it was a discrete chip developed from the ground up by Apple. The chip’s Neural Engine was of particular note, being the first dedicated AI inference chip in a consumer product. The company also built up a strong developer infrastructure around the chip in the past two years, and has now scripted a success story that other companies wish to replicate.

At CES 2023, AMD released a new series of laptop CPUs in direct competition to Apple’s M1 chips. The Ryzen 7000 series for laptops is a part of AMD’s push to challenge Intel’s, and now Apple’s, dominance in the laptop CPU market. However, one set of chips in the new lineup stands out: the Ryzen 7040 series. The lineup comprises the Ryzen 5 7640HS, the Ryzen 7 7840HS, and the Ryzen 9 7940HS.

This new set of laptop CPUs has a dedicated on-chip AI engine, similar to Apple’s Neural Engine, and they are the first x86 processors to build out this capability. This killer feature of the 7040 series will allow it to compete directly with Apple’s M2 lineup, as it claims to offer similar battery life numbers and can enable on-device AI inference.

This might be the beginning of a new trend in consumer hardware, where hardware manufacturers are including on-device inference capabilities to better handle new AI workloads. After seeing the precedent set by Apple’s Neural Engine, Microsoft has also seen the light when it comes to integrating AI functions into their operating system. Panos Panay, EVP and Chief Product Officer At Microsoft, said this at AMD’s CES presentation,

“AI is going to reinvent how you do everything on Windows, quite literally. These large generative models, think language models, cogent models, image models, these models are so powerful, so delightful, so useful, personal, but they’re also very compute intensive. So, we haven’t been able to do this before.”
Let’s delve deeper into the industry trend of bringing AI to the edge using dedicated AI hardware on consumer devices.

AMD’s Big Bet

To understand why AMD’s AI chip is such a big deal, we must first understand the challenges associated with creating a dedicated AI inference engine on a CPU. While Apple took the relatively easier path of licensing an SoC design from ARM, AMD has built their AI engine into an x86 processor, marking the first time a chipmaker has done this.

ARM and x86 refer to the instruction sets and architectures used in modern CPUs. Most mobile chips and the Apple M1 chip use ARM architecture, while AMD and Intel CPUs use x86, previously x64, for their CPUs. NVIDIA uses their proprietary Turing architecture for their GPUs.
The in-house design for AMD’s AI engine is based on an architecture that AMD has termed ‘XDNA adaptive AI architecture’, previously seen in their Alveo AI accelerator cards. The chipmaker claims that their new chips outperform the Apple M2 by up to 20% while being up to 50% more energy efficient. The engine is based on a field programmable gate array; a kind of processor that can be reconfigured to the silicon level even after the manufacturing process. Reportedly, the new chips will be used for use-cases such as noise reduction in video conferencing, predictive UI, and preserving the security of the device.

This AI push comes after AMD acquired Xilinx for $49 billion early last year. Xilinx is a chipmaker that offers FPGAs, adaptive systems-on-chip (SoCs), and AI inference engines. When looking at the 7040 series of chips, it is clear that AMD has integrated Xilinx’s decades of hardware know-how into their newest chips, with a lot more to come down the line.

AMD showed off new AI-powered features integrated into Microsoft Teams, such as auto-framing and eye gaze correction. The Microsoft team also showed off the noise reduction feature in Teams in a separate keynote. While all of these AI-based features already exist in the market with solutions like NVIDIA Broadcast and Krisp AI, the new set of CPUs can do it with 0% load on the CPU or GPU due to their dedicated inference hardware.

AMD also claims that the inference chip will result in smarter battery consumption, resulting in a longer overall battery life for the device. Apple’s M series users have been enjoying these optimisations, unbeknownst to them, for the past two years. However, now that other companies are catching up, Apple cannot afford to fall behind in the wave of AI at the edge.

AI at the edge

While edge computing has been a dream for many tech giants for years now, the rise of dedicated inference hardware on every device might actually make it a reality. Apple’s Neural Engine showed that it was possible to do meaningful amounts of AI inference on-device without needing to send data back to the cloud.

Edge AI allows low-powered devices like laptops and phones to process data in real-time, providing a seamless user experience powered by AI. Apple’s move sparked a response from competitors like AMD and Intel, which released dedicated AI accelerators aimed at speeding up training and inferencing tasks on the cloud, and now, at the edge.

However, AMD’s new set of chips fights Apple on their own terms, providing capable AI inferencing services on low-powered devices. As per AMD’s claims, their offering is also superior to the current generation of Apple’s chips, both in terms of power and efficiency. Panay summed up the significance of these chips, stating,

“We are now. . . at an inflection point and this is where computing from the cloud to the edge is becoming more and more intelligent, more personal, and it’s all done by harnessing the power of AI.”

The trend towards offering powerful inferencing capabilities at the edge is very important for the future of AI. With AI becoming more pervasive in everyday tasks such as image processing, UI optimisation, smart recommendations, and more, edge AI can offer a real advantage for users. Moreover, AI at the edge also offers better data privacy and security for the end user; a paradigm which is ripe for change considering regulations for data protection.

 

Boab

I wish I could paint like Vincent
AMD new Ryzen 7040 series has dedicated on chip AI engine similar to Apple's neural engine with similar battery life & on device AI inference. […]

Thanks for sharing
Looks like they've all woken up and we have the remedy.💪💪
 
Interesting point @Boab sir. This is likely to be an extra feature that a user would pay for.

It's really for off-the-grid text connectivity, but it does need low-power AI smarts to find and point to the correct satellite (shown in the video).

Being able to send a text message for help when lost miles down a trail sounds great. This will hurt Garmin, as boaters use this now, and when it is part of Android it will be a big plus too. Akida could probably help any pattern-recognition use case. Akida 1500 and 2000 are well down the path towards a likely 2023 announcement!🗣️🗣️



https://www.engadget.com/qualcomm-snapdragon-satellite-messaging-android-211037007.html

This is where NASA thinks AKIDA can help by creating cognitive communications by injecting intelligence into every router on every satellite. @Fullmoonfever posted an ISL NASA Phase 2 covering this precise use of AKIDA technology only this morning. AND YES AKIDA IS MENTIONED AND @Fullmoonfever even highlighted it in Orange to make it easier to trip over.

And as I rabbited on about yesterday AKIDA providing intelligent routing in 4G, 5G, 6G wireless etc is a future multi billion dollar market ripe for AKIDA technology plucking.

But don’t believe me read the source documents.

My opinion only DYOR
FF

AKIDA BALLISTA

PS: The reason I am so confident about this application is that, before AKIDA existed, engineers knew that intelligent routing would work, but you needed the processing power of a GPU. In some situations, even a $30,000 GPU.

This created both a cost and power use overhead that was impractical.

Why? Because ideally you would have literally billions of these intelligent routers across a national network.

AKIDA SOLVES BOTH COST AND POWER.
 

Boab

I wish I could paint like Vincent
This is where NASA thinks AKIDA can help by creating cognitive communications by injecting intelligence into every router on every satellite. […]
It really is the stuff of Science Fiction.
Your posts have been especially informative today.
Cheers
 

Diogenese

Top 20
AMD new Ryzen 7040 series has dedicated on chip AI engine similar to Apple's neural engine with similar battery life & on device AI inference. […]


AMD have their own analog NN:

"The in-house design for AMD’s AI engine is based on an architecture that AMD has termed ‘XDNA adaptive AI architecture’, previously seen in their Alveo AI accelerator cards."

US2020151572A1 Using Multiple Functional Blocks for Training Neural Networks

Priority: 20181114


A system is described that performs training operations for a neural network, the system including an analog circuit element functional block with an array of analog circuit elements, and a controller. The controller monitors error values computed using an output from each of one or more initial iterations of a neural network training operation, the one or more initial iterations being performed using neural network data acquired from the memory. When one or more error values are less than a threshold, the controller uses the neural network data from the memory to configure the analog circuit element functional block to perform remaining iterations of the neural network training operation. The controller then causes the analog circuit element functional block to perform the remaining iterations.
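The control flow the abstract describes (run the initial training iterations digitally, then hand off to the analog array once the error falls below a threshold) can be sketched roughly as follows. This is purely illustrative; the function and object names are hypothetical stand-ins, not anything from AMD's patent or products:

```python
# Illustrative sketch of the claimed control flow, not AMD's implementation.
# `digital_step` runs one training iteration and returns its error value;
# `analog_block` stands in for the analog circuit element functional block.
def train(network, data, threshold, max_iters, digital_step, analog_block):
    for i in range(max_iters):
        error = digital_step(network, data)    # one initial digital iteration
        if error < threshold:
            analog_block.configure(network)    # load network data into the array
            # the analog block performs the remaining iterations
            return analog_block.run_remaining(data, max_iters - i - 1)
    return network                             # error never dipped below threshold
```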
 

Vladsblood

Regular
We can and should also look at the profit margin on OUR Company’s IP product. It’s 98 percent!! How many other companies can say that about their product/s?? Vlad.
 
I was just going to turn in, as I’d had a hectic day, and almost did not have a final look, and am I glad that I did.

Great find generously shared FMF.

My opinion only DYOR
FF

AKIDA BALLISTA
As some will recall, I complimented FMF’s post last night and, in my tired state, relied heavily on FMF’s generous highlighting of the important parts, and did not fully register the full significance to be drawn from the document, in particular from the following extract:

“a second sensor intended for the immersive experience for players in augmented reality” - CEO Prophesee

Consider MegaChips and their major long standing customer Nintendo.

It has been often discussed here where could Nintendo use AKIDA technology.

Well, it seems that, combined with an event-based sensor, here is one use: to enhance the immersive experience in augmented reality.

There is also the statement, still floating in the air, by former CEO Mr Dinardo that smart controllers were an AKIDA technology use case, and Nintendo certainly makes and sells controllers.

Then, for the immersive gaming experience, eye tracking courtesy of Nviso and AKIDA at 1,000 fps is a feature that this industry, according to some of my reading, has an interest in adding to its technology arsenal. Look, blink, fire, perhaps?

My opinion only DYOR
FF

AKIDA BALLISTA
 