BRN Discussion Ongoing

Someone asked about AKIDA and NASA recently.

SiFive has stated publicly they will be using AKIDA with the X280 Intelligence Series.

NASA is working with Brainchip on in-space autonomy and navigation, so two plus two:

“NASA HPSC processor

One especially notable implementation of RVV is NASA’s next generation High-Performance Spaceflight Computing (HPSC) processor.

HPSC will utilize SiFive Intelligence X280 RISC-V vector cores (which support RVV extensions), as well as additional SiFive RISC-V cores, to deliver 100× the computational capability of today’s space computers. The RVV extensions allow the X280 to support extremely high-throughput, single-thread performance, while also managing significant power constraints. NASA’s HPSC will be used for future Mars surface missions and human lunar missions, along with applications including industrial automation and edge computing for other government agencies”

https://www.design-reuse.com/redir3/34712/352793/8IDCcehr87FC7QuZSKvfeNO7vLEwt

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 58 users

TheFunkMachine

seeds have the potential to become trees.
Well if you have a need to have the last word by ignoring propositions put to you, that's fine by me.
Sneaky last word on a “have to have the last word” comment. Classic
 
  • Haha
  • Like
  • Fire
Reactions: 12 users
A clear “6 of one/half a dozen of the other”, non-committal article by The Fools, but interestingly without the usual ending of “Your money is better spent in one of these top 5 blah blah blah”.

It is quite ridiculous to try to analyse either Brainchip or Weebit on fundamentals; it is even more ludicrous to attempt to compare how ‘cheap’ their share prices are based purely on fundamentals. Not yet anyway!

The only way to analyse BrainChip and Weebit, in their current incarnation, is on the potential of their technologies. And that potential is huge for both companies.

Weebit does have one advantage over Brainchip (purely on the marketability side of things) in that it doesn’t suffer from WANCAs.

Almost anyone can easily comprehend what Weebit brings to the table. They already use devices in which Weebit’s technology will soon replace the established technology, FLASH.

People don’t seem to care that they don’t know what a FLASH is or how it works—they simply accept what it can do for them. And those same people will accept the advantages brought to the table by a new technology that is cheaper, faster, uses less power, lasts longer, is more resilient to heat, radiation and magnetic fields etc.

Brainchip needs to get Akida to the same level of consumer acceptance, where people stop being concerned about how it works and ONLY appreciate that they can’t live without the advantages that products containing it give them. And for that we need products.

PANTENE for both.

But then, after they both reach their full potential, how do you truly compare the value of two ubiquitous technologies? Like I’ve said before, Weebit and Brainchip will be “Living together in perfect harmony”.
Cheers for the reply Sly. I've kinda given up on trying to value companies. One thing I have noticed is that everything gets way overvalued at some stage. It doesn't matter what we personally think the true value is, because the market will decide that for us. I've missed plenty of huge gains to cash in on because my perceived value of a stock at the time was completely wrong (insert greed for good measure).
I'll have to take a Captain Cook at WBT for a possible small parcel.
@FabricatedLunacy yes as @Slymeat explained.

Once again offering free removals of unwanted couches or washing machines.
Please call: 01234567890
 
  • Haha
  • Like
  • Fire
Reactions: 14 users

Iseki

Regular
  • Like
  • Thinking
  • Haha
Reactions: 3 users

The Pope

Regular
Hi FF and others,

Maybe it’s just me, but at times you appear very responsive to some posters’ comments and on occasion not as quick with a couple of others (e.g. the LinkedIn advertisement for a new role at BRN that was linked to three companies). My understanding is that on the other forum, before many moved over to TSE, you were the unofficial group leader in contacting BRN management for clarification or comment on links to companies that may be connected with Akida. I don’t see this as a negative, as it may streamline replies between BRN management and the TSE forum.
My question is: how close are you to the BRN management team when it comes to receiving clarification or replies to comments from you or other TSE posters in a timely manner?
It appears some key posters on TSE come across as if they are in the BRN management corporate spa to obtain comments; there are others that want to be there, or come across like they are, even though I suggest they aren’t.
While I appreciate all the comments and links by posters to speculative connections provided by all on TSE, I’m concerned about how close some are to BRN management when it comes to knowledge and how it is posted on TSE.
I only wish the best for all, but at times I get concerned about how posters post comments on TSE.
 
  • Like
  • Haha
Reactions: 7 users

Iseki

Regular
Hi FF and others, maybe it’s just me, but ... how close are you to the BRN management team when it comes to receiving clarification or replies to comments from you or other TSE posters in a timely manner? ...
Yeah. It's just you. Everyone else knows how hard it is to do research. Try it, and you'll find out for yourself. Go on. You can do it!
 
  • Haha
  • Like
Reactions: 13 users
Another two plus two:

“While many people have referred to the VISION EQXX as a concept car, those people do not work at Mercedes-Benz. The company insists on calling it a research vehicle, because it concentrates experimental projects from many areas of research into a single, practical EV for the express purpose of testing the results in real-world scenarios. During the VISION EQXX technological briefings on the day of our co-drive, Jasmin Eichler, Director of Future Technologies for Mercedes-Benz AG, said the best of those lessons will be used in all Mercedes series vehicles”

“The brains of the operation


A "pillar to pillar" micro LED display, the largest display ever in a Mercedes, stretches from one end of the VISION EQXX dashboard to the other. According the company, it is even more energy-efficient than an OLED display, in part because when an area of the display is black, it turns off the pixels for that area, saving energy and creating great contrast. This screen is the heart and—if you cross the uncanny valley—soul of the car. That's because the digital assistant that responds to the "hey Mercedes" cue runs off "neuromorphic computing" from California AI company BrainChip. This utilizes a machine-learning neural network that only draws energy in "spikes" when called upon and over time can be 5-10x more efficient than conventional voice control, according to Mercedes-Benz.”

Read More: https://www.slashgear.com/832474/me...ve-record-setting-ev-range/?utm_campaign=clip
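
For anyone wondering what “only draws energy in ‘spikes’ when called upon” actually means, here is a very rough toy sketch of the event-driven idea: work is only done for the time steps where something actually happens, rather than on every sample. This is purely my own illustration, not BrainChip’s or Mercedes-Benz’s implementation, and the numbers are made up.

```python
import numpy as np

# Toy comparison of dense vs event-driven ("spiking") processing.
# Assumption (mine): a wake-word detector sees meaningful activity in only
# ~5% of time steps; an event-driven network does work only on those steps.
rng = np.random.default_rng(1)
events = rng.random(1000) < 0.05          # True where a "spike" occurs

dense_steps = events.size                 # conventional: process every step
event_steps = int(events.sum())           # event-driven: process only spikes

print("dense steps processed:", dense_steps)
print("event steps processed:", event_steps)
print(f"~{dense_steps / max(event_steps, 1):.0f}x fewer operations")
```

On this made-up duty cycle the event-driven path does roughly 20x less work, which is how figures like “5-10x more efficient than conventional voice control” can arise; the real saving depends entirely on how sparse the input activity is.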

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 49 users

Proga

Regular
So is Samsung for its Galaxy S23. We’ve been hearing for a while that Samsung will reportedly use the Snapdragon 8 Gen 2 chipset globally in the Galaxy S23 line, which was already good news for reasons we’ll get into below. But now it looks like the company might go one better, and equip these phones with an exclusive, even more powerful variant of the chipset.

@Diogenese can the Snapdragon 8 Gen 2 SOC use Akida IP 🙏? If so the mind boggles.


 
  • Like
Reactions: 8 users

Iseki

Regular
Soz if that last post came across as attacking @prnewy74. But we have, in Akida, something new and exciting. That alone doesn't guarantee success, though. We have to fit in with ecosystems, the expectations of engineers, and the whims of consumers. We have done wonderfully to date, but the share price is all about how people imagine, rightly or wrongly, the SP will go, i.e. what they think others will think, irrespective of what it can do. Which is nonsense.

So you have to decide: will you join others and research competitors, patents and links to chip makers, or will you crowd around a Ouija board and gasp and holler?

Good chatting.
 
  • Like
  • Haha
  • Love
Reactions: 9 users

Proga

Regular


From the TechRadar article:

Only Akida can do that????

Through what can only be arcane wizardry, Qualcomm is able to scale 32-bit processes down to 4-bit without compromising the quality of the data sets being processed, which the company's Ziad Asghar – VP of Product Management at Qualcomm – told TechRadar, amounts to a 64x power reduction.
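
For anyone curious what “scaling 32-bit processes down to 4-bit” looks like in practice, here is a minimal sketch of symmetric INT4 weight quantisation. This is my own illustration only, not Qualcomm’s (or anyone’s) actual pipeline; real toolchains use calibrated, per-channel schemes.

```python
import numpy as np

def quantize_int4(w):
    """Symmetric per-tensor quantisation of FP32 weights to 4-bit integers."""
    scale = np.max(np.abs(w)) / 7.0                 # INT4 codes span [-8, 7]
    q = np.clip(np.round(w / scale), -8, 7)
    return q.astype(np.int8), scale                 # each code needs only 4 bits

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(6).astype(np.float32)
q, s = quantize_int4(w)
print(w)
print(dequantize(q, s))   # close to the originals at 1/8 the bits of FP32
```

Note the storage saving alone is 8x (32 bits down to 4); the “64x power reduction” is Qualcomm’s own figure and presumably also counts the much cheaper 4-bit integer multiply-accumulates, so treat it as their claim rather than simple arithmetic.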
 
  • Like
  • Thinking
Reactions: 6 users
Hi FF and others, maybe it’s just me, but ... how close are you to the BRN management team when it comes to receiving clarification or replies to comments from you or other TSE posters in a timely manner? ...
I am as close to Brainchip as you or any other shareholder could be if they put in the effort to ask respectful questions and not be abusive when offering criticism.

What is a respectful question? It is one which would not require the person to give an answer they are not entitled to give.

So I never ask questions like “Is Siemens a customer of Brainchip?”, because asking that question insults the honesty and integrity of the Brainchip employee.

I always communicate with the company by email so there is a written record of my communications and the replies I receive.

The knowledge I have is available to every shareholder provided they take the time to do the research.

I know I have an advantage over a lot of shareholders as I have a developed memory for narrative and the ability to read/scan large amounts of material quite quickly.

I cannot speak for the others that you have concerns about, but from my interactions with the company I doubt any other shareholder has knowledge which would amount to insider information.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 50 users
Hi FF and others, maybe it’s just me, but ... how close are you to the BRN management team when it comes to receiving clarification or replies to comments from you or other TSE posters in a timely manner? ...
Mate, if it's a reasonable question I find Tony very quick to reply. He even gave me his work mobile number to call. I may have missed the meaning of your post; if so, I apologize for my misinterpretation.
 
  • Like
  • Love
  • Fire
Reactions: 10 users

Boab

I wish I could paint like Vincent
I am as close to Brainchip as you or any other shareholder could be ... I know I have an advantage over a lot of shareholders as I have a developed memory for narrative and the ability to read/scan large amounts of material quite quickly. ...
Hi FF,
Did you watch the show SUITS? One of the central characters has a photographic memory, and although he never qualified to become a lawyer they employed him anyway. Just wondered if you felt like you had similar abilities? I am envious of and grateful for your recollections.
Keep up the great work.
Cheers
 
  • Like
  • Love
  • Fire
Reactions: 16 users

robsmark

Regular
The World may not have fully woken up but the future of computing is all about the ‘BEAST’.

A quick question. When Edge Impulse was exposing AKIDA to 55,000 or more engineers and developers I had a little trouble comprehending that sort of reach, but when ARM revealed they had over 55 million engineers and developers it threw me for a six, and still does. Is anyone else similarly afflicted?

I mean could you imagine harnessing the brain power of 55 million @Diogenese?

ARM is the bird in the hand but I think Brainchip is going to need a much, much bigger hand.

Remember, according to Edge Impulse, AKIDA is the stuff of SCIENCE FICTION, and Edge Impulse is speaking constantly to ARM customers as its leading engineering firm.

My opinion only DYOR
FF

AKIDA BALLISTA

“ARM is the bird in the hand but I think Brainchip is going to need a much, much bigger hand.”

Perhaps we already have one… I’d love a peek behind those NDAs.
 
  • Like
  • Love
  • Fire
Reactions: 11 users
Hi FF, did you watch the show SUITS? One of the central characters has a photographic memory ... Just wondered if you felt like you had similar abilities?
I don’t consider I have a photographic memory. When I first joined the police you had to give your evidence entirely from memory, so I had to practise from the age of 19, and initially it was a struggle.

When I was prosecuting, because of the volume of work, the more you remembered, from the wording of indictments to the Acts and sections for offences, the more efficiently you could operate.

I liked to watch the witness closely while they gave evidence, which was difficult if not impossible while taking notes, so I practised remembering what they said without taking notes.

Then, when I had to investigate and prosecute police, you needed to build a picture of all the connections from truckloads of documents that you could call on immediately in cross-examination to have any hope of breaking down very experienced police witnesses.

Then in private practice it just continued. The interesting thing is that my son does have a photographic memory, and when he worked for me and I asked if he had an interest in taking over the practice, he said he could never remember what I did and could not see how he would manage.

We now invest together and he calls my research and memory for it his advantage.

Personally I believe it arises from 50 years of practice developing that part of the brain and needing to read masses of documents quickly.

Regards
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 46 users

Diogenese

Top 20


Hi Proga,

Sadly the ogre has retired, but the short answer is "No".

Qualcomm has put a lot of effort into analog SNNs and this article does not suggest they have switched away from their in-house technology:

More simultaneous AI tasks, while using less power​

AI (yes, artificial intelligence) has been the rising star in mobile computing over the last few years and that trend continues upwards with the 8 Gen 2.

Although every major element of the SoC already leverages AI to some degree, this generation's dedicated Hexagon Processor offers a revised toolset (including a Tensor accelerator that's doubled in size) set to deliver some significant gains.

Qualcomm promises a 4.35x speed increase when performing AI-based tasks, thanks – in part – to the fact that the 8 Gen 2 is the first of its kind to leverage something called INT4 (integer 4) precision; allowing for 60% more AI-based tasks to be performed concurrently per watt.

Through what can only be arcane wizardry, Qualcomm is able to scale 32-bit processes down to 4-bit without compromising the quality of the data sets being processed, which the company's Ziad Asghar – VP of Product Management at Qualcomm – told TechRadar, amounts to a 64x power reduction.

Upgraded always-on Sensing Hub​

Modern phones can already help us transpose the analogue world into digital; with features like semantic text recognition and object recognition, but the Sensing Hub inside the 8 Gen 2 has been purpose-built to help with such tasks; boasting two AI processing cores for up to double the AI performance compared to 8 Gen 1, along with 50% more memory than previously.

The Sensing Hub supports an 'always-sensing camera' (a rewording from last-generation's 'always-on camera'), that's great for everything from QR code scanning to face proximity detection, facial recognition and even eye-tracking – all without having to actively open your device's camera app.

Asghar confirmed to TechRadar that multiple OEM partners have been particularly interested in this aspect of the Sensing Hub, suggesting the next wave of phones powered by the 8 Gen 2 may well have the ability to scan and action QR codes and the like without even needing to be woken up or for particular apps to be opened to interact with them.

Despite its always-on nature, Qualcomm also states that the data processed by the Sensing Hub doesn't leave your device.

This patent dates from mid-2017.

US10460817B2 Multiple (multi-) level cell (MLC) non-volatile (NV) memory (NVM) matrix circuits for performing matrix computations with multi-bit input vectors





[0009] In this regard, FIG. 1A illustrates a matrix network circuit 100 as a cross-bar network that includes a way of interconnecting memristors and CMOS circuit neurons for STDP learning.

[0010] Neural networks that employ memristor networks for providing synapses can also be used for other applications that require weighted matrix multiplication computations, such as convolution for example.

Multiple (multi-) level cell (MLC) non-volatile (NV) memory (NVM) matrix circuits for performing matrix computations with multi-bit input vectors are disclosed. An MLC NVM matrix circuit includes a plurality of NVM storage string circuits that each include a plurality of MLC NVM storage circuits each containing a plurality of NVM bit cell circuits each configured to store 1-bit memory state. Thus, each MLC NVM storage circuit stores a multi-bit memory state according to memory states of its respective NVM bit cell circuits. Each NVM bit cell circuit includes a transistor whose gate node is coupled to a word line among a plurality of word lines configured to receive an input vector. Activation of the gate node of a given NVM bit cell circuit in an MLC NVM storage circuit controls whether its resistance is contributed to total resistance of an MLC NVM storage circuit coupled to a respective source line.

[006] … Real synapses, however, exhibit non-linear phenomena like spike timing dependent plasticity (STDP) that modulate the weight of an individual synapse based on the activity of the pre- and post-synaptic neurons. The modulation of synaptic weights through plasticity has been shown to greatly increase the range of computations that neural networks can perform.

[0074] Further, if the word lines WL0–WLm are coupled to a pre-neuron layer and the output nodes 308(0)–308(m) are coupled to a post-neuron layer, the NVM matrix circuit 300 is also configured to train the channel resistance of the NVM bit cell circuits R00–Rmn by supporting backwards propagation of a weight update ...
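
To make the crossbar idea in paragraphs [0009]–[0010] a little more concrete, here is a toy digital simulation of how an array of conductances performs a weighted matrix-vector multiply in one shot. My sketch only, nothing to do with Qualcomm's actual circuit; in the real analog array this happens via Ohm's and Kirchhoff's laws rather than numpy.

```python
import numpy as np

# Conductance G[i, j] at the crossing of word line i and bit line j acts as
# the synaptic weight. Applying input voltages V on the word lines yields a
# current on each bit line equal to the weighted sum of the inputs:
#   I[j] = sum_i G[i, j] * V[i]
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 word lines x 3 bit lines
V = rng.uniform(0.0, 1.0, size=4)        # input vector as word-line voltages

I = G.T @ V                              # bit-line currents = matrix-vector product
print(I)
```

The patent then layers on multi-bit memory cells and, per the quoted paragraphs, weight adjustment via spike-timing-dependent plasticity and backward propagation of weight updates.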
 
  • Like
  • Love
  • Fire
Reactions: 21 users

Proga

Regular
Hi Proga, sadly the ogre has retired, but the short answer is "No". Qualcomm has put a lot of effort into analog SNNs and this article does not suggest they have switched away from their in-house technology ...
Thanks mate
 
  • Like
Reactions: 4 users

Diogenese

Top 20
Hi Proga, sadly the ogre has retired, but the short answer is "No". Qualcomm has put a lot of effort into analog SNNs and this article does not suggest they have switched away from their in-house technology ...
... and now in a reprise of my highly acclaimed character as the group wet blanket:

Asus, Honor, Motorola, OnePlus, Oppo, Sony, Vivo, Xiaomi, ZTE and more have all committed to delivering 8 Gen 2-powered devices in the near future; however, who'll be first remains to be seen.
 
  • Like
  • Haha
  • Wow
Reactions: 13 users

Proga

Regular
So they can't use MetaTF to convert it into digital?

Yeah, I read that bit. What's the significance of Sony being included?
 
  • Like
Reactions: 2 users
🤣 That couch looks like it belongs in a crack house.
Some nostalgia while I'm sitting here eating 3 poorly made pizzas.

Australia's first ATM, later to be converted into pizza ovens.
 

  • Love
  • Haha
  • Like
Reactions: 6 users