If you liked it, you will love Peaky Blinders. The last series is still available and a new series is coming shortly.
Absolutely best thing I have seen in years.
My opinion only of course.
FF
Those with wisdom also listen to your recommendations on what television programs to watch. I have just finished watching Tin Star (binge-worthy status) and it was perfect for my sort of entertainment. The title and introduction blurb gave an incorrect impression of how this multi-series played out. Thank you FF
And I'll raise you with an Irish blessing to all of us:
I think I'll start this week with an Irish curse to all the short sellers.
“May you be plagued by a powerful itch and never have the nails to scratch it.”
I like this one for short sellers
I think I'll start this week with an Irish curse to all the short sellers.
“May you be plagued by a powerful itch and never have the nails to scratch it.”
I have always liked "May you be in Heaven a half an hour before the devil knows you're dead!"
I like this one for short sellers
Buinneach dhearg go dtigidh ort
That you may have red diarrhoea
Akida Ballista
I'll give it a crack. Cheers FF. It will have to be better than watching the share price go down.
My favourite movie of all time is based on a true story. It is politically incorrect now, but the underlying message is to trust in those who have gone before you and who know what works, i.e. don't panic and throw away corporate knowledge; have and embrace your plan despite overwhelming odds; respect, trust and discipline are critical ingredients in achieving success as a group of diverse individuals. The movie is Zulu.
Suspect it's not everyone's cup of tea, but the underlying message should be, as this is why the 1,000 Eyes works to everyone's benefit.
My opinion only DYOR
FF
AKIDA BALLISTA
Anyone come across or know much about these guys?
Technology
Neuromorphic Analog Signal Processing offers inference on Tiny ML chips with a brain-like analog architecture for Edge AI low-power nodes (polyn.ai)
TECHNOLOGY
Neuromorphic Analog Signal Processing (NASP)
Ideal for New Generation of Front-end Devices in Real Time Edge AI Applications
Sensor level platform synthesizes a true neuromorphic Tiny AI chip layout from any trained neural network
NEURAL NET DESIGN
Select a trained Neural Net or train the customer’s Neural Net
MATH MODEL SIMULATION
Generated with NASP Compiler: D-MVP – Neural Network Software Simulation
NASP CHIP SYNTHESIS
The Math Model converted into the chip layout ready for production
NASP CHIP PRODUCTION
Semiconductor fabs produce NASP chips with standard equipment and processes
NASP technology is ideal for real time Edge sensor signal processing (ESSP) appliances, providing small size, ultra-low power consumption and low latency.
NASP Convertor works with any standard Neural Net framework like Keras, Tensor Flow or others.
NASP receives any type of signal and processes raw sensor data using neuromorphic AI computations on sensor level without digitalizing analog signals.
1. Direct analog/digital signal input
2. POLYN's neuromorphic architecture processes input signals in a true parallel asynchronous mode, providing unprecedented low latency and low power consumption. Calculations do not require CPU usage or memory access.
3. NASP can use a pre-trained artificial neural network from any major ML framework (such as Tensor Flow, PyTorch, MXNet etc.) for the neuromorphic representation, resulting in exceptional precision and accuracy.
DEVELOPMENT PROCESS
PROTOTYPE NEURAL NET MODEL
NEURAL NET TRAINING AND CONVERSION
NN is provided by the customer or POLYN assists the customer in NN selection
Fully functionable math model of NN
Data collection and training process
The customer accepts the final functionality of the trained NN
SYNTHESIS
NETLIST FOR THE TARGET CAD
Convert the trained NN to Netlist for CAD and generate the neurons library
Build CAD model of NASP block
Verification of the NASP CAD model and the NN model conformity
IMPLEMENTATION
THE CHIP PRODUCTION
Generate the final layout and GDSII format for a target Node and Fab
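To make the "trained NN in, chip layout out" flow described above a little more concrete: POLYN's NASP compiler itself is not public, so the sketch below only covers the front end of such a flow, namely exporting a trained model's topology and weights from a standard framework (Keras is used because their material names it). The tiny model, layer sizes and file name are all hypothetical.

```python
# Hypothetical sketch of the "trained NN" artefact a NASP-style converter would consume.
# Assumes TensorFlow/Keras is installed; the model and file name are invented for illustration.
import json
import tensorflow as tf

# A small dense network standing in for a customer-supplied, already-trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),            # e.g. 16 sensor samples per frame
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
# (Training is skipped here; a real flow would start from a trained model.)

# Dump the topology and the per-layer weight matrices -- the information a
# converter would need to synthesise an equivalent analog network.
export = {
    "topology": json.loads(model.to_json()),
    "weights": [w.tolist() for w in model.get_weights()],
}
with open("trained_nn_export.json", "w") as f:
    json.dump(export, f)

print([len(w) for w in export["weights"]])  # quick sanity check of the exported layers
```

Everything downstream of that export (the conversion to an "equivalent analog network", the netlist and the GDSII layout) is the proprietary part, so nothing in the sketch should be read as how POLYN actually does it.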
Thanks FF
Appears predominantly Russian Exec etc based in Israel as small start-up....hmm
Hi FMF
Something funny about these guys. Some of the links on their website to partners do not work, and as I dig deeper they do not actually seem to have products using their technology, which I think is all software based despite some of the wording used in their marketing. I will keep digging, but at the moment there does seem to be more marketing than substance.
In due course @Diogenese will no doubt do a deep patent dive, but this is one of the other intriguing things: nowhere can I find any mention of patents protecting their technology, not even applications in play, which is what I would normally expect to see from a very early stage company like this.
They have a link to the EU Human Brain Project, but going there reveals little if anything about them, and if they had exclusively licensed their technology from the HBP I would have expected them to mention it somewhere. They only commenced as a startup in 2019, and at the beginning of 2020 they were crowing about having this product, which seems amazingly little time to have beaten everyone in the world to solving all the known issues with analogue???
My opinion only but more research to do so DYOR
FF
AKIDA BALLISTA
I have another for them... Pog Mo Hon
I like this one for short sellers
Buinneach dhearg go dtigidh ort
That you may have red diarrhoea
Akida Ballista
POLYN Technology
Not sure we need to spend much time on this mob. Their alias is Polyn Technology.
All their published patents are for analog. From recollection, I think the EUHBP is also focused on analog.
They have software for designing analog NNs.
WO2021259482A1 ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
1. ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
WO2021259482A1 • 2021-12-30 •
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method incudes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
It appears they have done a Nelson in relation to Akida, referring to TrueNorth and Loihi, but studiously avoiding Akida:
US2021406662A1 ANALOG HARDWARE REALIZATION OF TRAINED NEURAL NETWORKS FOR VOICE CLARITY
2. ANALOG HARDWARE REALIZATION OF TRAINED NEURAL NETWORKS FOR VOICE CLARITY
US2021406662A1 • 2021-12-30 •
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of convolutional neural networks for voice clarity. The method incudes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents one or more connections between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
Complexity of neural networks continues to outpace CPU and GPU computational power as digital microprocessor advances are plateauing. Neuromorphic processors based on spike neural networks, such as Loihi and True North, are limited in their applications. For GPU-like architectures, power and speed of such architectures are limited by data transmission speed. Data transmission can consume up to 80% of chip power, and can significantly impact speed of calculations. Edge applications demand low power consumption, but there are currently no known performant hardware implementations that consume less than 50 milliwatts of power.
Some interesting limitations of memristors:
[0004] Memristor-based architectures that use cross-bar technology remain impractical for manufacturing recurrent and feed-forward neural networks. For example, memristor-based cross-bars have a number of disadvantages, including high latency and leakage of currents during operation, that make them impractical. Also, there are reliability issues in manufacturing memristor-based cross-bars, especially when neural networks have both negative and positive weights. For large neural networks with many neurons, at high dimensions, memristor-based cross-bars cannot be used for simultaneous propagation of different signals, which in turn complicates summation of signals, when neurons are represented by operational amplifiers. Furthermore, memristor-based analog integrated circuits have a number of limitations, such as a small number of resistive states, first cycle problem when forming memristors, complexity with channel formation when training the memristors, unpredictable dependency on dimensions of the memristors, slow operations of memristors, and drift of state of resistance.
A lot of their underlying presumptions are contestable:
[0005] Additionally, the training process required for neural networks presents unique challenges for hardware realization of neural networks. A trained neural network is used for specific inferencing tasks, such as classification. Once a neural network is trained, a hardware equivalent is manufactured. When the neural network is retrained, the hardware manufacturing process is repeated, driving up costs. Although some reconfigurable hardware solutions exist, such hardware cannot be easily mass produced, and cost a lot more (e.g., cost 5 times more) than hardware that is not reconfigurable. Further, edge environments, such as smart-home applications, do not require re-programmability as such. For example, 85% of all applications of neural networks do not require any retraining during operation, so on-chip learning is not that useful. Furthermore, edge applications include noisy environments, that can cause reprogrammable hardware to become unreliable.
Funny they refer to on-chip learning, but don't mention Akida.
It looks like they design a circuit for each specific application (see the toy sketch after the claim below):
1. A method for analog hardware realization of trained convolutional neural networks for voice clarity, comprising:
obtaining a neural network topology and weights of a trained neural network;
transforming the neural network topology into an equivalent analog network of analog components;
computing a weight matrix for the equivalent analog network based on the weights of the trained neural network, wherein each element of the weight matrix represents one or more connections between analog components of the equivalent analog network; and
generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
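Purely to illustrate the flavour of that claim language, and emphatically not POLYN's disclosed method: the toy sketch below takes a trained layer's weight matrix and maps each weight to an analog component value (a conductance feeding an assumed op-amp summing stage), which is roughly what "selecting component values for the analog components" reads as. The mapping rule, the 1 µS full-scale conductance and the sign-splitting convention are all invented for the example.

```python
# Toy reading of the claim wording only: trained weights -> "weight matrix" ->
# component values. All numbers and the mapping rule are invented; this is not
# POLYN's disclosed transformation.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=(4, 3))   # pretend trained layer: 4 inputs -> 3 neurons

G_FULL_SCALE = 1e-6  # assumed conductance (1 µS) representing |weight| == 1.0

def weight_to_component(w: float) -> dict:
    """Map one weight to a conductance, splitting sign into the positive and
    negative input branches of a hypothetical op-amp summing amplifier."""
    g = abs(w) * G_FULL_SCALE
    return {
        "branch": "non-inverting" if w >= 0 else "inverting",
        "conductance_S": g,
        "resistance_ohm": float("inf") if g == 0 else 1.0 / g,
    }

# "Schematic model": one component entry per connection in the weight matrix.
schematic = [
    {"input": int(i), "neuron": int(j), **weight_to_component(float(w))}
    for (i, j), w in np.ndenumerate(weights)
]

for entry in schematic[:3]:   # show a few of the 12 generated entries
    print(entry)
```

The published applications describe transforming the network topology itself as well, so this only mirrors the outer steps of the claim.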
WO2021262023A1 ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
3. ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
WO2021262023A1 • 2021-12-30 •
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method incudes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
US2021406661A1 Analog Hardware Realization of Neural Networks
4. Analog Hardware Realization of Neural Networks
US2021406661A1 • 2021-12-30 •
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method incudes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
###################################################################
Their analog chip claims very low power consumption: 100 µW.
They have a heart rate monitor:
https://polyn.ai/wp-content/uploads/2022/02/NeuroSense-V222.pdf
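For a sense of scale on that 100 µW figure, a quick back-of-envelope, with the coin-cell capacity being a typical assumed value rather than anything from POLYN's material:

```python
# Back-of-envelope only: how long a 100 µW load could run from a CR2032-class cell.
# 225 mAh at 3 V nominal is a typical datasheet figure, assumed here; regulator
# losses and the rest of the system are ignored.
battery_mah = 225
battery_v = 3.0
load_w = 100e-6

energy_wh = battery_mah / 1000 * battery_v   # ~0.675 Wh
hours = energy_wh / load_w                   # ~6,750 hours
print(f"{hours:.0f} hours, roughly {hours / 24 / 30:.0f} months")
```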
Afternoon Diogenese,
If you listened to what they have said about memristors you would lock the doors, pull down the shades and go and take the whole bottle.
G'day Esq
Chippers,
I am sure others have seen it, but if you look down the sell order side, some cheeky bugger has got a sell order for 105,491 units @ $10.99.
Think that is the biggest by volume I have seen.
Could be a fake sell???
Good start.
Esq.