BRN Discussion Ongoing

VictorG

Member
I think I'll start this week with an Irish curse to all the short sellers.

“May you be plagued by a powerful itch and never have the nails to scratch it.”
 
  • Like
  • Haha
Reactions: 14 users
If you liked it you will love Peaky Blinders. The last series is still available, and a new series is coming shortly.
Absolutely best thing I have seen in years.
My opinion only of course.
FF
Those also with wisdom listen to your recommendations on what television programs to watch. I have just finished watching Tin Star (binge-worthy status) and it was perfect for my sort of entertainment. The title and introduction blurb gave an incorrect impression of how this multi-series played out. Thank you FF
My favourite movie of all time is based on a true story. It is politically incorrect now, but the underlying message is to trust in those who have gone before you and who know what works, ie don’t panic and throw away corporate knowledge; have and embrace your plan despite overwhelming odds; and remember that respect, trust and discipline are critical ingredients in achieving success as a group of diverse individuals. The movie is Zulu.

Suspect it is not everyone’s cup of tea, but the underlying message should be, as this is why the 1,000 Eyes works to everyone’s benefit.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 21 users

Dhm

Regular
I think I'll start this week with an Irish curse to all the short sellers.

“May you be plagued by a powerful itch and never have the nails to scratch it.”
And I'll raise you with an Irish blessing to all of us:

"May the tree from which they make your coffin not yet be planted."
 
  • Like
  • Love
  • Haha
Reactions: 19 users

AusEire

Founding Member.
I think I'll start this week with an Irish curse to all the short sellers.

“May you be plagued by a powerful itch and never have the nails to scratch it.”
I like this one for short sellers

Buinneach dhearg go dtigidh ort

That you may have red diarrhoea

Akida Ballista
 
  • Like
  • Haha
  • Love
Reactions: 9 users

Murphy

Life is not a dress rehearsal!
I like this one for short sellers

Buinneach dhearg go dtigidh ort

That you may have red diarrhoea

Akida Ballista
I have always liked "May you be in Heaven a half an hour before the devil knows you're dead!" :)

If you don't have dreams, you can't have dreams come true!
 
  • Love
  • Like
Reactions: 5 users
My favourite movie of all time is based on a true story. It is politically incorrect now, but the underlying message is to trust in those who have gone before you and who know what works, ie don’t panic and throw away corporate knowledge; have and embrace your plan despite overwhelming odds; and remember that respect, trust and discipline are critical ingredients in achieving success as a group of diverse individuals. The movie is Zulu.

Suspect it is not everyone’s cup of tea, but the underlying message should be, as this is why the 1,000 Eyes works to everyone’s benefit.

My opinion only DYOR
FF

AKIDA BALLISTA
I’ll give it a crack. Cheers FF. It will have to be better than watching the share price go down 😊
 
  • Love
  • Like
Reactions: 2 users
Anyone come across or know much about these guys?








TECHNOLOGY
Neuromorphic Analog Signal Processing (NASP)
Ideal for New Generation of Front-end Devices in Real Time Edge AI Applications
Sensor level platform synthesizes a true neuromorphic Tiny AI chip layout from any trained neural network

NEURAL NET DESIGN
Select a trained Neural Net or train the customer’s Neural Net


MATH MODEL SIMULATION
Generated with the NASP Compiler: D-MVP – Neural Network Software Simulation


NASP CHIP SYNTHESIS
The Math Model converted into the chip layout ready for production


NASP CHIP PRODUCTION
Semiconductor fabs produce NASP chips with standard equipment and processes


NASP technology is ideal for real-time edge sensor signal processing (ESSP) appliances, providing small size, ultra-low power consumption and low latency.

The NASP Converter works with any standard neural net framework such as Keras, TensorFlow or others.

NASP receives any type of signal and processes raw sensor data using neuromorphic AI computations at the sensor level, without digitizing the analog signals.

1. Direct analog/digital signal input.

2. POLYN’s neuromorphic architecture processes input signals in a truly parallel, asynchronous mode, providing unprecedented low latency and low power consumption. Calculations do not require CPU usage or memory access.

3. NASP can use a pre-trained artificial neural network from any major ML framework (such as TensorFlow, PyTorch, MXNet, etc.) for the neuromorphic representation, resulting in exceptional precision and accuracy.

DEVELOPMENT PROCESS

1. PROTOTYPE – NEURAL NET MODEL (neural net training and conversion)
  • NN is provided by the customer, or POLYN assists the customer in NN selection
  • Fully functional math model of the NN
  • Data collection and training process
  • The customer accepts the final functionality of the trained NN

2. SYNTHESIS – NETLIST FOR THE TARGET CAD
  • Convert the trained NN to a netlist for CAD and generate the neurons library
  • Build the CAD model of the NASP block
  • Verification of the NASP CAD model and the NN model conformity

3. IMPLEMENTATION – THE CHIP PRODUCTION
  • Generate the final layout and GDSII format for the target node and fab
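
For readers wondering what "synthesizes a chip layout from any trained neural network" looks like in practice, the sketch below shows only the front half of such a flow: pulling the topology and trained weights out of a framework model and handing them to a synthesis stage. The `synthesize_layout` stub and all names here are placeholders of mine for illustration; this is not POLYN's NASP compiler.

```python
import tensorflow as tf

# Any trained Keras model would do; this tiny untrained one merely stands in
# for "the customer's neural net" to show what gets extracted.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

def extract_topology_and_weights(m):
    """Steps 1-2 of the advertised flow: obtain the NN topology and weights."""
    specs = []
    for layer in m.layers:
        kernel, bias = layer.get_weights()
        specs.append({
            "name": layer.name,
            "units": kernel.shape[1],
            "activation": layer.get_config()["activation"],
            "kernel": kernel,
            "bias": bias,
        })
    return specs

def synthesize_layout(specs):
    """Placeholder for the NASP synthesis step (math model -> chip layout).
    Here it only reports what a real compiler would have to turn into
    analog neurons, a netlist and, ultimately, GDSII."""
    for s in specs:
        print(f"{s['name']}: {s['kernel'].shape} weight matrix -> "
              f"{s['units']} analog neurons, activation={s['activation']}")

synthesize_layout(extract_topology_and_weights(model))
```

Everything downstream of this, netlist generation, verification and layout, is where the real engineering effort sits.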
 
  • Like
Reactions: 1 users
Also anything on this Oz co?




Neuromorphic Technology – Is It Better Than AI Today?


March 18, 2022
As disruptive innovation continues, automation technology seems to be becoming more certain over time, in contrast to our now uncertain times – especially in neuromorphic computing.
The Australian Department of Defense has recently introduced the MANTIS (Mutual-Axis Neuromorphic Twin Imaging System) prototype, a new sensor that was developed by the Jericho Smart Sensing Lab (JSSL) at the University of Sydney in only three months.
Under a partnership between the Air Force and Defense Science and Technology Group, this prototype has shown the potential to measure speed and predict the trajectory and velocity of incredibly fast-moving objects.
This exploration has led to plans for future iterations in which MANTIS could be combined with a robotic eye, working around traditional camera frame limitations to provide wide-area surveillance capacity, passively watching airspace for air vehicles.

What To Expect From The Next World-Beating Supercomputer?

With neuromorphic technologies, computers will be expected to solve problems faster while using less energy than our normal computers.
According to TechRadar, researchers at Sandia National Laboratories have demonstrated that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by AI.
“The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us,” said Brian Frankee, a Sandia engineer and author of the new paper.
Sandia received Loihi research chips totalling roughly 50 million neurons from Intel in the early 2020s to run its tests.
Although chips with artificial neurons are considered more cost-efficient and easier to use, ever more data still has to be transported off the neurochip processors. Eventually the cost increases along with the volume of collected data, slowing the system until it finally stops being workable.
In the next supercomputer, we will be looking at new configurations of small groups of neurons that compute summary statistics on-chip, instead of outputting raw data.
Will the next supercomputer really mimic the human brain? Intel is looking to scale their neurons significantly, so bigger and better things are to come!
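
To make the data-movement argument above concrete, here is a toy, purely illustrative comparison (the sizes and firing rate are made up): streaming every spike off-chip versus letting each small group of neurons report only a per-window count.

```python
import numpy as np

rng = np.random.default_rng(0)

NEURONS, GROUP, STEPS, WINDOW = 1024, 32, 1000, 100
spikes = rng.random((STEPS, NEURONS)) < 0.02     # simulated spike raster, ~2% firing rate

# Option A: stream every individual event off-chip (address + time).
raw_events = int(spikes.sum())

# Option B: each group of 32 neurons accumulates locally and reports a
# single count per 100-step readout window.
grouped = spikes.reshape(STEPS // WINDOW, WINDOW, NEURONS // GROUP, GROUP)
summaries = grouped.sum(axis=(1, 3))             # shape (10, 32): counts per window, per group
summary_values = summaries.size

print(f"events streamed off-chip (raw):      {raw_events}")
print(f"values streamed off-chip (summary):  {summary_values}")
```

The point is not the exact numbers but the shape of the trade-off: summary statistics computed next to the neurons dramatically shrink the off-chip traffic.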

World’s First Neuromorphic Demonstration

The recognition goes to Western Sydney University and the United States Air Force Academy.
The world-first demonstration involved transmitting neuromorphic data from outer space.
Was it a success you may ask?
“Project Falcon Neuro is the first use of these sensors for earth observation from orbit, and the data received is the first neuromorphic data to be transmitted from space.”
“These cameras don’t take pictures, but rather sense changes and only send those when they happen. This method of sensing the visual world allows them to perform tasks that simply cannot be done with a conventional camera,” commented Associate Professor Gregory Cohen, ICNS’s lead researcher on the project.
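
For anyone unfamiliar with event-based (neuromorphic) cameras, the quoted description can be captured in a few lines: each pixel emits an event only when its log intensity changes by more than a threshold, instead of the sensor sending full frames. This is a toy model of the principle only, not the Falcon Neuro pipeline.

```python
import numpy as np

def frames_to_events(frames, threshold=0.15):
    """Toy event-camera model: per pixel, emit (+1/-1) events whenever the
    change in log intensity since that pixel's last event exceeds `threshold`."""
    frames = np.asarray(frames, dtype=np.float64)
    log_i = np.log(frames + 1e-6)
    reference = log_i[0].copy()          # intensity at which each pixel last fired
    events = []                          # (t, y, x, polarity)
    for t in range(1, len(frames)):
        diff = log_i[t] - reference
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
            reference[y, x] = log_i[t, y, x]
    return events

# A static scene produces no events; only the moving bright pixel does.
scene = np.full((5, 8, 8), 0.2)
for t in range(5):
    scene[t, 3, t] = 0.9                 # a bright spot sweeping along row 3
print(frames_to_events(scene))
```

Note that the static background produces no events at all, which is exactly why such sensors suit sparse, fast-changing scenes like objects moving across the sky.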
This is definitely a pioneering step for neuromorphic technology.
What fascinating inventions will we unveil next with neuromorphic technology? Will it be better than AI? The answer looks like a strong possible YES.
Stay close and don’t miss out on our next news and events to stay on top of the industry.
Ti2 is currently very focused on working with its global partners to secure stock and avoid long delay times in delivering products to their customers. Together with our trusted partners, we are here to provide solutions.
Article inspired from gov.au, technology decisions, and techradar.
 
  • Like
  • Fire
Reactions: 11 users
S

Straw

Guest
Just a random comment from an LTH (some of this I'm sure is repeating myself, but I need to remind myself of it).

I sold some of my shares earlier this year (simply because my personal financial situation, in terms of having choices, was very poor – it's now pretty good). That is, after buying in 2015, so I was extremely fortunate (and grateful to BRN and informative posters): patience paid off in my particular circumstance and it is the best investment I have ever made. Though patience generally looks easier in retrospect, and I won't say I've never had doubts or complained about anything.

Extremely grateful for all the hard work/commitment of those at the company (or formerly). Not only for the financial benefit but because it's one of very few things in my life I can think about and largely see as being positive and maybe even be excited about (which is a huge deal for me).

I'm still feeling a bit ordinary about having sold any of my BRN.
I know what I'm invested in and I feel I know who I'm invested in.
 
  • Like
  • Love
  • Fire
Reactions: 32 users
Hi FMF
Something funny about these guys. Some of the links on their website to partners do not work, and as I dig deeper they do not actually seem to have products using their technology, which I think is all software based despite some of the wording used in their marketing. I will keep digging, but at the moment there does seem to be more marketing than substance.

In due course @Diogenese will no doubt do a deep patent dive, but this is one of the other intriguing things: nowhere can I find any mention of patents protecting their technology, not even applications in play, which is normally what I would expect to see from these very early-stage companies.

They have a link to the EU Human Brain Project, but going there reveals little if anything about them, and if they had exclusively licensed their technology from it I would have expected them to mention this somewhere. They only commenced as a startup in 2019, and at the beginning of 2020 they were crowing about having this product, which seems amazingly little time to have beaten everyone in the world to solving all the known issues with analogue???

My opinion only but more research to do so DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Fire
Reactions: 16 users
S

Straw

Guest
@Fact Finder
Maybe there is a question there for the AGM.
 
  • Like
  • Thinking
Reactions: 3 users
Hi FMF
Something funny about these guys. Some of the links on their website to partners do not work, and as I dig deeper they do not actually seem to have products using their technology, which I think is all software based despite some of the wording used in their marketing. I will keep digging, but at the moment there does seem to be more marketing than substance.

In due course @Diogenese will no doubt do a deep patent dive, but this is one of the other intriguing things: nowhere can I find any mention of patents protecting their technology, not even applications in play, which is normally what I would expect to see from these very early-stage companies.

They have a link to the EU Human Brain Project, but going there reveals little if anything about them, and if they had exclusively licensed their technology from it I would have expected them to mention this somewhere. They only commenced as a startup in 2019, and at the beginning of 2020 they were crowing about having this product, which seems amazingly little time to have beaten everyone in the world to solving all the known issues with analogue???

My opinion only but more research to do so DYOR
FF

AKIDA BALLISTA
Thanks FF

Hadn't had much chance to dig into it myself, so I appreciate the team having a look.

It came up in a quick search and I didn't know if it had already been looked into.

Wondered if they were using our IP to get products out there.
 
  • Like
Reactions: 1 users
Hi FMF
Something funny about these guys. Some of the links on their website to partners do not work, and as I dig deeper they do not actually seem to have products using their technology, which I think is all software based despite some of the wording used in their marketing. I will keep digging, but at the moment there does seem to be more marketing than substance.

In due course @Diogenese will no doubt do a deep patent dive, but this is one of the other intriguing things: nowhere can I find any mention of patents protecting their technology, not even applications in play, which is normally what I would expect to see from these very early-stage companies.

They have a link to the EU Human Brain Project, but going there reveals little if anything about them, and if they had exclusively licensed their technology from it I would have expected them to mention this somewhere. They only commenced as a startup in 2019, and at the beginning of 2020 they were crowing about having this product, which seems amazingly little time to have beaten everyone in the world to solving all the known issues with analogue???

My opinion only but more research to do so DYOR
FF

AKIDA BALLISTA
Appears to be predominantly Russian execs etc., based in Israel as a small start-up... hmm


 
  • Like
  • Thinking
Reactions: 3 users

Diogenese

Top 20
Hi FMF
Something funny about these guys. Some of the links on their website to partners do not work, and as I dig deeper they do not actually seem to have products using their technology, which I think is all software based despite some of the wording used in their marketing. I will keep digging, but at the moment there does seem to be more marketing than substance.

In due course @Diogenese will no doubt do a deep patent dive, but this is one of the other intriguing things: nowhere can I find any mention of patents protecting their technology, not even applications in play, which is normally what I would expect to see from these very early-stage companies.

They have a link to the EU Human Brain Project, but going there reveals little if anything about them, and if they had exclusively licensed their technology from it I would have expected them to mention this somewhere. They only commenced as a startup in 2019, and at the beginning of 2020 they were crowing about having this product, which seems amazingly little time to have beaten everyone in the world to solving all the known issues with analogue???

My opinion only but more research to do so DYOR
FF

AKIDA BALLISTA



POLYN Technology



Not sure we need to spend much time on this mob. Their alias is Polyn Technology.

All their published patents are for analog. From recollection, I think the EUHBP is also focused on analog.

They have software for designing analog NNs.

1. WO2021259482A1 – ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.

It appears they have done a Nelson in relation to Akida, referring to TrueNorth and Loihi, but studiously avoiding Akida:

2. US2021406662A1 – ANALOG HARDWARE REALIZATION OF TRAINED NEURAL NETWORKS FOR VOICE CLARITY
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of convolutional neural networks for voice clarity. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents one or more connections between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.

Complexity of neural networks continues to outpace CPU and GPU computational power as digital microprocessor advances are plateauing. Neuromorphic processors based on spike neural networks, such as Loihi and True North, are limited in their applications. For GPU-like architectures, power and speed of such architectures are limited by data transmission speed. Data transmission can consume up to 80% of chip power, and can significantly impact speed of calculations. Edge applications demand low power consumption, but there are currently no known performant hardware implementations that consume less than 50 milliwatts of power.

Some interesting limitations of memristors:

[0004] Memristor-based architectures that use cross-bar technology remain impractical for manufacturing recurrent and feed-forward neural networks. For example, memristor-based cross-bars have a number of disadvantages, including high latency and leakage of currents during operation, that make them impractical. Also, there are reliability issues in manufacturing memristor-based cross-bars, especially when neural networks have both negative and positive weights. For large neural networks with many neurons, at high dimensions, memristor-based cross-bars cannot be used for simultaneous propagation of different signals, which in turn complicates summation of signals, when neurons are represented by operational amplifiers. Furthermore, memristor-based analog integrated circuits have a number of limitations, such as a small number of resistive states, first cycle problem when forming memristors, complexity with channel formation when training the memristors, unpredictable dependency on dimensions of the memristors, slow operations of memristors, and drift of state of resistance.

A lot of their underlying presumptions are contestable:

[0005] Additionally, the training process required for neural networks presents unique challenges for hardware realization of neural networks. A trained neural network is used for specific inferencing tasks, such as classification. Once a neural network is trained, a hardware equivalent is manufactured. When the neural network is retrained, the hardware manufacturing process is repeated, driving up costs. Although some reconfigurable hardware solutions exist, such hardware cannot be easily mass produced, and cost a lot more (e.g., cost 5 times more) than hardware that is not reconfigurable. Further, edge environments, such as smart-home applications, do not require re-programmability as such. For example, 85% of all applications of neural networks do not require any retraining during operation, so on-chip learning is not that useful. Furthermore, edge applications include noisy environments, that can cause reprogrammable hardware to become unreliable.

Funny they refer to on-chip learning, but don't mention Akida.

It looks like they design a circuit for each specific application:

1. A method for analog hardware realization of trained convolutional neural networks for voice clarity, comprising:

obtaining a neural network topology and weights of a trained neural network;

transforming the neural network topology into an equivalent analog network of analog components;

computing a weight matrix for the equivalent analog network based on the weights of the trained neural network, wherein each element of the weight matrix represents one or more connections between analog components of the equivalent analog network; and

generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
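
Read literally, claim 1 is a compilation pipeline: trained weights in, component values out. A minimal toy of the "computing a weight matrix ... selecting component values" steps could look like the following; the reference resistance and naming are mine, and nothing here is taken from Polyn's actual implementation:

```python
import numpy as np

R_REF = 100_000.0   # arbitrary reference resistance for illustration (ohms)

def weights_to_components(weights):
    """Map each trained weight to a resistor feeding an op-amp summing stage:
    sign chooses the inverting vs non-inverting input, magnitude sets the
    resistance. Emits a human-readable, SPICE-flavoured netlist."""
    netlist = []
    for j in range(weights.shape[1]):               # one analog "neuron" per column
        netlist.append(f"* neuron {j}: summing amplifier")
        for i in range(weights.shape[0]):
            w = weights[i, j]
            if w == 0.0:
                continue                            # zero weight -> no connection
            node = "inv" if w < 0 else "noninv"
            netlist.append(f"R_{i}_{j} IN{i} N{j}_{node} {R_REF / abs(w):.0f}")
    return "\n".join(netlist)

# Stand-in for the weights of a trained 3-input, 2-neuron layer.
trained = np.array([[ 0.8, -0.3],
                    [ 0.1,  0.5],
                    [-0.6,  0.9]])
print(weights_to_components(trained))
```

Which also illustrates the point above: because the component values are baked in per application, retraining the network means repeating the hardware manufacturing process, exactly as their own paragraph [0005] concedes.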


3. WO2021262023A1 – ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.


4. US2021406661A1 – Analog Hardware Realization of Neural Networks
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
###################################################################
Their analog chip claims very low power consumption: 100 microW.

They have a heart rate monitor
https://polyn.ai/wp-content/uploads/2022/02/NeuroSense-V222.pdf

 
  • Like
  • Love
  • Fire
Reactions: 31 users

Hittman

Regular
I like this one for short sellers

Buinneach dhearg go dtigidh ort

That you may have red diarrhoea

Akida Ballista
I have another for them... Pog Mo Hon (kiss my arse)
 
  • Like
  • Haha
Reactions: 3 users

Esq.111

Fascinatingly Intuitive.
Afternoon Diogenese,

Great job as per usual.

You have a great way of breaking down complex stuff.

On your above effort, it's like you have spun them around several times, pulled their pants down, then given them a gentle kick towards the exit door.

Great stuff.

Regards,
Esq.
 
  • Like
  • Haha
  • Fire
Reactions: 19 users
If you listened to what they have said about memristors you would lock the doors, pull down the shades and go and take the whole bottle.

If I sent their insights into memristors to NASA and DARPA it would save them a heap of time. Thanks once again @Diogenese:

"Some interesting limitations of memristors:

[0004] Memristor-based architectures that use cross-bar technology remain impractical for manufacturing recurrent and feed-forward neural networks. For example, memristor-based cross-bars have a number of disadvantages, including high latency and leakage of currents during operation, that make them impractical. Also, there are reliability issues in manufacturing memristor-based cross-bars, especially when neural networks have both negative and positive weights. For large neural networks with many neurons, at high dimensions, memristor-based cross-bars cannot be used for simultaneous propagation of different signals, which in turn complicates summation of signals, when neurons are represented by operational amplifiers. Furthermore, memristor-based analog integrated circuits have a number of limitations, such as a small number of resistive states, first cycle problem when forming memristors, complexity with channel formation when training the memristors, unpredictable dependency on dimensions of the memristors, slow operations of memristors, and drift of state of resistance."

I would be counting my fingers after shaking hands.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Fire
Reactions: 12 users

Dang Son

Regular
Full Movie Zulu Introducing Michael Caine
 
  • Like
  • Love
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
Chippers,

I am sure others have seen it, but if you look down the sell order side, some cheeky bugger has got a sell order for 105,491 units @ $10.99.

Think that is the biggest by volume I have seen.

Could be a fake sell???

Good start.

Esq.
 
  • Like
  • Fire
Reactions: 5 users

mrgds

Regular
Chippers,

I am shaw others have seen it , but if you look down the sell order side, some cheeky bugger has got a sell order for 105,491 units @ $10.99

Think that is the biggest by volume I have seen.

Could be a fake sell ???.

Good start.

Esq.
G"day Esq
Yeah, just try doing that as a retail investor, :rolleyes: .................. aint going to happen, ................. level playing field? :rolleyes:

 
  • Like
  • Fire
Reactions: 9 users