BRN Discussion Ongoing

Hi FMF
Something funny about these guys. Some of the links on their website to partners do not work, and as I dig deeper they do not actually seem to have products using their technology, which I think is all software based despite some of the wording used in their marketing. I will keep digging, but at the moment there does seem to be more marketing than substance.

In due course @Diogenese will no doubt do a deep patent dive, but this is one of the other intriguing things: nowhere can I find any mention of patents protecting their technology, not even applications in play, which is what I would normally expect to see from these very early stage companies.

They have a link to the EU Human Brain Project, but going there reveals little if anything about them, and if they had exclusively licensed their technology from it I would once again have expected them to mention this somewhere. They only commenced as a startup in 2019, and at the beginning of 2020 they were crowing about having this product, which seems amazingly little time to have beaten everyone in the world to solving all the known issues with analogue?

My opinion only, but there is more research to do, so DYOR.
FF

AKIDA BALLISTA
Appears to be predominantly Russian execs etc., based in Israel as a small start-up... hmm.


 

Diogenese

Top 20



POLYN Technology



Not sure we need to spend much time on this mob. Their alias is Polyn Technology.

All their published patents are for analog. From recollection, I think the EUHBP is also focused on analog.

They have software for designing analog NNs.

1. ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
WO2021259482A1 • 2021-12-30
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.

It appears they have done a Nelson in relation to Akida, referring to TrueNorth and Loihi, but studiously avoiding Akida:

2. ANALOG HARDWARE REALIZATION OF TRAINED NEURAL NETWORKS FOR VOICE CLARITY
US2021406662A1 • 2021-12-30
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of convolutional neural networks for voice clarity. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents one or more connections between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.

Complexity of neural networks continues to outpace CPU and GPU computational power as digital microprocessor advances are plateauing. Neuromorphic processors based on spike neural networks, such as Loihi and True North, are limited in their applications. For GPU-like architectures, power and speed of such architectures are limited by data transmission speed. Data transmission can consume up to 80% of chip power, and can significantly impact speed of calculations. Edge applications demand low power consumption, but there are currently no known performant hardware implementations that consume less than 50 milliwatts of power.

Some interesting limitations of memristors:

[0004] Memristor-based architectures that use cross-bar technology remain impractical for manufacturing recurrent and feed-forward neural networks. For example, memristor-based cross-bars have a number of disadvantages, including high latency and leakage of currents during operation, that make them impractical. Also, there are reliability issues in manufacturing memristor-based cross-bars, especially when neural networks have both negative and positive weights. For large neural networks with many neurons, at high dimensions, memristor-based cross-bars cannot be used for simultaneous propagation of different signals, which in turn complicates summation of signals, when neurons are represented by operational amplifiers. Furthermore, memristor-based analog integrated circuits have a number of limitations, such as a small number of resistive states, first cycle problem when forming memristors, complexity with channel formation when training the memristors, unpredictable dependency on dimensions of the memristors, slow operations of memristors, and drift of state of resistance.

A lot of their underlying presumptions are contestable:

[0005] Additionally, the training process required for neural networks presents unique challenges for hardware realization of neural networks. A trained neural network is used for specific inferencing tasks, such as classification. Once a neural network is trained, a hardware equivalent is manufactured. When the neural network is retrained, the hardware manufacturing process is repeated, driving up costs. Although some reconfigurable hardware solutions exist, such hardware cannot be easily mass produced, and cost a lot more (e.g., cost 5 times more) than hardware that is not reconfigurable. Further, edge environments, such as smart-home applications, do not require re-programmability as such. For example, 85% of all applications of neural networks do not require any retraining during operation, so on-chip learning is not that useful. Furthermore, edge applications include noisy environments, that can cause reprogrammable hardware to become unreliable.

Funny they refer to on-chip learning, but don't mention Akida.

It looks like they design a circuit for each specific application:

1. A method for analog hardware realization of trained convolutional neural networks for voice clarity, comprising:

obtaining a neural network topology and weights of a trained neural network;

transforming the neural network topology into an equivalent analog network of analog components;

computing a weight matrix for the equivalent analog network based on the weights of the trained neural network, wherein each element of the weight matrix represents one or more connections between analog components of the equivalent analog network; and

generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
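For what it's worth, the weight-to-component step of that claim invites a back-of-envelope sketch. This is purely my own illustration of the general idea, not POLYN's actual method, and every name in it is hypothetical: each trained weight is mapped to a summing-amplifier input resistor via R = R_unit/|w|, with the sign of the weight deciding whether the branch feeds the inverting or non-inverting input.

```python
import numpy as np

def weights_to_components(weight_matrix, r_unit=10_000.0):
    """Hypothetical sketch of mapping trained NN weights to analog values.

    Each weight w becomes an input resistor R = r_unit / |w| on a summing
    op-amp (smaller R = stronger connection); the sign of w decides whether
    the branch is routed to the inverting (-1) or non-inverting (+1) input.
    A zero weight is left as an open circuit (infinite resistance).
    """
    w = np.asarray(weight_matrix, dtype=float)
    resistors = np.full(w.shape, np.inf)       # open circuit for w == 0
    nz = w != 0
    resistors[nz] = r_unit / np.abs(w[nz])
    polarity = np.sign(w).astype(int)          # +1, -1, or 0 (no connection)
    return resistors, polarity

# Example: one 2x3 layer of trained weights
W = [[0.5, -0.25, 0.0],
     [1.0,  0.2, -0.1]]
R, pol = weights_to_components(W)
```

The real transformation in the patent clearly does far more (topology transformation, op-amp neuron modelling, component-value selection), but it shows why retraining forces re-manufacture: the weights are literally baked into the resistor values.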


3. ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
WO2021262023A1 • 2021-12-30
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.


4. Analog Hardware Realization of Neural Networks
US2021406661A1 • 2021-12-30
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
###################################################################
Their analog chip claims very low power consumption: 100 microW.
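Taking both figures at face value, the claimed draw sits well inside the edge budget their own patent background cites. The comparison is just unit arithmetic:

```python
# Figures as quoted above: the patent background names a 50 mW edge budget,
# and POLYN's analog chip claims 100 microwatts.
patent_budget_w = 50e-3   # 50 milliwatts
claimed_w = 100e-6        # 100 microwatts
ratio = patent_budget_w / claimed_w
print(f"Claimed draw is {ratio:.0f}x below the 50 mW figure")
```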

They have a heart rate monitor:
https://polyn.ai/wp-content/uploads/2022/02/NeuroSense-V222.pdf

 

Hittman

Regular
I like this one for short sellers

Buinneach dhearg go dtigidh ort

That you may have red diarrhoea

Akida Ballista
I have another for them... Póg mo thóin (kiss my arse).
 

Esq.111

Fascinatingly Intuitive.
Afternoon Diogenese,

Great job as per usual.

You have a great way of breaking down complex stuff.

On your above effort, it's like you have spun them around several times, pulled their pants down, then given them a gentle kick towards the exit door.

Great stuff.

Regards,
Esq.
 
POLYN Technology

SearchReset


Not sure we need to spend much time on this mob. Their alias is Polyn Technology.

All their published patents are for analog. From recollection, I think the EUHBP is also focused on analog.

They have software for designing analog NNs.

WO2021259482A1 ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
1.ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
WO2021259482A1 • 2021-12-30 •
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method incudes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.

It appears they have done a Nelson in relation to Akida, referring to TrueNorth and Loihi, but studiously avoiding Akida:

US2021406662A1 ANALOG HARDWARE REALIZATION OF TRAINED NEURAL NETWORKS FOR VOICE CLARITY
2.ANALOG HARDWARE REALIZATION OF TRAINED NEURAL NETWORKS FOR VOICE CLARITY
US2021406662A1 • 2021-12-30 •
POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of convolutional neural networks for voice clarity. The method incudes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents one or more connections between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.

Complexity of neural networks continues to outpace CPU and GPU computational power as digital microprocessor advances are plateauing. Neuromorphic processors based on spike neural networks, such as Loihi and True North, are limited in their applications. For GPU-like architectures, power and speed of such architectures are limited by data transmission speed. Data transmission can consume up to 80% of chip power, and can significantly impact speed of calculations. Edge applications demand low power consumption, but there are currently no known performant hardware implementations that consume less than 50 milliwatts of power.

Some interseting limitations of memristors:

[0004] Memristor-based architectures that use cross-bar technology remain impractical for manufacturing recurrent and feed-forward neural networks. For example, memristor-based cross-bars have a number of disadvantages, including high latency and leakage of currents during operation, that make them impractical. Also, there are reliability issues in manufacturing memristor-based cross-bars, especially when neural networks have both negative and positive weights. For large neural networks with many neurons, at high dimensions, memristor-based cross-bars cannot be used for simultaneous propagation of different signals, which in turn complicates summation of signals, when neurons are represented by operational amplifiers. Furthermore, memristor-based analog integrated circuits have a number of limitations, such as a small number of resistive states, first cycle problem when forming memristors, complexity with channel formation when training the memristors, unpredictable dependency on dimensions of the memristors, slow operations of memristors, and drift of state of resistance.

They make a lot of underlying presumptions that are contestable:

[0005] Additionally, the training process required for neural networks presents unique challenges for hardware realization of neural networks. A trained neural network is used for specific inferencing tasks, such as classification. Once a neural network is trained, a hardware equivalent is manufactured. When the neural network is retrained, the hardware manufacturing process is repeated, driving up costs. Although some reconfigurable hardware solutions exist, such hardware cannot be easily mass produced, and cost a lot more (e.g., cost 5 times more) than hardware that is not reconfigurable. Further, edge environments, such as smart-home applications, do not require re-programmability as such. For example, 85% of all applications of neural networks do not require any retraining during operation, so on-chip learning is not that useful. Furthermore, edge applications include noisy environments, that can cause reprogrammable hardware to become unreliable.

Funny they refer to on-chip learning, but don't mention Akida.

It looks like they design a circuit for each specific application:

1. A method for analog hardware realization of trained convolutional neural networks for voice clarity, comprising:

obtaining a neural network topology and weights of a trained neural network;

transforming the neural network topology into an equivalent analog network of analog components;

computing a weight matrix for the equivalent analog network based on the weights of the trained neural network, wherein each element of the weight matrix represents one or more connections between analog components of the equivalent analog network; and

generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
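Claim 1 reads as a weight-to-component mapping. One plausible (entirely hypothetical) realization maps each trained weight to a resistor ratio in an op-amp summing stage; the function and values below are my illustration of the idea, not Polyn's disclosed design:

```python
import numpy as np

def transform_to_analog(weights, r_feedback=100e3):
    """Hypothetical weight-to-component mapping (simplified illustration).

    In an op-amp summing stage each input contributes
    v_out ~ -(r_feedback / r_i) * v_i, so a weight w maps to a resistor
    r = r_feedback / |w|; negative weights route to an inverting input.
    """
    schematic = []
    for idx, w in np.ndenumerate(weights):
        if w == 0:
            continue  # zero weight: no connection, omit the component
        schematic.append({
            "connection": idx,
            "resistor_ohms": r_feedback / abs(w),
            "inverting": bool(w < 0),
        })
    return schematic

# Toy 2x2 weight matrix from a "trained" network
w = np.array([[0.5, -0.25], [0.0, 1.0]])
parts = transform_to_analog(w)
print(len(parts))  # 3 non-zero weights -> 3 components
```

A real analog synthesis flow would also have to handle biases, nonlinearities and component tolerances; this only shows the weight-matrix-to-component step named in the claim.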


3. ANALOG HARDWARE REALIZATION OF NEURAL NETWORKS
WO2021262023A1 • 2021-12-30 • POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.


4. Analog Hardware Realization of Neural Networks
US2021406661A1 • 2021-12-30 • POLYN TECH LIMITED [GB]
Earliest priority: 2020-06-25 • Earliest publication: 2021-12-30
Systems and methods are provided for analog hardware realization of neural networks. The method includes obtaining a neural network topology and weights of a trained neural network. The method also includes transforming the neural network topology to an equivalent analog network of analog components. The method also includes computing a weight matrix for the equivalent analog network based on the weights of the trained neural network. Each element of the weight matrix represents a respective connection between analog components of the equivalent analog network. The method also includes generating a schematic model for implementing the equivalent analog network based on the weight matrix, including selecting component values for the analog components.
###################################################################
They claim very low power consumption for their analog chip: 100 µW.

They have a heart rate monitor
https://polyn.ai/wp-content/uploads/2022/02/NeuroSense-V222.pdf

View attachment 3766
If you listened to what they have said about memristors you would lock the doors, pull down the shades and go and take the whole bottle.

If I sent their insights into memristors to NASA and DARPA it would save them a heap of time. Thanks once again @Diogenese:

"Some interesting limitations of memristors:

[0004] Memristor-based architectures that use cross-bar technology remain impractical for manufacturing recurrent and feed-forward neural networks. For example, memristor-based cross-bars have a number of disadvantages, including high latency and leakage of currents during operation, that make them impractical. Also, there are reliability issues in manufacturing memristor-based cross-bars, especially when neural networks have both negative and positive weights. For large neural networks with many neurons, at high dimensions, memristor-based cross-bars cannot be used for simultaneous propagation of different signals, which in turn complicates summation of signals, when neurons are represented by operational amplifiers. Furthermore, memristor-based analog integrated circuits have a number of limitations, such as a small number of resistive states, first cycle problem when forming memristors, complexity with channel formation when training the memristors, unpredictable dependency on dimensions of the memristors, slow operations of memristors, and drift of state of resistance."

I would be counting my fingers after shaking hands.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Fire
Reactions: 12 users

Dang Son

Regular
Full Movie Zulu Introducing Michael Caine
 
  • Like
  • Love
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
Chippers,

I am sure others have seen it, but if you look down the sell order side, some cheeky bugger has got a sell order for 105,491 units @ $10.99

Think that is the biggest by volume I have seen.

Could be a fake sell ???.

Good start.

Esq.
 
  • Like
  • Fire
Reactions: 5 users

mrgds

Regular
Chippers,

I am sure others have seen it, but if you look down the sell order side, some cheeky bugger has got a sell order for 105,491 units @ $10.99

Think that is the biggest by volume I have seen.

Could be a fake sell ???.

Good start.

Esq.
G'day Esq
Yeah, just try doing that as a retail investor, :rolleyes: .................. ain't going to happen, ................. level playing field? :rolleyes:

Screenshot (14).png
 
  • Like
  • Fire
Reactions: 9 users
G'day Esq
Yeah, just try doing that as a retail investor, :rolleyes: .................. ain't going to happen, ................. level playing field? :rolleyes:

View attachment 3769
It is completely level, but what they don't tell you is that there are two completely level levels, and you need an invitation and a special security pass to get down low enough to access the other one. 🤢 FF
 
  • Like
  • Haha
  • Fire
Reactions: 19 users
Hey Cyw, as far as I know there has been no confirmation of Cerence using Akida or offering it as part of their suite of AI tech. But if they don't, or if they haven't thought about it, they'd have to have rocks in their heads IMO.

The way I look at it is that we know AI and voice-based technologies such as those offered by Cerence are going to feature very heavily in the next generation of EVs. Conventional voice control systems to date, however, have tended to consume a fair bit of power. We know from Mercedes that a key to the efficiency of their Vision EQXX is Akida's ability to "run spiking neural networks, in which data is coded in discrete spikes and energy only consumed when a spike occurs, reducing energy consumption by orders of magnitude".

In one article, which has been posted here many times, it says "In the VISION EQXX, this technology enables the “Hey Mercedes” hot-word detection five to ten times more efficiently than conventional voice control. Mercedes-Benz said although neuromorphic computing is still in its infancy, systems like these will be available on the market in just a few years. When applied on scale throughout a vehicle, they have the potential to radically reduce the energy needed to run the latest AI technologies".
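The event-driven idea, computing only when a spike arrives, can be illustrated in plain software. This is a toy sketch of my own (not Akida's actual implementation): a dense layer touches every weight regardless of input, while an event-driven layer only does work for active inputs.

```python
import numpy as np

def dense_ops(x, w):
    """Conventional dense layer: every weight is touched, spike or not."""
    return x @ w, x.size * w.shape[1]

def event_driven_ops(x, w):
    """Event-driven layer: work happens only for nonzero (spiking) inputs."""
    out = np.zeros(w.shape[1])
    ops = 0
    for i in np.flatnonzero(x):
        out += x[i] * w[i]   # accumulate contributions of active inputs only
        ops += w.shape[1]
    return out, ops

x = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # 2 spikes out of 8
w = np.ones((8, 4))
y_dense, n_dense = dense_ops(x, w)
y_event, n_event = event_driven_ops(x, w)
assert np.allclose(y_dense, y_event)  # same answer either way
print(n_dense, n_event)  # 32 vs 8: effort scales with activity, not size
```

Real neuromorphic silicon gains far more than this software analogy suggests, since idle circuits draw almost no power, but the scaling intuition is the same.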

Cerence is "dedicated to building the world’s smartest, fastest and most exciting voice assistants that define state-of the-art for the in-car experience" and they say they are "At the center of the automotive universe with the most flexible, open and global AI and voice-powered interaction platform for the industry".

How would it not be in their interest to incorporate Akida which has the ability to radically reduce the energy required to run their own platform?

I could be completely wrong but that's just my way of looking at it.

100% !! B77 !
 
  • Like
  • Haha
  • Fire
Reactions: 4 users

mrgds

Regular
It is completely level, but what they don't tell you is that there are two completely level levels, and you need an invitation and a special security pass to get down low enough to access the other one. 🤢 FF
Be nice as a retail investor to be able to "implement ones plan" and have a "first in line" sell order according to ones chosen plan.
S/P will probably have to be around $8 before you can place a sell at $10, and by then you're 50th in the queue.
Anywho, ................ long term holder here who can wait, and won't be selling too many even @ $10
Pantene, Akida Ballista,
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Anyone else heard of Autobrains?


Link 1
Autobrains
drew $101.0M in Series C financing led by Temasek and joined by new investors Knorr-Bremse and VinFast along with existing investor BMW i Ventures and strategic partner Continental. Autobrains has developed a self-learning AI system for ADAS and autonomous vehicles up to L4. Instead of large amounts of labeled data, it “maps raw, real-world data to compressed signatures to identify concepts and scenarios for optimal decision-making.” The startup says this approach leads to improved performance in challenging edge cases by understanding contextual elements of driving scenarios. A spin out from AI company Cortica as a joint venture with Continental in 2019, it is based in Tel Aviv, Israel.

Link 2
Autobrains says this “avant-garde approach” enables its AI to learn just like the human brain does, via neural networks. That enables cars powered by its AI to learn, collaborate and interact with the real world with no supervision, using only the data and scenarios from the car’s surroundings to enable decision-making just like the human brain, in real time.



Link 3
A lot of self-driving technology (Mobileye’s being one example) is based around lidar sensors, with a few companies (like Wayve) building systems on lower-cost bases using radar, smartphones and AI to stitch the experience together. Autobrains takes a different approach that might be described as hardware-agnostic, using radar, and also lidar, but only if the OEM has built it in.

The company’s approach comes from more than a decade of R&D. Originally, the startup descends from a company called Cortica AI (which Rachelgauz had founded), which has spent years building AI-based imaging technology applied across a wide variety of use cases (our first coverage of it, in fact, was about developing image recognition for advertising): Autobrains was spun out and initially branded as “Cartica AI” to realize more of the value of the IP as it pertained to the very specific use case of driving. The company says it has more than 250 patents filed on its technology.


Link 4
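Autobrains hasn't published how its "compressed signatures" work, but the general idea, hashing raw high-dimensional sensor data into short codes so similar scenarios land on similar codes, can be sketched with a generic SimHash-style random projection. This is my illustration only, not their method:

```python
import numpy as np

rng = np.random.default_rng(0)

def signature(x, planes):
    """SimHash-style signature: sign of random projections as a bit vector."""
    return (planes @ x > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return int(np.count_nonzero(a != b))

planes = rng.standard_normal((16, 64))            # 16-bit codes for 64-dim data
scene = rng.standard_normal(64)                   # a "raw" driving scenario
similar = scene + 0.05 * rng.standard_normal(64)  # near-duplicate scenario
different = rng.standard_normal(64)               # unrelated scenario

sig = signature(scene, planes)
print(hamming(sig, signature(similar, planes)),    # small distance
      hamming(sig, signature(different, planes)))  # large distance
```

Matching a new scenario then becomes a cheap nearest-signature lookup instead of a comparison over raw sensor data, which is one way "optimal decision-making" without labeled data could be cheap at the edge.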

 
  • Like
  • Fire
  • Wow
Reactions: 7 users

Interesting article. I have read many more detailed articles about why the CCP is not as autocratic as our politicians would like us to fear. The spirit of this puff piece and those more detailed and nuanced opinions paint a picture in which power is more decentralised, and in which the Chinese people's strong family ties with the Taiwanese will act as a limiting factor on the exercise of military power by China against Taiwan. The desire for unification by the political elite is said to be greatly tempered by the vision of Chinese troops being sent in to fight and kill Chinese.

The strong resistance being shown by Ukraine using western military hardware is also having an effect on strategic CCP thinking, as Taiwan has been, and continues to be, armed by the West. Ukraine is showing that Russian technology lags far behind, so the only way to "win" would be to pull out the hypersonic and mass-destruction weaponry, and this style of scorched-earth approach would definitely not gain popular support amongst the Chinese people.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Fire
Reactions: 17 users
Anyone else heard of Autobrains?


Link 1
Autobrains
drew $101.0M in Series C financing led by Temasek and joined by new investors Knorr-Bremse and VinFast along with existing investor BMW i Ventures and strategic partner Continental. Autobrains has developed a self-learning AI system for ADAS and autonomous vehicles up to L4. Instead of large amounts of labeled data, it “maps raw, real-world data to compressed signatures to identify concepts and scenarios for optimal decision-making.” The startup says this approach leads to improved performance in challenging edge cases by understanding contextual elements of driving scenarios. A spin out from AI company Cortica as a joint venture with Continental in 2019, it is based in Tel Aviv, Israel.

Link 2
Autobrains says this “avant-garde approach” enables its AI to learn just like the human brain does, via neural networks. That enables cars powered by its AI to learn, collaborate and interact with the real world with no supervision, using only the data and scenarios from the car’s surroundings to enable decision-making just like the human brain, in real time.



Link 3
A lot of self-driving technology (Mobileye’s being one example) is based around lidar sensors, with a few companies (like Wayve) building systems on lower-cost bases using radar, smartphones and AI to stitch the experience together. Autobrains takes a different approach that might be described as hardware-agnostic, using radar, and also lidar, but only if the OEM has built it in.

The company’s approach comes from more than a decade of R&D. Originally, the startup descends from a company called Cortica AI (which Rachelgauz had founded), which has spent years building AI-based imaging technology applied across a wide variety of use cases (our first coverage of it, in fact, was about developing image recognition for advertising): Autobrains was spun out and initially branded as “Cartica AI” to realize more of the value of the IP as it pertained to the very specific use case of driving. The company says it has more than 250 patents filed on its technology.


Link 4

Hi Bravo
Definitely a software company, and they are relying on Renesas for the hardware to make it all work. I have had a look across their website and not a single career opening needs an understanding of AI, so this is being left to Renesas. Now if only Renesas had AI that could select relevant data at the sensor to send to the algorithms developed and being developed by Autobrains.

The following extracted paragraph fits completely with their web site:

“Autobrains’ technology holds the promise we have all been looking for to create the paradigm shift in the industry to self-learning AI, bridging the gap to fully autonomous driving,” said Thuy Linh Pham, deputy CEO VinFast, in a statement. “Autobrains captured our attention by applying unsupervised AI software, as opposed to traditional software that is based on manually labeled data, to make self-driving vehicles adaptive to unprecedented behaviors in real-time. We expect that Autobrains will actualize this ambitious goal into a reality in the near future.”

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 16 users
Chippers,

I am sure others have seen it, but if you look down the sell order side, some cheeky bugger has got a sell order for 105,491 units @ $10.99

Think that is the biggest by volume I have seen.

Could be a fake sell ???.

Good start.

Esq.
He may have seen Elon Musk at our exhibition stand in LA last Wednesday 🥳
 
  • Haha
  • Like
Reactions: 8 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers,

Something different,

For the last eight weeks in a row BRN share price, on a Monday, has finished the trading day in the red.

Drop on a Monday,

7.2.2022 -6.05%
14.2. -8.64%
21.2. -7.83%
28.2. -1.64%
7.3.2022 -5.85%
14.3. -2.43%
21.3. -3.06%
28.3. -5.75%
4.4.2022 +2.58%

WOOOHOOO.

Today , finally we have a GREEN close.

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 52 users

mrgds

Regular
Afternoon Chippers,

Something different,

For the last eight weeks in a row BRN share price, on a Monday, has finished the trading day in the red.

Drop on a Monday,

7.2.2022 -6.05%
14.2. -8.64%
21.2. -7.83%
28.2. -1.64%
7.3.2022 -5.85%
14.3. -2.43%
21.3. -3.06%
28.3. -5.75%
4.4.2022 +2.58%

WOOOHOOO.

Today , finally we have a GREEN close.

Regards,
Esq.
(y) and on the daily high (y)
 
  • Like
  • Love
  • Fire
Reactions: 18 users

VictorG

Member
Afternoon Chippers,

Something different,

For the last eight weeks in a row BRN share price, on a Monday, has finished the trading day in the red.

Drop on a Monday,

7.2.2022 -6.05%
14.2. -8.64%
21.2. -7.83%
28.2. -1.64%
7.3.2022 -5.85%
14.3. -2.43%
21.3. -3.06%
28.3. -5.75%
4.4.2022 +2.58%

WOOOHOOO.

Today , finally we have a GREEN close.

Regards,
Esq.
Those Irish curses may have had a hand in it, just sayin'
 
  • Like
  • Haha
  • Wow
Reactions: 16 users

Aretemis

Regular
If you liked it you will love Peaky Blinders. The last series is still available and a new series is coming shortly.
Absolutely best thing I have seen in years.
My opinion only of course.
FF
I'd have to say Ozark is my fav in recent times
 
  • Like
  • Love
Reactions: 12 users

Aretemis

Regular
I think I'll start this week with an Irish curse to all the short sellers.

“May you be plagued by a powerful itch and never have the nails to scratch it.”
May the fleas of a thousand camels infest your armpits 🤣🤣🤣
 
  • Haha
  • Like
  • Fire
Reactions: 9 users