Sorry, but your attachment is from 19.6. Shortman indicates otherwise.
I can just see MF's headline tomorrow:

Good Afternoon Diogenese, Fellow Chippers,
Think we should ask Rocket to post the GIF featuring the six or seven Japanese businessmen dressed in Armani suits, drinking beer whilst rhythmically swaying their hips.
WOOOooo HOOOooo.
Regards,
Esq
Pardon my ignorance Xray 1, but what do you mean by "strike action", and in what form is this mentioned?

I hope he will be turning things around quickly over the next few months, given the Co has its first AGM "Strike" action against its name !!! ........... imo, probably something he wouldn't want recorded / disclosed on his CV.
Kurosawa's Seven Saki with Toshiro Mifune
Sorry 'bout the above typo, I meant (A leader) Ballista, of course.

Pardon my ignorance Xray 1, but what do you mean by "strike action", and in what form is this mentioned?
I'm not familiar with this term from the AGM. As one parliamentarian with some other type of form would say....
" Please explain "
Alida Ballista
hotty...
I most certainly do believe. A change is coming!!! Love it.

Hi Mia,
Believe!
This is all on the one page:
Top right: NNA = neuromorphic network accelerator.
Automotive Custom SoC Technologies and Solutions (socionextus.com)
These custom SoCs enable a wide range of applications, including ADAS sensors, central computing, networking, in-cabin monitoring, satellite connectivity, and infotainment.
…
Advanced AI Solutions for Automotive
Socionext has partnered with artificial intelligence provider BrainChip to develop optimized, intelligent sensor data solutions based on BrainChip's Akida® processor IP.
BrainChip's flexible AI processing fabric IP delivers neuromorphic, event-based computation, enabling ultimate performance while minimizing silicon footprint and power consumption. Sensor data can be analyzed in real time with distributed, high-performance and low-power edge inferencing, resulting in improved response time and reduced energy consumption.
This video is also on the same page.
Is this us?
Socionext | 60GHz Radar Sensor | Automotive Applications
60GHz sensors with TDM-MIMO processing for detecting the position and movement of vehicle passengers with maximum accuracy (socionextus.com)
"......would they persist with their clunky NNA from last millennium?"

Right!
Now I've had a cold shower, it should be noted that Socionext has had an NNA since at least 2018:
https://socionextus.com/pressreleases/socionext-ai-accelerator-engine-for-edge-computing/
Socionext Develops AI Accelerator Engine Optimized for Edge Computing
Small-sized and Low Power Engine Supports Broad Range of Applications

SUNNYVALE, Calif., May 11, 2018 – Socionext Inc., a leading provider of SoC-based solutions, has developed a new Neural Network Accelerator (NNA) engine, optimized for AI processing on edge computing devices. The compact, low power engine has been designed specifically for deep learning inference processing. When implemented, it can achieve a 100x performance boost compared with conventional processors for computer vision processing such as image recognition. Socionext will start delivering the Software Development Kit for the FPGA implementation of the NNA in the third quarter of 2018. The company is also planning to develop its SoC products with the NNA.
Socionext currently provides graphics SoC "SC1810" with a built-in proprietary Vision Processor Unit compatible with the computer vision API "OpenVX" developed by the Khronos Group, a standardization organization. The NNA has been designed to work as an accelerator to extend the capability of the VPU. It performs various computer vision processing functions with deep learning, as well as conventional image recognition, for applications including automotive and digital signage, delivering higher performance and lower power consumption.
The NNA incorporates the company's proprietary architecture using the quantization technology that reduces the bits for parameters and activations required for deep learning. The quantization technology is capable of carrying out massive amounts of computing tasks with less resource, greatly reducing the data size, and significantly lowering the system memory bandwidth. In addition, the newly developed on-chip memory circuit design improves the efficiency of computing resource required for deep learning, enabling optimum performance in a very small package. A VPU equipped with the new NNA combined with the latest technologies will be able to achieve 100 times faster processing speed in image recognition compared with a conventional VPU.
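The quantization idea described above can be sketched in a few lines. This is a hypothetical illustration, not Socionext's proprietary scheme: plain symmetric linear quantization of a weight tensor to signed 8-bit, showing the 4x reduction in data that has to cross the memory bus.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of float32 values to int8."""
    scale = float(np.max(np.abs(x))) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: any scale works
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in layer weights

q, s = quantize_int8(w)
print("fp32 bytes:", w.nbytes)   # 262144
print("int8 bytes:", q.nbytes)   # 65536 -> 4x less memory traffic
print("worst-case error:", float(np.max(np.abs(w - dequantize(q, s)))))
```

The worst-case rounding error is bounded by half the quantization step, which is why low-bit inference can stay accurate while slashing bandwidth.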
Now the interesting thing is that Socionext have a patent application dating from mid-2018 whose purpose is to reduce the calculations required for large MAC loads.
US2021081489A1 "Arithmetic Method" (priority date 2018-06-04)
[0010] An arithmetic method according to the present disclosure is an arithmetic method of performing convolution operation in convolutional layers of a neural network by calculating matrix products, using an arithmetic unit and an internal memory included in an LSI. The arithmetic method includes: determining, for each of the convolutional layers, whether the amount of input data to be inputted to the convolutional layer is smaller than or equal to a predetermined amount of data; selecting a first arithmetic mode and performing convolution operation in the first arithmetic mode when the amount of input data is determined to be smaller than or equal to the predetermined amount of data; selecting a second arithmetic mode and performing convolution operation in the second arithmetic mode when the amount of input data is determined to be larger than the predetermined amount of data; and outputting output data which is the result obtained by performing convolution operation.

Performing convolution operation in the first arithmetic mode includes: storing the weight data for the convolutional layer in external memory located outside the LSI; storing the input data for the convolutional layer in the internal memory; and reading the weight data from the external memory into the internal memory part by part, as first data of at least one row vector or column vector, and causing the arithmetic unit to calculate a matrix product of the first data and a matrix of the input data stored in the internal memory. The weight data is read, as a whole, from the external memory into the internal memory only once.

Performing convolution operation in the second arithmetic mode includes: storing the input data for the convolutional layer in the external memory located outside the LSI; storing a matrix of the weight data for the convolutional layer in the internal memory; and reading the input data from the external memory into the internal memory part by part, as second data of at least one column vector or row vector, and causing the arithmetic unit to calculate a matrix product of the second data and the matrix of the weight data stored in the internal memory. The input data is read, as a whole, from the external memory into the internal memory only once.
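In plainer terms, the claim reads as a simple tiling policy for the matrix product a convolution lowers to: whichever operand fits in on-chip memory stays resident, and the other is streamed from external memory exactly once. A minimal sketch of that policy (my reading of the claim, not Socionext's implementation):

```python
import numpy as np

# Illustrative sketch of the patent's two arithmetic modes, for a conv layer
# lowered to a matrix product Y = W @ X:
#   mode 1: input X fits on chip  -> keep X resident, stream rows of W
#   mode 2: X is too large        -> keep weights W resident, stream columns of X
# Either way, the off-chip operand crosses the memory bus exactly once.

def matmul_two_mode(W: np.ndarray, X: np.ndarray, internal_bytes: int) -> np.ndarray:
    Y = np.empty((W.shape[0], X.shape[1]), dtype=np.float32)
    if X.nbytes <= internal_bytes:        # first arithmetic mode
        for i in range(W.shape[0]):       # stream one row vector of W at a time
            Y[i, :] = W[i, :] @ X
    else:                                 # second arithmetic mode
        for j in range(X.shape[1]):       # stream one column vector of X at a time
            Y[:, j] = W @ X[:, j]
    return Y

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16)).astype(np.float32)   # weights
X = rng.standard_normal((16, 32)).astype(np.float32)  # lowered input patches

# Both modes must agree with a plain matrix product.
assert np.allclose(matmul_two_mode(W, X, internal_bytes=4096), W @ X, atol=1e-4)
assert np.allclose(matmul_two_mode(W, X, internal_bytes=64), W @ X, atol=1e-4)
```

The point of the two modes is purely to minimise external memory traffic, which is the dominant power cost in edge inference.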
Now it was about 2018 that BrainChip and Socionext began their cooperation, so their original NNA was developed in advance of their association with Akida.
If we assume that this patent describes their original NNA, Akida would wipe the floor with it. Akida could perform the functions of the VPU-with-NNA above in a trice. Given that Socionext have undoubtedly seen Akida in action, bearing in mind their initial enthusiasm for a SynQuacer/Akida engagement, would they persist with their clunky NNA from last millennium?
Wow... I can't believe we are using a metaphor comparing a mother's love for her child to being a shareholder of BrainChip..!!!

An analogy from FF that resonates so strongly with me, I had to request his permission to share:
The current share price is very disappointing.
There is a saying along the lines ‘he has a face that only a mother could love’.
The present share price is 'butt' ugly, yet we still love Brainchip because, like the mother, we do not see the share price; we see past it to the good heart that beats strongly within.
Like a mother, as opposed to Mrs. Jones on the corner, we know Brainchip and all its virtues inside out, having spent every minute of every day watching it grow and develop into a fine company overflowing with potential.
We have been to the parent interviews at Carnegie Mellon and noted that while Brainchip has not gained a mass following all its Professors cannot speak too highly of it and have voted it most likely to succeed.
Like the mother, and unlike Mrs. Jones, we were not surprised when our little Brainchip was accepted into NASA and earmarked to pilot deep space missions.
Nor were we surprised when Quantum Ventura claimed that our little Brainchip would allow Homeland Security to build handheld detectors to protect our ports and would make cyber secure all our critical energy supplies.
Even though we raised our Brainchip to be a source of good for all mankind we could not help but feel a deep sense of pride when the ultimate luxury car brand Mercedes Benz said nice things about our Brainchip.
Like all mothers however being able to announce at Christmas to Mrs. Jones at the Carols in the Park so all around could hear that our Brainchip was going into medical research to find a way to detect cancer was our proudest moment.
Like all mothers we have been annoyed, frustrated and worried when our Brainchip tells us that it has to go out and won’t say where claiming it is a secret particularly when we know some of its friends are military contractors.
Just like mothers we will cross our arms and say ‘but I am your shareholder’ only to be met with silence.
Then like a mother we step aside as we know that those around our Brainchip are all good people and we ultimately trust it has the right core values.
Even though a mother trusts, she will still read everything she can to find out about her child's achievements, as she knows her child is not a braggart and will occasionally require a nudge if she is to find out about its successes.
In the same way Brainchip shareholders continue to research and question.
Mothers know that raising a child is not something you can hurry, and that it takes the time it takes.
She knows there will be missteps along the way but with patience and dedication the end result will likely be much more than you ever hoped. Learning from mistakes requires the mistake to be made and out of mistakes resilience is built.
Some are not cut out to be mothers, just as some are not cut out to be shareholders. It is nature's way of weeding out the weakest from the gene pool.
Sad, but that is how it is: while we are all born equal, thereafter the strong survive and learn to thrive in the markets.
That strike was one of the most ridiculous things to happen to the company, imo. It was orchestrated by a bunch of short-sighted gobshites who couldn't see beyond the share price!

I hope he will be turning things around quickly over the next few months, given the Co has its first AGM "Strike" action against its name !!! ........... imo, probably something he wouldn't want recorded / disclosed on his CV.
If BrainChip were an ordinary company, run by ordinary people, with an ordinary product, the above would be true.

Wow... I can't believe we are using a metaphor comparing a mother's love for her child to being a shareholder of BrainChip..!!!
Below has been written using chatgpt lol
As we continue our journey as stakeholders in Brainchip, I feel compelled to address a recent analogy that has been circulating—one that draws parallels between the relationship of a mother and child, and our position as shareholders. While I understand the intention behind this comparison, I find it important to express my belief that such a comparison is both unfounded and, dare I say, absurd.
Undoubtedly, the current share price may be disappointing. However, likening our attachment to Brainchip to the unconditional love and devotion of a mother to her child is a stretch that lacks substantial merit. Our investment in Brainchip is driven by rationality, financial analysis, and a desire to secure our future, rather than an emotional bond tied to unconditional love.
Yes, we have spent considerable time studying Brainchip, exploring its potential, and tracking its progress. But to equate this with a mother's innate knowledge and intimate understanding of her child is an overreach. We are shareholders, not parents, and our relationship with the company is fundamentally different.
Furthermore, the examples given of Brainchip's achievements, acceptance into esteemed institutions, and recognition by industry leaders do not validate the comparison. These are commendable milestones for any company, but they do not mirror the emotional fulfillment a mother experiences when witnessing her child's accomplishments.
While I acknowledge that trust is an essential aspect of any investor-company relationship, it is crucial to maintain a level-headed approach. Our role as shareholders should not be conflated with the unconditional trust and unwavering faith that a mother has in her child. Rather, we should continue to exercise due diligence, seek transparency, and hold Brainchip accountable for delivering on its promises.
In conclusion, the comparison between the relationship of a mother and child and our position as shareholders of Brainchip is an exaggeration that fails to acknowledge the distinctive nature of these connections. Let us approach our investment with a rational mindset, focusing on the financial aspects and future prospects of the company.
Wishing you all continued success in your investment journey.
DYOR I still believe this will help me retire early!! Plz land a contract sooooon!!!
Cardpro, at the risk of being muted, please stfu.