BRN Discussion Ongoing

HopalongPetrovski

I'm Spartacus!
Right!

Now I've had a cold shower, it should be noted that Socionext has had an NNA since at least 2018:

https://socionextus.com/pressreleases/socionext-ai-accelerator-engine-for-edge-computing/#:~:text=SUNNYVALE, Calif., May 11, 2018 – Socionext Inc.,,been designed specifically for deep learning inference processing.

Socionext Develops AI Accelerator Engine Optimized for Edge Computing

Small-sized and Low Power Engine Supports Broad Range of Applications
SUNNYVALE, Calif., May 11, 2018 –Socionext Inc., a leading provider of SoC-based solutions, has developed a new Neural Network Accelerator (NNA) engine, optimized for AI processing on edge computing devices. The compact, low power engine has been designed specifically for deep learning inference processing. When implemented, it can achieve 100x performance boost compared with conventional processors for computer vision processing such as image recognition. Socionext will start delivering the Software Development Kit for the FPGA implementation of the NNA in the third quarter of 2018. The company is also planning to develop its SoC products with the NNA.
Socionext currently provides graphics SoC "SC1810" with a built-in proprietary Vision Processor Unit compatible with the computer vision API "OpenVX" developed by the Khronos Group, a standardization organization. The NNA has been designed to work as an accelerator to extend the capability of the VPU. It performs various computer vision processing functions with deep learning, as well as conventional image recognition, for applications including automotive and digital signage, delivering higher performance and lower power consumption.
The NNA incorporates the company's proprietary architecture using the quantization technology that reduces the bits for parameters and activations required for deep learning. The quantization technology is capable of carrying out massive amounts of computing tasks with less resource, greatly reducing the data size, and significantly lowering the system memory bandwidth. In addition, the newly developed on-chip memory circuit design improves the efficiency of computing resource required for deep learning, enabling optimum performance in a very small package. A VPU equipped with the new NNA combined with the latest technologies will be able to achieve 100 times faster processing speed in image recognition compared with a conventional VPU.
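The quantization idea the release describes can be sketched in a few lines. To be clear, the symmetric 8-bit scheme, the function name, and the tensor shapes below are illustrative assumptions, not details of Socionext's actual hardware:

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 8):
    """Uniformly quantize a float tensor to signed integers.

    Returns (q, scale) such that w ~= q * scale. Fewer bits per
    parameter and activation means smaller data and lower memory
    bandwidth, which is the trade-off described above.
    """
    qmax = 2 ** (bits - 1) - 1               # 127 for 8 bits
    scale = np.abs(w).max() / qmax           # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

# 32-bit float weights shrink 4x when stored as int8 plus one scale.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_symmetric(w)
w_hat = q.astype(np.float32) * scale         # reconstruction; error is at most half a step
```

Storing int8 values plus a single scale factor in place of 32-bit floats is what cuts data size and system memory bandwidth roughly fourfold.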

View attachment 38967

Now the interesting thing is that Socionext have a patent application dating from mid-2018 whose purpose is to reduce the calculations required for large MAC loads.

US2021081489A1 ARITHMETIC METHOD 20180604

View attachment 38968

View attachment 38969


[0010] An arithmetic method according to the present disclosure is an arithmetic method of performing convolution operation in convolutional layers of a neural network by calculating matrix products, using an arithmetic unit and an internal memory included in a LSI. The arithmetic method includes: determining, for each of the convolutional layers, whether an amount of input data to be inputted to the convolutional layer is smaller than or equal to a predetermined amount of data; selecting a first arithmetic mode and performing convolution operation in the first arithmetic mode, when the amount of input data is determined to be smaller than or equal to the predetermined amount of data in the determining; selecting a second arithmetic mode and performing convolution operation in the second arithmetic mode, when the amount of input data is determined to be larger than the predetermined amount of data in the determining; and outputting output data which is a result obtained by performing convolution operation,

in which the performing of convolution operation in the first arithmetic mode includes: storing weight data for the convolutional layer in external memory located outside the LSI; storing the input data for the convolutional layer in the internal memory; and reading the weight data from the external memory into the internal memory part by part as first data of at least one row vector or column vector, and causing the arithmetic unit to calculate a matrix product of the first data and a matrix of the input data stored in the internal memory, the weight data is read, as a whole, from the external memory into the internal memory only once,

the performing of convolution operation in the second arithmetic mode includes: storing the input data for the convolutional layer in the external memory located outside the LSI; storing a matrix of the weight data for the convolutional layer in the internal memory; and reading the input data from the external memory into the internal memory part by part as second data of at least one column vector or row vector, and causing the arithmetic unit to calculate a matrix product of the second data and the matrix of the weight data stored in the internal memory, and the input data is read, as a whole, from the external memory into the internal memory only once.
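Stripped of the claim language, the method picks which operand stays resident in on-chip memory and streams the other one from external memory exactly once, a row or column at a time. A rough Python sketch of that selection logic (the capacity threshold, matrix shapes, and function name are illustrative, not from the patent):

```python
import numpy as np

def conv_as_matmul(weights: np.ndarray, inputs: np.ndarray,
                   internal_capacity: int) -> np.ndarray:
    """Compute weights @ inputs while reading the streamed operand
    from 'external memory' exactly once, one row/column at a time.

    weights: (out_ch, k), inputs: (k, cols) -- an im2col-style
    flattening of the convolution into a matrix product.
    """
    out = np.zeros((weights.shape[0], inputs.shape[1]), dtype=np.float32)
    if inputs.size <= internal_capacity:
        # First mode: inputs fit on-chip, so stream weight rows from DRAM.
        for i, w_row in enumerate(weights):      # each row read once
            out[i, :] = w_row @ inputs
    else:
        # Second mode: weights stay on-chip, stream input columns from DRAM.
        for j in range(inputs.shape[1]):         # each column read once
            out[:, j] = weights @ inputs[:, j]
    return out

W = np.random.randn(16, 32).astype(np.float32)
X = np.random.randn(32, 100).astype(np.float32)
Y = conv_as_matmul(W, X, internal_capacity=4096)
```

Either branch produces the same matrix product; the mode choice only changes which operand crosses the memory bus, which is what bounds the external bandwidth per layer.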

Now it was about 2018 that BrainChip and Socionext began their cooperation, so their original NNA was developed in advance of their association with Akida.

If we assume that this patent is their description of their original NNA, Akida would wipe the floor with it. Akida could perform the functions of the VPU with NNA above in a trice. Given that Socionext have undoubtedly seen Akida in action, bearing in mind their initial enthusiasm for a SynQuacer/Akida engagement, would they persist with their clunky NNA from last millennium?

 
  • Haha
  • Like
  • Love
Reactions: 12 users
Socionext EU always pretty good for info.

Haven't read all the posts yet so not sure if this posted already (?) but a few extra clues in here imo....see what you think.

From 4 days ago.


 
  • Like
  • Fire
Reactions: 21 users

Cardpro

Regular
An analogy from FF that resonates so strongly with me, I had to request his permission to share:

The current share price is very disappointing.

There is a saying along the lines ‘he has a face that only a mother could love’.

The present share price is ‘butt’ ugly yet we still love Brainchip because like the mother we do not see the share price we see past it to the good heart that beats strongly within.

Like a mother as opposed to Mrs. Jones on the corner, we know Brainchip and all its virtues inside out, having spent every minute of every day watching it grow and develop into a fine company overflowing with potential.

We have been to the parent interviews at Carnegie Mellon and noted that while Brainchip has not gained a mass following, all its Professors cannot speak too highly of it and have voted it most likely to succeed.

Like the mother we were not surprised unlike Mrs. Jones, when our little Brainchip was accepted into NASA and earmarked to pilot deep space missions.

Nor were we surprised when Quantum Ventura claimed that our little Brainchip would allow Homeland Security to build handheld detectors to protect our ports and would make cyber secure all our critical energy supplies.

Even though we raised our Brainchip to be a source of good for all mankind we could not help but feel a deep sense of pride when the ultimate luxury car brand Mercedes Benz said nice things about our Brainchip.

Like all mothers, however, being able to announce at Christmas to Mrs. Jones at the Carols in the Park, so all around could hear, that our Brainchip was going into medical research to find a way to detect cancer was our proudest moment.

Like all mothers we have been annoyed, frustrated and worried when our Brainchip tells us that it has to go out and won't say where, claiming it is a secret, particularly when we know some of its friends are military contractors.

Just like mothers we will cross our arms and say ‘but I am your shareholder’ only to be met with silence.

Then like a mother we step aside as we know that those around our Brainchip are all good people and we ultimately trust it has the right core values.

Even though a mother trusts, she will still read everything she can to find out about her child's achievements, as she knows her child is not a braggart and will occasionally require a nudge if she is to find out about its successes.

In the same way Brainchip shareholders continue to research and question.

Mothers know that raising a child is not something you can hurry and that it takes the time it takes.

She knows there will be missteps along the way but with patience and dedication the end result will likely be much more than you ever hoped. Learning from mistakes requires the mistake to be made and out of mistakes resilience is built.

Some are not cut out to be mothers just as some are not cut out to be shareholders. It is nature's way of weeding out the weakest from the gene pool.

Sad, but that is how it is: while we are all born equal, thereafter the strong survive and learn to thrive in the markets.
Wow... I can't believe we are using a metaphor of a mother's love for her child to being a shareholder of BrainChip..!!!

Below has been written using chatgpt lol

As we continue our journey as stakeholders in Brainchip, I feel compelled to address a recent analogy that has been circulating—one that draws parallels between the relationship of a mother and child, and our position as shareholders. While I understand the intention behind this comparison, I find it important to express my belief that such a comparison is both unfounded and, dare I say, absurd.

Undoubtedly, the current share price may be disappointing. However, likening our attachment to Brainchip to the unconditional love and devotion of a mother to her child is a stretch that lacks substantial merit. Our investment in Brainchip is driven by rationality, financial analysis, and a desire to secure our future, rather than an emotional bond tied to unconditional love.

Yes, we have spent considerable time studying Brainchip, exploring its potential, and tracking its progress. But to equate this with a mother's innate knowledge and intimate understanding of her child is an overreach. We are shareholders, not parents, and our relationship with the company is fundamentally different.

Furthermore, the examples given of Brainchip's achievements, acceptance into esteemed institutions, and recognition by industry leaders do not validate the comparison. These are commendable milestones for any company, but they do not mirror the emotional fulfillment a mother experiences when witnessing her child's accomplishments.

While I acknowledge that trust is an essential aspect of any investor-company relationship, it is crucial to maintain a level-headed approach. Our role as shareholders should not be conflated with the unconditional trust and unwavering faith that a mother has in her child. Rather, we should continue to exercise due diligence, seek transparency, and hold Brainchip accountable for delivering on its promises.

In conclusion, the comparison between the relationship of a mother and child and our position as shareholders of Brainchip is an exaggeration that fails to acknowledge the distinctive nature of these connections. Let us approach our investment with a rational mindset, focusing on the financial aspects and future prospects of the company.

Wishing you all continued success in your investment journey.


DYOR I still believe this will help me retire early!! Plz land a contract sooooon!!!
 
  • Like
  • Fire
Reactions: 7 users

AusEire

Founding Member. It's ok to say No to Dot Joining
I hope he will be turning things around quickly over the next few months given the Co has its first AGM "Strike" action against its name !!! ........... imo, probably something he wouldn't want recorded / disclosed on his CV.
That strike was one of the most ridiculous things to happen to the company imo. It was orchestrated by a bunch of short-sighted gobshites who couldn't see beyond the share price!

Remember the "It's ok to vote no" mob! They were being led by a number of bigger players with an agenda and they couldn't see it!

Onwards and upwards

This Socionext release is great news. Hopefully we'll see an IP licensing agreement out of it next? 🙏
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 46 users

miaeffect

Oat latte lover
Not sure if posted.
Screenshot_20230627-205402_Chrome.jpg

Addition to previous Socionext post, Are we in the right business?
Let's see where NVISO is at now!
 
  • Like
  • Fire
Reactions: 19 users
Wow... I can't believe we are using a metaphor of a mother's love for her child to being a shareholder of BrainChip..!!! [...]
If BrainChip were an ordinary company, run by ordinary people, with an ordinary product, the above would be true.

I believe BrainChip is an extraordinary company, and it is impossible to excise emotions from being a holder.

The above is a typical, cold, emotionless A.I. response, suitable for individuals like shorters and other people with no foresight.

giphy.gif


BrainChip shareholders are cut from a different cloth.
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 31 users
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 23 users

Diogenese

Top 20

Hi Mia,

Believe!

This is all on the one page:

Top right: NNA = neuromorphic network accelerator.

Automotive Custom SoC Technologies and Solutions (socionextus.com)


View attachment 38959

These custom SoCs enable a wide range of applications, including ADAS sensors, central computing, networking, in-cabin monitoring, satellite connectivity, and infotainment.



Advanced AI Solutions for Automotive

Socionext has partnered with artificial intelligence provider BrainChip
to develop optimized, intelligent sensor data solutions based on Brainchip’s Akida® processor IP.

BrainChip’s flexible AI processing fabric IP delivers neuromorphic, event-based computation, enabling ultimate performance while minimizing silicon footprint and power consumption. Sensor data can be analyzed in real-time with distributed, high-performance and low-power edge inferencing, resulting in improved response time and reduced energy consumption.

This video is also on the same page:
View attachment 38960
View attachment 38961


View attachment 38958


Let's not forget:

BrainChip Joins Technology Partners During CES to Demonstrate Capabilities of Akida-Powered Solutions - BrainChip


BrainChip Joins Technology Partners During CES to Demonstrate Capabilities of Akida-Powered Solutions


Laguna Hills, Calif. – December 29, 2022 –BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced that it will be joining partners Prophesee, Socionext, and VVDN January 5-8 at CES to showcase compelling solutions on constrained edge devices, featuring its Akida™ processors. Akida processors simplify development by supporting today’s mainstream network models and workflows while being future-proofed for next-generation edge AI solutions.



BrainChip joins Socionext in the Vehicle Tech and Advanced Mobility Zone, located in the Las Vegas Convention Center, North Hall in Booth 10654. Socionext will be showcasing its leading-edge technologies and solutions in automotive imaging, AI, and smart sensing to help customers develop feature rich custom SoCs that enable product differentiation with an added competitive edge. Socionext has played an important role in the implementation of BrainChip’s AI technology in SoCs and the companies are working together to satisfy the need for AI in edge computing.
As part of this showcase, BrainChip will be highlighting its functionality with NVISO’s human-behavior software designed for the automotive in-cabin environment.

... another couple of dots for edge servers and in-cabin monitoring?

Where does it end?!
 
  • Like
  • Love
  • Fire
Reactions: 62 users

SERA2g

Founding Member
Wow... I can't believe we are using a metaphor of a mother's love for her child to being a shareholder of BrainChip..!!! [...]
Cardpro, at the risk of being muted, please stfu.
 
  • Haha
  • Like
  • Fire
Reactions: 37 users

buena suerte :-)

BOB Bank of Brainchip
1687905437430.png
 
  • Like
  • Love
Reactions: 16 users
 
  • Like
  • Thinking
  • Love
Reactions: 18 users

Damo4

Regular
  • Like
Reactions: 5 users

buena suerte :-)

BOB Bank of Brainchip
RSUs converted to Ordinary Shares.
Hopefully not to sell onto the market lol
That's what I was thinking also :) cheers Damo
 
  • Like
  • Fire
Reactions: 6 users

buena suerte :-)

BOB Bank of Brainchip
  • Like
  • Love
  • Fire
Reactions: 36 users

Boab

I wish I could paint like Vincent
Hoping to get some news from the tinyML event finishing up tonight (our time) in Amsterdam.🤞🤞

From the Yahoo Finance website:
At the forum, BrainChip will showcase Akida enablement of efficient processing of all sensor modalities – visual analysis such as facial and gesture recognition; sound identification including vocal keyword spotting and voice commands; vibration analysis to detect performance defects; and taste testing with breakdowns of ingredient composition and pH levels.

Brainchip will drive the discussion on a benchmarking panel that explores the relevance and need for benchmarking Edge AI platforms that help the broader industry in evaluating the right platforms for their needs.

"tinyML events are ideal venues to showcase our achievements in neuroprocessing at the edge that provide unsurpassed sensory performance at the micro- to milli-watt power range, perfect for always-on applications in battery-operated devices," said Rob Telson, BrainChip Vice President of Ecosystem and Partnerships. "Akida is enabling intelligent applications and tasks that were previously not possible, and we are excited to continue building the momentum for Edge AI."

The Akida neural processor is designed to provide a complete ultra-low power Edge AI network processor for vision, audio, smart transducers, vital signs and, more broadly, any sensor application. BrainChip’s scalable solutions, which can be used standalone or integrated into systems on chip to execute today’s models and future networks directly in hardware, empower the market to create much more intelligent, cost-effective devices and services that can be universally deployed across real-world applications in connected cars, healthcare, consumer electronics, industrial IoT, smart-agriculture and more, including use in a space mission and in the most stringent conditions.

Those interested in a private meeting with BrainChip can contact sales@brainchip.com to schedule an appointment.
 
  • Like
  • Love
  • Fire
Reactions: 40 users

Boab

I wish I could paint like Vincent
An explanation: a transducer is an electronic device that converts energy from one form to another; the resulting electrical signal is then turned into a usable direct current or voltage that is easily measured. Some examples of transducers include microphones and loudspeakers.
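As a concrete sketch of that last step, a microcontroller typically measures a transducer's output through an ADC and maps the raw count back to a voltage. The 12-bit resolution and 3.3 V reference below are assumed, illustrative values, not from any specific device:

```python
def adc_counts_to_volts(counts: int, resolution_bits: int = 12,
                        v_ref: float = 3.3) -> float:
    """Map an integer ADC count to the voltage it represents."""
    full_scale = (1 << resolution_bits) - 1   # 4095 for a 12-bit ADC
    return counts * v_ref / full_scale

# A mid-scale reading corresponds to roughly half the reference voltage.
v_mid = adc_counts_to_volts(2048)
```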
 
  • Like
  • Fire
Reactions: 12 users

Zedjack33

Regular
07B74081-4B93-45B7-9A14-C560CA62EAC5.jpeg
 
  • Like
  • Haha
Reactions: 16 users

Xhosa12345

Regular
More freebies, well earned. KPI hit again.

Now wheres my sarcasm gif......

clapping-leonardo-dicaprio.gif
 
  • Like
Reactions: 5 users
  • Like
  • Haha
  • Fire
Reactions: 16 users