BRN Discussion Ongoing

Frangipani

Regular

Sony Semiconductor Brings Inference Close To The Edge​

Steve McDowell
Contributor
Chief Analyst & CEO, NAND Research.


https://www.forbes.com/sites/stevem...close-to-the-edge/?sh=660e80bb34f9#open-web-0
Mar 27, 2024, 04:12pm EDT
[Image: Sony Semiconductor Solutions Group (NurPhoto via Getty Images)]

AI only matters to businesses if the technology enables competitive differentiation or drives increased efficiencies. The past year has seen technology companies focus on training models that promise to change enterprises across industries. While training has been the focus, the recent NVIDIA GTC event showcased a rapid transition toward inference, where the actual business value lies.


AI at the Retail Edge​

Retail is one of the industries that promises to benefit most from AI. Generative AI and large language models aside, retail organizations are already deploying image recognition systems for diverse tasks, from inventory control and loss prevention to customer service.

Earlier this year, Nvidia published its 2024 State of AI in Retail and CPG report, which takes a survey-based approach to understanding the use of AI in the retail sector. Nvidia found that 42% of retailers already use AI, with an additional 34% assessing or piloting AI programs. Narrowing the aperture, among large retailers with revenues of more than $500 million, the adoption of AI stretches to 64%. That’s a massive market.


The challenge for retailers and the array of ecosystem partners catering to them is that AI can be complex. Large language models and generative AI require infrastructure that scales beyond the capabilities of many retail locations. Using the cloud to solve those problems isn't always practical, either, as applications like vision processing need to be done at the edge, where the data lives.


Sony’s Platform Approach to On-Device Inference​

Sony Semiconductor Solutions Corporation took on the challenge of simplifying vision processing and inference, resulting in the introduction of its AITRIOS edge AI sensing platform. AITRIOS addresses six significant challenges of cloud-based IoT systems: handling large data volumes, enhancing data privacy, reducing latency, conserving energy, ensuring service continuity, and securing data.


AITRIOS accelerates the deployment of edge AI-powered sensing solutions across industries, enabling a comprehensive ecosystem for creating solutions that blend edge computing and cloud technologies.




View attachment 60127



April 24, 2024

Edge AI-Driven Vision Detection Solution Introduced at 500 Convenience Store Locations to Measure Advertising Effectiveness​

Sony Semiconductor Solutions Corporation
Atsugi, Japan, April 24, 2024 —

Today, Sony Semiconductor Solutions Corporation (SSS) announced that it has introduced and begun operating an edge AI-driven vision detection solution at 500 convenience store locations in Japan to improve the benefits of in-store advertising.


Edge AI technology automatically detects the number of digital signage viewers and how long they viewed it.

SSS has been providing 7-Eleven and other retail outlets in Japan with vision-based technology to improve the implementation of digital signage systems and in-store advertising at their brick-and-mortar locations as part of their retail media*1 strategy. To help ensure that effective content is shown for brands and stores, this solution gives partners sophisticated tools to evaluate the effectiveness of advertising on their customers.

As part of this effort, SSS has recently introduced a solution that uses edge devices with on-sensor AI processing to automatically detect when customers see digital signage, count how many people paused to view it, and measure the percentage of viewers. The AI capabilities of the sensor collect data points such as the number of shoppers who enter the detection area, whether they saw the signage, the number who stopped to view it, and how long they watched. The system does not output image data capable of identifying individuals, making it possible to provide insightful measurements while helping to preserve privacy.
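As a rough illustration of how such lightweight metadata might be aggregated into advertising-effectiveness figures, here is a minimal sketch in Python. The field names, values, and metrics below are hypothetical and do not reflect Sony's actual AITRIOS/IMX500 output format:

```python
# Hypothetical sketch only: field names and structure are illustrative,
# not the actual AITRIOS/IMX500 metadata schema.
from dataclasses import dataclass

@dataclass
class SignageMetadata:
    store_id: str
    interval_start: str        # ISO 8601 start of the measurement window
    entered_area: int          # shoppers who entered the detection area
    viewed_signage: int        # shoppers detected as having seen the signage
    stopped_to_view: int       # shoppers who paused in front of the display
    total_view_seconds: float  # summed viewing time across all viewers

def effectiveness(m: SignageMetadata) -> dict:
    """Derive simple advertising-effectiveness ratios from one interval."""
    view_rate = m.viewed_signage / m.entered_area if m.entered_area else 0.0
    stop_rate = m.stopped_to_view / m.entered_area if m.entered_area else 0.0
    avg_view = m.total_view_seconds / m.viewed_signage if m.viewed_signage else 0.0
    return {"view_rate": view_rate, "stop_rate": stop_rate,
            "avg_view_seconds": avg_view}

sample = SignageMetadata("store-001", "2024-04-24T10:00:00+09:00",
                         entered_area=120, viewed_signage=45,
                         stopped_to_view=12, total_view_seconds=310.0)
print(effectiveness(sample))
```

A back end receiving records like these per interval could join them with content-playout and purchasing data, which is the kind of analysis the press release describes, without any image data ever leaving the device.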


Click here for an overview video of the solution and interview with 7-Eleven Japan.


Solution features:

-IMX500 intelligent vision sensor delivers optimal data collection, while helping to preserve privacy.

SSS’s IMX500 intelligent vision sensor with AI-processing capabilities automatically detects the number of customers who enter the detection area, the number who stopped to view the signage, and how long they viewed it. The acquired metadata (semantic information) is then sent to a back-end system, where it is combined with content streaming information and purchasing data to conduct a sophisticated analysis of advertising effectiveness. Because the system does not output image data that could be used to identify individuals, it helps to preserve customer privacy.

-Edge devices equipped with the IMX500 save space in store.

The IMX500 is made using SSS’s proprietary structure with the pixel chip and logic chip stacked, enabling the entire process, from imaging to AI inference, to be done on a single sensor.
Compact, IMX500-equipped edge devices (approx. 55 x 40 x 35 mm) are unobtrusive in shops, and compared to other solutions that require an AI box or other additional devices for AI inference, can be installed more flexibly in convenience stores and shops with limited space.

-The AITRIOS™ platform contributes to operational stability and system expandability.
  • Only light metadata is output from IMX500 edge devices, minimizing the amount of data transmitted to the cloud. This helps lessen network load, even when adding more devices in multiple stores, compared to solutions that send full image data to the cloud. This curtails communication, cloud storage, and computing costs.
    The IMX500 also handles AI computing, eliminating the need for other devices such as an AI box, resulting in a simple device configuration, streamlining device maintenance and reducing costs of installation.
  • AITRIOS*2, SSS’s edge AI sensing platform, which is used to build and operate the in-store solution, delivers a complete service without the need for third-party tools, enabling simple, sustainable operations.​
  • This solution was developed with Console Enterprise Edition, one of the services offered by AITRIOS, and is installed on the partner’s Microsoft Azure cloud infrastructure. It not only connects easily and compatibly with their existing systems, but also offers system customizability and security benefits, since there is no need to output various data outside the company.


*1 A new form of advertising media that provides advertising space for retailers and e-commerce sites using their own platforms
*2 AITRIOS is an AI sensing platform for streamlined device management, AI development, and operation. It offers the development environment, tools, features, etc., which are necessary for deploying AI-driven solutions, and it contributes to shorter roll-out times when launching operations, while ensuring privacy, reducing introductory cost, and minimizing complications. For more information on AITRIOS, visit: https://www.aitrios.sony-semicon.com/en


About Sony Semiconductor Solutions Corporation
Sony Semiconductor Solutions Corporation is a wholly owned subsidiary of Sony Group Corporation and the global leader in image sensors. It operates in the semiconductor business, which includes image sensors and other products. The company strives to provide advanced imaging technologies that bring greater convenience and fun. It also works to develop and bring to market new kinds of sensing technologies with the aim of offering various solutions that will take the visual and recognition capabilities of both humans and machines to greater heights. For more information, please visit https://www.sony-semicon.com/en/index.html.

AITRIOS and AITRIOS logos are the registered trademarks or trademarks of Sony Group Corporation or its affiliated companies.
Microsoft and Azure are registered trademarks of Microsoft Corporation in the United States and other countries.
All other company and product names herein are trademarks or registered trademarks of their respective owners.








Here is some wild speculation: Could this 👆🏻possibly be a candidate for the mysterious Custom Customer SoC, featured in the recent Investor Roadshow presentation (provided the licensing of Akida IP was done via MegaChips)? 🤔


Post in thread 'AITRIOS'
https://thestockexchange.com.au/threads/aitrios.18971/post-31633

 
  • Like
  • Thinking
  • Love
Reactions: 17 users

toasty

Regular


Frangipani said:
[Quoted post above: the April 24, 2024 SSS press release "Edge AI-Driven Vision Detection Solution Introduced at 500 Convenience Store Locations to Measure Advertising Effectiveness" and the accompanying Custom Customer SoC speculation, reproduced in full.]

All very nice but what has it got to do with BRN??
 
  • Like
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
On Dec 29, Chinese researchers from Zhejiang University in Hangzhou published a paper on arXiv titled Darwin3: A large-scale neuromorphic chip with a Novel ISA and On-Chip Learning. (Note that arXiv submissions must come from registered authors and are moderated but not peer-reviewed, although some authors posting preprints on arXiv - thereby benefitting from immediate feedback in the open-access community and a wider potential citation readership - go on to publish them in peer-reviewed journals.)

Not for the first time, however, Akida is missing from the comparison with other state-of-the-art neuromorphic chips (and the table still lists IBM’s TrueNorth instead of the recently unveiled NorthPole). This of course raises the question “Why?!” The two likeliest answers IMO are: a) the authors did not know about Akida, or b) they did not want Akida to outshine their baby.

I’ll leave it to our resident hardware experts to comment on whether Darwin3, which constitutes the third generation of the Darwin family of neuromorphic chips and is claimed to support up to 2.35 million neurons with on-chip learning, could be serious future competition.
A quick search here on TSE did not yield any reference to either its predecessors Darwin (2015) or Darwin2 (2019).



View attachment 53414

View attachment 53415

Article published 15 hours ago.

Chinese Chip Ignites Global Neuromorphic Computing Competition​

by SLG Syndication

May 27, 2024

2 mins read

[Illustration: The China Academy]
A typical computer chip, such as one found in a personal desktop for non-professional use, consumes around 100 watts of power. AI, on the other hand, requires significantly more energy: ChatGPT is estimated to draw roughly 300 watts while answering a single question. In contrast, the human brain is far more energy-efficient, requiring only around 10 watts of power, comparable to a lightbulb. This exceptional energy efficiency is one of the reasons scientists are interested in modeling the next generation of microchips on the human brain.
In the bustling tech landscape of Hangzhou, China, a team of researchers at Zhejiang University has made a significant leap in the world of neuromorphic computing with the development of their latest innovation, the Darwin3 chip. This groundbreaking piece of technology promises to transform how we simulate brain activity, paving the way for advancements in artificial intelligence, robotics, and beyond.

Neuromorphic chips are designed to emulate the architecture and functioning of the human brain. Unlike traditional computers that process information in a linear, step-by-step manner, these chips operate more like our brains, processing multiple streams of information simultaneously and adapting to new data in real-time.
The Darwin3 chip is a marvel of modern engineering, specifically designed to work with Spiking Neural Networks (SNNs). SNNs are a type of artificial neural network that mimics the way neurons and synapses in the human brain communicate. While conventional neural networks use continuous signals to process information, SNNs use discrete spikes, much like the bursts of electrical impulses that our neurons emit.
[Image: Test environment. (a) The test chip and system board. (b) Application development process.]
One of the standout features of Darwin3 is its flexibility in simulating various types of neurons. Just as an orchestra can produce a wide range of sounds by utilizing different instruments, Darwin3 can emulate different neuron models to suit a variety of tasks, from basic pattern recognition to complex decision-making processes.
To achieve this, one of Darwin3’s key innovations is its domain-specific instruction set architecture (ISA). This custom-designed set of instructions allows the chip to efficiently describe diverse neuron models and learning rules, including the leaky integrate-and-fire (LIF) model, the Izhikevich model, and Spike-Timing-Dependent Plasticity (STDP). This versatility enables Darwin3 to tackle a wide range of computational tasks, making it a highly adaptable tool for AI development.
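For readers unfamiliar with these neuron models, here is a minimal discrete-time leaky integrate-and-fire (LIF) neuron in plain Python. It is a generic textbook formulation with illustrative parameter values, not Darwin3's ISA-level implementation:

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0):
    """Simulate one leaky integrate-and-fire neuron over a current trace.

    input_current : injected current per time step (arbitrary units).
    Returns the membrane-potential trace and the spike times (step indices).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leak toward the resting potential while integrating the input.
        v += (-(v - v_rest) + i_in) * dt / tau
        if v >= v_thresh:        # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset          # reset the membrane potential
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold input makes the neuron fire periodically.
trace, spikes = lif_simulate(np.full(200, 1.5))
print("spike times:", spikes)
```

Richer models such as the Izhikevich neuron add a recovery variable on top of this basic integrate-leak-fire loop, which is one reason a flexible ISA is useful when the hardware has to support several neuron types.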

Another significant breakthrough is Darwin3’s efficient memory usage. Neuromorphic computing faces the challenge of managing vast amounts of data involved in simulating neuronal connections. Darwin3 overcomes this hurdle with an innovative compression mechanism that dramatically reduces memory usage. Imagine shrinking a massive library of books into a single, compact e-reader without losing any content—this is akin to what Darwin3 achieves with synaptic connections.
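The article does not describe Darwin3's compression mechanism in detail. Purely as a generic illustration of the underlying idea, storing only the synapses that actually exist, here is a compressed sparse row (CSR) comparison using SciPy; the network size and connection density are made up:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical connectivity: 1,000 neurons, roughly 1% of possible synapses exist.
rng = np.random.default_rng(0)
n = 1000
mask = rng.random((n, n)) < 0.01              # which synapses exist
dense = rng.random((n, n)) * mask             # full N x N weight matrix

sparse = csr_matrix(dense)                    # keep only the nonzero weights
dense_mb = dense.nbytes / 1e6
sparse_mb = (sparse.data.nbytes + sparse.indices.nbytes
             + sparse.indptr.nbytes) / 1e6
print(f"dense weight matrix: {dense_mb:.1f} MB, CSR storage: {sparse_mb:.2f} MB")
```

The same principle, avoiding storage for connections that do not exist, is one of the standard ways neuromorphic designs scale their synapse counts without a proportional blow-up in on-chip memory.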
Perhaps the most exciting feature of Darwin3 is its on-chip learning capability. This allows the chip to learn and adapt in real-time, much like how humans learn from experience. Darwin3 can modify its behavior based on new information, leading to smarter and more autonomous systems.
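On-chip learning in spiking hardware is often built on local rules such as the STDP mentioned above. The snippet below is a minimal pair-based STDP weight update, again a generic formulation with illustrative parameters rather than Darwin3's actual learning circuitry:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: strengthen the synapse if the presynaptic spike
    precedes the postsynaptic spike, weaken it otherwise. Times are in ms."""
    dt = t_post - t_pre
    if dt > 0:                               # pre before post -> potentiation
        w += a_plus * np.exp(-dt / tau)
    else:                                    # post before pre -> depression
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, w_min, w_max))   # keep the weight in bounds

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)  # causal pairing, weight grows
w = stdp_update(w, t_pre=30.0, t_post=25.0)  # anti-causal pairing, weight shrinks
print(round(w, 4))
```

Because each update depends only on the spike times of the two neurons a synapse connects, rules like this can be evaluated locally on the chip, which is what makes real-time, on-device adaptation feasible.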

The implications of Darwin3’s technology are far-reaching and transformative. In healthcare, prosthetic limbs powered by Darwin3 could learn and adapt to a user’s movements, offering a more intuitive and natural experience. This could significantly enhance the quality of life for amputees.
In robotics, robots equipped with Darwin3 could navigate complex environments with greater ease and efficiency, similar to how humans learn to maneuver through crowded spaces. This capability could revolutionize industries from manufacturing to space exploration.
Environmental monitoring could also benefit from Darwin3. Smart sensors using Darwin3 could analyze environmental data in real-time, providing immediate insights into climate conditions and helping us better manage natural resources.
The Darwin3 chip represents a monumental step forward in neuromorphic computing, bringing us closer to creating machines that can think and learn in ways previously thought impossible. As this technology continues to evolve, we can anticipate a future where intelligent systems seamlessly integrate into our daily lives, enhancing everything from medical care to environmental conservation. The research was recently published in the journal National Science Review.
Source: China Academy

 
  • Like
  • Wow
  • Sad
Reactions: 20 users

7für7

Top 20
So, you all sell now because zeeb0t told you to buy his product yeah? Nice move fellas… nice move! HOOOOLD
 

Gazzafish

Regular
https://www.wevolver.com/article/en...tion-using-neuromorphic-computing-and-edge-ai

Extract only below:-

Enhancing Smart Homes with Pose Detection Using Neuromorphic Computing and Edge AI​

author avatar

Ravi Rao
22 May, 2024
FOLLOW
Sponsored by

Enhancing Smart Homes with Pose Detection Using Neuromorphic Computing and Edge AI


Pose detection technology leverages advanced machine learning algorithms to interpret human movements in real-time, enabling seamless, intuitive device control through simple gestures.​

Artificial Intelligence
- Electronics
- Embedded Machine Learning
- Embedded Systems
- Microcontroller

Introduction​

Edge AI transforms smart home technology by enabling real-time data processing directly on devices, reducing latency, and enhancing privacy. In home automation, this leads to more responsive and efficient control systems. One notable application is gesture recognition through pose detection, which allows users to control devices with simple movements.
This article features a project on developing a gesture-based appliance control system using the BrainChip Akida Neural Processor AKD1000 SoC and the Edge Impulse platform. We'll discuss hardware and software requirements, the setup process, data collection, model training, deployment, and practical demonstrations. Additionally, we'll explore integrating the system with Google Assistant for enhanced functionality.

Edge AI in Home Automation​

In home automation, Edge AI enables smart devices to respond quickly to user inputs and environmental changes. This local processing power is crucial for applications requiring immediate feedback, such as security systems, smart lighting, and environmental controls.
By processing data at the edge, smart home devices can operate independently of an internet connection, ensuring continuous functionality. This also reduces the risk of data breaches as sensitive information remains within the local network.

Pose Detection with Edge AI​

Pose detection is a technology that captures and analyzes human body movements and postures in real time. Using machine learning algorithms, pose detection systems identify key points on the human body, such as joints and limbs, and track their positions and movements. This data can then be used to recognize specific gestures and postures, enabling intuitive, hands-free interaction with various devices.
Pose detection typically involves several steps:
  1. Image Capture: A camera or other sensor captures images or video of the user.
  2. Preprocessing: The captured images are processed to enhance quality and remove noise.
  3. Key Point Detection: Machine learning models identify and track key points on the body, such as elbows, knees, and shoulders.
  4. Pose Estimation: The system estimates the user's pose by analyzing the positions and movements of the detected key points.
  5. Gesture Recognition: Specific gestures are identified based on predefined patterns in the user's movements.
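To make the back half of that pipeline concrete (steps 3-5), here is a small self-contained sketch that classifies a pointing gesture from already-extracted 2D keypoints. It does not use the BrainChip or Edge Impulse SDKs, and the keypoint names, coordinates, appliance layout, and angle threshold are all hypothetical:

```python
import numpy as np

# Hypothetical 2D keypoints (normalised image coordinates) from a pose model.
keypoints = {"right_shoulder": np.array([0.45, 0.40]),
             "right_wrist":    np.array([0.70, 0.30])}

# Hypothetical appliance positions in the same coordinate frame.
appliances = {"tv":              np.array([0.90, 0.20]),
              "lightbulb":       np.array([0.10, 0.10]),
              "air_conditioner": np.array([0.95, 0.80])}

def pointing_target(kp, targets, max_angle_deg=20.0):
    """Return the appliance the arm points toward, or None if nothing lies
    within max_angle_deg of the shoulder-to-wrist direction."""
    direction = kp["right_wrist"] - kp["right_shoulder"]
    direction = direction / np.linalg.norm(direction)
    best, best_angle = None, max_angle_deg
    for name, pos in targets.items():
        to_target = pos - kp["right_wrist"]
        to_target = to_target / np.linalg.norm(to_target)
        cos_a = np.clip(direction @ to_target, -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_a))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

print(pointing_target(keypoints, appliances))   # prints "tv" for these inputs
```

In the actual project, the keypoints come from a pose-estimation model running on the AKD1000 via Edge Impulse, and the recognised gesture is then forwarded to the appliance-control layer (for example via Google Assistant); a geometric classification step might look broadly like this.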
Pose detection has a wide range of applications beyond home automation, including:
  • Gaming: Enhancing user experience with motion-controlled games.
  • Healthcare: Monitoring patients' movements and posture for rehabilitation and physical therapy.
  • Fitness: Providing real-time feedback on exercise form and performance.
  • Security: Recognizing suspicious behavior in surveillance systems.
In home automation, pose detection can be particularly powerful, turning everyday tasks into seamless, interactive experiences, and enhancing the overall functionality and appeal of smart homes. In this context, the project "Gesture Appliances Control with Pose Detection" stands out as a great example of how pose detection can be used for home automation. Developed by Christopher Mendez, this innovative idea leverages the BrainChip AKD1000 to enable users to control household appliances with simple finger-pointing gestures.
Further reading: Gesture Recognition and Classification Using Infineon PSoC 6 and Edge AI
By combining neuromorphic processing with machine learning, the system achieves high accuracy and low power consumption, making it a practical and efficient solution for modern smart homes.

Gesture Appliances Control with Pose Detection - BrainChip AKD1000​

Control your TV, Air Conditioner or Lightbulb by just pointing your finger at them, using the BrainChip AKD1000 achieving great accuracy and low power consumption.
Created By: Christopher Mendez
Public Project Link:
Edge Impulse Experts / Brainchip-Appliances-Control-Full-Body
 
  • Like
  • Fire
  • Love
Reactions: 50 users

Dugnal

Member
[Quoted from the post above: "Chinese Chip Ignites Global Neuromorphic Computing Competition" (SLG Syndication, May 27, 2024), reproduced in full.]

Here is a good example of what Brainchip is missing: good PR. I thought the last question at the AGM was the best question there, as it alluded to our poor PR. Brainchip only has about 13K followers when we should have hundreds of thousands, given we are supposed to be a world leader in our field. Also, having a small, amateur-looking stand at trade shows etc. I believe our management need to urgently address this situation and get a PR agency on the job so we get onto business channels and business journals etc. The CEO having a yearly interview with a small Australian stock analyst company is not going to cut it. He should be having interviews with the likes of the Bloomberg channel and others like that... more impressive, professional trade show stands and not a table with a creased tabletop and a few PC-like items demoing. Promoting our leading technology to the masses and how we can help improve the world and humanity. Maybe management have it in mind and are waiting till we have a couple more contracts. If we want to be a big professional company then we need to think and act like one... I sure hope we will very soon.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

wilzy123

Founding Member
7für7 said:
Mate….I know people whose grandparents also run a business in which they specialized. That doesn't mean they ( the grandchildren) automatically know everything. The media landscape is constantly changing. Camera work, which is meant to evoke a certain drama, is also evolving. Imagine if films were still shot the same way they were in your grandparents' time… I come from this field as well, so relax… my great-great-great-grandparents were already doing theater, and my ancestors invented drama and comedy…so!? 🤷🏻‍♂️🤦🏻‍♂️

Reply:

Respect is earned, not given.

I have taken a few days off as it’s in my blood to get very angry at disrespectful people like yourself 7fur7.

As you started all this... I’ll finish it with this note:

I have spent my whole life in TV and film production, being 63 years old now with my first job at 16 (you do the maths). My comment of "it’s in my blood" you took out of context, and you proceeded to be a smart arse and try to insult me with your above rambling comments, accusing me of being the grandkid running off his grandparents and not knowing what I am talking about.

You don’t have a clue but chose to take the path of insult.
I have attached the standard video presentation formats professional producers use, and the reasons why they are the standard requirements to engage with the audience in a professional manner. Your cut-and-paste of documentaries / non-fiction films, which you quickly put up without any reason other than IMO to be a smart arse, and which you think is correct, is just plain wrong and irrelevant for this format.

By the way, I have sent this on to the stockbroker video team so they take more care next time around when Sean is trying to find which camera to look at; it was very unprofessional, and as a shareholder it is relevant to me that it gets attention.

My apologies to everyone for this continued discussion; however, this is relevant and needs to be cleared up.

I won’t accept disrespect from someone who thinks he knows everything and tries to belittle people to make himself look good, IMO.

I have said my piece now, and that’s all I have to say moving forward.

See below.

1000023411.png
 
  • Haha
Reactions: 4 users

toasty

Regular
Dugnal said: Here is a good example of what Brainchip is missing: good PR. […]
I hear you. It almost seems like management are deliberately "hiding their light under a bushel". Perhaps at the behest of a tech major whilst they get a head start? Or maybe the incoming CMO will address the issue and get us the media and public attention we deserve, starting with Australia. I get that the majors and most of the potential customers are overseas but BRN is an Australian public company that should be WAY better represented to the investment community in its "home". I have long held the view that management regard the fact that we are an Oz company as an annoyance and would much rather we were listed in the US. I too would rather we were listed on, say, the NASDAQ but with the numbers the way they are that's not going to happen for a long time yet.
 
  • Like
  • Fire
Reactions: 7 users

Guzzi62

Regular
Dugnal said: Here is a good example of what Brainchip is missing: good PR. […]
Sadly, analysts and big news outlets have no interest whatsoever in OTC (penny) stocks in the US, nada.

We have to get on NASDAQ to gain interest from everyone in the US: big institutional investors (who are generally not allowed to invest in OTC stocks), analysts, and large news outlets.

That's where the real big money is.

So at around US$4-5 a share, things could get interesting very quickly.
 
  • Like
  • Fire
Reactions: 5 users

7für7

Top 20
Dugnal said: Here is a good example of what Brainchip is missing: good PR. […]
We will see what this Chinese chip can do…
I find Sean's response to this question regarding their PR measures very good. Personally, I'm also glad we don't have a CEO who just makes a lot of noise without setting the sails. Look at some who boast too much and then go under... like MULLN and others. Big mouth, nothing behind it. Those who hold their shares long-term don't need short-term hyped news... you can see what happened after the Benz announcement... time will tell.
 
  • Like
  • Love
Reactions: 3 users

7für7

Top 20
Guzzi62 said: Sadly, analysts and big news outlets have no interest whatsoever in OTC (penny) stocks in the US… we have to get on NASDAQ to gain interest from big institutional investors, analysts, and large news outlets. […]
Without solid license signings and big revenue over the long term, Nasdaq will smash us like nothing… be careful what you wish for.
 
  • Like
Reactions: 5 users

Guzzi62

Regular
Without solid license signings and big revenue over the long term, Nasdaq will smash us like nothing… be careful what you wish for.
I didn't say anything about up-listing with or without income, did I?

That's just common sense; of course you don't go there before you are ready.
 
  • Like
  • Haha
Reactions: 7 users

7für7

Top 20
I didn't say anything about up-listing with or without income, did I?

That's just common sense; of course you don't go there before you are ready.
It was just meant generally, more like an addendum.
Because some people don’t know what it means to be on the Nasdaq. Even if you’re qualified and a solid company, when you get in they will try to bleed you dry. Some companies gave up on the stock market and went back to being normal LLCs because they got tired of this game… nothing against your post.
 
  • Like
Reactions: 2 users

toasty

Regular
It was just meant generally, more like an addendum.
Because some people don’t know what it means to be on the Nasdaq. Even if you’re qualified and a solid company, when you get in they will try to bleed you dry. Some companies gave up on the stock market and went back to being normal LLCs because they got tired of this game… nothing against your post.
Wow, are you an expert on this as well!! :rolleyes:
 
  • Haha
  • Like
Reactions: 4 users

IloveLamp

Top 20
1000016024.gif
 
  • Haha
  • Like
Reactions: 18 users

7für7

Top 20
  • Haha
Reactions: 8 users

toasty

Regular
  • Like
Reactions: 1 users

7für7

Top 20
Just the sort of infantile response I was expecting :D
Yeah, thank god your previous response to my post was full of academic analysis and fundamental evidence.
 
  • Haha
  • Like
Reactions: 4 users

mrgds

Regular
Another interesting video which, from the 10:00 timestamp, goes into the next AI iteration: SNNs

 
  • Like
  • Love
Reactions: 10 users