Probably posted already but nice to hear.
https://www.linkedin.com/posts/chet...gecomputing-activity-7195083124312088577-SO7P
There is a Western Australian company which has developed a low-cost system for converting natural gas to hydrogen, using iron ore as a catalyst while capturing the carbon as graphite, which can be used, for example, in the manufacture of steel.
They are currently testing a pilot plant in WA.
With all the zero carbon objectives, I think that gas producers would embrace this tech.
What was that company's name? I came across them at an investor road show in Melbourne that I attended only to meet Peter, as I was already a shareholder.

Hazer Group HZR??? I have been looking into them for about 12 months but have been buying more BRN while the SP is where it is. Will probably make an initial investment in HZR soon. Tech looks interesting, just "watching the financials"... mmm
Edge AI-Driven Vision Detection Solution Introduced at 500 Convenience Store Locations to Measure Advertising Effectiveness | News Releases | Sony Semiconductor Solutions Group
Sony Semiconductor Solutions Group develops its device business, which includes micro displays, LSIs, and semiconductor lasers, with a focus on image sensors. — www.sony-semicon.com
April 24, 2024
Edge AI-Driven Vision Detection Solution Introduced at 500 Convenience Store Locations to Measure Advertising Effectiveness
Sony Semiconductor Solutions Corporation
Atsugi, Japan, April 24, 2024 —
Today, Sony Semiconductor Solutions Corporation (SSS) announced that it has introduced and begun operating an edge AI-driven vision detection solution at 500 convenience store locations in Japan to improve the benefits of in-store advertising.
Edge AI technology automatically detects the number of digital signage viewers and how long they viewed it.
SSS has been providing 7-Eleven and other retail outlets in Japan with vision-based technology to improve the implementation of digital signage systems and in-store advertising at their brick-and-mortar locations as part of their retail media*1 strategy. To help ensure that effective content is shown for brands and stores, this solution gives partners sophisticated tools to evaluate the effectiveness of advertising on their customers.
As part of this effort, SSS has recently introduced a solution that uses edge devices with on-sensor AI processing to automatically detect when customers see digital signage, count how many people paused to view it, and measure the percentage of viewers. The AI capabilities of the sensor collect data points such as the number of shoppers who enter the detection area, whether they saw the signage, the number who stopped to view the signage, and how long they watched. The system does not output image data capable of identifying individuals, making it possible to provide insightful measurements while helping to preserve privacy.
Click here for an overview video of the solution and interview with 7-Eleven Japan.
Solution features:
-IMX500 intelligent vision sensor delivers optimal data collection, while helping to preserve privacy.
SSS’s IMX500 intelligent vision sensor with AI-processing capabilities automatically detects the number of customers who enter the detection area, the number who stopped to view the signage, and how long they viewed it. The acquired metadata (semantic information) is then sent to a back-end system where it’s combined with content streaming information and purchasing data to conduct a sophisticated analysis of advertising effectiveness. Because the system does not output image data that could be used to identify individuals, it helps to preserve customer privacy.
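To make the kind of analysis described above concrete, here is a minimal Python sketch of how per-interval metadata of this sort could be aggregated into advertising-effectiveness figures. The record fields and numbers are invented for illustration; they are not the actual IMX500/AITRIOS output format.

```python
# Illustrative only: aggregates hypothetical per-interval metadata records of the
# kind described above (counts and dwell times, no images). Field names are
# assumptions, not the real IMX500/AITRIOS output schema.
from statistics import mean

records = [
    {"entered": 42, "stopped": 9,
     "dwell_seconds": [3.2, 5.1, 2.0, 7.4, 4.8, 3.3, 6.0, 2.7, 4.1]},
    {"entered": 55, "stopped": 14,
     "dwell_seconds": [2.5, 3.9, 6.2, 4.4, 5.0, 3.1, 2.8, 7.7, 4.6, 3.0, 5.5, 2.2, 6.8, 4.9]},
]

total_entered = sum(r["entered"] for r in records)
total_stopped = sum(r["stopped"] for r in records)
all_dwell = [d for r in records for d in r["dwell_seconds"]]

print(f"View rate: {total_stopped / total_entered:.1%}")   # share of passers-by who stopped
print(f"Average dwell time: {mean(all_dwell):.1f} s")       # how long viewers watched
```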
-Edge devices equipped with the IMX500 save space in store.
The IMX500 is made using SSS’s proprietary structure with the pixel chip and logic chip stacked, enabling the entire process, from imaging to AI inference, to be done on a single sensor. Compact, IMX500-equipped edge devices (approx. 55 x 40 x 35 mm) are unobtrusive in shops, and compared to other solutions that require an AI box or other additional devices for AI inference, can be installed more flexibly in convenience stores and shops with limited space.
-The AITRIOS™ platform contributes to operational stability and system expandability.
Only light metadata is output from IMX500 edge devices, minimizing the amount of data transmitted to the cloud. This helps lessen network load, even when adding more devices in multiple stores, compared to solutions that send full image data to the cloud. This curtails communication, cloud storage, and computing costs.
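As a rough, hypothetical illustration of why metadata-only output keeps the network load low, compare the size of one such per-interval summary with a single uncompressed video frame (the payload fields below are invented, not the real AITRIOS schema):

```python
# Hypothetical illustration of metadata-only uplink vs. raw image data.
# The payload fields are invented for illustration purposes.
import json

payload = {
    "device_id": "store-0042-signage-01",
    "interval_start": "2024-04-24T10:00:00+09:00",
    "interval_seconds": 300,
    "entered_detection_area": 42,
    "stopped_to_view": 9,
    "mean_dwell_seconds": 4.3,
}

metadata_bytes = len(json.dumps(payload).encode("utf-8"))
raw_frame_bytes = 1920 * 1080 * 3          # one uncompressed full-HD RGB frame
print(f"Metadata per interval: ~{metadata_bytes} bytes")
print(f"Single raw video frame: ~{raw_frame_bytes / 1e6:.1f} MB")
```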
The IMX500 also handles AI computing, eliminating the need for other devices such as an AI box, resulting in a simple device configuration, streamlining device maintenance and reducing costs of installation. AITRIOS*2, SSS’s edge AI sensing platform, which is used to build and operate the in-store solution, delivers a complete service without the need for third-party tools, enabling simple, sustainable operations. This solution was developed with Console Enterprise Edition, one of the services offered by AITRIOS, and is installed on the partner’s Microsoft Azure cloud infrastructure. It not only connects easily and compatibly with their existing systems, but also offers system customizability and security benefits, since there is no need to output various data outside the company.
*1 A new form of advertising media that provides advertising space for retailers and e-commerce sites using their own platforms
*2 AITRIOS is an AI sensing platform for streamlined device management, AI development, and operation. It offers the development environment, tools, features, etc., which are necessary for deploying AI-driven solutions, and it contributes to shorter roll-out times when launching operations, while ensuring privacy, reducing introductory cost, and minimizing complications. For more information on AITRIOS, visit: https://www.aitrios.sony-semicon.com/en
About Sony Semiconductor Solutions Corporation
Sony Semiconductor Solutions Corporation is a wholly owned subsidiary of Sony Group Corporation and the global leader in image sensors. It operates in the semiconductor business, which includes image sensors and other products. The company strives to provide advanced imaging technologies that bring greater convenience and fun. In addition, it works to develop and bring to market new kinds of sensing technologies with the aim of offering various solutions that will take the visual and recognition capabilities of both humans and machines to greater heights. For more information, please visit https://www.sony-semicon.com/en/index.html.
AITRIOS and AITRIOS logos are the registered trademarks or trademarks of Sony Group Corporation or its affiliated companies.
Microsoft and Azure are registered trademarks of Microsoft Corporation in the United States and other countries.
All other company and product names herein are trademarks or registered trademarks of their respective owners.
Here is some wild speculation: Could this possibly be a candidate for the mysterious Custom Customer SoC, featured in the recent Investor Roadshow presentation (provided the licensing of Akida IP was done via MegaChips)?
Post in thread 'AITRIOS'
https://thestockexchange.com.au/threads/aitrios.18971/post-31633
View attachment 63835
Visit us at Booth 1947B, where we will be showcasing exciting demos, including our Temporal Event-based Neural Networks (TENNs) and the Raspberry Pi 5 with Face and Edge Learning. Don’t miss the opportunity to connect and learn more about our innovative solutions.
The sold-out Raspberry Pi Akida Dev Kit was based on a Raspberry Pi 4…
View attachment 69849
Raspberry Pi offers an AI Kit with a Hailo AI acceleration module containing an NPU for use with the Raspberry Pi 5.
View attachment 69850
Maybe there has been something similar in the works for our company?
Could the mysterious Akida Pico have anything to do with the Raspberry Pi Pico series of microcontrollers?
Hackster.io just revealed what the Akida Pico is all about:
View attachment 70198
BrainChip Shrinks the Akida, Targets Sub-Milliwatt Edge AI with the Neuromorphic Akida Pico
Second-generation Akida2 neuromorphic computing platform is now available in a battery-friendly form, targeting wearables and always-on AI. — www.hackster.io
BrainChip Shrinks the Akida, Targets Sub-Milliwatt Edge AI with the Neuromorphic Akida Pico
Second-generation Akida2 neuromorphic computing platform is now available in a battery-friendly form, targeting wearables and always-on AI.
Gareth Halfacree
59 minutes ago • Machine Learning & AI / Wearables
Edge artificial intelligence (edge AI) specialist BrainChip has announced a new entry in its Akida range of brain-inspired neuromorphic processors, the Akida Pico — claiming that it's the "lowest power acceleration coprocessor" yet developed, with eyes on the wearable and sensor-integrated markets.
"Like all of our Edge AI enablement platforms, Akida Pico was developed to further push the limits of AI on-chip compute with low latency and low power required of neural applications," claims BrainChip chief executive officer Sean Hehir of the company's latest unveiling. "Whether you have limited AI expertise or are an expert at developing AI models and applications, Akida Pico and the Akida Development Platform provides users with the ability to create, train and test the most power and memory efficient temporal-event based neural networks quicker and more reliably."
BrainChip has announced a new entry in its Akida family of neuromorphic processors, the tiny Akida Pico. (Image: BrainChip)
The Akida Pico is, as the name suggests, based on BrainChip's Akida platform — specifically, the second-generation Akida2. Like its predecessors, it uses neuromorphic processing technology inspired by the human brain to handle selected machine learning and artificial intelligence workloads with a high efficiency — but unlike its predecessors, the Akida Pico has been built to deliver the lowest possible power draw while still offering enough compute performance to be useful.
According to BrainChip, the Akida Pico draws under 1mW under load and uses power island design to offer a "minimal" standby power draw. Chips built around the core are also expected to be extremely small physically, ideal for wearables, thanks to a compact die area and customizable overall footprint through configurable data buffer and model parameter memory specifications. The part, its creators explain, is ideal for always-on AI in battery-powered or high-efficiency systems, where it can be used to wake a more powerful microcontroller or application processor when certain conditions are met.
The Akida Pico is based on the company's second-generation Akida2 platform, but tailored for sub-milliwatt power draw. (Image: BrainChip)
On the software side, the Akida Pico is supported by BrainChip's in-house MetaTF software flow — allowing the compilation and optimization of Temporal-Enabled Neural Networks (TENNs) for execution on the device. MetaTF also supports importation of existing models developed in TensorFlow, Keras, and PyTorch — meaning, BrainChip says, there's no need to learn a whole new framework to use the Akida Pico.
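As a rough sketch of that workflow, the snippet below defines a small Keras model of the kind MetaTF can take as input; the MetaTF quantization and conversion steps themselves are only indicated in comments, since the exact calls (e.g. cnn2snn's convert) should be taken from BrainChip's current documentation rather than from this example.

```python
# A minimal sketch of the import path the article describes: start from a
# standard Keras model, then quantize and convert it with BrainChip's MetaTF
# tooling. Only the Keras part below is concrete; the MetaTF steps are left as
# comments because the exact calls should come from the current MetaTF docs.
import tensorflow as tf

# A small keyword-spotting-style network (shapes and sizes are illustrative only).
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),           # e.g. audio feature frames
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(12, activation="softmax"),     # e.g. 12 keywords
])
keras_model.summary()

# With MetaTF installed, the documented flow is roughly:
#   quantized = <MetaTF quantization step>(keras_model)   # fixed-point weights/activations
#   akida_model = cnn2snn.convert(quantized)              # map to the Akida runtime
#   akida_model.predict(samples)                          # run on Akida hardware/simulator
```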
BrainChip has not yet announced plans to release Akida Pico in hardware, instead concentrating on making it available as Intellectual Property (IP) for others to integrate into their own chip designs; pricing had not been publicly disclosed at the time of writing.
More information is available on the BrainChip website.
View attachment 70199
https://www.embedded.com/brainchips-akida-npu-redefining-ai-processing-with-event-based-architecture
October 1, 2024 / 15:55
BrainChip’s Akida NPU: Redefining AI Processing with Event-Based Architecture
Maurizio Di Paolo Emilio
6 min read
BrainChip has launched the Akida Pico, enabling the development of compact, ultra-low-power, intelligent devices for applications in wearables, healthcare, IoT, defense, and wake-up systems, integrating AI into various sensor-based technologies. According to BrainChip, Akida Pico offers the lowest-power standalone NPU core (less than 1 mW), supports power islands for minimal standby power, and operates within an industry-standard development environment. Its very small logic die area and configurable data buffer and model parameter memory help optimize the overall die size.
AI era
In today's artificial intelligence (AI) era, adding smart features to consumer products usually means cloud services, complicated infrastructure, and high costs. In edge AI, computational power and energy efficiency are often in conflict. Traditional neural processing units (NPUs), designed for deep learning workloads, draw significant power, making them less suited to always-on, ultra-low-power applications such as sensor monitoring, keyword detection, and other extreme-edge AI uses. BrainChip is offering a fresh approach to this challenge.
BrainChip’s solution addresses one of the major challenges in edge AI: how to perform continuous AI processing without draining power. Traditional microcontroller-based AI solutions can manage low-power requirements but often lack the processing capability for complex AI tasks.
Steve Brightfield, CMO at BrainChip
BrainChip launched in 2014, building on Peter Van Der Made's work on neuromorphic computing concepts. Its approach, based on spiking neural networks (SNNs), replicates how the brain manages information and is fundamentally different from traditional convolutional neural networks (CNNs). Rather than performing continuous calculations, BrainChip's SNN-based systems compute only when triggered by events, which improves power efficiency.
In an interview with Embedded, Steve Brightfield, CMO at BrainChip, talked about how this new method will change the game for ultra-low-power AI apps, showing big steps forward in the field. Brightfield said that this new technology makes it possible for common things like drills, hand tools, and other consumer products to have smart features without costing a lot more. “Today, a battery with a built-in tester can show how healthy it is with a simple color code: green means it’s good, red means it needs to be replaced. Providing a similar indicator, AI in these products can tell you when parts are wearing out before they break. BrainChip’s low-power, low-maintenance AI works in the background without being noticed, so advanced tests can be used by anyone without needing to know a lot about them,” Brightfield said.
Traditional NPUs vs. Event-Based Computing
Brightfield explained that ordinary NPUs, built around multiplier-accumulator arrays, run fixed pipelines that process every input whether or not it contributes to the result. With sparse data, which is common in AI applications because most input values have little impact on the final outcome, this often leads to wasted calculations. BrainChip's event-based computing architecture saves computational resources and electricity by activating calculations only when relevant data is present.
“Most NPUs keep calculating all data values, even for sparse data,” Brightfield remarked. “We schedule computations dynamically using our event-based architecture, so cutting out unnecessary processing.”
The Influence of Sparsity
BrainChip's main advantage comes from exploiting sparsity in both data and neural network weights. Traditional NPU architectures can take advantage of weight sparsity at compile time, benefiting from model weight pruning, but they cannot dynamically schedule around data sparsity: they must process all of the inputs.
By processing data only when needed, BrainChip's SNN technology can drastically lower power usage, with the savings depending on the degree of sparsity in the data. In audio-based edge applications such as gunshot recognition or keyword detection, for instance, BrainChip's Akida NPU could execute only when the sensor detects a significant signal, conserving energy when there is no useful data to process.
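The following plain-NumPy sketch illustrates the sparsity argument (it is not BrainChip's implementation): an event-driven scheme only spends multiply-accumulate work on non-zero activations, so the cost scales with how much of the input actually carries signal.

```python
# Illustration of event-driven vs. fixed-pipeline compute under data sparsity.
# Not BrainChip's implementation; just the arithmetic behind the argument above.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random(1024)
inputs[rng.random(1024) < 0.9] = 0.0           # ~90% of activations carry no signal
weights = rng.random((1024, 128))

dense_macs = inputs.size * weights.shape[1]     # a fixed pipeline touches every input

events = np.flatnonzero(inputs)                 # event-based: only non-zero inputs fire
output = np.zeros(weights.shape[1])
for i in events:                                # accumulate contributions per event
    output += inputs[i] * weights[i]
event_macs = events.size * weights.shape[1]

print(f"Dense MACs: {dense_macs}, event-driven MACs: {event_macs} "
      f"({event_macs / dense_macs:.0%} of the dense work)")
```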
Akida Pico block diagram (Source: BrainChip)
Introducing the Akida Pico: Ultra-Low Power NPU for Extreme Edge AI
BrainChip's Akida Pico is designed around a spiking neural network (SNN) architecture and event-based computing. Unlike conventional AI models that demand constant processing, Akida runs only in response to particular events, making it ideal for always-on uses such as anomaly detection or keyword identification, where power efficiency is vital. The new part is built on a configuration of the Akida2 event-based computing platform engine and can run at under a single milliwatt, a power level suitable for battery-powered operation.
The Akida Pico is well suited to wearables, IoT devices, and industrial sensors, workloads that call for continual awareness without draining the battery. Operating in the microwatt-to-milliwatt power range, it is among the most efficient NPUs available and can surpass even microcontrollers in several AI applications.
For some always-on artificial intelligence uses, “the Akida Pico can be lower power than microcontrollers,” Brightfield said. “Every microamp counts in extreme battery-powered use cases, depending on how long it is intended to perform.”
The Akida Pico can stay always-on without significantly affecting battery life, whereas microcontroller-based AI systems often require duty cycling, turning the CPU off and on in bursts to save power. This advantage is vital for edge AI devices that must run constantly on a tight power budget.
BrainChip's MetaTF software flow allows developers to compile and optimize Temporal-Enabled Neural Networks (TENNs) for the Akida Pico. Supporting models created with TensorFlow/Keras and PyTorch, MetaTF eliminates the need to learn a new machine learning framework, facilitating rapid AI application development for the edge.
Akida Pico die area versus process node (mm²) (Source: BrainChip)
Standalone Operation Without a Microcontroller
Another notable feature of the Akida Pico is that it can operate standalone, without a host microcontroller to manage its tasks. Where a microcontroller would usually start, regulate, and stop operations, the Akida Pico includes an integrated micro-sequencer that manages the full neural network execution on its own. This architecture reduces total system complexity, latency, and power consumption.
For applications that do need a microcontroller, the Akida Pico serves as a useful co-processor for offloading AI tasks and lowering power requirements. From battery-powered wearables to industrial monitoring tools, this flexibility suits a wide range of edge AI applications.
Targeting Key Edge AI Applications
The Akida Pico's ultra-low-power characteristics benefit medical devices that need continuous monitoring, such as glucose sensors or wearable heart-rate monitors.
Likewise, speech recognition tasks, such as voice-activated assistants or security systems listening for keywords, are good candidates for this technology. Edge AI's toughest obstacle is balancing compute requirements against power consumption; in markets where battery life is crucial, the Akida Pico can scale performance while staying within limited power budgets.
According to Brightfield, one of the most notable uses of BrainChip's AI is anomaly detection for motors and other mechanical systems. Traditional methods monitor and diagnose equipment health using cloud-based infrastructure and edge servers, which is both costly and power-intensive. BrainChip flips this concept on its head by embedding AI directly within the motor or device.
BrainChip's ultra-efficient Akida neural processing unit (NPU), for example, can continually examine vibration data from a motor. If an abnormality such as an odd vibration is detected, the system raises a basic alert, akin to turning on an LED. Rather than depending on remote servers or sophisticated diagnostic systems, this "dumb and simple" option warns maintenance staff that the motor needs attention, with no internet access or detailed examination required.
"In the field, a maintenance technician could only glance at the motor," Brightfield said. "They know it's time to replace the motor before it fails if they spot a red light." This method eliminates the need for costly software upgrades or cloud access, benefiting equipment in remote areas where connectivity may be restricted.
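As a purely illustrative stand-in for that red-light style of alert (not BrainChip's actual pipeline), a vibration monitor of this kind can be as simple as comparing the RMS energy of the latest window against a healthy baseline:

```python
# Illustrative anomaly detector for motor vibration, of the "dumb and simple"
# kind described above: flag the motor when vibration energy drifts well above
# a known-healthy baseline. Not BrainChip's actual method; just a sketch.
import numpy as np

def rms(window: np.ndarray) -> float:
    return float(np.sqrt(np.mean(window ** 2)))

rng = np.random.default_rng(1)
baseline = rms(rng.normal(0.0, 1.0, 4096))      # learned from known-healthy vibration

def check_motor(window: np.ndarray, threshold: float = 2.0) -> bool:
    """Return True (light the red LED) if vibration energy exceeds the baseline."""
    return rms(window) > threshold * baseline

healthy = rng.normal(0.0, 1.0, 4096)
worn_bearing = rng.normal(0.0, 1.0, 4096) + 3.0 * np.sin(np.linspace(0, 200 * np.pi, 4096))

print("healthy motor alert:", check_motor(healthy))        # expected: False
print("worn-bearing alert:", check_motor(worn_bearing))    # expected: True
```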
For keyword detection, BrainChip has built AI directly into the device. According to Brightfield, the Akida Pico delivers impressive results at just under 2 milliwatts of power, with 4-5% better accuracy than historical methods, working from raw audio data with modern algorithms. This achievement is enabled by Temporal Event-Based Neural Networks (TENNs), a novel architecture built from state space models that permits high-quality performance without the need for power-hungry microcontrollers.
As demand for edge AI grows, BrainChip’s advancements in neuromorphic computing and event-based processing are poised to contribute significantly to the development of ultra-efficient, always-on AI systems, providing flexible solutions for various applications.
What happened with the last reveal they had or was I dreaming that?
You don't have to post this. People already did in here.

Cheers skipper