BRN Discussion Ongoing

Diogenese

Top 20
@Steve10 when Qualcomm revealed their Snapdragon with Prophesee at CES, the blur-free photography contribution was not from Brainchip even though many of us thought it was. Therefore it must have been SynSense. Round one to SynSense, although there are many rounds to play out over the coming years.
Hi Dhm,

Synsense may have beaten us to the punch, but we have the knockout punch.

SynSense boasts a response time of 5ms (200 fps).
Prophesee is capable of 10k fps equivalent.
nViso achieved >1000 fps with Akida.
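(A quick sanity check on those figures: at one frame per processing interval, latency ≈ 1/throughput, so 5 ms corresponds to 200 fps and 1 ms to 1,000 fps; >1,000 fps on Akida therefore implies sub-millisecond response.)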

DYNAP-CNN — the World’s First 1M Neuron, Event-Driven Neuromorphic AI Processor for Vision Processing | SynSense


Computation in DYNAP-CNN is triggered directly by changes in the visual scene, without using a high-speed clock. Moving objects give rise to sequences of events, which are processed immediately by the processor. Since there is no notion of frames, DYNAP-CNN’s continuous computation enables ultra-low latency of below 5 ms. This represents at least a 10x improvement over the current deep learning solutions available on the market for real-time vision processing.


SynSense and Prophesee develop one-chip event-based smart sensing solution


Oct 15, 2021 – SynSense and Prophesee, two leading neuromorphic technology companies, today announced a partnership that will see the two companies leverage their respective expertise in sensing and processing to develop ultra-low-power solutions for implementing intelligence on the edge for event-based vision applications.

… The sensors facilitate machine vision by recording changes in the scene rather than recording the entire scene at regular intervals. Specific advantages over frame-based approaches include better low light response (<1lux) and dynamic range (>120dB), reduced data generation (10x-1000x less than conventional approaches) leading to lower transfer/processing requirements, and higher temporal resolution (microsecond time resolution, i.e. >10k images per second time resolution equivalent).


BrainChip Partners with Prophesee Optimizing Computer Vision AI Performance and Efficiency - BrainChip


Laguna Hills, Calif. – June 19, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI IP, and Prophesee, the inventor of the world’s most advanced neuromorphic vision systems, today announced a technology partnership that delivers next-generation platforms for OEMs looking to integrate event-based vision systems with high levels of AI performance coupled with ultra-low power technologies.

… “By combining our Metavision solution with Akida-based IP, we are better able to deliver a complete high-performance and ultra-low power solution to OEMs looking to leverage edge-based visual technologies as part of their product offerings,” said Luca Verre, CEO and co-founder of Prophesee.


NVISO announces it has reached a key interoperability milestone with BrainChip Akida Neuromorphic IP | NVISO


NVISO announces it has reached a key interoperability milestone with BrainChip Akida Neuromorphic IP

18th July, 2022

NVISO has successfully completed full interoperability of four of its AI Apps from its Human Behavioural AI catalogue to the BrainChip Akida neuromorphic processor achieving a blazingly fast average model throughput of more than 1000 FPS and average model storage of less than 140 KB.

The relevance of nViso is that it uses a frame camera, yet it has a latency of less than 1 ms:

NVISO’s latest Neuro SDK to be demonstrated running on the BrainChip Akida fully digital neuromorphic processor platform at CES 2023 | NVISO

https://www.nviso.ai/en/news/nviso-...l-neuromorphic-processor-platform-at-ces-2023

2nd January, 2023

NVISO’s latest Neuro SDK to be demonstrated running on the BrainChip Akida fully digital neuromorphic processor platform at CES 2023



Benchmark performance data for the AI Apps running on the BrainChip neuromorphic AI processor
 
Reactions: 52 users

HopalongPetrovski

I'm Spartacus!
I received a Brainchip March 2023 Newsletter today. I tried to read it from the point of view of a potential manufacturer.

My opinion is that actual product releases are being held back, partly because the tech is hard to understand and a good example of a product is not out there for manufacturers to see.

I understand why we want to go down the "I.P. license" path, but what if we design a "killer" product and get someone to make it for us? Then we release and sell it for the world to see.

Who better than ourselves to do it to get the ball rolling? Sean H. could make clear to his contacts the reason why we have taken this step, and that it is a once-only thing, i.e. we are not going into competition.
What’s the “killer product/application“?

I was hoping it was going to be the Nanose covid detector and still could be, as I doubt that ship has sailed completely, but it is off the front pages for the moment.

It needs to address some common need or concern and be implemented in something we are already used to interacting with like our phones.

Maybe some kind of a love meter that can match people according to some analog of their personality and chemistry and initiate a connection?
The two Akida AIs, which already have intimate knowledge of the personas they represent, assess/filter each other’s potential partners and establish engagement leading to potential contact between the parties?

I dunno.

Who’s got an idea for a killer product/application for Akida that could gain us a profile and potential user base similar to that achieved recently by ChatGPT?
 
Reactions: 8 users

Xray1

Regular
I have to find out and understand that first. I post this as a lover of TSE and as someone who takes the rules seriously. I have a hunch, but I have to look first. So I'm posting this for transparency and because we all need to learn:

I was not warned, and I am one of those who often referred to our rules. I still cannot figure it out @zeeb0t

___
I know this from HC, yeeha or yes.
___
I am a thoroughly unsophisticated citizen. Dreddb0t must realize that. I have pointed out products other than Akida in a completely different thread, but those posts were not moderated, and I even flagged them for moderation myself. So which post of mine is not good for our community?

They will also have to go through the same R&D process as we have been through once it becomes available (maybe another year+++). And selling/convincing the companies that the product is for them!? And as we have found out, it takes years from getting an interested party on board to them testing it, incorporating it into a product and signing up for IP!
IMO we will have taken up a huge chunk of the market by the time any other neuromorphic chip hits the AI space! And as Dozz says, who else has on-chip learning???

GO BRN
I think the other most important factor to consider is how these future competitors can get around our patents.
 
Reactions: 11 users

Deleted member 2799

Guest

buena suerte :-)

BOB Bank of Brainchip
What’s the “killer product/application“?

I was hoping it was going to be the Nanose covid detector and still could be, as I doubt that ship has sailed completely, but it is off the front pages for the moment.

It needs to address some common need or concern and be implemented in something we are already used to interacting with like our phones.

Maybe some kind of a love meter that can match people according to some analog of their personality and chemistry and initiate a connection?
The two Akida AIs, which already have intimate knowledge of the personas they represent, assess/filter each other’s potential partners and establish engagement leading to potential contact between the parties?

I dunno.

Who’s got an idea for a killer product/application for Akida that could gain us a profile and potential user base similar to that achieved recently by ChatGPT?
A 'BoT' algorithm reversal system ;):love:;)
 
Reactions: 5 users

TECH

Regular
Well, that was a nice surprise: of the 14 companies FF posted yesterday, the first one on my way to doubling in 12 months or so has already been locked in, 13 to go!! :ROFLMAO:

Check the share price and link it with the continuous flow of fantastic growth-bearing announcements; the perfect storm is surely brewing. "Those who hesitate WILL be lost," in my view.

Someone may have already mentioned these compelling comments from this morning's announcement, but I'm going to repeat them.

"constrained platforms (such as spacecraft and robotics) for commercial and government markets"

"Intellisense can deliver even more advanced, yet energy efficient, cognitive capabilities to its RF system solutions"

“By integrating BrainChip’s Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability"

"By utilizing this cutting-edge technology, Intellisense will be able to deliver cognitive radio solutions that are faster, more efficient and more reliable than ever before"

"we look forward to continue partnering with Intellisense to deliver cutting-edge embedded processing with AI on-chip to their customers"

Our market reach has just further expanded as our network of partners continues to grow.

"their customers automatically, indirectly, become ours"....love Brainchip xxx

Tech (Perth) 😉(y)
 
Reactions: 57 users

buena suerte :-)

BOB Bank of Brainchip
I think the other most important factor to consider is on how these future competitors can get around our patents.
Well, I think the answer is 'they can't'. I'm sure any application that comes anywhere close to our specifications/patents will get thrown out!
Also, Peter van der Made has designed in very sophisticated encryption, which he believes will make it impossible to reverse engineer AKIDA, on top of which he and Anil Mankar have trade secrets covering how to put together the neural fabric.
 
Reactions: 29 users

Tothemoon24

Top 20
Daniel Situnayake

Head of ML, Edge Impulse


nn to cpp: What you need to know about porting deep learning models to the edge​

Mar 21, 2023
It’s been incredibly exciting to see the big surge of interest in AI at the edge—the idea that we can run sophisticated AI algorithms on embedded devices—over the past couple of weeks. Folks like @ggerganov, who ported Whisper and LLaMA to C++, @nouamanetazi, who ported BLOOM, and @antimatter15, who ported Stanford Alpaca have pulled back the curtain and shown the community that deep neural networks are just code, and they will run on any device that can fit them.
I’ve been building on-device AI for the past four years—first at Google, where we launched TensorFlow Lite for Microcontrollers, and now at Edge Impulse, where I’m Head of Machine Learning—and I’ve written a couple of books along the way. I am thrilled to see so much interest and enthusiasm from a ton of people who are new to the field and have wonderful ideas about what they could build.
Embedded ML is a big field to play in. It makes use of the entire history of computer science and electrical engineering, from semiconductor physics to distributed computing, laid out end to end. If you’re just starting out, it can be a bit overwhelming, and some things may come as a surprise. Here are my top tips for newcomers:

All targets are different. Play to their strengths.​

Edge devices span a huge range of capabilities, from GPU-powered workhorses to ultra-efficient microcontrollers. Every unique system (which we call a target) represents a different set of trade-offs: RAM and ROM availability, clock speed, and processor features designed to speed up deep learning inference, along with peripherals and connectivity features for getting data in and out. Mobile phones and Raspberry Pi-style computers are at the high end of this range; microcontrollers are the mid- to low end. There are even purpose-built deep learning accelerators, including neuromorphic chips—inspired by human neurons’ spiking behavior—designed for low latency and energy efficiency.
There are billions of microcontrollers (aka MCUs) manufactured every year; if you can run a model on an MCU, you can run it anywhere. In theory, the same C++ code should run on any device—but in practice, every line of processor has custom instructions that you’ll need to make use of in order to perform computation fast. There are orders-of-magnitude performance penalties for running naive, unoptimized code. To make matters worse, optimization for different deep learning operations varies from target to target, and not all operations are equally supported. A simple change in convolutional stride or filter size may result in a huge performance difference.
The matrix of targets versus models is extraordinarily vast, and traditional tools for optimization are fragmented and difficult to use. Every vendor has their own toolchain, so moving from one target to another is a challenge. Fortunately, you don’t need to hand-craft C++ for every model and target. There are high-level tools available (like Edge Impulse) that will take an arbitrary model and generate optimized C++ or bytecode designed for a specific target. They’ll let you focus on your application and not worry so much about the implementation details. And you’re not always stuck with fixed architectures: you can design a model specifically to run well on a given target.

Compression is lossy. Quantify that loss.​

It’s common to compress models so that they fit in the constrained memory of smaller devices and run faster (using integer math) on their limited processors. Quantization is the most important form of compression for edge AI. Other approaches, like pruning, are still waiting for adequate hardware support.
Quantization involves reducing the precision of the model’s weights, for example from 32 to 8 bits. It can routinely get you anything from a 2x to an 8x reduction in model size. Since there’s no such thing as a free lunch, this shrinkage will result in reduced model performance. Quantization results in forgetting, as explained in this fantastic paper by my friend @sarahookr (https://arxiv.org/abs/1911.05248). As the model loses precision, it loses performance on the “long tail” of samples in its dataset, especially those that are infrequent or underrepresented.
Forgetting can lead to serious problems, amplifying any bias in the dataset, so it’s absolutely critical to evaluate a quantized model against the same criteria as its full-sized progenitor in order to understand what was lost. For example, a quantized translation model may lose its abilities unevenly: it might “forget” languages or words that occur less frequently.
Typically, you can get a roughly 4x reduction in size (from 32 to 8 bits) with potentially minimal performance impact (always evaluate), and without doing any additional training. If you quantize deeper than 8 bits, it’s generally necessary to do so during training, so you’ll need access to the model’s original dataset.
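For a concrete picture of the 8-bit case, here is a minimal sketch (mine, not from Dan's post) of post-training full-integer quantization with the TensorFlow Lite converter; the SavedModel path and the calibration data are placeholders, and, as he says, you should re-evaluate the quantized model against the same criteria as the original:

```python
import numpy as np
import tensorflow as tf

# Placeholder calibration set: stand-in random data shaped like the
# model's input. In practice, use a few hundred real training samples.
calibration_samples = np.random.rand(200, 96, 96, 1).astype(np.float32)

def representative_data_gen():
    # Yield samples one at a time so the converter can calibrate the
    # dynamic range of each layer's activations.
    for sample in calibration_samples:
        yield [sample[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full-integer quantization (int8 weights and activations),
# which is what most MCU inference runtimes expect.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)  # roughly 4x smaller than the float32 model
```

Comparing per-class accuracy of the float and int8 models on the same test set is the quickest way to spot the "long tail" forgetting described above.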
A fun part of edge AI product design is figuring out how to make clever use of models that have been partially compromised by their design constraints. Model output is just one contextual clue that you can use to understand a given situation. Even an unreliable model can contribute to an overall system that feels like magic.

Devices have sensors. Learn how to use them.​

While it’s super exciting to see today’s large language models ported to embedded devices, they’re only scratching the surface of what’s possible. Edge devices can be equipped with sensors—everything from cameras to radar—that can give them contextual information about the world around them. Combined with deep learning, sensor data gives devices incredible insight into everything from industrial processes to the inner state of a human body.
Today’s large, multimodal models are built using web-scraped data, so they’re biased towards text and vision. The sensors available to an embedded device go far beyond that—you can capture motion, audio, any part of the EM spectrum, gases and other chemicals, and human biosignals, including EEG data representing brain activity! I’m most excited to see the community make use of this additional data to train models that have far more insight than anything possible on the web.
Raw sensor data is highly dimensional and noisy. Digital signal processing algorithms help us sift the signal from the noise. DSP is an incredibly important part of embedded engineering, and many edge processors have on-board acceleration for DSP. As an ML engineer, learning basic DSP gives you superpowers for handling high frequency time series data in your models.
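As a small illustration of that kind of DSP front end (a sketch with a synthetic signal and an assumed 1 kHz sample rate), here is how SciPy can turn a raw accelerometer trace into log-spectrogram features for a model:

```python
import numpy as np
from scipy import signal

fs = 1000  # assumed accelerometer sample rate, in Hz
t = np.arange(0, 2.0, 1 / fs)
# Synthetic stand-in for one accelerometer axis: a 50 Hz vibration
# (roughly what a worn motor bearing might produce) plus broadband noise.
accel = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

# Short-time Fourier transform: separates the periodic signal from the
# noise far more cleanly than feeding raw samples to a model.
freqs, times, Sxx = signal.spectrogram(accel, fs=fs, nperseg=256, noverlap=128)

# Log-scale the power so quieter harmonics aren't drowned out; this
# (frequency x time) array is the feature map the model would consume.
features = np.log1p(Sxx)
print(features.shape)  # (frequency bins, time frames)
```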

We can’t rely on pre-trained models.​

A lot of ML/AI discourse revolves around pre-trained models, like LLaMA or ResNet, which are passed around as artifacts and treated like black boxes. This approach works fine with data with universal structural patterns, like language or photographs, since any user can provide compatible inputs. It falls apart when the structure of data starts to vary from device to device.
For example, imagine you’ve built an edge AI device with on-board sensors. The model, calibration, and location of these sensors, along with any signal processing, will affect the data they produce. If you capture data with one device, train a model, and then share it with another developer who has a device with different sensors, the data will be different and the model may not work.
Devices are infinitely variable, and as physical objects, they can even change over time. This makes pre-trained models less useful for AI at the edge. Instead, we train custom models for every application. We often use pre-trained models as feature extractors: for example, we might use a pre-trained MobileNet to obtain high-level features from an image sensor, then input those into a custom model—alongside other sensor data—in order to make predictions.
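Here's a sketch of that feature-extractor pattern using Keras' pre-trained MobileNet as the image branch; the input shapes, the 32-dimensional sensor branch and the 3-class head are illustrative assumptions, not anything prescribed by the post:

```python
import tensorflow as tf

# Frozen pre-trained MobileNet: include_top=False drops the ImageNet
# classifier; pooling="avg" yields a single 1024-d feature vector.
backbone = tf.keras.applications.MobileNet(
    input_shape=(128, 128, 3), include_top=False,
    weights="imagenet", pooling="avg")
backbone.trainable = False

image_in = tf.keras.Input(shape=(128, 128, 3), name="camera")
sensor_in = tf.keras.Input(shape=(32,), name="sensor_features")  # e.g. DSP output

# Concatenate the image features with the other sensor data and train
# only a small custom head on the application's own dataset.
x = tf.keras.layers.Concatenate()([backbone(image_in), sensor_in])
x = tf.keras.layers.Dense(64, activation="relu")(x)
out = tf.keras.layers.Dense(3, activation="softmax", name="prediction")(x)

model = tf.keras.Model([image_in, sensor_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```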

Making on-device the norm.​

I’m confident that edge AI will enable a world of ambient computing, where our built environment is imbued with subtle, embodied intelligence that improves our lives in myriad ways—while remaining grounded in physical reality. It’s a refreshing new vision, diametrically different to the narrative of inevitable centralization that has characterized the era of cloud compute.
The challenges, constraints, and opportunities of embedded machine learning make it the most fascinating branch of computer science. I’m incredibly excited to see the field open up to new people with diverse perspectives and bold ideas.
Our goal at Edge Impulse is to make this field accessible to everyone. If you’re an ML engineer, we have tools to help you transform your existing models into optimized C++ for pretty much any target—and use digital signal processing to add sensor data to the mix. If you’re an embedded engineer or domain expert otherwise new to AI, we take the mystery out of the ML parts: you can upload data, train a model, and deploy it as a C++ library without needing a PhD. It’s easy to get started, just take a look at our docs.
It’s been amazing to see the gathering momentum behind porting deep learning models to C++, and it comes at an exciting time for us. Inside the company, I’ve been personally leading an effort to make this kind of work quick, easy, and accessible for every engineer. Watch this space: we’ll soon be making some big announcements that will open up our field even more.
Warmly,
Dan

Daniel Situnayake's blog​

  • My thoughts on embedded machine learning.
 
Reactions: 18 users

chapman89

Founding Member
Renesas' ARM Cortex-M85 AI chip, due to launch in June, requires inventory for the launch. They would have produced a few chips already for Embedded World demos & for select clients such as Plumerai to trial.

The inventory of chips should be in production very soon & be finished by the end of May, or possibly earlier to allow for some batch testing.

Revenue now expected next quarter. If they do a small run of 1M chips x 30c BRN IP royalty = $300k. Could be higher depending on the percentage royalty fee; it could be anywhere between 2% and 15%.
Hi Steve, are you thinking that we are in the M85 already? Be great if we are, and it was a bit of a coincidence that Brainchip and Renesas announced the M85 at a similar time?
 
Reactions: 38 users

Slade

Top 20
Hi Steve, are you thinking that we are in the M85 already? Be great if we are, and it was a bit of a coincidence that Brainchip and Renesas announced the M85 at a similar time?

Hello Hello!​

Arm

Following the launch of the Cortex-M85 AI-capable MCU core last year, Paul Williamson, senior VP and GM of the IoT line of business at Arm, told EE Times that Arm will continue to invest in its Ethos product line of dedicated AI accelerator cores. While the Ethos line is “very active” and “a concentration of our continued investment,” Williamson said, Arm believes that in the world of MCUs, it will be important to have a stable, software targetable framework that complements more machine learning capability in the MCU.

Announced at the show was enhanced integration for Arm virtual hardware (AVH) into the latest version of the Keil MCU development kit. AVH allows developers to quickly perform “does it fit” checks for their algorithms on specific Arm cores with specific memory sizes, helping them decide whether or not dedicated accelerators like Ethos are required.

Arm is also working closely with third-party accelerator IP providers for applications that require more acceleration than the Ethos line can offer, including BrainChip (on Arm’s booth, a demo showed an Arm M85 working with BrainChip Akida IP).

 
Reactions: 80 users
Wouldn't mind Akida being deployed into this market :)



03/14/2023

How Retailers Can Use AI to Increase Sales​

Tech can measure customer attention
By Scott Schlesinger and Scott Siegel




By analyzing customer attention patterns in real time, retailers can quickly adjust their strategies and tactics to better meet customer needs and expectations.

Today’s retail shopping experience has changed drastically in recent years. Visual clutter from store displays, excessive signage, crowded product shelves, loud music, bright lights and competing smells can all distract consumers. Further, smartphones vie for shoppers’ limited attention, enabling consumers to check their emails, browse social media or text while shopping. All of these distractions and more lead to a decrease in customer encounters with products, which can negatively affect a retailer’s bottom line.

Using AI to Capture Customer Attention​

How can today’s retailers mitigate these challenges? Most people are now familiar with artificial intelligence (AI), which promises to solve any problem, but which kind of AI can help increase sales? There are two main types of AI for measuring attention: eye-tracking AI and biologically inspired AI. Both are forms of artificial intelligence, but they differ in their approach to measuring and interpreting data.

[Read more: "Google Uses AI to Tackle Grocers' Top Concerns"]

Eye-tracking AI involves using computer vision and machine-learning algorithms to track the movements of a person’s eyes as they look at different objects or areas of a screen or environment. In contrast, biologically inspired AI, also known as neuromorphic computing, is a type of artificial intelligence modeled after the structure and function of the human brain. Attention-measuring biologically inspired AI provides retailers with a powerful and superior tool to better understand and engage with their customers.

By analyzing consumer attention patterns, retailers, both online and in brick-and-mortar stores, can identify which products and services are most popular, which marketing campaigns are most effective, and which areas of their stores are getting the most foot traffic. This information, in conjunction with a full analytics platform, can be used to optimize store layouts, product placement, supply chains and marketing campaigns to better connect with customers and increase sales and revenue.

Uses of Attention-Measuring AI​

Many global companies are using both “traditional AI” and biologically inspired AI to maximize insights into consumer behavior, optimizing product placement, supply chain efficiency, marketing and new product development. CPG companies PepsiCo, Unilever, Nestle, GSK, and Johnson & Johnson, as well as fashion retailers Nordstrom, H&M and Zara, are among those that recognize the importance of AI to maintain a competitive edge.

Attention-measuring AI can also be used to improve the efficiency and accuracy of inventory management. By analyzing customer attention and product engagement patterns, retailers can identify which products are selling quickly and which aren’t, and adjust their inventory accordingly. This can help retailers avoid overstocking slow-moving products or understocking popular products, leading to improved profitability and customer satisfaction.

Attention-measuring AI can help retailers to optimize their websites to enhance the online shopping experience. Retailers can identify which products are most popular and tailor website content to maximize the likelihood of consumers seeing those products. This can lead to increased website traffic, improved engagement and increased sales. Additionally, the technology can help retailers identify and correct any user experience issues that may be causing customer frustration or online shopping cart abandonment, resulting in a smoother, more productive online shopping experience.

Attention-measuring AI can be used to measure, monitor, and improve customer service in retail. By analyzing customer attention patterns, retailers can identify areas where customers may be experiencing confusion, and adjust their customer service practices accordingly. Retailers can also use the insights provided by attention-measuring AI to develop training programs and leading practices for their customer service representatives, ensuring that they provide the best possible customer experience.

Finally, attention-measuring AI can be used to provide real-time insights and feedback to retailers. By analyzing customer attention patterns in real time, retailers can quickly adjust their strategies and tactics to better meet customer needs and expectations. By leveraging the power of attention-measuring AI, retailers can gain a competitive edge and stay ahead of the curve in an increasingly data-driven marketplace.
 
Reactions: 22 users
What’s the “killer product/application“?

…

Who’s got an idea for a killer product/application for Akida that could gain us a profile and potential user base similar to that achieved recently by ChatGPT?
Oh no Hopalong, if the ultimate match-making love meter is created using Akida, then there would be no need for shows like MAFS. We need the drama from these shows in our lives, haha.
 
Reactions: 8 users

jk6199

Regular
ASX watching BRN for any announcements that may be fluff?

Meanwhile, price severely manipulated?

I hope the ASX hop on their bike and don't check the seat is missing!
 
Reactions: 20 users

HopalongPetrovski

I'm Spartacus!
Oh no Hopalong, if the ultimate match-making love meter is created using Akida, then there would be no need for shows like MAFS. We need the drama from these shows in our lives, haha.
Funnily enough, MAFS was the inspiration for the example/concept I posited. 🤣
Frankly, I've never watched the show (really and truly), and the snippets I was exposed to were something akin to having bamboo splinters shoved up under my fingernails; but just about all the younger folk at my work were obsessed with it, and they all use dating apps to "hook up" (God help them) and find willing participants for their sundry debaucheries and life-partnering attempts.
It is aimed at a constant and deep-seated need in the world which is extant in every culture, socioeconomic group and gender, and can be relevant across all age groups (who can still get it up 🤣)
 
Reactions: 6 users
Wouldn't mind Akida being deployed into this market :)

How Retailers Can Use AI to Increase Sales

…
Rob Telson spoke about this market and basically said that the issue for Brainchip is that power is not yet a concern for retailers. They are all connected to the grid and, at this point, despite some claiming green credentials, they are not chasing ways to reduce electricity consumption, particularly not in relation to computers, so adoption of AKIDA technology is some way off.

Incumbency is going to be an issue where the big players like Woolworths and Coles are concerned, and I suspect it will require legislation mandating energy savings to open these currently locked doors.

As for watching what grabs the attention of male shoppers and tailoring their product offerings accordingly, that could easily lead to some unexpected outcomes.🤣😂

That aside, handheld security devices that can detect stolen product under a customer's jacket would be a market that a company like Lassen Peak, powered by AKIDA, might open up in the retail industry. Even a handheld product that a shelf stacker could carry up and down aisles to detect items moved from their correct position by customers would also be a potential market.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 25 users
Rob Telson spoke about this market and basically said that the issue for Brainchip is that power is not yet a concern for retailers. …

My opinion only DYOR
FF

AKIDA BALLISTA
Thanks.

Explains a bit from Rob's and BRN's perspective for the moment. Maybe in time then.

For me it could (and should) be pitched more around privacy and security maybe?

Data gathering and analytics done on device in store only and converted to merely raw statistical data for use / transmission?

Minimal external data connections?

Wasn't there some issue not long ago with Bunnings or Kmart or someone using facial recognition and storing images etc but the info about it and its use was some token fine print somewhere when a consumer entered a store?
 
Reactions: 7 users

TopCat

Regular
Hi @Steve10, have you come across this before in your TI travels?



TDK has teamed up with Texas Instruments on a compact, battery-powered wireless multi-sensor module that uses edge AI to detect anomalies in machinery and equipment.​


The TDK i3 Micro Module integrates the TI SimpleLink CC2652R7 wireless chip, an ARM Cortex-M4F multiprotocol 2.4-GHz wireless 32-bit microcontroller (MCU) for real-time monitoring. The module also integrates TDK’s IIM-42352 MEMS accelerometer and digital output temperature sensor, edge AI, and mesh network functionality into a single unit, facilitating data aggregation, integration, and processing.


The i3 Micro Module allows users to achieve sensing at almost any desired position, without physical constraints such as wiring, by using a wireless mesh to connect the modules in the network. This helps identify anomalies in machinery and equipment, enabling Condition-based Monitoring (CbM).

The accelerometer features digital-output X-, Y-, and Z-axis sensing with a programmable full-scale range of ±2g, ±4g, ±8g, and ±16g, user-programmable interrupts, an I3C℠ / I²C / SPI slave host interface, a digital-output temperature sensor, an external clock input that supports highly accurate clock input from 31 kHz to 50 kHz, and 20,000g shock tolerance.

Monitoring through real-time visualized empirical equipment data, instead of relying on manpower and scheduled maintenance, contributes to a predictive maintenance system that minimizes production downtime.


“As more electronics and automation are being implemented in factories and buildings, companies are seeking more efficient ways to anticipate and manage equipment needs, avoiding expensive downtime,” said Nick Smith, product marketing manager at Texas Instruments.

“This collaboration between TI and TDK will bring more technology into factories and increase their efficiency through simpler and more powerful sensing and real-time communication. TI’s connectivity technology is backed by over 20 years of experience to help drive innovations such as these modules.”

“TDK’s i3 Micro Module facilitates numerous benefits to the manufacturing floor to enable digital transformation and an enhanced smart factory environment,” stated Christian Hoffman, General Manager, Corporate Marketing Group at TDK. “The module simplifies the deployment and sensor data gathering process giving users the bandwidth to focus on effective condition based monitoring and predictive maintenance initiatives.”

The i3 Micro Module is expected to be available through most global distribution partners by the fourth quarter of 2023.
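To make "programmable full-scale range" concrete: selecting a range is typically a one-register write over I²C, as in the Python sketch below. The bus number, device address, register and bit field here are hypothetical placeholders, not the IIM-42352's actual register map (TDK's datasheet has the real values):

```python
from smbus2 import SMBus

I2C_BUS = 1              # hypothetical Linux I2C bus number
ACCEL_ADDR = 0x68        # hypothetical 7-bit device address
REG_ACCEL_CONFIG = 0x50  # hypothetical full-scale-range config register
FS_4G_BITS = 0b01 << 5   # hypothetical encoding of the ±4g setting

with SMBus(I2C_BUS) as bus:
    # Read-modify-write: clear the two range bits, then set ±4g.
    cfg = bus.read_byte_data(ACCEL_ADDR, REG_ACCEL_CONFIG)
    cfg = (cfg & ~(0b11 << 5)) | FS_4G_BITS
    bus.write_byte_data(ACCEL_ADDR, REG_ACCEL_CONFIG, cfg)

# With ±4g selected, a signed 16-bit sample spans ±4g, i.e. 8192 LSB per g:
SENSITIVITY_LSB_PER_G = 32768 / 4.0

def raw_to_g(raw: int) -> float:
    """Convert a signed 16-bit raw accelerometer sample to g."""
    return raw / SENSITIVITY_LSB_PER_G
```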
 
  • Like
  • Thinking
Reactions: 16 users

charmander

Regular
And they mentioned for “commercial and government markets”

That doesn’t sound like just for NASA

“Intellisense Systems Inc. has selected its neuromorphic technology to improve the cognitive communication capabilities on size, weight and power (SWaP) constrained platforms (such as spacecraft and robotics) for commercial and government markets.”
The word "selected" really does it for me here - sounds very exclusive doesn't it?!

It may sound very obvious, I know, but this just further validates the superiority of the Brainchip technology

imo
 
Reactions: 21 users

Tothemoon24

Top 20
Let the good times Roll



Mar 21, 2023 - 09:43 am

Mercedes gets ready for new EV platforms​


Mercedes is investing heavily to make its production network fit for the new platforms introduced from 2024. According to a German media report, the manufacturer will start converting its plant in Rastatt this summer. It is where Mercedes will build its first model based on the Mercedes-Benz Modular Architecture (MMA).

Mercedes is doing more than just upgrading the plant in Rastatt: Plants in Kecskemét, Hungary, and Beijing will also be converted. In June, Mercedes announced it would invest more than €2 billion in its European production sites. Jörg Burzer, Head of Production at Mercedes-Benz, now told Automobilwoche that the carmaker was investing “a three-digit million amount per plant.”

Instead of seven compact models, there will only be four in the future. The first of these is likely to be the successor to the CLA. A model from the C-Class segment based on the all-electric MB.EA architecture is also planned for Kecskemét. “Two sporty AMG.EA models will be added to the lineup later, both of which will be produced exclusively in Sindelfingen,” says Burzer, adding that it was too early to talk about anything that would come after that.

In light of possible subsidies in the US, Mercedes is also considering expanding its plant in Tuscaloosa. “We’re obviously looking at what happens with the Inflation Reduction Act,” said Burzer. “The framework conditions worldwide are always changing, and we have to react to that if necessary.”

Mercedes announced in July 2021 that it would only introduce new electric platforms from 2025 – combustion models and hybrids will continue to be built on the current platforms. But there will only be new purely electric developments.

The MB.EA covers all medium-sized and large passenger cars above the MMA and “will be the electric basis of the future BEV portfolio as a scalable modular system.” For the performance models, the AMG.EA will “address the needs of Mercedes-AMG’s technology-savvy and performance-oriented customers.” Electric vans and light commercial vehicles will be based on the VAN.EA. All other new platforms are subsequently also to be purely electric.

The MMA was once announced as the last mixed platform that would be developed focusing on electric vehicles but could also accommodate internal combustion engines.
 
Reactions: 28 users