BRN Discussion Ongoing

Vladsblood

Regular
NASDAQ’s up a nice 1.58 percent too!! Imagine how it’s gonna ride on our coattails once we’ve completed our full listing, with the whole new world beyond the AI edge trailing along behind OUR Brainchip “by 3-5 years”!! $$$$ Let’s do the BRN Nasdaq splits REAL SOON, Chippers. Vlad
 
  • Like
  • Fire
  • Love
Reactions: 6 users

Violin1

Regular
Good morning! Oh, I somehow overlooked the "Aomori" 😵‍💫 I will be in Tokyo 🧐😂 Thanks for the correction.
100% Arabica in Gion is the place in Tokyo.
 
  • Like
Reactions: 1 user

Dozzaman1977

Regular
With Akida, will this be possible? Good for the single people of the world!!!!! ;)


 
  • Like
  • Haha
Reactions: 7 users

Xray1

Regular
With no new IP deals announced, it's almost a certainty that our next quarterly will be poor. I have no idea how shorters think, but it seems safe to say they are hoping to push the share price lower once the quarterly is released.

Hopefully we can announce a few IP deals prior to the quarterly release to burn them.
We only have 7 trading days to go to the end of this quarterly reporting period.
 
  • Like
Reactions: 7 users

buena suerte :-)

BOB Bank of Brainchip
We only have 7 trading days to go to the end of this quarterly reporting period.
Let's hope it's a goody !! .... Reported by the end of April... And then the AGM!!! 23rd May (8 weeks away!) Hoping we get heaps of positive news before then!!!! EDIT: we are getting heaps of positive news, absolutely we are!!.... just need something to get the SP heading ⬆️⬆️ ;)
 
Last edited:
  • Like
  • Love
Reactions: 24 users

ndefries

Regular
Not happening. Insiders own >>50% and have not indicated any desire to sell. And their dreams and goals are nowhere near being accomplished. Not even close.
True, but where there's a will there's a way, and the likely suitors are some of the least poor :)
 
  • Like
  • Sad
Reactions: 3 users

Diogenese

Top 20
@Steve10 when Qualcomm revealed their Snapdragon with Prophesee at CES, the blur-free photography contribution was not from BrainChip, even though many of us thought it was. Therefore it must have been SynSense. Round one to SynSense, although there are many rounds to play out over the coming years.
Hi Dhm,

Synsense may have beaten us to the punch, but we have the knockout punch.

SynSense boasts a response time of 5ms (200 fps).
Prophesee is capable of 10k fps equivalent.
nViso achieved >1000 fps with Akida.
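The arithmetic behind those comparisons is simply latency = 1000 ms / fps. A quick sketch, using the figures claimed above:

```python
# Frame-equivalent latency is the reciprocal of the frame rate.
for name, fps in [("SynSense DYNAP-CNN", 200),       # "response time of 5ms"
                  ("Prophesee (equivalent)", 10_000),  # "10k fps equivalent"
                  ("nViso on Akida", 1_000)]:          # ">1000 fps"
    print(f"{name}: {fps} fps ≈ {1000 / fps:.2f} ms per frame")
```

So SynSense's 5 ms sits well behind the sub-millisecond frame times nViso reported on Akida.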

DYNAP-CNN — the World’s First 1M Neuron, Event-Driven Neuromorphic AI Processor for Vision Processing | SynSense


Computation in DYNAP-CNN is triggered directly by changes in the visual scene, without using a high-speed clock. Moving objects give rise to sequences of events, which are processed immediately by the processor. Since there is no notion of frames, DYNAP-CNN’s continuous computation enables ultra-low-latency of below 5ms. This represents at least a 10x improvement from the current deep learning solutions available in the market for real-time vision processing.


SynSense and Prophesee develop one-chip event-based smart sensing solution


Oct 15, 2021 – SynSense and Prophesee, two leading neuromorphic technology companies, today announced a partnership that will see the two companies leverage their respective expertise in sensing and processing to develop ultra-low-power solutions for implementing intelligence on the edge for event-based vision applications.

… The sensors facilitate machine vision by recording changes in the scene rather than recording the entire scene at regular intervals. Specific advantages over frame-based approaches include better low light response (<1lux) and dynamic range (>120dB), reduced data generation (10x-1000x less than conventional approaches) leading to lower transfer/processing requirements, and higher temporal resolution (microsecond time resolution, i.e. >10k images per second time resolution equivalent).


BrainChip Partners with Prophesee Optimizing Computer Vision AI Performance and Efficiency - BrainChip


Laguna Hills, Calif. – June 19, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI IP, and Prophesee, the inventor of the world’s most advanced neuromorphic vision systems, today announced a technology partnership that delivers next-generation platforms for OEMs looking to integrate event-based vision systems with high levels of AI performance coupled with ultra-low power technologies.

… “By combining our Metavision solution with Akida-based IP, we are better able to deliver a complete high-performance and ultra-low power solution to OEMs looking to leverage edge-based visual technologies as part of their product offerings,” said Luca Verre, CEO and co-founder of Prophesee.


NVISO announces it has reached a key interoperability milestone with BrainChip Akida Neuromorphic IP | NVISO



18th July, 2022

NVISO has successfully completed full interoperability of four of its AI Apps from its Human Behavioural AI catalogue to the BrainChip Akida neuromorphic processor achieving a blazingly fast average model throughput of more than 1000 FPS and average model storage of less than 140 KB.

The relevance of nViso is that it uses a frame camera, yet it has a latency of less than 1 ms:

NVISO’s latest Neuro SDK to be demonstrated running on the BrainChip Akida fully digital neuromorphic processor platform at CES 2023 | NVISO

https://www.nviso.ai/en/news/nviso-...l-neuromorphic-processor-platform-at-ces-2023

2nd January, 2023



[Image: Benchmark performance data for the AI Apps running on the BrainChip neuromorphic AI processor]
 
  • Like
  • Love
  • Fire
Reactions: 52 users

HopalongPetrovski

I'm Spartacus!
I received a Brainchip March 2023 Newsletter today. I tried to read it from the point of view of a potential manufacturer.

My opinion is that actual product releases are being held back, partly because the tech is hard to understand and a good example of a product is not out there for manufacturers to see.

I understand why we want to go down the "I.P. license" path, but what if we design a "killer" product and get someone to make it for us? Then we release and sell it for the world to see.

Who better than ourselves to do it, to get the ball rolling? Sean H. could make clear to his contacts why we have taken this step, and that it is a once-only thing, i.e. we are not going into competition with them.
What’s the “killer product/application“?

I was hoping it was going to be the Nanose covid detector, and it still could be as I doubt that ship has sailed completely, but it is off the front pages for the moment.

It needs to address some common need or concern and be implemented in something we are already used to interacting with like our phones.

Maybe some kind of a love meter that can match people according to some analog of their personality and chemistry and initiate a connection?
The two Akida AIs, which already have intimate knowledge of the personas they represent, assess/filter each other's potential partners and establish engagement leading to potential contact between the parties?

I dunno.

Who’s got an idea for a killer product/application for Akida that could gain us a profile and potential user base similar to that achieved recently by Chat GPT?
 
  • Like
  • Haha
  • Fire
Reactions: 8 users

Xray1

Regular
I have to find out and understand that first. I post this as a lover of TSE and as someone who takes the rules seriously. I have a hunch, but I have to look first. So I'm posting this for transparency and because we all need to learn:
[attachments]

I was not warned, and I am one of those who often referred to our rules. I still cannot figure it out @zeeb0t

___
I know this from HC, yeeha or yes.
___
I am a thoroughly unsophisticated citizen. Dreddb0t must realize that. I have pointed out other products besides Akida in a completely different topic. But those posts were not moderated, and I even flagged them for moderation myself. So which post of mine is not good for our community?
___
[attachment]

They will also have to go through the same R&D process as we have been through once it becomes available (maybe another year+++)... and then selling/convincing the companies that the product is for them!? And as we have found out, it takes years to get an interested party onboard, have them test and incorporate it into a product, and sign up for IP!
IMO we will have taken up a huge chunk of the market by the time any other neuromorphic chip hits the AI space! And as Dozz says, who else has on-chip learning???

GO BRN
I think the other most important factor to consider is how these future competitors can get around our patents.
 
  • Like
  • Fire
  • Wow
Reactions: 11 users
D

Deleted member 2799

Guest

buena suerte :-)

BOB Bank of Brainchip
What’s the “killer product/application“?

I was hoping it was going to be the Nanose covid detector, and it still could be as I doubt that ship has sailed completely, but it is off the front pages for the moment.

It needs to address some common need or concern and be implemented in something we are already used to interacting with like our phones.

Maybe some kind of a love meter that can match people according to some analog of their personality and chemistry and initiate a connection?
The two Akida AIs, which already have intimate knowledge of the personas they represent, assess/filter each other's potential partners and establish engagement leading to potential contact between the parties?

I dunno.

Who’s got an idea for a killer product/application for Akida that could gain us a profile and potential user base similar to that achieved recently by Chat GPT?
A 'BoT' algorithm reversal system ;):love:;)
 
  • Fire
  • Haha
  • Like
Reactions: 5 users

TECH

Regular
Well, that was a nice surprise: of the 14 companies FF posted yesterday, the first one on my list to double in 12 months or so has already been locked in, 13 to go !! :ROFLMAO:

Check the share price and link it with the continuous flow of fantastic growth-bearing announcements; the perfect storm is surely brewing, and "those who hesitate, WILL be lost" in my view.

Someone may have already mentioned these compelling comments from this morning's announcement, but I'm going to repeat them.

"constrained platforms (such as spacecraft and robotics) for commercial and government markets"

"Intellisense can deliver even more advanced, yet energy efficient, cognitive capabilities to its RF system solutions"

“By integrating BrainChip’s Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability"

"By utilizing this cutting-edge technology, Intellisense will be able to deliver cognitive radio solutions that are faster, more efficient and more reliable than ever before"

"we look forward to continue partnering with Intellisense to deliver cutting-edge embedded processing with AI on-chip to their customers"

Our market reach has just further expanded as our network of partners continues to grow.

"their customers automatically, indirectly, become ours"....love Brainchip xxx

Tech (Perth) 😉(y)
 
  • Like
  • Fire
  • Love
Reactions: 57 users

buena suerte :-)

BOB Bank of Brainchip
I think the other most important factor to consider is on how these future competitors can get around our patents.
Well, I think the answer is 'they can't'. I'm sure any application that comes anywhere close to our specifications/patents will get thrown out!
Also, Peter van der Made has designed in very sophisticated encryption which he believes will make it impossible to reverse engineer AKIDA, on top of which he and Anil Mankar have trade secrets covering how to put together the neural fabric.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 29 users

Tothemoon24

Top 20
Daniel Situnayake

Head of ML, Edge Impulse


nn to cpp: What you need to know about porting deep learning models to the edge​

Mar 21, 2023
It’s been incredibly exciting to see the big surge of interest in AI at the edge—the idea that we can run sophisticated AI algorithms on embedded devices—over the past couple of weeks. Folks like @ggerganov, who ported Whisper and LLaMA to C++, @nouamanetazi, who ported BLOOM, and @antimatter15, who ported Stanford Alpaca have pulled back the curtain and shown the community that deep neural networks are just code, and they will run on any device that can fit them.
I’ve been building on-device AI for the past four years—first at Google, where we launched TensorFlow Lite for Microcontrollers, and now at Edge Impulse, where I’m Head of Machine Learning—and I’ve written a couple of books along the way. I am thrilled to see so much interest and enthusiasm from a ton of people who are new to the field and have wonderful ideas about what they could build.
Embedded ML is a big field to play in. It makes use of the entire history of computer science and electrical engineering, from semiconductor physics to distributed computing, laid out end to end. If you’re just starting out, it can be a bit overwhelming, and some things may come as a surprise. Here are my top tips for newcomers:

All targets are different. Play to their strengths.​

Edge devices span a huge range of capabilities, from GPU-powered workhorses to ultra-efficient microcontrollers. Every unique system (which we call a target) represents a different set of trade-offs: RAM and ROM availability, clock speed, and processor features designed to speed up deep learning inference, along with peripherals and connectivity features for getting data in and out. Mobile phones and Raspberry Pi-style computers are at the high end of this range; microcontrollers are the mid- to low end. There are even purpose-built deep learning accelerators, including neuromorphic chips—inspired by human neurons’ spiking behavior—designed for low latency and energy efficiency.
There are billions of microcontrollers (aka MCUs) manufactured every year; if you can run a model on an MCU, you can run it anywhere. In theory, the same C++ code should run on any device—but in practice, every line of processor has custom instructions that you’ll need to make use of in order to perform computation fast. There are orders-of-magnitude performance penalties for running naive, unoptimized code. To make matters worse, optimization for different deep learning operations varies from target to target, and not all operations are equally supported. A simple change in convolutional stride or filter size may result in a huge performance difference.
The matrix of targets versus models is extraordinarily vast, and traditional tools for optimization are fragmented and difficult to use. Every vendor has their own toolchain, so moving from one target to another is a challenge. Fortunately, you don’t need to hand-craft C++ for every model and target. There are high-level tools available (like Edge Impulse) that will take an arbitrary model and generate optimized C++ or bytecode designed for a specific target. They’ll let you focus on your application and not worry so much about the implementation details. And you’re not always stuck with fixed architectures: you can design a model specifically to run well on a given target.
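As a rough illustration of what a "does it fit" check boils down to, here's a back-of-the-envelope sketch; every number in it is a made-up assumption, not a real device spec:

```python
# Back-of-the-envelope "does it fit" check for an int8-quantized model.
# All figures are illustrative assumptions; real budgets vary by target.
params = 250_000          # weight count of a small vision model (assumed)
flash_budget = 1_000_000  # 1 MB flash on a hypothetical MCU
ram_budget = 256 * 1024   # 256 KB RAM

weight_bytes = params * 1  # int8 quantization: one byte per weight
runtime_bytes = 50_000     # guess for interpreter/kernel code size
arena_bytes = 80_000       # activation ("tensor arena") memory, model-dependent

print("fits in flash:", weight_bytes + runtime_bytes < flash_budget)
print("fits in RAM:", arena_bytes < ram_budget)
```

High-level tools automate a far more accurate, per-target version of this check.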

Compression is lossy. Quantify that loss.​

It’s common to compress models so that they fit in the constrained memory of smaller devices and run faster (using integer math) on their limited processors. Quantization is the most important form of compression for edge AI. Other approaches, like pruning, are still waiting for adequate hardware support.
Quantization involves reducing the precision of the model’s weights, for example from 32 to 8 bits. It can routinely get you anything from a 2x to an 8x reduction in model size. Since there’s no such thing as a free lunch, this shrinkage will result in reduced model performance. Quantization results in forgetting, as explained in this fantastic paper by my friend @sarahookr (https://arxiv.org/abs/1911.05248). As the model loses precision, it loses performance on the “long tail” of samples in its dataset, especially those that are infrequent or underrepresented.
Forgetting can lead to serious problems, amplifying any bias in the dataset, so it’s absolutely critical to evaluate a quantized model against the same criteria as its full-sized progenitor in order to understand what was lost. For example, a quantized translation model may lose its abilities unevenly: it might “forget” languages or words that occur less frequently.
Typically, you can get a roughly 4x reduction in size (from 32 to 8 bits) with potentially minimal performance impact (always evaluate), and without doing any additional training. If you quantize deeper than 8 bits, it’s generally necessary to do so during training, so you’ll need access to the model’s original dataset.
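To make this concrete, here's a minimal post-training quantization sketch using TensorFlow Lite; `model` (a trained Keras model) and `calib_samples` (an array of representative inputs) are assumed to exist:

```python
import numpy as np
import tensorflow as tf

# Assumptions: `model` is a trained tf.keras.Model and `calib_samples` is an
# array of representative inputs drawn from the training distribution.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # The converter uses these samples to calibrate activation ranges.
    for sample in calib_samples[:100]:
        yield [np.expand_dims(sample, axis=0).astype(np.float32)]

converter.representative_dataset = representative_dataset
# Force full int8 quantization of weights and activations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_model = converter.convert()
open("model_int8.tflite", "wb").write(tflite_model)
```

The crucial step comes next: run the quantized model over the same test set as the float model, per class or per language, and compare, so the forgetting shows up in your numbers rather than in the field.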
A fun part of edge AI product design is figuring out how to make clever use of models that have been partially compromised by their design constraints. Model output is just one contextual clue that you can use to understand a given situation. Even an unreliable model can contribute to an overall system that feels like magic.

Devices have sensors. Learn how to use them.​

While it’s super exciting to see today’s large language models ported to embedded devices, they’re only scratching the surface of what’s possible. Edge devices can be equipped with sensors—everything from cameras to radar—that can give them contextual information about the world around them. Combined with deep learning, sensor data gives devices incredible insight into everything from industrial processes to the inner state of a human body.
Today’s large, multimodal models are built using web-scraped data, so they’re biased towards text and vision. The sensors available to an embedded device go far beyond that—you can capture motion, audio, any part of the EM spectrum, gases and other chemicals, and human biosignals, including EEG data representing brain activity! I’m most excited to see the community make use of this additional data to train models that have far more insight than anything possible on the web.
Raw sensor data is highly dimensional and noisy. Digital signal processing algorithms help us sift the signal from the noise. DSP is an incredibly important part of embedded engineering, and many edge processors have on-board acceleration for DSP. As an ML engineer, learning basic DSP gives you superpowers for handling high frequency time series data in your models.
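As a taste of that, here's a tiny sketch of the kind of DSP front end you might put in front of a model; the sample rate and window sizes are arbitrary assumptions:

```python
import numpy as np
from scipy import signal

fs = 100  # sample rate in Hz (assumption: a 100 Hz accelerometer)
t = np.arange(0, 1.0, 1 / fs)
# Stand-in for one second of raw, noisy sensor data.
raw = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

# Short-time Fourier transform: frequency content over time, lower-dimensional
# and far less noisy than the raw waveform.
freqs, times, Sxx = signal.spectrogram(raw, fs=fs, nperseg=32, noverlap=16)

# Log-scale the power and flatten into a feature vector for the model.
features = np.log(Sxx + 1e-9).flatten()
print(features.shape)  # (85,) here: 17 frequency bins x 5 time steps
```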

We can’t rely on pre-trained models.​

A lot of ML/AI discourse revolves around pre-trained models, like LLaMA or ResNet, which are passed around as artifacts and treated like black boxes. This approach works fine with data with universal structural patterns, like language or photographs, since any user can provide compatible inputs. It falls apart when the structure of data starts to vary from device to device.
For example, imagine you’ve built an edge AI device with on-board sensors. The model, calibration, and location of these sensors, along with any signal processing, will affect the data they produce. If you capture data with one device, train a model, and then share it with another developer who has a device with different sensors, the data will be different and the model may not work.
Devices are infinitely variable, and as physical objects, they can even change over time. This makes pre-trained models less useful for AI at the edge. Instead, we train custom models for every application. We often use pre-trained models as feature extractors: for example, we might use a pre-trained MobileNet to obtain high-level features from an image sensor, then input those into a custom model—alongside other sensor data—in order to make predictions.
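Here's a sketch of that pattern in Keras; the input shapes, the 4-value sensor vector, and the tiny head are all assumptions for illustration:

```python
import tensorflow as tf

image_in = tf.keras.Input(shape=(224, 224, 3), name="camera")
sensor_in = tf.keras.Input(shape=(4,), name="other_sensors")  # assumed extras

# Pre-trained MobileNet as a frozen, generic feature extractor.
backbone = tf.keras.applications.MobileNet(
    include_top=False, pooling="avg", weights="imagenet")
backbone.trainable = False

x = tf.keras.applications.mobilenet.preprocess_input(image_in)
image_features = backbone(x)  # high-level visual features

# Custom head trained on data from THIS device's sensors.
combined = tf.keras.layers.Concatenate()([image_features, sensor_in])
hidden = tf.keras.layers.Dense(32, activation="relu")(combined)
output = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)

model = tf.keras.Model([image_in, sensor_in], output)
model.compile(optimizer="adam", loss="binary_crossentropy")
```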

Making on-device the norm.​

I’m confident that edge AI will enable a world of ambient computing, where our built environment is imbued with subtle, embodied intelligence that improves our lives in myriad ways—while remaining grounded in physical reality. It’s a refreshing new vision, diametrically different to the narrative of inevitable centralization that has characterized the era of cloud compute.
The challenges, constraints, and opportunities of embedded machine learning make it the most fascinating branch of computer science. I’m incredibly excited to see the field open up to new people with diverse perspectives and bold ideas.
Our goal at Edge Impulse is to make this field accessible to everyone. If you’re an ML engineer, we have tools to help you transform your existing models into optimized C++ for pretty much any target—and use digital signal processing to add sensor data to the mix. If you’re an embedded engineer or domain expert otherwise new to AI, we take the mystery out of the ML parts: you can upload data, train a model, and deploy it as a C++ library without needing a PhD. It’s easy to get started, just take a look at our docs.
It’s been amazing to see the gathering momentum behind porting deep learning models to C++, and it comes at an exciting time for us. Inside the company, I’ve been personally leading an effort to make this kind of work quick, easy, and accessible for every engineer. Watch this space: we’ll soon be making some big announcements that will open up our field even more.
Warmly,
Dan

Daniel Situnayake's blog​

  • My thoughts on embedded machine learning.
 
  • Like
  • Fire
  • Love
Reactions: 18 users

Boab

I wish I could paint like Vincent
  • Like
  • Fire
Reactions: 4 users

chapman89

Founding Member
Renesas' ARM Cortex-M85 AI chip, due to launch in June, requires inventory for the launch. They would have produced a few chips already for Embedded World demos and for select clients such as Plumerai to trial.

The inventory of chips should be in production very soon & be finished by the end of May or possibly earlier to allow for some batch testing.

Revenue is now expected next quarter. If they do a small run of 1M chips x 30c BRN IP royalty, that's $300k. It could be higher depending on the percentage royalty fee, which could be anywhere between 2-15%.
Hi Steve, are you thinking that we are in the M85 already? Be great if we are, and it was a bit of a coincidence that Brainchip and Renesas announced the M85 at a similar time?
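For what it's worth, a quick sanity check of Steve10's royalty arithmetic above; the chip price used for the percentage scenarios is purely hypothetical:

```python
units = 1_000_000
flat_royalty = 0.30  # $0.30 per chip, per the post above
print(f"Flat-fee royalty: ${units * flat_royalty:,.0f}")  # $300,000

hypothetical_asp = 2.00  # assumed average selling price per chip
for pct in (0.02, 0.15):  # the 2-15% royalty range mentioned
    print(f"{pct:.0%} of ${hypothetical_asp:.2f}: "
          f"${units * hypothetical_asp * pct:,.0f}")
```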
 
  • Like
  • Thinking
  • Love
Reactions: 38 users

Slade

Top 20
Hi Steve, are you thinking that we are in the M85 already? Be great if we are, and it was a bit of a coincidence that Brainchip and Renesas announced the M85 at a similar time?

Hello Hello!​

Arm

Following the launch of the Cortex-M85 AI-capable MCU core last year, Paul Williamson, senior VP and GM of the IoT line of business at Arm, told EE Times that Arm will continue to invest in its Ethos product line of dedicated AI accelerator cores. While the Ethos line is “very active” and “a concentration of our continued investment,” Williamson said, Arm believes that in the world of MCUs, it will be important to have a stable, software targetable framework that complements more machine learning capability in the MCU.

Announced at the show was enhanced integration for Arm virtual hardware (AVH) into the latest version of the Keil MCU development kit. AVH allows developers to quickly perform “does it fit” checks for their algorithms on specific Arm cores with specific memory sizes, helping them decide whether or not dedicated accelerators like Ethos are required.

Arm is also working closely with third-party accelerator IP providers for applications that require more acceleration than the Ethos line can offer, including BrainChip (on Arm’s booth, a demo showed an Arm M85 working with BrainChip Akida IP).

 
  • Like
  • Fire
  • Love
Reactions: 80 users
Wouldn't mind Akida being deployed into this market :)



03/14/2023

How Retailers Can Use AI to Increase Sales​

Tech can measure customer attention
By Scott Schlesinger and Scott Siegel




By analyzing customer attention patterns in real time, retailers can quickly adjust their strategies and tactics to better meet customer needs and expectations.

Today’s retail shopping experience has changed drastically in recent years. Visual clutter from store displays, excessive signage, crowded product shelves, loud music, bright lights and competing smells can all distract consumers. Further, smartphones vie for shoppers’ limited attention, enabling consumers to check their email, browse social media or text while shopping. All of these distractions and more lead to fewer customer encounters with products, which can hurt a retailer’s bottom line.

Using AI to Capture Customer Attention​

How can today’s retailers mitigate these challenges? Most people are now familiar with artificial intelligence (AI) and its promise to solve any problem, but which kind of AI can help increase sales? Two main types of AI are used for measuring attention: eye-tracking AI and biologically inspired AI. Both are forms of artificial intelligence, but they differ in how they measure and interpret data.

[Read more: "Google Uses AI to Tackle Grocers' Top Concerns"]

Eye-tracking AI involves using computer vision and machine-learning algorithms to track the movements of a person’s eyes as they look at different objects or areas of a screen or environment. In contrast, biologically inspired AI, also known as neuromorphic computing, is a type of artificial intelligence modeled after the structure and function of the human brain. Attention-measuring biologically inspired AI provides retailers with a powerful and superior tool to better understand and engage with their customers.

By analyzing consumer attention patterns, retailers, both online and in brick-and-mortar stores, can identify which products and services are most popular, which marketing campaigns are most effective, and which areas of their stores are getting the most foot traffic. This information, in conjunction with a full analytics platform, can be used to optimize store layouts, product placement, supply chains and marketing campaigns to better connect with customers and increase sales and revenue.

Uses of Attention-Measuring AI​

Many global companies are using both “traditional AI” and biologically inspired AI to maximize insights into consumer behavior optimizing product placement, supply chain efficiency, marketing and new product development. CPG companies PepsiCo, Unilever, Nestle, GSK, and Johnson & Johnson, as well as fashion retailers Nordstrom, H&M and Zara, are among those that recognize the importance of AI to maintain a competitive edge.

Attention-measuring AI can also be used to improve the efficiency and accuracy of inventory management. By analyzing customer attention and product engagement patterns, retailers can identify which products are selling quickly and which aren’t, and adjust their inventory accordingly. This can help retailers avoid overstocking slow-moving products or understocking popular products, leading to improved profitability and customer satisfaction.

Attention-measuring AI can help retailers to optimize their websites to enhance the online shopping experience. Retailers can identify which products are most popular and tailor website content to maximize the likelihood of consumers seeing those products. This can lead to increased website traffic, improved engagement and increased sales. Additionally, the technology can help retailers identify and correct any user experience issues that may be causing customer frustration or online shopping cart abandonment, resulting in a smoother, more productive online shopping experience.

Attention-measuring AI can be used to measure, monitor, and improve customer service in retail. By analyzing customer attention patterns, retailers can identify areas where customers may be experiencing confusion, and adjust their customer service practices accordingly. Retailers can also use the insights provided by attention-measuring AI to develop training programs and leading practices for their customer service representatives, ensuring that they provide the best possible customer experience.

Finally, attention-measuring AI can be used to provide real-time insights and feedback to retailers. By analyzing customer attention patterns in real time, retailers can quickly adjust their strategies and tactics to better meet customer needs and expectations. By leveraging the power of attention-measuring AI, retailers can gain a competitive edge and stay ahead of the curve in an increasingly data-driven marketplace.
 
  • Like
  • Fire
  • Love
Reactions: 22 users
Top Bottom