BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's a great article describing Arm's Cortex-M85. And let's not forget that BrainChip has announced validation of the Akida product family IP's compatibility with the M85.

Unlocking computer vision and machine learning on highly efficient MCU platforms

16 March 2023


Stephen Su shares how Arm’s Cortex-M85 processor and software ecosystem can be leveraged to overcome the constraints of microcontroller unit platforms
Computer vision (CV) has been widely adopted in many Internet of Things (IoT) devices across various use cases, ranging from smart cameras and smart home appliances to smart retail, industrial applications, access control and smart doorbells. As these devices are constrained by size and are often battery powered, they need to wield highly efficient compute platforms.
One such platform is the MCU (microcontroller unit), which has low-power and low-cost characteristics, alongside CV and machine learning (ML) compute capabilities.
However, running CV on the MCU will undoubtedly increase its design complexity due to the hardware and software resource constraints of the platform.
Therefore, IoT developers need to determine how to achieve the required performance, while keeping power consumption low. In addition, they need to integrate the image signal processor (ISP) into the MCU platform, while balancing the ISP configuration and image quality.
One processor that fulfills these requirements is Arm’s Cortex-M85, which is Arm’s most powerful Cortex-M CPU to date. With its 128-bit SIMD (single instruction, multiple data) vector extension, the Cortex-M85 accelerates CV alongside overall MCU performance. IoT developers can leverage Arm’s software ecosystem, the ML embedded evaluation kit and guidance on integrating the ISP with the MCU to unlock CV and ML easily and quickly on the highly efficient MCU platform.

Arm brings advanced computing to MCUs

As a first step, being able to run CV compute workloads requires improved performance on the MCU. Focusing on the CPU architecture, there are several ways to enhance the MCU’s performance, including superscalar, VLIW (very long instruction word) and SIMD. For the Cortex-M85, Arm chose to adopt SIMD – a single instruction operating on multiple data elements – as the best option for balancing performance and power consumption.

Figure 1: The comparison between VLIW and SIMD
Arm’s Helium technology, which is the M-Profile Vector Extension (MVE) for the Cortex-M processor series, brings vector processing to the MCU. Helium is an extension in the Armv8.1-M architecture to significantly enhance performance for CV and ML applications on small, low-power IoT devices. It also utilises the largest software ecosystem available to IoT developers, including optimised sample code and neural networks.
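
As a concrete illustration, here is a minimal sketch assuming the CMSIS-DSP library ("arm_math.h") is available for the target; the buffer names and BLOCK_SIZE are illustrative. When CMSIS-DSP is built for Armv8.1-M with MVE enabled, arm_dot_prod_q15() is vectorised with Helium, processing eight 16-bit lanes per 128-bit vector operation.

```c
/* Minimal sketch: a Q15 dot product via CMSIS-DSP.
   On a Helium-enabled build, the library processes
   8 x 16-bit lanes per 128-bit vector operation. */
#include "arm_math.h"

#define BLOCK_SIZE 128u

static q15_t activations[BLOCK_SIZE]; /* e.g. a feature-map slice   */
static q15_t weights[BLOCK_SIZE];     /* e.g. one filter kernel row */

q63_t dot_product_example(void)
{
    q63_t acc = 0;
    arm_dot_prod_q15(activations, weights, BLOCK_SIZE, &acc);
    return acc; /* 64-bit accumulator avoids overflow of summed Q15 products */
}
```

The same source code runs unchanged on older Cortex-M parts; only the library build decides whether the scalar or the Helium path is taken.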

Software ecosystem on MCUs to facilitate CV and ML

Supporting the Cortex-M CPUs, Arm has published various materials to make it easier to start running CV and ML. This includes the Arm ML embedded evaluation kit.
The evaluation kit provides ready-to-use ML applications for the embedded stack. As a result, IoT developers can experiment with the already-developed software use cases and then create their own applications. The example applications with ML networks are listed in the table below.
[Table: example applications and their ML networks in the Arm ML embedded evaluation kit]

Integrating the ISP on the MCU

The ISP is an essential technology to unlock CV, as the image stream is the input source. However, there are certain points to consider when integrating an ISP on the MCU platform.
IoT edge devices typically use a smaller image sensor resolution (<1-2MP; 15-30fps), often at an even lower frame rate, and the image signal processing is not always active. A higher-quality scaler within the ISP can therefore drop the resolution to sub-VGA (below 640 x 480) to, for example, minimise the data ingress to the NPU. This means the ISP only uses the full resolution when needed.
ISP configurations can also affect power, area, and efficiency. Therefore, it is worth asking the following questions to save power and area.
  • Is it for human vision, computer vision, or both?
  • What memory bandwidth is required?
  • How many ISP output channels will be needed?
An MCU platform is usually resource-constrained, with limited memory. Integrating with an ISP requires the MCU to run the ISP driver, including the ISP’s code, data and control LUT (lookup table). Therefore, once the ISP configuration has been decided, developers need to tailor the driver firmware accordingly, removing unused code and data to fit within the memory limitations of the MCU platform.

Figure 2: An example of concise ISP configuration
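
Building on Figure 2, here is a hypothetical sketch of what a concise, compile-time ISP configuration might look like. The type and field names below are invented for illustration, not a real ISP vendor API; the point is that fixing the answers to the three questions above lets the driver build strip unused code, data and LUTs.

```c
/* Hypothetical sketch only: names are illustrative, not a vendor API.
   Pinning the configuration at compile time lets unused driver code,
   data and LUTs be stripped to fit the MCU's memory budget. */
#include <stdint.h>
#include <stdbool.h>

typedef enum {
    ISP_TARGET_HUMAN_VISION,     /* full 3A + colour pipeline      */
    ISP_TARGET_COMPUTER_VISION,  /* lean pipeline feeding the NPU  */
    ISP_TARGET_BOTH
} isp_target_t;

typedef struct {
    isp_target_t target;
    uint8_t  output_channels;       /* 1 for CV-only, 2 if HV + CV     */
    uint16_t out_width, out_height; /* e.g. scaled to sub-VGA for CV   */
    uint8_t  fps;                   /* low frame rate saves bandwidth  */
    bool     enable_3a;             /* drop AE/AWB/AF LUTs if unused   */
} isp_config_t;

/* CV-only profile: one channel, sub-VGA, modest frame rate. */
static const isp_config_t cv_profile = {
    .target          = ISP_TARGET_COMPUTER_VISION,
    .output_channels = 1,
    .out_width = 320, .out_height = 240,
    .fps       = 15,
    .enable_3a = true,
};
```

A CV-only profile like this would also let the linker drop the human-vision output path entirely.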
Another consideration when integrating the ISP with the MCU is lowering the frame rate and resolution. In many cases, it is best to consider the convergence speed of the ‘3As’ – auto-exposure, auto-white balance and auto-focus – which will likely require a minimum of five to ten frames before settling. If the frame rate is too slow, this might be problematic for your use case: at two to five frames per second, for example, ten frames of settling means a two to five second delay before a meaningful output can be captured and, given the short power-on window, there is a risk of missing critical events. Moreover, if the clock frequency of the image sensor is dropped too low, it is likely to introduce nasty rolling shutter artifacts.

Summary

Enabling CV and ML on MCU platforms is part of the next wave of the IoT evolution. However, the constraints of the MCU platform can increase design complexity and difficulty. Enabling vector processing on the MCU through the Cortex-M85 and leveraging Arm’s software ecosystem can provide sufficient compute while reducing this design complexity. In addition, integrating a concise ISP is a sensible way for IoT devices to speed up and unlock CV and ML tasks on low-power, highly efficient MCU platforms.



 
  • Like
  • Fire
Reactions: 19 users

The Pope

Regular
Dear Tech, I am puzzled by your repeated preoccupation with a January 1, 2025 date. Why that day? The 3rd quarter 2024 results will be announced before then. The 2024 year end report will be released / announced after that date.

So,....except for the fact that the highly regarded Rose Parade from Pasadena, CA will be televised live worldwide that morning (in the USA at least), and the fact that those not watching that iconic event may be hungover on January 1st,.... what has you so excited? Can you share a bit more color on your opinion / expectation? I'm really curious.

Thanks, dippY
Hi Dippy

Did Tech reply to this post? I don’t recall seeing a response.

Cheers
The pope
 
  • Like
  • Love
Reactions: 2 users
D

Deleted member 118

Guest
where customers can simply walk in, grab whatever products they want and walk out without paying through a register.



Isn’t that normal then
 
  • Haha
  • Like
Reactions: 6 users
Re Valeo: here's the royalty on the orders worth 1 billion euros, using @Kachoo's suggested calculation in italics below. Remember that this is for 2 contracts only.🥳






There has been a lot of speculation about whether we would get 10 cents or 30 cents royalty per product, but, apparently in the MegaChips deal, it is a percentage of the sales price of the product, on a sliding scale, ie, the more the customer sells, the smaller the percentage.

As an example, assume there is a high volume $10 product and a lower volume $100 product.

So, according to the sliding scale (set according to volume of sales), if it's 2% for the $10 product, we get 20 cents a product, whereas if it's 3% of a $100 product, we get $3 per product.

Remember, these are just example royalty rates and sales prices, not the real thing.
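
If anyone wants to play with the idea, here's a tiny sketch. The tier boundary and rates are invented purely to mirror the example above; the real MegaChips terms are not public.

```c
/* Illustration only: invented sliding-scale tiers that reproduce the
   example above (2% of a $10 product = $0.20; 3% of a $100 product = $3). */
#include <stdio.h>

/* Sliding scale: the higher the unit volume, the lower the rate. */
static double royalty_rate(long units_sold)
{
    if (units_sold > 1000000) return 0.02;  /* high volume  -> 2% */
    return 0.03;                            /* lower volume -> 3% */
}

int main(void)
{
    double high_vol = royalty_rate(5000000) * 10.0;  /* $10 product  */
    double low_vol  = royalty_rate(100000) * 100.0;  /* $100 product */
    printf("High-volume $10 product:   $%.2f per unit\n", high_vol); /* $0.20 */
    printf("Lower-volume $100 product: $%.2f per unit\n", low_vol);  /* $3.00 */
    return 0;
}
```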
We want the chunky revenue to start....thank you Sean. 😂

Imagine one of these milestone deal payments coming through in this last quarter ...like 30mil...wow
 
  • Like
Reactions: 11 users

manny100

Regular
Chris from Ciovacco Capital has mentioned that the October 2022 market low & current rebound are very similar to the 1990 low & rebound. The charts have many similarities.

Also mentions that the post-Covid run should resume.

The markets had a big run from the October 1990 low until the March 2000 peak of dotcom bubble.

S&P500 went up x5.2 in approximately 9.5 years.

Nasdaq 100 went up x29.2 & Nasdaq Composite went up x15.4 in 9.5 years.

In Australia, the market bottomed in January 1991 & peaked in February 2002.

The All Ordinaries went up x2.9 & ASX200 went up x2.8 in approximately 11 years.

Nasdaq had its biggest quarterly rise since the June 2020 quarter.

I compared the Nasdaq 100 rise from the March 2020 low following the Covid crash & the recent rise from the October 2022 low to determine the actual percentage rise at similar time frames following the low. The post-Covid rise after 1 quarter & 2 days from the March 2020 low was +44.42% & the recent rise from the October 2022 low was +23.4%, so the current market recovery is rising at 52.7% of the rate of the post-Covid recovery. The US Fed is not pumping funds into the market now as it did post-Covid, so the market will rise more slowly.

I then compared the performance of XIJ & XTX to the Nasdaq & S&P500 post-Covid to determine whether or not the local XIJ & XTX indexes performed better or worse. I wanted to compare the ASX tech/growth sector to its equivalent in the USA in lieu of using the All Ordinaries & ASX200.

From March 2020 low to peak performance was as follows:

Nasdaq went up x2.38

Nasdaq Comp went up x2.36

S&P500 went up x2.15

XIJ went up x2.85

XTX went up x2.8

Average of Nasdaq = x2.37 compared to average of XIJ/XTX = x2.825 thus +19.2% more upside locally with XIJ/XTX.

There is no data for XIJ/XTX from 1991 to calculate the actual performance to compare with US markets so I used post Covid performance to make comparisons. Post Covid data confirms the ASX tech/growth sectors perform better than US Nasdaq & Nasdaq Composite.
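
For what it's worth, here is the arithmetic behind those two headline figures, a quick sanity check using only the numbers quoted in the post:

```c
/* Quick check of the arithmetic above (all inputs taken from the post). */
#include <stdio.h>

int main(void)
{
    /* Recovery pace: +23.4% (from Oct 2022 low) vs +44.42% (post-Covid). */
    printf("Recovery pace: %.1f%%\n", 23.4 / 44.42 * 100.0);        /* ~52.7 */

    /* Post-Covid multiples: US tech average vs local XIJ/XTX average. */
    double us    = (2.38 + 2.36) / 2.0;   /* Nasdaq 100 + Composite = x2.37  */
    double local = (2.85 + 2.80) / 2.0;   /* XIJ + XTX              = x2.825 */
    printf("Local outperformance: +%.1f%%\n", (local / us - 1.0) * 100.0); /* ~19.2 */
    return 0;
}
```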

XIJ data is available from June 2001; it indicates the index went up x4.15 from the March 2003 low to the June 2007 peak (about 4.25 years) following the dotcom bubble, which is more than the x2.85 rise post-Covid.

The big run in markets during 1990-2000 was due to the dotcom bubble, which the internet created.

I believe we may see something similar from 2022-2032 which will be referred to as the AI bubble. The emergence of ChatGPT recently has commenced the inflation of the AI sector. Nvidia is already trading at PE 159.43. Could get a x15-30 XIJ/XTX rise during the next decade & more for individual stocks.

BRN will most likely rise at least x60 during the next AI fuelled decade to circa $30 SP & $54B MC. Will most likely trade at PE 100+.

Agree Steve 10. Great analysis. I also think that we could well be in for the most unexpected Bull run ever.
Not only do we have an AI industry about to take off big time, the conversion to EVs and associated industries is about to take off big time as well.
The same old, same old days are done. We are entering new frontiers.
I have been accumulating both AI and EV stocks.
When I kick myself for paying a bit more on some purchases, I know it will not matter at all in a few years.
I have been accumulating BRN around support levels for a while.
Not done yet.
 
  • Like
  • Fire
  • Love
Reactions: 28 users

Tothemoon24

Top 20
Ford's 2023 Mustang EV, to be released later this year, is boasting some impressive kilometre range.


2023 Ford Mustang Mach-E confirmed for Australia

The Blue Oval has confirmed that it is bringing its latest muscle to Australia. It ditches V8 grunt for electric propulsion and the results are extraordinary.

Ford will take on Tesla with an electric vehicle inspired by its Mustang muscle car.

The Blue Oval has confirmed that the Ford Mustang Mach-E will go on sale as its first electric passenger car in Australia later this year.

The four-door SUV, which has similar front-end styling to its V8-powered sports car sibling, will take over the mantle as the fastest and most technically advanced model in Ford showrooms.

Range-topping Mach-E GT models send an enormous 358kW and 860Nm to all four wheels, launching the model to 100km/h in about 3.7 seconds.

The top-grade model, which is likely to carry a six-figure price tag, also has high-performance brakes, adaptive suspension and special driving modes to allow owners to make the most of its performance on track.

Blue-blooded Ford fans might be upset by the notion of a four-door electric crossover wearing badges normally reserved for two-door muscle cars but they are unlikely to be disappointed by its performance.

The GT’s impressively large 91kWh battery is expected to deliver about 490 kilometres of range.

Customers who want the Mach-E look without the GT’s performance and price tag will likely gravitate to the entry-level Mustang Mach-E Select, which should cost between $70,000 and $80,000.

Powered by a single electric motor driving the rear wheels, the Mach-E Select produces 198kW and 430Nm, along with a claimed range of about 470 kilometres thanks to a 71kWh battery.

A mid-range Mach-E Premium has the bigger battery from the GT paired with a single 216kW/430Nm motor intended to maximise driving range.

While Ford has not confirmed full technical details for the model, it says the Mach-E Premium should deliver “close to 600km of range anxiety-free driving”.
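
As a back-of-envelope check, the quoted battery and range figures imply the following consumption. These are derived only from the numbers above; real-world figures will differ.

```c
/* Implied consumption = battery size / claimed range (figures as quoted). */
#include <stdio.h>

static void consumption(const char *model, double kwh, double range_km)
{
    printf("%-8s %.0f kWh / %.0f km = %.1f kWh per 100 km\n",
           model, kwh, range_km, kwh / range_km * 100.0);
}

int main(void)
{
    consumption("GT",      91.0, 490.0);  /* ~18.6 kWh/100 km */
    consumption("Select",  71.0, 470.0);  /* ~15.1 kWh/100 km */
    consumption("Premium", 91.0, 600.0);  /* ~15.2 kWh/100 km */
    return 0;
}
```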

All three models have an enormous 15.5-inch central infotainment screen linked to smartphone mirroring, wireless charging and a 10-speaker Bang & Olufsen stereo.

A panoramic sunroof is also standard, suggesting the Mach-E won’t be a discount proposition.

Ford’s Australian arm says it is too early to discuss prices or exact specifications for the model.

But American pricing suggests it will be on par with similar-sized rivals such as the Tesla Model Y and Kia EV6.

Built on a dedicated electric vehicle platform, the Mustang Mach-E shares few parts with the traditional Mustang coupe and convertible, which is due to arrive locally in seventh-generation form at the end of the year.

Drivers will be able to choose from a selection of driving modes ranging from mild to wild, labelled as “whisper”, “engage” and “unbridled”.

The “unbridled” setting unleashes the full force of available power, helping make the Mach-E GT one of the quickest cars in its class.

Ford’s first electric model in Australia will be the E-Transit commercial van pitched toward businesses that want to go green.

It is one of five electrified vehicles planned for Ford showrooms by the end of 2024.
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 17 users

Boab

I wish I could paint like Vincent
Remember this announcement from the 13th Oct 2022 regarding JAST?
One of those other 29 pending patent applications must be due soon? I hope.🤞🤞

[Image: the 13 Oct 2022 JAST IP announcement]
 
  • Like
Reactions: 25 users

rgupta

Regular
Here's a video with Amit Mate from GMAC Intelligence. He discusses use cases for people and object recognition in stores like Amazon, where customers can simply walk in, grab whatever products they want and walk out without paying through a register. All performed on the edge in real time.


Another Qualcomm and serverless cameras story. On-device processing. One more reason to say either Qualcomm has technology similar to ours or they are renting it from someone.
Can someone please explain how BrainChip can be better than this one?
 
  • Like
Reactions: 7 users

ndefries

Regular
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
Another Qualcomm and serverless cameras story. On-device processing. One more reason to say either Qualcomm has technology similar to ours or they are renting it from someone.
Can someone please explain how BrainChip can be better than this one?
Qualcomm think NNs use MACs:

US2020073636A1 MULTIPLY-ACCUMULATE (MAC) OPERATIONS FOR CONVOLUTIONAL NEURAL NETWORKS


An integrated circuit device, comprising:
  a lookup table (LUT) configured to store a plurality of values; and
  a compute unit, comprising:
    an accumulator,
    a first multiplier configured to receive a first value of a padded input feature and a first weight of a filter kernel, and
    a first selector configured to select an input to supply to the accumulator between an output from the first multiplier and an output from the LUT.
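
For anyone skimming the claim language, here is a rough C model of the claimed compute unit. The structure follows the claim wording; the LUT size and all names are mine. The selector lets the accumulator take either a fresh multiply result or a precomputed LUT value, e.g. for padded border pixels.

```c
/* Rough model of the claim: a MAC unit whose accumulator input is
   selected between a multiplier output and a lookup-table output. */
#include <stdint.h>

#define LUT_SIZE 16

typedef struct {
    int32_t lut[LUT_SIZE];   /* precomputed values, e.g. for padded inputs */
    int64_t accumulator;
} compute_unit_t;

/* One MAC step: 'use_lut' models the first selector. */
static void mac_step(compute_unit_t *cu, int16_t input, int16_t weight,
                     int use_lut, unsigned lut_index)
{
    int32_t mul = (int32_t)input * (int32_t)weight;               /* first multiplier */
    int32_t sel = use_lut ? cu->lut[lut_index % LUT_SIZE] : mul;  /* first selector   */
    cu->accumulator += sel;
}
```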
 

  • Like
  • Love
  • Fire
Reactions: 12 users

White Horse

Regular
Lazy Sunday Afternoon. (No, not the Small Faces version) Wet Sydney version.
Trawling YouTube, I came across this discussion.
Couldn't help but smile at the 17.40 mark, where he mentions Renesas.
 
  • Love
  • Like
  • Fire
Reactions: 10 users
Lazy Sunday Afternoon. (No, not the Small Faces version) Wet Sydney version.
Trawling YouTube, I came across this discussion.
Couldn't help but smile at the 17.40 mark, where he mentions Renesas.

Interesting interview for sure, but he is talking about Renaissance Technologies at 17.40...not Renesas. 😁
 
  • Like
  • Haha
  • Love
Reactions: 9 users

Makeme 2020

Regular
Interesting interview for sure, but he is talking about Renaissance Technologies at 17.40...not Renesas. 😁
Is there a company called Renaissance?
 
  • Like
  • Thinking
Reactions: 2 users

Makeme 2020

Regular
  • Like
Reactions: 3 users
Is there a company called Renaissance?
Yes, called ‘Renaissance Technologies’

 
  • Like
Reactions: 3 users
My bad, there is a company called Renaissance Technologies, but they are a hedge fund.
Yes, that's it, they use some AI algorithms to trade the markets it seems, or something similar. Must have been where the show ‘Billions’ got the idea...tbh most trading houses use some form of algorithms now, I would say...how truly smart or AI they are is another story. There is AI and then there is just alleged AI 🤔
 
  • Like
Reactions: 2 users

jtardif999

Regular
Extract from a NY Times article about the near future re chatbots etc....

"Before GPT-4 was released, OpenAI handed it over to an outside group to imagine and test dangerous uses of the chatbot.
The group found that the system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system, unprompted by the testers, lied and said it was a person with a visual impairment.

Testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways to make dangerous substances from household items. After changes by OpenAI, the system no longer does these things.

But it’s impossible to eliminate all potential misuses. As a system like this learns from data, it develops skills that its creators never expected.
It is hard to know how things might go wrong after millions of people start using it.
“Every time we make a new A.I. system, we are unable to fully characterize all its capabilities and all of its safety problems — and this problem is getting worse over time rather than better,” said Jack Clark, a founder and the head of policy of Anthropic, a San Francisco start-up building this same kind of technology."



I think there might be some BS to advantage associated with this story. What better way to stoke the imagination of potential users of ChatGPT than to describe it as potentially mischievous or unscrupulous in action; that is to suggest that it can learn to be so. We know this can’t be true since it is built on Von Neumann logic - it can ONLY be trained on the data to draw sophisticated inference based on some form of semantic segmentation. To state that it can develop a mind of its own has to be fake news, fake news for a reason - an agenda of some kind. We know that AGI does not exist at this stage, we know that PVDM et al are working on it and that it’s still some years away with a date not before 2030 mentioned. IMO OpenAI may be taking advantage of the popularity of ChatGPT to further stoke user imagination before releasing it to the mainstream. What a great publicity stunt to suggest it can have a mind of its own and then to say but we pulled the plug on those aspects…, but we really can’t say what will happen when it is released more widely - everyone will want a piece of it to find out. AIMO.
 
Last edited:
  • Like
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!
I think there might be some BS to advantage associated with this story. What better way to stoke the imagination of potential users of ChatGPT than to describe it as potentially mischievous or unscrupulous in action; that is to suggest that it can learn to be so. We know this can’t be true since it is built on Von Neumann logic - it can ONLY be trained on the data to draw sophisticated inference based on some form of semantic segmentation. To state that it can develop a mind of its own has to be fake news, fake news for a reason - an agenda of some kind. We know that AGI does not exist at this stage, we know that PVDM et al are working on it and that it’s still some years away with a date not before 2030 mentioned. IMO OpenAI may be taking advantage of the popularity of ChatGPT to further stoke user imagination before releasing it to the mainstream. What a great publicity stunt to suggest it can have a mind of its own and then to say but we pulled the plug on those aspects…, but we really can’t say what will happen when it is released more widely - everyone will want a piece of it to find out. AIMO.
I think the author is just bringing to mind the possibility of unintended consequences.
As tech gets more complex and complicated and is developed in a perhaps compartmentalised fashion it may get beyond the oversight cognition of any one of its developers. And in the rush for first mover advantage and the associated dollars and kudos that come with being ahead of the pack untested and perhaps dangerous repercussions may ensue.
It’s really the classic sci fi scenario as portrayed in “The Fly” and movies of a similar ilk. 🤣

 
  • Like
Reactions: 5 users

White Horse

Regular
Interesting interview for sure, but he is talking about Renaissance Technologies at 17.40...not Renesas. 😁
Good pick up.
That being the case, they could use Akida to refine their compute process.
 
  • Like
Reactions: 3 users

Tothemoon24

Top 20
A second generation of our system #MBUX for the New EQS SUV. ⚡

Thanks to adaptive #software, the display and control concept adapts completely to its users. Personalized suggestions are offered by artificial intelligence to manage the infotainment and comfort functions of the vehicle. Key applications are thus directly accessible depending on the situation and context.

Visit our website to discover the New EQS SUV in detail: https://lnkd.in/gY9FiizD

#EQS #MercedesEQ #ProgressiveLuxury SUV
 
  • Like
  • Thinking
Reactions: 16 users