BRN Discussion Ongoing

IloveLamp

Top 20
Screenshot_20230429_062630_LinkedIn.jpg
 
  • Like
  • Love
  • Thinking
Reactions: 19 users

IloveLamp

Top 20
  • Like
Reactions: 3 users

Foxdog

Regular
No, I'll just wait for the non-announcement thanks.
 
  • Like
  • Haha
Reactions: 2 users

IloveLamp

Top 20
  • Like
Reactions: 2 users

Foxdog

Regular
SynSense
 
Last edited:
  • Like
Reactions: 1 users

The Pope

Regular
Yes, there is merit in what you are saying, given the SynSense and BMW comments in the SynSense website news from 15 April 2022. Refer to the link below.


Then there is this from the TSE forum on 27 Feb 22, with a potential dot-join between BRN and BMW.

I recall banter between posters on TSE linked to the above, but doubt anyone will try to post links (dot joining) to convince you BRN tech is definitely in BMW's upcoming EV range.

Maybe BMW have changed camps in the last 12 months and are exploring AI tech for their vehicles with BRN. From a quick Google there doesn't appear to be any announcement by BRN that they are exploring uses of AI tech with BMW, like the one noted by SynSense in the article above from 15 April 2022.
Maybe BRN/BMW have this under an NDA, unlike SynSense. Who knows, but doesn't it take a few years to develop tech into products before rolling out to customers?

BRN can't be everywhere, but BRN TSE believers like us hope it will be in BMW and many other car manufacturers. Fingers crossed BRN is everywhere for us shareholders.
 
  • Like
  • Love
  • Haha
Reactions: 11 users
Interesting article about differing views on the possibility of fully autonomous (level 5) driving:


"Leading Chinese automaker BYD claims completely autonomous driving (AD) is ‘basically impossible’, and that the automation technology would better serve streamlining manufacturing processes.

Translated from Mandarin by CNBC, BYD spokesperson Li Yunfei said “We think self-driving tech that’s fully separated from humans is very, very far away, and basically impossible.”

"Despite Mercedes-Benz having one of the most advanced ADAS systems on the market, with Level 2 autonomous driving systems across its range and the S-Class being offered with Level 3 autonomous technology in Germany, CEO Ola Kallenius said “I think we will surely be deep into the [20]30s before the whole world goes to that (self-driving tech).”

"Elon Musk says Tesla vehicles will soon achieve full self-driving autonomy and will “be able to show to regulators that the car is safer, much more so, than the average human”.

Tesla is yet to receive regulatory approval for its systems, but as Musk says, “we’ve got to prove it to regulators and get the regulatory approvals, which is outside of our control.”
 
  • Like
Reactions: 8 users

Foxdog

Regular
Yes, there is merit in what you are saying, given the SynSense and BMW comments in the SynSense website news from 15 April 2022. Refer to the link below.


Then there is this from the TSE forum on 27 Feb 22, with a potential dot-join between BRN and BMW.

I recall banter between posters on TSE linked to the above, but doubt anyone will try to post links (dot joining) to convince you BRN tech is definitely in BMW's upcoming EV range.

Maybe BMW have changed camps in the last 12 months and are exploring AI tech for their vehicles with BRN. From a quick Google there doesn't appear to be any announcement by BRN that they are exploring uses of AI tech with BMW, like the one noted by SynSense in the article above from 15 April 2022.
Maybe BRN/BMW have this under an NDA, unlike SynSense. Who knows, but doesn't it take a few years to develop tech into products before rolling out to customers?

BRN can't be everywhere, but BRN TSE believers like us hope it will be in BMW and many other car manufacturers. Fingers crossed BRN is everywhere for us shareholders.
Thanks for your considered post, Pope. I would absolutely love to see us in BMW and Merc together - what a coup that would be. We partner with Prophesee, but they and BMW state a partnership with SynSense only for their neuromorphic smart cockpits. There is no mention of BrainChip. Why would we be under an NDA and not SynSense (makes NoSense)? Astonishingly, this opinion is considered 'ignorant' by some, but perhaps the dog ate their homework instead. Keep up the good work Pope, I enjoy your balanced contributions here 👍
 
  • Like
Reactions: 10 users

Deleted member 118

Guest
  • Like
  • Fire
  • Love
Reactions: 10 users

Deleted member 118

Guest
  • Like
  • Fire
Reactions: 5 users

Deleted member 118

Guest
00D963C3-9E7C-495D-98F4-19D2004C0072.png
 
  • Like
  • Fire
Reactions: 7 users

IloveLamp

Top 20
Thanks for your considered post, Pope. I would absolutely love to see us in BMW and Merc together - what a coup that would be. We partner with Prophesee, but they and BMW state a partnership with SynSense only for their neuromorphic smart cockpits. There is no mention of BrainChip. Why would we be under an NDA and not SynSense (makes NoSense)? Astonishingly, this opinion is considered 'ignorant' by some, but perhaps the dog ate their homework instead. Keep up the good work Pope, I enjoy your balanced contributions here 👍
Oh would you look at that.....dot dot dot 😏💔
Screenshot_20230429_080919_LinkedIn.jpg
 
  • Like
  • Fire
  • Love
Reactions: 29 users

IloveLamp

Top 20
  • Haha
Reactions: 7 users

Pmel

Regular

Foxdog

Regular
Any names mentioned from Brainchip?
Yeah, I concede - read the fine print. It clearly says AKIDA is being incorporated in all BMW vehicles in lieu of SynSense. Rob Telson likes it too, which means commercialization is imminent (but shrouded in NDAs). Just like all of his other 'likes' can be traced directly to revenue-producing contracts, oh wait.....what?
 
  • Haha
  • Like
  • Fire
Reactions: 16 users

Labsy

Regular
That is also my hope.

As I said before, the recent flurry of ARM announcements (Akida compatibility with all ARM processors, ARM "more advanced" chip manufacture (IFS?)), and the fact that SiFive (an up-and-coming competitor of ARM) and Akida are cozy, lead me to hope that the ARM/BrainChip presentation will be that the new ARM chip will incorporate Akida as its AI block.

Some supporting reasons:

1. ARM presently has an in-house AI block called Helium available with its processor IP. Helium is lightweight AI compared to Akida, so replacing Helium with Akida would make the ARM chip "more advanced";

2. SiFive and Akida are a good fit and would give SiFive an advantage over present ARM processors, and ARM will need to swallow any "not-invented-here" attitude they may have if they are to attempt to keep up with SiFive's more efficient RISC-V architecture;

3. BrainChip has joined the ARM partnership group;

4. BrainChip and ARM have both joined the Intel Foundry Services (IFS) fellowship;

5. Why would ARM be doing a presentation for a company they barely know?

Of course, the counter-argument is that, since RISC-V is open-source, ARM is bringing out its own RISC-V processor, which would qualify as "more advanced".

Then again, an ARM RISC-V processor could be mated with Akida. That would be very advanced.
Yes!!!! Dio, you are a beacon of light in the darkness cast over us.... Thank you for your priceless input.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

TheDrooben

Pretty Pretty Pretty Pretty Good
This 4C is a joke!!!!!!! But a very bad one! 👿👿👿

I cannot vote from Germany, but don't you all dare vote YES to the bonuses for Sean and the others!!! (n)🤬 I will then get on a plane and come over to TALK to you all in person!!! The management DOES NOT DESERVE THEIR SALARY, AND THEREFORE NOT A BONUS EITHER!!!!

I am so upset, angry, and pissed that I wasted my time and money with this company. I have lost a lot of money on this, so selling does not make sense at all. All I can do is wait and hope that BrainChip with its AKIDA technology will be bought by a big company. That could at least leave us with a few shares of a GOOD and SUCCESSFUL investment.

In my view, it is time for Sean Hehir to step down, and to be replaced. All those cute interviews were just words for money. 👿

By the way, how much cash do they have left before they are bankrupt?
200w (2).gif
 
  • Haha
  • Like
  • Love
Reactions: 12 users

Foxdog

Regular
Old but new, or is it new but old?

I wonder who these guys actually have contracts with. Their website doesn't elaborate although that could definitely be due to the sensitive nature of defence and intelligence contracts. They're only a small company but it appears they have some traction based on the company blurb.

This is the sort of collaboration that gives me hope - defence/national security contracts can be massive and enduring.
 
  • Like
Reactions: 4 users
I see ARM pushing the M85 for computer vision and ML with MCUs.

Obviously including their Helium and NPU, but it's nice we can also now dovetail with it :)



Unlocking computer vision and machine learning on highly efficient MCU platforms​

16 March 2023

Technology
Stephen_2.jpg

Stephen Su shares how Arm’s Cortex-M85 processor and software ecosystem can be leveraged to overcome the constraints of microcontroller unit platforms

Computer vision (CV) has been widely adopted in many Internet of Things (IoT) devices across various use cases, ranging from smart cameras and smart home appliances to smart retail, industrial applications, access control and smart doorbells. As these devices are constrained by size and are often battery powered, they need to wield highly efficient compute platforms.

One such platform is the MCU (microcontroller unit), which has low-power and low-cost characteristics, alongside CV and machine learning (ML) compute capabilities.

However, running CV on the MCU will undoubtedly increase its design complexity due to the hardware and software resource constraints of the platform.

Therefore, IoT developers need to determine how to achieve the required performance, while keeping power consumption low. In addition, they need to integrate the image signal processor (ISP) into the MCU platform, while balancing the ISP configuration and image quality.

One processor that fulfills these requirements is Arm’s Cortex-M85, Arm’s most powerful Cortex-M CPU to date. With its vector extension providing 128-bit SIMD (single instruction, multiple data) processing, the Cortex-M85 accelerates CV alongside overall MCU performance. IoT developers can leverage Arm’s software ecosystem, the ML embedded evaluation kit and guidance on how to integrate the ISP with the MCU to unlock CV and ML easily and quickly on the highly efficient MCU platform.

Arm brings advanced computing to MCUs​

As a first step, being able to run CV compute workloads requires improved performance on the MCU. Focusing on the CPU architecture, there are several ways to enhance the MCU’s performance, including superscalar, VLIW (very long instruction word), and SIMD. For the Cortex-M85, Arm chose to adopt SIMD – a single instruction that operates on multiple data elements – as it’s the best option for balancing performance and power consumption.

Figure%201.png

Figure 1: The comparison between VLIW and SIMD

Arm’s Helium technology, which is the M-Profile Vector Extension (MVE) for the Cortex-M processor series, brings vector processing to the MCU. Helium is an extension in the Armv8.1-M architecture to significantly enhance performance for CV and ML applications on small, low-power IoT devices. It also utilises the largest software ecosystem available to IoT developers, including optimised sample code and neural networks.
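For a feel of what Helium code looks like in practice, here is a minimal sketch (my own illustration, not from the article) of an int8 dot product - the core operation inside CV/ML layers - using the Arm MVE intrinsics from <arm_mve.h>. It assumes a toolchain targeting Armv8.1-M with MVE enabled (e.g. Cortex-M85); the function name and scalar fallback are purely illustrative.

```c
/* Minimal sketch: int8 dot product with Arm Helium (MVE) intrinsics.
 * Assumes an Armv8.1-M target with MVE (e.g. Cortex-M85). Hypothetical
 * helper for illustration only, not taken from the article. */
#include <stdint.h>

#if defined(__ARM_FEATURE_MVE)
#include <arm_mve.h>

int32_t dot_s8(const int8_t *a, const int8_t *b, uint32_t n)
{
    int32_t acc = 0;
    /* Process 16 int8 lanes per iteration; tail-predicate the last block. */
    for (uint32_t i = 0; i < n; i += 16) {
        mve_pred16_t p = vctp8q(n - i);        /* enable only the remaining lanes */
        int8x16_t va = vldrbq_z_s8(a + i, p);  /* predicated vector loads         */
        int8x16_t vb = vldrbq_z_s8(b + i, p);
        acc = vmladavaq_p_s8(acc, va, vb, p);  /* multiply-accumulate across lanes */
    }
    return acc;
}
#else
/* Plain-C fallback so the sketch also builds on non-MVE targets. */
int32_t dot_s8(const int8_t *a, const int8_t *b, uint32_t n)
{
    int32_t acc = 0;
    for (uint32_t i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}
#endif
```

Compiled for a Cortex-M85 with MVE, each loop iteration performs 16 multiply-accumulates, which is where the CV/ML speed-up over a plain Cortex-M described above comes from.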

Software ecosystem on MCUs to facilitate CV and ML​

Supporting the Cortex-M CPUs, Arm has published various materials to make it easier to start running CV and ML. This includes the Arm ML embedded evaluation kit.

The evaluation kit provides ready-to-use ML applications for the embedded stack. As a result, IoT developers can experiment with the already-developed software use cases and then create their own applications. The example applications with ML networks are listed in the table below.

Table_0.png


The Arm ML embedded evaluation kit

Integrating the ISP on the MCU​

The ISP is an essential technology to unlock CV, as the image stream is the input source. However, there are certain points to consider when integrating an ISP on the MCU platform.
For IoT edge devices, the image sensor resolution will be smaller (<1-2 MP; 15-30 fps) and the frame rate even lower, and the image signal processing is not always active. Therefore, a higher-quality scaler within the ISP can drop the resolution to sub-VGA (640 x 480) to, for example, minimise the data ingress to the NPU. This means that the ISP only uses the full resolution when needed.
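To put rough numbers on that sub-VGA point, here is a back-of-the-envelope sketch (the sensor resolution, bit depth and frame rates are my own illustrative assumptions, not figures from the article) comparing full-resolution ingress against a scaled-down stream:

```c
/* Back-of-the-envelope: data ingress to the NPU at full resolution vs. a
 * scaled-down sub-VGA stream. All figures are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>

static double mb_per_s(uint32_t w, uint32_t h, uint32_t bytes_px, uint32_t fps)
{
    return (double)w * h * bytes_px * fps / 1.0e6;
}

int main(void)
{
    /* Full sensor: ~2 MP (1920x1080), 1 byte/pixel (8-bit mono), 30 fps */
    double full = mb_per_s(1920, 1080, 1, 30);
    /* Scaler output: VGA 640x480, 1 byte/pixel, 15 fps */
    double vga  = mb_per_s(640, 480, 1, 15);

    printf("full-res ingress : %.1f MB/s\n", full);  /* ~62.2 MB/s */
    printf("sub-VGA ingress  : %.1f MB/s\n", vga);   /* ~4.6 MB/s  */
    printf("reduction        : %.0fx\n", full / vga);
    return 0;
}
```

Even with generous assumptions, pushing full-resolution frames only when an event of interest is detected keeps the steady-state ingress to the NPU roughly an order of magnitude lower.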
ISP configurations can also affect power, area, and efficiency. Therefore, it is worth asking the following questions to save power and area.
  • Is it for human vision, computer vision, or both?
  • What is the required memory bandwidth?
  • How many ISP output channels will be needed?
An MCU platform is usually resource-constrained, with limited memory size. Integrating with an ISP requires the MCU to run the ISP driver, including the ISP's code, data, and control LUT (look-up table). Therefore, once the ISP configuration has been decided, developers need to tailor the driver firmware accordingly, removing unused code and data to fit within the memory limitation of the MCU platform.

StephenFigure2.png


Figure 2: An example of concise ISP configuration

Another consideration when integrating the ISP with the MCU is lowering the frame rate and resolution. In many cases, it would be best to consider the convergence speed of the ‘3As’ – auto-exposure, auto-white balance and auto-focus. This will likely require a minimum of five to ten frames before settling. If the frame rate is too slow, it might be problematic for your use case. For example, this could mean a two to five second delay before a meaningful output can be captured and, given the short power-on window, there is a risk of missing critical events. Moreover, if the clock frequency of the image sensor is dropped too low, it is likely to introduce nasty rolling shutter artifacts.
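The "two to five second delay" follows directly from the frame counts. A quick sketch of the arithmetic (the 5-10 settling frames are from the article; the frame rates are assumed examples):

```c
/* How long the 3A loop takes to settle at a given frame rate.
 * Settling frame counts from the article; frame rates are assumed examples. */
#include <stdio.h>

int main(void)
{
    const double frames_lo = 5.0, frames_hi = 10.0;  /* 3A settling frames  */
    const double fps[] = { 2.0, 15.0, 30.0 };        /* assumed frame rates */

    for (unsigned i = 0; i < sizeof fps / sizeof fps[0]; i++) {
        printf("%4.0f fps: settles in %.1f-%.1f s\n",
               fps[i], frames_lo / fps[i], frames_hi / fps[i]);
    }
    return 0;
}
```

At 2 fps the 3As alone cost 2.5-5 s before a usable frame, which is exactly the risk flagged above for short power-on windows; at 15-30 fps the settling time drops to a few hundred milliseconds.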

Summary​

Enabling CV and ML on MCU platforms is part of the next wave of the IoT evolution. However, the constraints of the MCU platform can increase the design complexity and difficulty. Enabling vector processing on the MCU through the Cortex-M85 and leveraging Arm’s software ecosystem can provide enough computing and reduce this design complexity. In addition, integrating a concise ISP is a sensible solution for IoT devices to speed up and unlock CV and ML tasks on low-power, highly efficient MCU platforms.

Embedded vision
Computer vision
Artificial intelligence

 
  • Like
  • Fire
  • Love
Reactions: 13 users