BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Arm Tech Talk on May 9th

This Tech Talk will highlight the ease of deploying best-in-class, power-efficient AI by combining BrainChip's Akida neuromorphic AI accelerator with the Arm Cortex-M family, including the Arm Cortex-M85. The presentation is designed to showcase the disruptive potential of the technology combination, including a look at the 2nd generation of Akida, to create compelling, cloudless Edge AI solutions for advanced vision, video object detection and vital signs prediction.

M85 + Akida 2.0
Juicy combination

But why stop at the M85 when you can include the whole Arm Cortex-M family?

In the fourth quarter of 2020, a measly 4.4 billion chips based on Arm Cortex-M were shipped.

Holy swizzle sticks!!!!!!!



Extract Only

The biggest Arm processor shipments nowadays aren't the newest and most powerful Cortex-A chips, nor old stalwarts like the ARM7TDMI (inset, above left). Segars explains that the most popular Arm chips are ultra-energy-efficient Arm Cortex-M microcontrollers "which account for three-quarters of Arm-based chip shipments annually and almost half of the cumulative 200 billion chips deployed to date".






 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 61 users

Zedjack33

Regular

Interesting. Ai is getting a lot of attention and gathering speed.
 
  • Like
  • Fire
Reactions: 6 users

MrNick

Regular
But why stop at the M85 when you can include the whole Arm Cortex-M family?

In the fourth quarter of 2020, a measly 4.4 billion chips based on Arm Cortex-M were shipped.

Holy swizzle sticks!!!!!!!


View attachment 35787




View attachment 35789

View attachment 35786



View attachment 35785



View attachment 35791


Extract Only
The biggest Arm processor shipments nowadays aren't the newest and most powerful Cortex-A chips, nor old stalwarts like the ARM7TDMI (inset, above left). Segars explains that the most popular Arm chips are ultra-energy-efficient Arm Cortex-M microcontrollers "which account for three-quarters of Arm-based chip shipments annually and almost half of the cumulative 200 billion chips deployed to date".

View attachment 35790




Big.
 
  • Like
  • Fire
Reactions: 8 users

TECH

Regular
I think the significance of ARM collaborating with BrainChip is that it will take the pressure off processor makers (well ARM in particular) in their quest for more FLOPS to handle video, voice and other sensor data. Akida can handle CNN and SNN in real-time and the CNN load has been a major user of processor time, power and FLOPS.

It's like using a screwdriver to drive nails in - screwdrivers are great for screws, but Akida is the nail gun.

SiFive, with its RISC-V processor, is an emerging threat to ARM, with all its IP invested in RISC-V processors. Akida gives ARM the chance to stay competitive, even though SiFive is also likely to adopt Akida. Akida works without processor intervention, so the massive CNN tasks are taken out of the equation by Akida.

Then there's the ARM IPO

Hi Dio,

That's a nice bit of pressure between the two; when pressure mounts, it always leads to something rising to the top, in everything we do
and see in the environment... I know of one NPU, or NNPU, or AI accelerator that will end up becoming covered in cream.

You know what I mean ! :ROFLMAO::ROFLMAO: an exciting week awaits as the press articles are continuing to flow.

I would imagine that GlobalFoundries would be about 75% complete with their obligation at the fab, before handover to our team of
engineers to commence their in-house testing. Things are moving along nicely, as Sean has already alluded to.

Tech :coffee::)
 
  • Like
  • Fire
  • Thinking
Reactions: 29 users

Learning

Learning to the Top 🕵‍♂️
I think the significance of ARM collaborating with BrainChip is that it will take the pressure off processor makers (well ARM in particular) in their quest for more FLOPS to handle video, voice and other sensor data. Akida can handle CNN and SNN in real-time and the CNN load has been a major user of processor time, power and FLOPS.

It's like using a screwdriver to drive nails in - screwdrivers are great for screws, but Akida is the nail gun.

SiFive, with its RISC-V processor, is an emerging threat to ARM, with all its IP invested in RISC-V processors. Akida gives ARM the chance to stay competitive, even though SiFive is also likely to adopt Akida. Akida works without processor intervention, so the massive CNN tasks are taken out of the equation by Akida.

Then there's the ARM IPO ...
Thanks Dio for thoughts.





Learning 🏖
 
  • Like
  • Haha
  • Fire
Reactions: 16 users

Earlyrelease

Regular
I think the significance of ARM collaborating with BrainChip is that it will take the pressure off processor makers (well ARM in particular) in their quest for more FLOPS to handle video, voice and other sensor data. Akida can handle CNN and SNN in real-time and the CNN load has been a major user of processor time, power and FLOPS.

It's like using a screwdriver to drive nails in - screwdrivers are great for screws, but Akida is the nail gun.

SiFive, with its RISC-V processor, is an emerging threat to ARM, with all its IP invested in RISC-V processors. Akida gives ARM the chance to stay competitive, even though SiFive is also likely to adopt Akida. Akida works without processor intervention, so the massive CNN tasks are taken out of the equation by Akida.

Then there's the ARM IPO ...


Dodgy Knees.
I am a big fan of your posts and knowledge.

But FLOPS - surely something that is cured by a small blue pill is not an IT term!

😎
 
  • Haha
  • Like
  • Sad
Reactions: 21 users

Earlyrelease

Regular
Should never have doubted you, Dio.

FLOPS

In computers, FLOPS are floating-point operations per second. Floating-point is, according to IBM, "a method of encoding real numbers within the limits of finite precision available on computers." Using floating-point encoding, extremely long numbers can be handled relatively easily.
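To make that encoding concrete, here's a minimal Python sketch that unpacks the raw bits of an IEEE 754 single-precision float (one common floating-point format; the sign/exponent/mantissa layout shown is the IEEE 754 standard, not anything IBM-specific):

```python
import struct

# Encode a real number as an IEEE 754 single-precision float and
# inspect the raw 32 bits: 1 sign bit, 8 exponent bits, 23 mantissa bits.
def float_bits(x: float) -> str:
    (raw,) = struct.unpack(">I", struct.pack(">f", x))
    return f"{raw:032b}"

bits = float_bits(0.15625)  # 0.15625 = 1.01b x 2^-3, exactly representable
sign, exponent, mantissa = bits[0], bits[1:9], bits[9:]
# exponent field is biased: 124 = 127 - 3, i.e. 0b01111100
```

A "floating-point operation" is then any add/multiply/etc. on numbers stored this way, and FLOPS just counts how many of those a machine completes per second.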
 
  • Like
  • Fire
Reactions: 5 users

Tothemoon24

Top 20

TSMC growing presence in EV sector

Monica Chen, Hsinchu; Rodney Chan, DIGITIMES Asia, Friday 5 May 2023

Credit: AFP

TSMC, with the support of its ecosystem partners, is striving to strengthen its position in the rapidly expanding market for electric vehicles (EV), according to industry sources.

The pure-play foundry embraces a three-pronged strategy for its automotive business. It is directly teaming up with international automakers to obtain long-term supply agreements, developing new and special manufacturing processes for making automotive chips, and building overseas joint-venture fabs with automotive component makers, the sources said.
The sources pointed out that TSMC dominates the global chip foundry market, and high-performance computing chips for self-driving capability must be made using advanced processes. It means TSMC is in pole position in 28nm and below processes needed to make automotive chips, the sources said.
Other foundry houses and IDMs, such as NXP and Infineon, are mostly still at 12nm or 28nm and above in terms of chip manufacturing nodes. TSMC's major foundry competitors Intel and Samsung Electronics have seen slow progress in advancing their manufacturing processes or in developing customer relationships in the automotive field, the sources said.
TSMC is optimistic about growth for its automotive segment, which accounted for about 5% of overall company sales in 2022. In the first quarter of 2023, the percentage rose to 7%. The automotive segment could generate as much as NT$140 billion (US$4.56 billion) in sales in 2023 for TSMC on an estimated overall company sales of NT$2 trillion for the year.
TSMC's sales from the automotive segment alone would be more than the 2022 combined sales of fellow foundry houses Powerchip and VIS, the sources noted. SMIC, UMC, and GlobalFoundries each generated sales of just over NT$200 billion in 2022.
Under its three-pronged strategy for its automotive business, TSMC is securing long-term contracts directly from international automakers, rather than through their component suppliers, the sources said. It has already struck supply contracts with VW, GM, Toyota and Honda, the sources noted. Its other customers reportedly include Tesla, Mercedes-Benz and BMW, the sources said.
Secondly, TSMC is working to accelerate the otherwise lengthy verification for automotive chips. TSMC has been expanding its tech support for a wider range of automotive chips, including 28nm embedded flash memory, 28nm, 22nm, and 16nm mmWave chips and LiDAR sensors, the sources said.
It has also obtained certification for making 22nm MRAM that complies with automotive Grade-1 specs, the sources added.
For its 7nm and below processes for making automotive chips, it has just unveiled N3AE (N3 Auto Early) that offers automotive process design kits (PDKs) based on its 3nm N3E, allowing customers to launch designs on the 3nm node for automotive applications, leading to the fully automotive-qualified N3A process in 2025.
TSMC is establishing overseas joint-venture fabs with automotive component makers. It is building a new fab in Japan, and likely another in Germany, with both targeting the automotive market.
The Japanese project is a joint venture between TSMC, Sony and automotive components maker Denso, and the German fab reportedly would also be a joint venture with companies in the automotive supply chain. Bosch is reportedly one of its partners for the German fab, the sources noted.
TSMC's N3AE was showcased at its 2023 North America Technology Symposium last month, along with other new members in the 3nm family, including: N3P, an enhanced 3nm process for better power, performance and density; and N3X, a process tailored for high performance computing (HPC) applications.
 
  • Like
  • Thinking
  • Love
Reactions: 12 users

manny100

Regular
Seeing that BrainChip mentions drones & agriculture as a key target market, the article below seems quite promising:


"Precision AI is accelerating artificial intelligence (AI) through enhancing farming practices and creating the world’s first AI-powered drones for plant-level herbicide applications at a broad-acre scale. Specifically, their Precision Spray Drone System with ZeroDrift can detect weeds from crop in real time and target spray the weeds in a single pass. These drones achieve this feat by utilizing edge computing, which allows faster and more accurate onboard processing, removing the need for broadband internet and creating near instant identification at record speeds – 8x faster than the industry average, in fact."

"Recently, they were chosen for the prestigious John Deere 2023 Startup Collaborator and are excited about its potential to speed up their entry into the commercial market and for the unique collaboration opportunities with like-minded startups."

"In the next five years, the Precision AI team is preparing for the Precision Spray Drone System to hit the market (anticipated in 2024) and continuing to focus on expanding their unique technology to additional industries, cropping systems and farms."

Here is another article Precision AI has released where they talk about taking the technology to the far edge https://www.precision.ai/resources/654/edge-computing

Here is a most recent article about them https://www.precision.ai/resources/837/coming-for-the-mega-farms

"Precision AI says its approach can reduce herbicide use by as much as 90% compared to traditional methods. The startup was one of a dozen winners of BloombergNEF’s 2023 Pioneers award, which aims to spotlight early-stage climate tech innovators with game-changing potential."

For now, Precision AI’s drone is operated with supervision from a human pilot. But McCann says his company is poised to introduce a fully autonomous spraying drone that can take off, fly and land by itself – as long as regulators grant permission.

"The startup plans to commercialize its on-demand spraying service next year, allowing farmers to book as needed — not dissimilar to how consumers order an Uber. It will also sell the spraying drone to farmers who want more control over their crop management and charge a fee for its AI operating software on a pay-as-you-go basis."
Is this competition for BRN, or perhaps another ecosystem partner??
 
  • Thinking
Reactions: 1 users

Tothemoon24

Top 20
This is 6 months old; I don't recall seeing it posted before from Megachips. Apologies if so.
Great insight into the company Megachips, & awesome to listen to Douglas Fairbairn talk about the mighty BrainChip in such detail.

Running time 17 minutes

It’s all gold from 5 minutes onwards


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 32 users
Searched and couldn’t see this posted already? Looks like plenty of previous SPECKulative posts though. I have not previously investigated SynSense. What’s the general consensus?




 
Last edited:
  • Like
  • Fire
Reactions: 16 users
Hi Dodgy Knees. Can you explain this to me please? M85 Accelerator Support, as pasted.

Optional coprocessor interface (64-bit) supporting up to 8 coprocessor units for custom compute accelerators. Optional Arm Custom Instructions.

Now, it says 64-bit, but we have 8-bit now. Could that 64 bits be the 8 coprocessors x 8 bits?

I apologise in advance. I can research a bit, but it doesn't mean I understand it 😔. Source linked below. Block diagram etc.

https://developer.arm.com/Processors/Cortex-M85

SC
 
  • Like
Reactions: 6 users

AusEire

Founding Member. It's ok to say No to Dot Joining
  • Like
Reactions: 2 users

Learning

Learning to the Top 🕵‍♂️
  • Like
  • Love
Reactions: 9 users

AusEire

Founding Member. It's ok to say No to Dot Joining
  • Like
Reactions: 6 users

Diogenese

Top 20
Hi Dodgy Knees. Can you explain this to me please? M85 Accelerator Support, as pasted.

Optional coprocessor interface (64-bit) supporting up to 8 coprocessor units for custom compute accelerators. Optional Arm Custom Instructions.

Now, it says 64-bit, but we have 8-bit now. Could that 64 bits be the 8 coprocessors x 8 bits?

I apologise in advance. I can research a bit, but it doesn't mean I understand it 😔. Source linked below. Block diagram etc.

https://developer.arm.com/Processors/Cortex-M85

SC

Hi SC,

That's right. Earlier today, someone posted something about Akida and M85, with Akida handling several separate sensor inputs simultaneously, taking advantage of M85's parallel processors.

I think the Akida 8-bits is so we can conform to the proposed standard for model libraries.

I was just reading where just 2 nodes (8 NPUs) of Akida can deliver 30 FPS.

Yesterday, I posted about ARM Helium; ARM can adapt a 128-bit bus to different instruction bit sizes.


The process is discussed here:

New AI technology from Arm delivers intelligence for IoT – Arm®


February 10, 2020




This ARM patent describes how the ARM processors can adapt to different instruction lengths:

WO2022023704A1 VECTOR PROCESSING 20200730



It has been proposed to provide a vector processing instruction set which is "agnostic" with respect to the physical vector length provided by hardware by which code containing instructions from that instruction set is executed. An example is an instruction set defined by the so-called "Scalable Vector Extension" (SVE) or the SVE2 architectures originating from Arm Ltd. However, the decoding and/or executing of at least some instructions of such an instruction set requires knowledge of an available vector length compatible with that provided by the actual hardware.
...
Apparatus comprises an instruction decoder to decode processing instructions; one or more first registers; first processing circuitry to execute the decoded processing instructions in a first processing mode in which the first processing circuitry is configured to execute the decoded processing instructions using the one or more first registers; and control circuitry to selectively initiate execution of the decoded processing instructions in a second processing mode in which the decoded processing instructions are selectively executed using one or more second registers; the instruction decoder being configured to decode processing instructions selected from a first instruction set in the first processing mode and processing instructions selected from a second instruction set in the second processing mode, in which one or both of the first and second instruction sets comprises at least one instruction which is unavailable in the other of the first and second instruction sets; the instruction decoder being configured to decode one or more mode change instructions to change between the first processing mode and the second processing mode; and the first processing circuitry being configured to change the current processing mode between the first processing mode and the second processing mode in response to execution of a mode change instruction.
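The "vector-length-agnostic" idea behind SVE/SVE2 can be sketched in plain Python — a hypothetical illustration only, not Arm's actual mechanics. The same loop works for any hardware vector length, because each iteration asks how many lanes remain and predicates off the tail:

```python
# Hypothetical hardware vector length; the loop below never hard-codes it
# into the algorithm, so the same code runs on 2-lane or 16-lane hardware.
HW_VECTOR_LANES = 4

def add_vectors(a, b):
    """Element-wise add, processed in hardware-sized chunks."""
    out = [0] * len(a)
    i = 0
    while i < len(a):
        # Predicate: only the lanes still in range are active (the "tail").
        lanes = min(HW_VECTOR_LANES, len(a) - i)
        for lane in range(lanes):
            out[i + lane] = a[i + lane] + b[i + lane]
        i += lanes
    return out
```

Change `HW_VECTOR_LANES` and the result is identical — which is the point of a length-agnostic instruction set: the binary doesn't need recompiling for hardware with wider vectors.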
 
  • Like
  • Love
  • Fire
Reactions: 37 users

JDelekto

Regular
Hi Dodgy Knees. Can you explain this to me please? M85 Accelerator Support, as pasted.

Optional coprocessor interface (64-bit) supporting up to 8 coprocessor units for custom compute accelerators. Optional Arm Custom Instructions.

Now, it says 64-bit, but we have 8-bit now. Could that 64 bits be the 8 coprocessors x 8 bits?

I apologise in advance. I can research a bit, but it doesn't mean I understand it 😔. Source linked below. Block diagram etc.

https://developer.arm.com/Processors/Cortex-M85

SC
Sometimes it's easy to get confused with how "bits" are used. In some cases, it refers to how many bits are encoded in data, how many bits are stored in a memory location, or how many bits wide a 'bus' is for pushing data between components.

In the case of the ARM, the interface between the ARM CPU and the co-processors is a 64-bit wide channel, and the co-processor can use as much data from this as it needs.

It can support a maximum of eight co-processors, which could be the same or different. These co-processors are designed to accelerate specific tasks. For example, one could have an Akida co-processor, a floating point math co-processor, a co-processor for high-speed vector operations, and a co-processor for encoding audio or video.

Each of these co-processors would have access to 64 bits of data through their interface, but you can only have a maximum of 8 of these co-processors. It is also interesting to note that there are ARM instructions that encode the number of the co-processor being accessed in the instruction itself.

While only 3 bits are required to address up to 8 different co-processors, the ARM instruction set encodes the co-processor in 4 bits of the instruction. Up to sixteen could be supported, but they only allow up to eight as an architectural choice (and more than likely reserve some). In the case of the Cortex-M85 documentation in the link you provided, up to 8 of these accelerator co-processors are supported.
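The idea of a co-processor number living in a bit field of the instruction can be sketched like this — note the field position and width here are purely illustrative, not the actual Cortex-M85 instruction encoding:

```python
# Hypothetical instruction layout for illustration: a 4-bit co-processor
# field sitting at bits [11:8] of a 32-bit instruction word.
CP_FIELD_SHIFT = 8
CP_FIELD_MASK = 0xF  # 4 bits -> 16 possible values, even if only 8 are allowed

def coprocessor_number(instruction: int) -> int:
    """Extract the co-processor number from an instruction word."""
    return (instruction >> CP_FIELD_SHIFT) & CP_FIELD_MASK

def encode_coprocessor(instruction: int, cp: int) -> int:
    """Write a co-processor number into the (hypothetical) field."""
    cleared = instruction & ~(CP_FIELD_MASK << CP_FIELD_SHIFT)
    return cleared | ((cp & CP_FIELD_MASK) << CP_FIELD_SHIFT)
```

With 3 bits you can only name co-processors 0-7; the extra 4th bit is what would let an architecture define sixteen, then reserve the top half.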

On a different note, Akida 1.0 could support 1-bit to 4-bit weights and activations, while Akida 2.0 allows for 8-bit weights and activations (wisely keeping up with the AI standards committee choices). This data would be transferred between the ARM CPU and the Akida neural fabric over a 64-bit interface. The ARM co-processor instructions transfer memory and register data between the CPU and the co-processor.

I hope I didn't confuse things further.
 
  • Like
  • Love
  • Fire
Reactions: 26 users
Hi SC,

That's right. Earlier today, someone posted something about Akida and M85, with Akida handling several separate sensor inputs simultaneously, taking advantage of M85's parallel processors.

I think the Akida 8-bits is so we can conform to the proposed standard for model libraries.

I was just reading where just 2 nodes (8 NPUs) of Akida can deliver 30 FPS.

Yesterday, I posted about ARM Helium and ARM can adapt a 128 bit bus to different instruction bit sizes.


The process is discussed here:

New AI technology from Arm delivers intelligence for IoT – Arm®


February 10, 2020



View attachment 35805

View attachment 35806
This ARM patent describes how the ARM processors can adapt to different instruction lengths:

WO2022023704A1 VECTOR PROCESSING 20200730

View attachment 35807

It has been proposed to provide a vector processing instruction set which is "agnostic" with respect to the physical vector length provided by hardware by which code containing instructions from that instruction set is executed. An example is an instruction set defined by the so-called "Scalable Vector Extension" (SVE) or the SVE2 architectures originating from Arm Ltd. However, the decoding and/or executing of at least some instructions of such an instruction set requires knowledge of an available vector length compatible with that provided by the actual hardware.
...
Apparatus comprises an instruction decoder to decode processing instructions; one or more first registers; first processing circuitry to execute the decoded processing instructions in a first processing mode in which the first processing circuitry is configured to execute the decoded processing instructions using the one or more first registers; and control circuitry to selectively initiate execution of the decoded processing instructions in a second processing mode in which the decoded processing instructions are selectively executed using one or more second registers; the instruction decoder being configured to decode processing instructions selected from a first instruction set in the first processing mode and processing instructions selected from a second instruction set in the second processing mode, in which one or both of the first and second instruction sets comprises at least one instruction which is unavailable in the other of the first and second instruction sets; the instruction decoder being configured to decode one or more mode change instructions to change between the first processing mode and the second processing mode; and the first processing circuitry being configured to change the current processing mode between the first processing mode and the second processing mode in response to execution of a mode change instruction.
Thanks for the response. Much appreciated. Tech Talk could be interesting.

SC
 
  • Like
Reactions: 5 users
Sometimes it's easy to get confused with how "bits" are used. In some cases, it refers to how many bits are encoded in data, how many bits are stored in a memory location, or how many bits wide a 'bus' is for pushing data between components.

In the case of the ARM, the interface between the ARM CPU and the co-processors is a 64-bit wide channel, and the co-processor can use as much data from this as it needs.

It can support a maximum of eight co-processors, which could be the same or different. These co-processors are designed to accelerate specific tasks. For example, one could have an Akida co-processor, a floating point math co-processor, a co-processor for high-speed vector operations, and a co-processor for encoding audio or video.

Each of these co-processors would have access to 64 bits of data through their interface, but you can only have a maximum of 8 of these co-processors. It is also interesting to note that there are ARM instructions that encode the number of the co-processor being accessed in the instruction itself.

While only 3 bits are required to access up to 8 different co-processors, the ARM instruction set encodes the co-processor in 4 bits of the instruction. Up to sixteen could be supported, but they only allow up to eight as part of their architectural choice (and more than likely reserve some). In the case of the Cortex M-85 documentation in the link you provided, up to 8 of these accelerator co-processors are supported.

On a different note, Akida 1.0 could support 1-bit to 4-bit weights and activations, while Akida 2.0 allows for 8-bit weights and activations (wisely keeping up with the AI standards committee choices). This data would be transferred between the ARM CPU and the Akida neural fabric over a 64-bit interface. The ARM co-processor instructions transfer memory and register data between the CPU and the co-processor.

I hope I didn't confuse things further.
Yeah, my layman's brain understood most of that. Thanks for the reply. Most on here would have been surprised I knew 8 x 8 = 64 😆

SC
 
  • Haha
  • Like
  • Thinking
Reactions: 15 users

Deadpool

hyper-efficient Ai
Just for those newcomers who may not be up to speed on how _ucking awesome this company is.


BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company’s first-to-market neuromorphic processor, AkidaTM, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping machine learning local to the chip, independent of the cloud, also dramatically reduces latency while improving privacy and data security. In enabling effective edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at brainchip.com.


Every link below on Arm's ecosystem of partners leads to BRN :cool:
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 65 users
Top Bottom