BRN Discussion Ongoing

Jchandel

Regular
FWIW, the post below was liked by an Intel senior process engineer.

“As I have been reading several of my "go-to" tech publications I was struck by the sheer quantity of headlines about the incredible compute power of hardware built to run AI workloads such as training generative AI models in data centers. Yet a plethora of companies are waiting to get their hands on powerful AI chips that can handle complex computing at the edge while operating on small power supplies and requiring little or no dependency on cloud connectivity.

BrainChip and its remarkable team have met these challenging requirements. The company specializes in developing advanced artificial intelligence and machine learning hardware that leverages neuromorphic computing, a method of computer engineering where elements of a computer are modeled after systems in the human brain and nervous system. BrainChip’s technical staff developed a unique architecture and have become the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, which they’ve dubbed Akida.

According to BrainChip, Akida can run complex inference and learning on extremely low-power AI devices to deliver highly accurate, intelligent, responsive, real-time applications that provide enhanced reliability and security while connecting through a mesh network. Akida has the flexibility of integrating into SoCs that use existing process technologies, and a developer platform that empowers partners to test and deploy their AI models using standard workflows.

Their robust offering has attracted the interest of leading companies like Teksun Inc, which focuses on end-to-end IoT and AI product development for applications where processing capability from “vision to decision” is crucial. For example, think of an industrial manufacturing environment where humans and machines are working together and physical safety, production efficiency, machine uptime, and data acquisition are all critical.”

 
Reactions: Like, Fire, Love · 65 users
I see we are a sponsor and attending, with Nandan speaking at this year's Edge Summit in September, in amongst the usual suspects.


[Screenshots from the Edge Summit site attached]


Also noticed our MB friends doing a little spotlight on some relevant points.

[Screenshot attached]
 
Reactions: Like, Love, Fire · 35 users
From the updated MegaChips website. Royalties must be around the corner!


Interesting side bit on Quadric.

Just read this, which they released only a couple of weeks ago.

A couple of snips below; the paragraph about ViT and most current NPUs is the one I found interesting, given that both Quadric and we run ViT now.

You'd expect it to be a good enhancement for Akida Gen 2.

Hopefully its impending release, as stated by BRN, makes some inroads.

Availability:

Engaging with lead adopters now. General availability in Q3 2023.

MegaChips should be happy :)




Quadric Announces Vision Transformer Support for Chimera GPNPUs​

Jun. 21, 2023
High-performance implementation of ViT family of ML networks available for SoC designs

Burlingame, CA – June 20, 2023 – Quadric® today announced that its Chimera™ general purpose neural processing unit (GPNPU) processor intellectual property (IP) supports vision transformer (ViT) machine learning (ML) inference models. These newer ViT models are unsupported by most NPUs currently in production, making it impractical to run ViT on many existing edge AI system-on-chip (SoC) devices.

ViT models are the latest state-of-the-art ML models for image and vision processing in embedded systems. ViTs were first described in 2021 and now represent the cutting edge of inference algorithms in edge and device silicon. ViTs repeatedly interleave MAC-heavy operations (convolutions and dense layers) with DSP/CPU-centric code (Normalization, SoftMax). The general-purpose architecture of the Chimera core family intermixes integer multiply-accumulate (MAC) hardware with general-purpose 32-bit ALU functionality, which enabled Quadric to rapidly port and optimize the ViT_B transformer model …
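To see the interleaving concretely, here is a minimal NumPy sketch of one ViT encoder block (my own illustration, not Quadric's code; dimensions are ViT_B's 768 hidden size, 12 heads and 197 tokens, with ReLU standing in for GELU to keep it short), annotated with which steps are MAC-heavy and which are the DSP/CPU-centric normalization and SoftMax steps:

```python
import numpy as np

# One ViT_B-style encoder block in plain NumPy. "NPU" comments mark
# MAC-heavy matrix work; "DSP/CPU" comments mark the data-dependent
# normalization/SoftMax steps a convolution-only NPU cannot run.
D, H, TOKENS = 768, 12, 197                 # hidden dim, heads, 196 patches + CLS
rng = np.random.default_rng(0)
x = rng.standard_normal((TOKENS, D))

def layer_norm(t, eps=1e-6):                # DSP/CPU: stats of live activations
    mu, var = t.mean(-1, keepdims=True), t.var(-1, keepdims=True)
    return (t - mu) / np.sqrt(var + eps)

def softmax(t):                             # DSP/CPU: exponentials + renormalize
    e = np.exp(t - t.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

Wq, Wk, Wv, Wo = (rng.standard_normal((D, D)) * 0.02 for _ in range(4))
W1 = rng.standard_normal((D, 4 * D)) * 0.02
W2 = rng.standard_normal((4 * D, D)) * 0.02

# Attention sub-block
h = layer_norm(x)                           # DSP/CPU
q, k, v = (
    (h @ W).reshape(TOKENS, H, D // H).transpose(1, 0, 2)  # NPU: dense MACs
    for W in (Wq, Wk, Wv)
)
att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(D // H))  # NPU matmul, then DSP/CPU
h = (att @ v).transpose(1, 0, 2).reshape(TOKENS, D) @ Wo   # NPU: dense MACs
x = x + h

# MLP sub-block
h = layer_norm(x)                           # DSP/CPU
x = x + np.maximum(h @ W1, 0) @ W2          # NPU: dense MACs (ReLU in lieu of GELU)
print(x.shape)                              # (197, 768)
```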

Existing NPUs Cannot Support Transformers

Most edge silicon solutions in the market today employ heterogeneous architectures that pair convolution NPU accelerators with traditional CPU and DSP cores. The majority of NPUs in silicon today were designed three or more years ago, when the ResNet family of convolution networks was state of the art and Vision Transformers had not yet taken the AI/ML world by storm. ResNet-type networks have at most one normalization layer at the beginning of a model and one SoftMax at the end, with a long chain of convolution operations making up the bulk of the compute. ResNet models therefore map very neatly onto convolution-optimized NPU accelerator cores as part of heterogeneous SoC chip architectures.
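One step the release glosses over, for anyone wondering why ResNet's many BatchNorm layers don't count against it: at inference, BatchNorm uses statistics frozen at training time, so toolchains fold it into the preceding convolution's weights and the NPU never executes a separate normalization op at all. LayerNorm and SoftMax resist this because they depend on the live activations. A minimal sketch of that standard fold (my own NumPy illustration, not from Quadric):

```python
import numpy as np

# Folding an inference-time BatchNorm into the preceding convolution:
# gamma * (conv(x) - mean) / sqrt(var + eps) + beta  becomes a conv with
# rescaled weights and an adjusted bias. LayerNorm cannot be folded this
# way because its mean/var come from the current input, not stored stats.
rng = np.random.default_rng(0)
cout, cin, k = 8, 3, 3
w = rng.standard_normal((cout, cin, k, k))          # conv weights
b = rng.standard_normal(cout)                       # conv bias
gamma, beta = rng.standard_normal(cout), rng.standard_normal(cout)
mean, var, eps = rng.standard_normal(cout), rng.random(cout) + 0.5, 1e-5

scale = gamma / np.sqrt(var + eps)
w_fold = w * scale[:, None, None, None]             # rescale each output channel
b_fold = beta + (b - mean) * scale                  # fold the shift into the bias

# Verify on one receptive-field patch (conv at a single pixel = matmul):
x = rng.standard_normal(cin * k * k)
y_ref = gamma * (w.reshape(cout, -1) @ x + b - mean) / np.sqrt(var + eps) + beta
y_fold = w_fold.reshape(cout, -1) @ x + b_fold
assert np.allclose(y_ref, y_fold)                   # the fold is exact
```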

The emergence of ViT networks broke the underlying assumptions of these NPUs.

Mapping a ViT workload to a heterogeneous SoC would entail repeatedly moving data back and forth between NPU and DSP/CPU – 24 to 26 round trip data transfers for the base case ViT_B. The system power wasted with all those data transfers wipes out the matrix-compute efficiency gains from having the NPU in the first place.
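Their 24-to-26 figure is easy to sanity-check. ViT_B has 12 encoder blocks; if you charge one NPU-to-DSP-and-back round trip per normalization/SoftMax cluster that interrupts a run of MAC work, and treat each block as contributing two such clusters (my assumption about how Quadric tallies it, not something the release spells out), you land in exactly that range:

```python
# Back-of-envelope check of the "24 to 26 round trips" claim for ViT_B.
# Counting model (assumed, not from the release): one NPU<->DSP round
# trip per LayerNorm/SoftMax cluster, two clusters per encoder block.
VIT_B_BLOCKS = 12
TRIPS_PER_BLOCK = 2
EXTRA = (0, 2)                       # final LayerNorm / head: zero to two more hops

low = VIT_B_BLOCKS * TRIPS_PER_BLOCK + EXTRA[0]
high = VIT_B_BLOCKS * TRIPS_PER_BLOCK + EXTRA[1]
print(f"estimated round trips: {low} to {high}")        # 24 to 26

# Rough traffic if each hop moves the full activation tensor
# (197 tokens x 768 channels at int8) each way over a shared bus:
bytes_per_direction = 197 * 768                          # ~151 KB
total_mb = high * 2 * bytes_per_direction / 1e6
print(f"~{total_mb:.1f} MB shuffled between cores per inference")  # ~7.9 MB
```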
 
Reactions: Like, Love, Fire · 26 users

Diogenese

Top 20
[Quoting Fmf's Quadric/ViT post above in full]
Hi Fmf,

From their published patent, Quadric has a very clunky software-controlled microprocessor/MAC array.

I don't know if they have done a ground-up redesign in the last 18 months - I suppose anything's possible, but ...


US2023083282A1 SYSTEMS AND METHODS FOR ACCELERATING MEMORY TRANSFERS AND COMPUTATION EFFICIENCY USING A COMPUTATION-INFORMED PARTITIONING OF AN ON-CHIP DATA BUFFER AND IMPLEMENTING COMPUTATION-AWARE DATA TRANSFER OPERATIONS TO THE ON-CHIP DATA BUFFER

[Patent figure attached]




[0044] An array core 110 preferably functions as a data or signal processing node (e.g., a small microprocessor) or processing circuit and preferably includes a register file 112 having a large data storage capacity (e.g., 1024 kb, etc.) and an arithmetic logic unit (ALU) 118 or any suitable digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers.



[0048] An array core 110 may, additionally or alternatively, include a plurality of multiplier (multiply) accumulators (MACs) 114 or any suitable logic devices or digital circuits that may be capable of performing multiply and summation functions. In a preferred embodiment, each array core 110 includes four (4) MACs and each MAC 114 may be arranged at or near a specific side of a rectangular shaped array core 110.
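To make the "clunky" point concrete, here is a toy software model of one array core as paragraphs [0044] and [0048] describe it (my own illustrative reading in Python, emphatically not Quadric's RTL): a register file, an integer ALU, and four MACs that the program must steer explicitly, rather than a hardware dataflow doing it for free:

```python
from dataclasses import dataclass, field

@dataclass
class MAC:
    """One of the four per-side multiply-accumulators ([0048])."""
    acc: int = 0
    def mul_acc(self, a: int, b: int) -> None:
        self.acc += a * b                      # integer multiply-accumulate

@dataclass
class ArrayCore:
    """Data/signal processing node with register file + ALU ([0044])."""
    regfile: dict = field(default_factory=dict)
    macs: list = field(default_factory=lambda: [MAC() for _ in range(4)])

    def alu_add(self, dst: str, a: str, b: str) -> None:
        # ALU: arithmetic/bitwise ops on integer operands in the regfile
        self.regfile[dst] = self.regfile[a] + self.regfile[b]

    def dot4(self, a: list, b: list) -> int:
        # Software must explicitly farm each product out to a side MAC
        # and then gather the partial sums back -- every data movement
        # is program-controlled, which is the "clunky" part.
        for mac, x, y in zip(self.macs, a, b):
            mac.mul_acc(x, y)
        return sum(mac.acc for mac in self.macs)

core = ArrayCore()
core.regfile.update(a=2, b=3)
core.alu_add("c", "a", "b")
print(core.regfile["c"], core.dot4([1, 2, 3, 4], [10, 20, 30, 40]))  # 5 300
```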
 
Reactions: Like, Love · 23 users

itsol4605

Regular
NASA like analog because it radhard

NASA likes analog because it is rad-hard.
NASA is using digital processors in satellites and on the Mars rovers.
ESA is using the LEON processor, which is 100% digital.

Analog computers are available but not reliable.
 

MDhere

Regular
Is this finally heading in the direction of MosChip?

ASIC Verification Engineer​

BrainChip Hyderabad, Telangana, India

Responsibilities

- Test plan, Test bench development, execution, and debugging
- Block and Chip level- IP/ASIC/SOC/CPU/AMS Verification
- SystemVerilog with Testbench methodologies UVM
- Verilog, VHDL, C++, Vera, e, System C
- Protocols: PCIe / USB / DDR / Ethernet / ARM 💪 / CNN / RNN
- Scripting: Perl, Tcl, Unix scripting

https://www.linkedin.com/jobs/view/...at-brainchip-3650966374/?originalSubdomain=in
Could be Moschip but could also be my love of Tata 🥰🙂
[Screenshot attached]
 
Reactions: Like, Fire, Love · 47 users

AARONASX

Holding onto what I've got
 
Reactions: Like, Fire, Love · 29 users

Getupthere

Regular

BURLINGAME, Calif.--(BUSINESS WIRE)--Quadric, the company building the industry’s most advanced high-performance edge processing platform optimized for on-device AI at the network edge, announced today a $21M Series B funding round. NSITEXE, Inc., a group company of leading mobility supplier DENSO, led the round with major investment from MegaChips. Existing investors Leawood VC, Pear VC, Uncork Capital, and Cota Capital also participated in the round.
 
Reactions: Like, Thinking, Love · 13 users

IloveLamp

Top 20
Reactions: Like, Love, Fire · 23 users

IloveLamp

Top 20
Reactions: Like, Love, Fire · 25 users

IloveLamp

Top 20

Attachments

  • Helium Technology and ARM® Cortex®-M85 in AI and DSP.pdf
    2.2 MB · Views: 208
Reactions: Like, Love, Fire · 22 users

Dozzaman1977

Regular
We must be due for an announcement notification of unquoted securities today. It's been a while and there are 30 million RSUs just sitting there..........🤣
 
Reactions: Haha, Like, Sad · 18 users

IloveLamp

Top 20
Last one, I promise 😁


[Screenshots attached]
 
Reactions: Like, Love, Fire · 21 users

Jchandel

Regular
Reactions: Like, Fire · 2 users

Sam

Nothing changes if nothing changes
Sorry if posted already

 
Reactions: Like, Love · 11 users

IloveLamp

Top 20
Reactions: Like, Love, Fire · 17 users
Reactions: Like, Fire · 9 users

Rach2512

Regular
So have I got that right, a $16,46,213 billion market size by 2030? Or did I put too many zeros on the end? I may have lost count.



Answer:


one quadrillion six hundred forty-six trillion two hundred thirteen billion

I'm confused, they must have got that wrong surely?? 🤪
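My guess at what happened (an inference, not something the report states): the commas are Indian digit grouping, where 16,46,213 means sixteen lakh forty-six thousand two hundred thirteen, i.e. 1,646,213. Taking the "billion" unit literally is what yields the quadrillion-scale reading above, so the unit is almost certainly a typo in the source:

```python
# "16,46,213" in Indian digit grouping (lakhs/crores) is just 1,646,213.
n = int("16,46,213".replace(",", ""))
print(n)                                          # 1646213

# Read literally as billions of dollars it becomes ~1.6e15 USD:
print(f"{n * 10**9:.3e} USD")                     # 1.646e+15

# World GDP is on the order of 1e14 USD, so a market more than ten
# times world GDP is impossible; the digits are fine, the unit is not.
```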
 

Attachments

  • Screenshot_20230710-082829_Samsung Internet.jpg
    376.1 KB · Views: 102
Reactions: Like, Haha, Fire · 11 users