BRN Discussion Ongoing

We have been tracking Quadric since February on TSE. Given that we are co-partners with MegaChips it behoves us to keep a weather eye on them.

I find it useful to look at the problem a patent addresses. This one seeks to overcome the deficiencies of GPUs in performing NN tasks.

Basically they have designed a reconfigurable array of processors, much like a GPU, but have eliminated some redundant processing steps such as the addition of padding bits where the sensor data is absent.

The Quadric arrangement relies on synchronous operation and data inputs, whereas Akida utilizes asynchronous data inputs, a particularly sweet adaptation for LiDAR.
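If it helps picture the distinction, here is a toy Python sketch (my own illustration, not code from either company) of why asynchronous, event-driven inputs suit sparse sensors like LiDAR: a frame-synchronous pipeline touches every cell on every clock tick, while an event-driven one only touches cells that actually fired.

```python
# Toy contrast between clock-synchronous and event-driven input handling.
# All names here are hypothetical illustrations, not real APIs.

def process_synchronous(frames, kernel):
    """Every cell of every frame is visited on each tick,
    whether or not anything changed."""
    results = []
    for frame in frames:                      # one pass per clock tick
        results.append([kernel(v) for v in frame])
    return results

def process_event_driven(events, kernel):
    """Only cells that actually fired are visited; sparse LiDAR
    returns naturally produce few events per tick."""
    return {(t, idx): kernel(v) for (t, idx, v) in events}

# Toy data: a 4-cell sensor over 3 ticks, with only two real returns.
frames = [[0, 0, 7, 0], [0, 0, 0, 0], [3, 0, 0, 0]]
events = [(0, 2, 7), (2, 0, 3)]               # (tick, cell, value)

double = lambda v: 2 * v
assert process_synchronous(frames, double)[0][2] == 14
assert process_event_driven(events, double) == {(0, 2): 14, (2, 0): 6}
# Synchronous path made 12 kernel calls; event-driven made only 2.
```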

For those who don't wish to know the score, turn away now.

US10474398B2 Machine perception and dense algorithm integrated circuit

Autonomous vehicles have been implemented with advanced sensor suites that provide a fusion of sensor data that enable route or path planning for autonomous vehicles. But, modern GPUs are not constructed for handling these additional high computation tasks.

[0006] At best, to enable a GPU or similar processing circuitry to handle additional sensor processing needs including path planning, sensor fusion, and the like, additional and/or disparate circuitry may be assembled to a traditional GPU. This fragmented and piecemeal approach to handling the additional perception processing needs of robotics and autonomous machines results in a number of inefficiencies in performing computations including inefficiencies in sensor signal processing.

1. An integrated circuit comprising:
a plurality of processing cores,
each processing core of the plurality of processing cores comprising:
at least one processing circuit; and
at least one memory circuit;
a plurality of peripheral cores,
each peripheral core of the plurality of peripheral cores comprising:
at least one memory circuit,
wherein:
at least a subset of the plurality of peripheral cores is arranged along a periphery of a first subset of the plurality of processing cores; and
[ii] a combination of the plurality of processing cores and the plurality of peripheral cores define an integrated circuit array;
a dispatch controller that provides data movement instructions, wherein the data movement instructions comprise a data flow schedule that:
defines an automatic movement of data within the integrated circuit array; and
sets one or more peripheral cores of the plurality of peripheral cores to a predetermined constant value if no data is provided to the one or more peripheral cores according to the predetermined data flow schedule.
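To make the claim concrete, here is a hedged toy model (my own sketch, not Quadric's implementation): peripheral ("border") cores that the data-flow schedule leaves empty are preset to a constant, so a compute kernel can read its neighbours with no edge-case or padding branches.

```python
# Toy model of claim 1's array layout. All names are my own illustration.

PAD_CONSTANT = 0   # the "predetermined constant value" of the claim

def build_array(data, width, height):
    """Lay out a (height+2) x (width+2) grid: inner cells are processing
    cores fed from `data`; the outer ring models peripheral cores that
    receive no data and so hold the predetermined constant."""
    grid = [[PAD_CONSTANT] * (width + 2) for _ in range(height + 2)]
    for r in range(height):
        for c in range(width):
            grid[r + 1][c + 1] = data[r][c]   # schedule delivers data here
    return grid

def neighbour_sum(grid, r, c):
    """3x3 sum with no bounds checks: border cores already hold
    the constant, so edges need no special handling."""
    return sum(grid[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1))

data = [[1, 2], [3, 4]]
grid = build_array(data, width=2, height=2)
# Corner cell (1,1) sees five border cores (all 0) plus 1+2+3+4:
assert neighbour_sum(grid, 1, 1) == 10
```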



View attachment 6877


130 = dispatch controller
140, 150 = periphery controllers
149, 159 = FIFO registers
160 = sensor data memory
View attachment 6883


112 = register file
122 = register file
114 = MAC
118 = ALU

[0039] An array core 110 may, additionally or alternatively, include a plurality of multiplier (multiply) accumulators (MACs) 114 or any suitable logic devices or digital circuits that may be capable of performing multiply and summation functions. In a preferred embodiment, each array core 110 includes four (4) MACs, and each MAC 114 may be arranged at or near a specific side of a rectangular-shaped array core 110, as shown by way of example in FIG. 2.

[0035] An array core 110 preferably functions as a data or signal processing node (e.g., a small microprocessor) or processing circuit and preferably includes a register file 112 having a large data storage capacity (e.g., 4 kilobytes (KB) or greater, etc.) and an arithmetic logic unit (ALU) 118 or any suitable digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. In a preferred embodiment, the register file 112 of an array core 110 may be the only memory element that the processing circuits of an array core 110 may have direct access to. An array core 110 may have indirect access to memory outside of the array core and/or the integrated circuit array 105 (i.e., core mesh) defined by the plurality of border cores 120 and the plurality of array cores 110.

[0036] The register file 112 of an array core 110 may be any suitable memory element or device, but preferably comprises one or more static random-access memories (SRAMs). The register file 112 may include a large number of registers, such as 1024 registers, that enables the storage of a sufficiently large data set for processing by the array core 110. Accordingly, a technical benefit achieved by an arrangement of the large register file 112 within each array core 110 is that the large register file 112 reduces a need by an array core 110 to fetch and load data into its register file 112 for processing. As a result, a number of clock cycles required by the array core 110 to push data into and pull data out of memory is significantly reduced or eliminated altogether.

[0044] In a traditional integrated circuit (e.g., a GPU or the like), when input image data (or any other suitable sensor data) is received for processing a compute-intensive application (e.g., a neural network algorithm) within such a circuit, it may be necessary to issue padding requests to areas within the circuit which do not include image values (e.g., pixel values) based on the input image data. That is, during image processing or the like, the traditional integrated circuit may function to perform image processing from a memory element that does not contain any image data value. In such instances, the traditional integrated circuit may function to request that a padding value, such as zero, be added to the memory element to avoid subsequent image processing efforts at the memory element without an image data value. A consequence of this typical image data processing by the traditional integrated circuit is that a number of clock cycles is spent identifying the blank memory element and adding a computable value to the memory element for image processing or the like by the traditional integrated circuit.
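As a rough illustration of the overhead [0044] describes (my own toy model, not the patent's circuitry), a traditional pipeline spends extra work writing a padding value into every blank memory element it discovers during processing, whereas a preloaded constant border pays that cost zero times at compute time.

```python
# Toy count of the padding writes a traditional circuit would spend.
# `traditional_pass` and its arguments are hypothetical illustrations.

def traditional_pass(mem, blank):
    """Scan memory; every blank element triggers a padding write
    before it can be processed, each costing clock cycles."""
    padding_writes = 0
    for i, v in enumerate(mem):
        if v is blank:
            mem[i] = 0            # padding request: extra cycles spent here
            padding_writes += 1
    return padding_writes

BLANK = None
mem = [5, BLANK, 7, BLANK, BLANK]
assert traditional_pass(mem, BLANK) == 3   # three padding writes
assert mem == [5, 0, 7, 0, 0]
# With border cores preset to a constant, this scan-and-fill step vanishes.
```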



Brilliant!
Hi @Diogenese

When I first read about Quadric I thought about the fact that there is yet to be a single agreed definition of the Edge and where it actually is in a system.

It struck me then that AKIDA is at the far Edge, with its mum saying "be careful, you should not be that close", and Quadric is at a safer place back from the Edge, with its mum saying "AKIDA, come back and stand with your cousin Quadric", and that this is why MegaChips have both solutions.

The following, which I extracted from the group of words you posted, seems to fit this scenario, with AKIDA making all the sensors intelligent and Quadric processing the relevant data AKIDA produces:

“Autonomous vehicles have been implemented with advanced sensor suites that provide a fusion of sensor data that enable route or path planning for autonomous vehicles. But, modern GPUs are not constructed for handling these additional high computation tasks.

[0006] At best, to enable a GPU or similar processing circuitry to handle additional sensor processing needs including path planning, sensor fusion, and the like, additional and/or disparate circuitry may be assembled to a traditional GPU”


If you generally agree then I can allow the rest of the technological differences to happily go over my head?

My opinion only DYOR
FF

AKIDA BALLISTA
 

Deleted member 118

Guest
Hi @Diogenese

When I first read about Quadric I thought about the fact that there is yet to be a single agreed definition of the Edge and where it actually is in a system. …
I did read a good article on how both these companies technology will work hand in hand together, I’ll try and find it again.
 
I did read a good article on how both these companies technology will work hand in hand together, I’ll try and find it again.
Hi @Rocket577
Can you post the table of fps you put up recently when we were discussing graphics cards? I cannot find it again.
Regards
FF
 

RobjHunt

Regular
Hi @chapman89 ,

Reading your attachment shows how out of touch the CEO is, including the board and his tight-knit allies with dollar-bulging pockets.

As a shareholder, I want to see every single employee benefit and be justly rewarded for their hard work and efforts = team effort… not just the ones sitting at the top of the helm; that does not = team… (Every single person is as important as their next working colleague, no matter what title one may hold)

Nan. xxx
Spot on Nanna!
 

Deleted member 118

Guest
Interesting job positions




OFFER DESCRIPTION​


Project
With the rise of deep learning (DL), our world braces for Artificial Intelligence (AI) in every edge device, creating an urgent need for Edge-AI processing hardware. Unlike existing solutions, this hardware needs to support high throughput, reliable, and secure AI processing at ultra-low power (ULP), combined with a very short time to market.
With its strong position in edge solutions and open processing platforms, the EU is ideally positioned to become the leader in this edge-AI market. However, certain roadblocks keep the EU from assuming this leadership role: Edge processors need to become 100x more energy efficient; Their complexity demands automated design with 10x design-time reduction; They must be secure and reliable to gain acceptance; Finally, they should be flexible and powerful to support the rapidly evolving DL domain.
CONVOLVE addresses these roadblocks in Edge-AI. To that end, it will take a holistic approach with innovations at all levels of the design stack, including:
  • On-edge continuous learning for improved accuracy, self-healing, and reliable adaptation to non-stationary environments
  • Rethinking DL models through dynamic neural networks, event-based execution, and sparsity
  • Transparent compilers supporting automated code optimizations and domain-specific languages
  • Fast compositional design of System-on-Chips (SoC)
  • Digital accelerators for dynamic ANN and SNN
  • ULP memristive circuits for computation-in-memory
  • Holistic integration in SoCs supporting secure execution with real-time guarantees
The CONVOLVE consortium includes some of Europe's strongest research groups and industries, covering the whole design stack and value chain. In a community effort, we will demonstrate Edge-AI computing in real-life vision and audio domains. By combining these innovative ULP and fast design solutions, CONVOLVE will, for the first time, enable reliable, smart, and energy-efficient edge-AI devices at a rapid time-to-market and low cost, and as such, opens the road for EU leadership in edge-processing.
Candidates
We are seeking two highly skilled and motivated postdoc candidates to tackle any of the following four research topics (a single candidate will focus on one topic):
Topic 1: Ultra-low power CGRA for Dynamic ANNs and SNNs: Research and develop near-memory computing engines based on Coarse-Grained Reconfigurable Architectures (CGRA) using a flexible memory fabric for Dynamic Neural Networks. These designs need to be equipped with self-healing mechanisms to (partly) recover in the event of failures, enhancing system-level reliability. The accelerators may also have knobs to exploit near-threshold and approximate computing for extreme energy-efficient operation.
Topic 2: Design-flow for SNNs and ANNs implemented in compiler: Research and develop a high-quality compiler backend for CGRA targets supporting SNNs and ANNs. Compared to existing solutions, the energy efficiency needs to be improved by exploiting SIMD, advanced memory hierarchy, data reuse, sparsity, software operand bypassing, etc.
Topic 3: Compositional performance analysis and architecture Design Space Exploration (DSE) of a System-on-Chip (SOC): Research and develop an infrastructure to model energy & latency at the SoC level, including the SoC level memory hierarchy and processing host, as well as integrating the different accelerator component models. To support rapid evaluations needed for the DSE, analytical models need to be pursued. The development of compositional models will moreover enable run-time performance assessment of an application when the platform configuration changes (dynamic SoC reconfiguration) due to a failing platform component.
Topic 4: Composable and Secure SoC accelerator platform: Research and develop novel composable and real-time design techniques to realize an ultra-low-power and real-time Trusted Execution Environment (TEE) for an SoC platform consisting of RISC-V cores with several accelerators. Different security features that protect against physical attacks need to be integrated into the SoC platform, while maintaining ultra-low-power and real-time requirements of the applications. The platform should allow easy and secure integration of Post-Quantum Cryptography accelerators and Compute-In-Memory (CIM) based hardware accelerators.
Job requirements
For both positions we are looking for excellent, teamwork-oriented, and research-driven candidates with a PhD degree in Electrical Engineering, Computer Science, or an AI-related topic and strong hardware/software design skills.
 
Interesting job positions …
Hope Peter and Anil don't see this ad. 😂 FF
 
It’s been removed
Yes I noticed a small crease in the table cloth so I sent a message to Jerome Nadel. He said he would take down the Tweet, get it pressed and put back up. Heads will roll once they establish who allowed this to happen. 🤣😂🤓 FF


AKIDA BALLISTA
 

Slade

Top 20
Trading is looking good.

White Horse

Regular
Interesting job positions …
Following through, it says job has expired.
 
Following through, it says job has expired.
Thank goodness. I was worried that, after reading what the expert financial analysts had to say, Peter and Anil would apply and jump ship. Phew, that was close. 😎

FF

AKIDA BALLISTA
 

TECH

Regular
Patent History
Publication number: 20220147797
Type: Application
Filed: Jan 25, 2022
Publication Date: May 12, 2022
Applicant: BrainChip, Inc. (Laguna Hills, CA)
Inventors: Douglas MCLELLAND (Laguna Hills, CA), Kristofor D. CARLSON (Laguna Hills, CA), Harshil K. PATEL (Laguna Hills, CA), Anup A. VANARSE (Laguna Hills, CA), Milind JOSHI (Perth)
Application Number: 17/583,640

Is this a sign of the changing of the guard/s... I don't think so... BUT... check out the 3 Perth-based "Dream Team" getting to put their inventors' hats on... I'm personally really pleased for Anup and Harshil, with whom I've had the pleasure to talk in person.

This was obviously only published 5/6 days ago, so if this has already been posted, excuse me, as I can't keep up with all the brilliant articles being posted, I'm a slow reader :ROFLMAO:

Good morning from Australia's Brainchip HQ.....Perth :love::love:
 

miaeffect

Oat latte lover

Don't mess with AKIDA, shorters! Our Ken is watching you.
 

jk6199

Regular
It would be a great day for an official announcement considering the engine’s warming up green today. Alas, consistent increases will have to do 😉
 

Sirod69

bavarian girl ;-)
So nice to see BrainChip, MegaChips and Edge Impulse at the Embedded Vision Summit, and today on our German news there was a talk about self-driving with Mercedes. Really, so many news items and connections, and then the 1000 Eyes. Really great. I wish you a good day, and now I really know we in Europe are green tomorrow morning.
 

Diogenese

Top 20
I did read a good article on how both these companies technology will work hand in hand together, I’ll try and find it again.
Thanks Rocket,

That will help prevent me making a complete fool of myself.
 