BRN Discussion Ongoing


Deleted member 118

Guest
  • Like
  • Love
  • Fire
Reactions: 4 users

RobjHunt

Regular
Hi @chapman89,

Reading your attachment shows how out of touch the CEO is, along with the board and his tight-knit circle of allies with dollar-bulging pockets.

As a shareholder, I want to see every single employee benefit and be justly rewarded for their hard work and effort = team effort… not just the ones sitting at the top of the helm, which does not = team…. (Every single person is as important as their next working colleague, no matter what title one may hold.)

Nan. xxx
Spot on Nanna!
 
  • Like
  • Fire
Reactions: 11 users

Deleted member 118

Guest
Interesting job positions




OFFER DESCRIPTION


Project
With the rise of deep learning (DL), our world braces for Artificial Intelligence (AI) in every edge device, creating an urgent need for Edge-AI processing hardware. Unlike existing solutions, this hardware needs to support high throughput, reliable, and secure AI processing at ultra-low power (ULP), combined with a very short time to market.
With its strong position in edge solutions and open processing platforms, the EU is ideally positioned to become the leader in this edge-AI market. However, certain roadblocks keep the EU from assuming this leadership role: Edge processors need to become 100x more energy efficient; Their complexity demands automated design with 10x design-time reduction; They must be secure and reliable to gain acceptance; Finally, they should be flexible and powerful to support the rapidly evolving DL domain.
CONVOLVE addresses these roadblocks in Edge-AI. To that end, it will take a holistic approach with innovations at all levels of the design stack, including:
  • On-edge continuous learning for improved accuracy, self-healing, and reliable adaptation to non-stationary environments
  • Rethinking DL models through dynamic neural networks, event-based execution, and sparsity
  • Transparent compilers supporting automated code optimizations and domain-specific languages
  • Fast compositional design of System-on-Chips (SoC)
  • Digital accelerators for dynamic ANN and SNN
  • ULP memristive circuits for computation-in-memory
  • Holistic integration in SoCs supporting secure execution with real-time guarantees
The CONVOLVE consortium includes some of Europe's strongest research groups and industries, covering the whole design stack and value chain. In a community effort, we will demonstrate Edge-AI computing in real-life vision and audio domains. By combining these innovative ULP and fast design solutions, CONVOLVE will, for the first time, enable reliable, smart, and energy-efficient edge-AI devices at a rapid time-to-market and low cost and, as such, open the road for EU leadership in edge processing.
Candidates
We are seeking two highly skilled and motivated postdoc candidates to tackle the following four research topics (each candidate will focus on one topic):
Topic 1: Ultra-low power CGRA for Dynamic ANNs and SNNs: Research and develop near-memory computing engines based on Coarse-Grained Reconfigurable Architectures (CGRA) using a flexible memory fabric for Dynamic Neural Networks. These designs need to be equipped with self-healing mechanisms to (partly) recover in the event of failures, enhancing system-level reliability. The accelerators may also have knobs to exploit near-threshold and approximate computing for extreme energy-efficient operation.
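(An illustrative aside, not part of the posting: the near-threshold "knobs" mentioned in Topic 1 rest on the first-order rule that CMOS dynamic energy per operation scales roughly with C·V². A minimal Python sketch of that trade-off follows; the capacitance and voltage values are invented placeholders.)

# First-order model: dynamic energy per operation E ≈ C_eff * Vdd^2.
# The numbers below are illustrative assumptions, not measured figures.
C_EFF_FARADS = 1e-12  # assumed effective switched capacitance per op (1 pF)

def dynamic_energy_per_op(vdd_volts: float) -> float:
    """First-order CMOS dynamic energy per operation, in joules."""
    return C_EFF_FARADS * vdd_volts ** 2

nominal = dynamic_energy_per_op(0.9)    # nominal supply, e.g. 0.9 V
near_vt = dynamic_energy_per_op(0.45)   # near-threshold supply, e.g. 0.45 V
print(f"nominal:        {nominal:.2e} J/op")
print(f"near-threshold: {near_vt:.2e} J/op")
print(f"ratio: {nominal / near_vt:.1f}x less energy per op (at a lower clock speed)")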
Topic 2: Design flow for SNNs and ANNs implemented in a compiler: Research and develop a high-quality compiler backend for CGRA targets supporting SNNs and ANNs. Compared to existing solutions, energy efficiency needs to be improved by exploiting SIMD, an advanced memory hierarchy, data reuse, sparsity, software operand bypassing, etc.
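(Another illustrative aside: the sparsity point in Topic 2 amounts to skipping work on zero operands, which is exactly what SNN spikes and ReLU activations make common. A minimal Python/NumPy sketch is below; the matrix sizes and the 90% sparsity level are made-up illustrations, not figures from the posting.)

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 256))      # dense weight matrix (illustrative size)
activations = rng.standard_normal(256)
activations[rng.random(256) < 0.9] = 0.0      # ~90% of activations set to zero

# Dense reference: every multiply-accumulate (MAC) is executed.
dense_out = weights @ activations

# Sparse execution: only columns with non-zero activations contribute,
# so the MAC count (a rough proxy for energy) shrinks with sparsity.
nonzero = np.flatnonzero(activations)
sparse_out = weights[:, nonzero] @ activations[nonzero]

print("results match:", np.allclose(dense_out, sparse_out))
print(f"MACs: dense {weights.size}, sparse {weights.shape[0] * nonzero.size}")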
Topic 3: Compositional performance analysis and architecture Design Space Exploration (DSE) of a System-on-Chip (SoC): Research and develop an infrastructure to model energy & latency at the SoC level, including the SoC-level memory hierarchy and processing host, as well as integrating the different accelerator component models. To support rapid evaluations needed for the DSE, analytical models need to be pursued. The development of compositional models will moreover enable run-time performance assessment of an application when the platform configuration changes (dynamic SoC reconfiguration) due to a failing platform component.
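(A last illustrative aside: one way to read the "analytical, compositional models" in Topic 3 is that each component is summarised by a handful of numbers, latency comes from a roofline-style max of compute and memory time, and SoC-level figures are rolled up from the parts. The Python sketch below shows the structure only; every component figure in it is an invented placeholder.)

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    ops: float            # operations the mapped workload needs on this component
    bytes_moved: float    # traffic to/from SoC-level memory
    ops_per_s: float      # peak throughput
    bytes_per_s: float    # memory bandwidth seen by this component
    j_per_op: float       # energy per operation
    j_per_byte: float     # energy per byte moved

    def latency(self) -> float:
        # Roofline-style bound: compute and memory transfers can overlap.
        return max(self.ops / self.ops_per_s, self.bytes_moved / self.bytes_per_s)

    def energy(self) -> float:
        return self.ops * self.j_per_op + self.bytes_moved * self.j_per_byte

# Invented placeholder numbers, purely to show the compositional roll-up.
soc = [
    Component("cgra_accel", ops=2e8, bytes_moved=4e6, ops_per_s=1e11,
              bytes_per_s=8e9, j_per_op=1e-12, j_per_byte=5e-11),
    Component("riscv_host", ops=1e6, bytes_moved=1e5, ops_per_s=2e9,
              bytes_per_s=4e9, j_per_op=2e-11, j_per_byte=5e-11),
]

# Components assumed to run one after another in a pipeline stage.
total_latency = sum(c.latency() for c in soc)
total_energy = sum(c.energy() for c in soc)
print(f"latency: {total_latency * 1e3:.3f} ms, energy: {total_energy * 1e3:.3f} mJ")

Sweeping the placeholder figures over candidate configurations is what a DSE loop built on such models would then automate.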
Topic 4: Composable and Secure SoC accelerator platform: Research and develop novel composable and real-time design techniques to realize an ultra-low-power and real-time Trusted Execution Environment (TEE) for an SoC platform consisting of RISC-V cores with several accelerators. Different security features that protect against physical attacks need to be integrated into the SoC platform, while maintaining ultra-low-power and real-time requirements of the applications. The platform should allow easy and secure integration of Post-Quantum Cryptography accelerators and Compute-In-Memory (CIM) based hardware accelerators.
Job requirements
For both positions we are looking for excellent, teamwork-oriented, and research-driven candidates with a PhD degree in Electrical Engineering, Computer Science, or an AI-related field, and strong hardware/software design skills.
 
  • Like
  • Love
Reactions: 11 users
Interesting job positions
Hope Peter and Anil don't see this ad. 😂 FF
 
  • Haha
  • Like
Reactions: 7 users
It’s been removed
Yes, I noticed a small crease in the tablecloth, so I sent a message to Jerome Nadel. He said he would take down the tweet, get it pressed and put it back up. Heads will roll once they establish who allowed this to happen. 🤣😂🤓 FF


AKIDA BALLISTA
 
  • Haha
  • Like
  • Fire
Reactions: 32 users

Slade

Top 20
Trading is looking good.
Stranger Stalking GIF
 
  • Like
  • Haha
  • Love
Reactions: 15 users

Deleted member 118

Guest
Yes, I noticed a small crease in the tablecloth, so I sent a message to Jerome Nadel. He said he would take down the tweet, get it pressed and put it back up. Heads will roll once they establish who allowed this to happen. 🤣😂🤓 FF


AKIDA BALLISTA
 
  • Haha
  • Like
  • Love
Reactions: 10 users

White Horse

Regular
Interesting job positions
Following through, it says the job has expired.
 
  • Like
Reactions: 4 users

Deleted member 118

Guest
Following through, it says the job has expired.
Over a month old, so the positions are probably taken.
 
  • Like
Reactions: 1 users
Following through, it says the job has expired.
Thank goodness. I was worried that, after reading what the expert financial analysts had to say, Peter and Anil would apply and jump ship. Phew, that was close. 😎

FF

AKIDA BALLISTA
 
  • Haha
  • Like
Reactions: 16 users

Deleted member 118

Guest
Thank goodness. I was worried that, after reading what the expert financial analysts had to say, Peter and Anil would apply and jump ship. Phew, that was close. 😎

FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 9 users

TECH

Regular
Patent History
Publication number: 20220147797
Type: Application
Filed: Jan 25, 2022
Publication Date: May 12, 2022
Applicant: BrainChip, Inc. (Laguna Hills, CA)
Inventors: Douglas MCLELLAND (Laguna Hills, CA), Kristofor D. CARLSON (Laguna Hills, CA), Harshil K. PATEL (Laguna Hills, CA), Anup A. VANARSE (Laguna Hills, CA), Milind JOSHI (Perth)
Application Number: 17/583,640

Is this a sign of a changing of the guard/s... I don't think so... BUT check out the 3 Perth-based "Dream Team" members getting to put their inventors' hats on. I'm personally really pleased for Anup and Harshil, with whom I've had the pleasure of talking in person.

This was obviously only published 5/6 days ago, so if this has already been posted, excuse me, as I can't keep up with all the brilliant articles being posted, I'm a slow reader :ROFLMAO:

Good morning from Australia's Brainchip HQ.....Perth :love::love:
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 64 users

miaeffect

Oat latte lover

Don't mess up with AKIDA, Shorters!. Our Ken is watching you.
 
  • Haha
  • Like
  • Love
Reactions: 37 users

jk6199

Regular
It would be a great day for an official announcement, considering the engine's warming up green today. Alas, consistent increases will have to do 😉
 
  • Like
  • Love
  • Fire
Reactions: 7 users

Sirod69

bavarian girl ;-)
So nice to see BrainChip, MegaChips and Edge Impulse at the Embedded Vision Summit, and today on our German news there was a talk about self-driving with Mercedes. Really, so much news and so many connections, and then the 1000 eyes. Really great. I wish you a good day, and now I really know we in Europe will be green tomorrow morning.
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Diogenese

Top 20
I did read a good article on how both these companies' technologies will work hand in hand; I'll try and find it again.
Thanks Rocket,

That will help prevent me making a complete fool of myself.
 
  • Haha
  • Like
Reactions: 3 users
Thanks Rocket,

That will help prevent me making a complete fool of myself.
I highly doubt it's possible for you to make a fool of yourself.
 
  • Like
  • Love
Reactions: 8 users

Deleted member 118

Guest
  • Haha
Reactions: 6 users

Deleted member 118

Guest
  • Like
  • Fire
  • Love
Reactions: 5 users