BRN Discussion Ongoing

Pmel

Regular
Damn it esq, don't say things like that around here. You will get reported and deported back to Hot Crapper.

You've suspiciously written something that an MF article would say. Are you a down ramper by any chance? Well if so then you are not doing a great job at it.

Keep the vibes positive please.
Sean is our golden boy and he deserves all the monies and then some. We hired him because he has very high connections in Silicon Valley and we are so very close to signing multi-billion-dollar NDAs with Nanose and Nintendo. There is no further need to justify why our tech will be in every household item in the world. Just like FF, I am off to spread the good news somewhere else.

Weeeeeeeeee......!!!!!!

As always,
Not advice. DYOR
Waiting, waiting and waiting. Will it happen? Who knows. Losing faith.
 
  • Like
Reactions: 4 users

buena suerte :-)

BOB Bank of Brainchip
Watching both the AGM and stocks down under again. At the AGM, the reference Sean made to a deal was: we are past the one-year mark, by a fair bit, on a couple of engagements that normally take between one and two years to be finalised.

This is a piece from the 2024 AGM transcript where SH mentions...

"closing in on a decision" and .....

"Based on direct prospect feedback I believe we are well positioned in some of the more critical engagements. In a number of cases, we have made it through the down selection process that eliminates many competitors and focuses on only 2 or 3 vendors for final selection."
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 54 users

Quiltman

Regular
Waiting, waiting and waiting. Will it happen? Who knows. Losing faith.

I know most here understand this chart : the chart of the entrepreneur.
It's difficult when you are in the midst of the action, privy to all that is going on.
But when you are on the outside looking in, as shareholders are, it can feel somewhat manic!

(chart: the entrepreneur's curve)


What Brainchip is delivering to the market is revolutionary, and IMO it is following this curve.
We are at that point of "Crash & Burn, i.e. give up", or moving towards the uptrend, and everyone is feeling nervous.

Like many here, I attended the AGM to be as informed as possible by the business itself, not hearsay & scuttlebutt, and, together with all the data that has been made available, I satisfied myself that we will succeed.
But I truly do understand those that are losing hope ... it is to be expected ... we are following the normal & expected path of any disruptive technology or entrepreneurial endeavour.
 
  • Like
  • Love
  • Fire
Reactions: 64 users
More added to my retirement fund @MDhere

 
  • Like
  • Fire
  • Love
Reactions: 19 users

FiveBucks

Regular
Me waiting for a price sensitive announcement.

 
  • Haha
  • Like
Reactions: 18 users

Earlyrelease

Regular
Me waiting for a price sensitive announcement.

Good things come to those that wait.

Always hated that saying when Mum used to say it, but it’s tried and tested for a reason.
 
  • Like
  • Fire
Reactions: 20 users

Getupthere

Regular

Edge AI accelerator delivers 60 TOPS at minimal power

Jul 3, 2024 | 7:31 AM

Startup EdgeCortix Inc. recently launched its next-generation SAKURA-II edge AI accelerator, designed for processing generative AI (GenAI) workloads at the edge with minimal power consumption and low latency. This platform is paired with the company’s second-generation dynamic neural accelerator (DNA) architecture to address the most challenging GenAI tasks in the industry.

EdgeCortix, headquartered in Japan, introduced its first-generation AI accelerator, SAKURA-I, in 2022, claiming over 10× performance-per-watt advantage over competing AI inference solutions based on GPUs for real-time edge applications. It also announced the open-source release of its MERA compiler software framework.

The company has leveraged feedback from customers using its first-generation silicon to deliver an updated architecture that offers future-proofing in terms of activation functions and support for a changing AI model landscape, while adding new features, such as sparse computation, advanced power management, mixed-precision support and a new reshaper engine. It also offers better performance per watt and higher memory capacity in multiple form factors.

SAKURA-II edge AI accelerator (Source: EdgeCortix Inc.)

The SAKURA-II accelerator or co-processor is a compelling solution, with a performance of 60 trillion operations per second (TOPS) with 8 W of typical power consumption, mixed-precision support and built-in memory-compression capabilities, said Sakyasingha Dasgupta, CEO and founder of EdgeCortix. Whether running traditional AI models or the latest GenAI solutions at the edge, it is one of the most flexible and power-efficient accelerators, he added.

SAKURA-II can handle complex tasks, such as large language models (LLMs), large vision models (LVMs) and multimodal transformer-based applications in the manufacturing, Industry 4.0, security, robotics, aerospace and telecommunications industries. It can manage multi-billion parameter models, such as Llama 2, Stable Diffusion, DETR and ViT, within a typical power envelope of 8 W.

In a nutshell, the SAKURA-II accelerator is optimized for GenAI and real-time data streaming with low latency. It delivers exceptional energy efficiency (touted as more than 2× the AI compute utilization of other solutions), higher DRAM capacity—up to 32 GB—to handle complex vision and GenAI workloads, up to 4× more DRAM bandwidth than other AI accelerators and advanced power management for ultra-high-efficiency modes. It also adds sparse computation to reduce memory bandwidth, a new integrated tensor reshaper engine to minimize host CPU load, arbitrary activation functions and software-enabled mixed precision for near-FP32 accuracy.

EdgeCortix’s solutions aim to reduce the cost, power and time of data transfer by moving more AI processing to the site of data creation. The edge AI accelerator platform addresses two big challenges due to “the explosive growth of the latest-generation AI models,” Dasgupta said. The first challenge is the rising computational demand due to these “exponentially growing models” and the resulting rise in hardware costs, he said.

The cost of deploying a solution or the cost of operation, whether it is in a smart city, robotics or the aerospace industry in an edge environment, is critically important, he added.

The second challenge is how to build more power-efficient systems. “Unfortunately, the majority of today’s AI models are a lot more power-hungry in terms of both electricity consumption as well as carbon emissions,” he said. “So how do we build systems with a software and hardware combination that is a lot more energy-efficient, especially for an edge environment that is constrained by power, weight and size? That really drives who we are as a company.”

The company’s core mission, Dasgupta said, is to deliver a solution that brings near-cloud-level performance to the edge environment, while also delivering orders of magnitude better power efficiency.

Performance per watt is a key factor for customers, Dasgupta added, especially in an edge environment, and within that, real-time processing becomes a critical factor.

Data at the edge

Looking at the data center versus edge landscape, most data, especially enterprise data, is being generated or processed at the edge, and that will continue into the future, Dasgupta said.

According to IDC, 74 zettabytes of data will be generated at the edge by 2025. Moving this enormous amount of data continuously from the edge to the cloud is expensive both in terms of power and time, he added. “The fundamental tenet of AI is how we bring computation and intelligence to the seat of data creation.”

Dasgupta said EdgeCortix has achieved this by pairing its design ethos of software first with power-efficient hardware to reduce the cost of data transfer.

The latest product caters to the GenAI landscape, including multi-billion-parameter LLMs and vision models, within the low-power constraints of the edge across applications, Dasgupta said. The latest low-power GenAI solution targets a range of verticals, from smart cities, smart retail and telecom to robotics, the factory floor, autonomous vehicles and even military/aerospace.

The platform

The software-driven unified platform comprises the SAKURA AI accelerator, MERA compiler and framework, DNA technology and the AI accelerator modules and boards. It supports both the latest GenAI and convolutional models.

Designed for flexibility and power efficiency, SAKURA-II offers high memory bandwidth, high accuracy and compact form factors. By leveraging EdgeCortix’s latest-generation runtime reconfigurable neural processing engine, DNA-II, SAKURA-II can provide high power efficiency and real-time processing capabilities while simultaneously executing multiple deep neural network models with low latency.

Dasgupta said it is difficult to put a number to latency because it depends on the AI models and applications. Most applications, depending on the larger models, will be below 10 ms, and in some cases, it could be in the sub-millisecond range.

The SAKURA-II hardware and software platform delivers flexibility, scalability and power efficiency. (Source: EdgeCortix Inc.)

The SAKURA-II platform, with the company’s MERA software suite, features a heterogeneous compiler platform, advanced quantization and model calibration capabilities. This software suite includes native support for development frameworks, such as PyTorch, TensorFlow Lite and ONNX. MERA’s flexible host-to-accelerator unified runtime can scale across single, multi-chip and multi-card systems at the edge. This significantly streamlines AI inferencing and shortens deployment times.

In addition, the integration with the MERA Model Zoo offers a seamless interface to Hugging Face Optimum and gives users access to an extensive range of the latest transformer models. This ensures a smooth transition from training to edge inference.

“One of the exciting elements is a new MERA model library that creates a direct interface with Hugging Face where our customers can bring a large number of current-generation transformer models without having to worry about portability,” Dasgupta said.

MERA Software supports diverse neural networks, from convolutions to the latest GenAI models. (Source: EdgeCortix Inc.)

By building a software-first—Software 2.0—architecture with new innovations, Dasgupta said the company has been able to get an order or two orders of magnitude increase in peak performance per watt compared with general-purpose systems, using CPUs and GPUs, depending on the application.

“We are able to preserve high accuracy [99% of FP32] for applications within those constrained environments of the edge; we’re able to deliver a lot more performance per watt, so better efficiency, and much higher speed in terms of the latency-sensitive and real-time critical applications, especially driven by multimodal-type applications with the latest generative AI models,” Dasgupta said. “And finally, a lower cost of operations in terms of even performance per dollar, preserving a significant advantage compared with other competing solutions.”

These edge AI accelerator requirements have gone into the design of the latest SAKURA product, Dasgupta said.

More details on SAKURA-II

SAKURA-II can deliver up to 60 TOPS of 8-bit integer, or INT8, performance and 30 trillion 16-bit brain floating-point operations per second (TFLOPS), while also supporting built-in mixed precision for handling next-generation AI tasks. The higher DRAM bandwidth targets the demand for higher performance for LLMs and LVMs.
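Taking the quoted figures at face value, the efficiency works out as follows. This is simple arithmetic on the article's own numbers, nothing beyond the specs quoted above:

```python
# Back-of-envelope efficiency figures from the quoted SAKURA-II specs.
# All inputs come from the article; the division is the only thing added here.

INT8_TOPS = 60        # peak INT8 throughput, trillions of ops/s
BF16_TFLOPS = 30      # peak BF16 throughput, trillions of ops/s
TYPICAL_WATTS = 8     # typical power consumption, W

# Single device
int8_tops_per_watt = INT8_TOPS / TYPICAL_WATTS      # 7.5 TOPS/W
bf16_tflops_per_watt = BF16_TFLOPS / TYPICAL_WATTS  # 3.75 TFLOPS/W

# Four-device PCIe card: up to 240 TOPS at under 50 W
CARD_TOPS = 240
CARD_WATTS = 50
card_tops_per_watt = CARD_TOPS / CARD_WATTS         # 4.8 TOPS/W

print(int8_tops_per_watt, bf16_tflops_per_watt, card_tops_per_watt)
```

Note the per-card figure is lower than the per-device figure, which is consistent with the card's 50 W budget covering DRAM and board overhead as well as the four accelerators.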

Also new for the SAKURA-II are the advanced power management features, including on-chip power gating and power management capabilities for ultra-high-efficiency modes and a dedicated tensor reshaper engine to manage complex data permutations on-chip and minimize host CPU load for efficient data handling.

Some of the key architecture innovations include sparse computation that natively supports memory footprint optimization to reduce the amount of memory required to move large amounts of models, especially multi-billion-parameter LLMs, Dasgupta said, which has a big impact on performance and power consumption.

The built-in advanced power management mechanisms can switch off different parts of the device while an application is running in a way that can trade off power versus performance, Dasgupta said, which enhances the performance per watt in applications or models that do not require all 60 TOPS.

“We also added a dedicated IP in the form of a new reshaper engine on the hardware itself, and this has been designed to handle large tensor operations,” Dasgupta said. This dedicated engine on-chip improves power as well as reduces latency even further, he added.

Dasgupta said the accelerator architecture is the key building block of the SAKURA-II’s performance. “We have much higher utilization of our transistors on the semiconductor as compared with some of our competitors, especially from a GPU perspective. It is typically somewhere on average 2× better utilization, and that gives us much better performance per watt.”

SAKURA-II also adds support for arbitrary activation functions on the hardware, which Dasgupta calls a future-proofing mechanism, so as new types of arbitrary activation functions come in, they can be extended to the user without having to change the hardware.

It also offers mixed-precision support on the software and hardware to trade off between performance and accuracy. Running some parts of a model at a higher precision and others at a reduced precision, depending on the application, becomes important in multimodal cases, Dasgupta said.
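The trade-off can be made concrete with a toy symmetric quantizer. This is a generic illustration of the mixed-precision idea, not EdgeCortix's actual scheme: sensitive layers could stay at 8 bits while tolerant ones drop to 4, at the cost of a coarser value grid and larger worst-case error.

```python
def quantize(values, bits):
    """Symmetric linear quantization: map floats to signed `bits`-bit
    integers, then back to floats, simulating the precision loss."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) * scale for v in values]

def max_error(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

# Toy "layer" weights, purely illustrative
weights = [0.91, -0.42, 0.07, -0.88, 0.33, 0.59, -0.15, 0.72]

err8 = max_error(weights, quantize(weights, 8))  # fine grid: 127 levels per sign
err4 = max_error(weights, quantize(weights, 4))  # coarse grid: 7 levels per sign

# Fewer bits -> coarser grid -> larger worst-case error
assert err4 > err8
```

A mixed-precision runtime applies the cheaper 4-bit path only where the accuracy loss is tolerable, which is why the article frames it as a per-application trade-off.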

SAKURA-II is available in several form factors to meet different customer requirements. These include a standalone device in a 19 × 19-mm BGA package, an M.2 module with a single device and PCIe cards with up to four devices. The M.2 module offers 8 or 16 GB of DRAM and is designed for space-constrained applications, while the single (16-GB DRAM) and dual (32-GB DRAM) PCIe cards target edge server applications.

SAKURA-II addresses space-constrained environments with the M.2 module form factor, supporting both x86 and Arm systems, and delivers performance by supporting multi-billion-parameter models as well as traditional vision models, Dasgupta said.

“The latest-generation product supports a very powerful compiler and software stack that can marry our co-processor with existing x86 or Arm systems working across different types of heterogeneous landscape in the edge space.”

SAKURA-II M.2 module (Source: EdgeCortix Inc.)

The unified platform also delivers a high amount of compute, up to 240 TOPS, in a single PCIe card with four devices at under 50 W of power consumption.

Dasgupta said power has been maintained at previous levels with the SAKURA-II, so customers are getting a much higher performance per watt. The power consumption is typically about 8 W for the most complex AI models and even less for some applications, he said.

SAKURA-II will be available as a standalone device, with two M.2 modules with different DRAM capacities (8 GB and 16 GB), and single- and dual-device low-profile PCIe cards. Customers can reserve M.2 modules and PCIe cards for delivery in the second half of 2024. The accelerators, M.2 modules and PCIe cards can be pre-ordered.
 
  • Thinking
  • Like
  • Wow
Reactions: 10 users

manny100

Regular

[Quoted post: the EdgeCortix SAKURA-II article above]
Competition? They are moving away from the cloud?
" According to IDC, 74 zettabytes of data will be generated at the edge by 2025. Moving this enormous amount of data continuously from the edge to the cloud is expensive both in terms of power and time, he adds. “The fundamental tenet of AI is how we bring computation and intelligence to the seat of data creation.”"
 
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
It is a fact that I did not find this paper myself, but it contains tables from an extensive survey of AI chips conducted by AFRL in conjunction with the University of Dayton.

Table 2 compares 100 AI chips, so it is useful as a compressed, if not 100% accurate, guide to the competition.

https://www.preprints.org/manuscript/202407.0025/download/final_file

...
BrainChip introduced the Akida line of spiking processors. The AKD1000 has 80 NPUs, 3 pJ/synaptic operation, and around 2 W of power consumption [147]. Each NPU consists of eight neural processing engines that run simultaneously and control convolution, pooling, and activation (ReLU) operations [148]. Convolution is normally carried out in INT8 precision, but it can be programmed for INT1, INT2, INT3 or INT4 precision while sacrificing 1-3% accuracy. BrainChip has announced future releases of smaller and larger Akida processors under the AKD500, AKD1500, and AKD2000 labels [148]. A trained DNN can be converted to an SNN by using the CNN2SNN tool in the Meta-TF framework for loading a model into an Akida processor. This processor also has on-chip training capability, thus allowing the training of SNNs from scratch using the Meta-TF framework [146].
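A quick back-of-envelope on those two figures (3 pJ/synaptic operation and ~2 W). The 1-billion-operation inference size below is purely illustrative, not from the paper:

```python
# Sanity arithmetic on the survey's quoted AKD1000 figures.
PJ_PER_SYNOP = 3e-12   # 3 pJ per synaptic operation (from the paper)
POWER_W = 2.0          # quoted power consumption, W (joules per second)

# If the full 2 W budget went into synaptic operations, the implied peak rate:
ops_per_second = POWER_W / PJ_PER_SYNOP   # ~6.7e11 synaptic ops/s

# Energy for a hypothetical inference needing 1 billion synaptic operations:
ops_per_inference = 1e9                   # illustrative assumption only
joules = ops_per_inference * PJ_PER_SYNOP # ~3 mJ per inference

print(ops_per_second, joules)
```

In practice not all of the 2 W feeds synaptic operations, so the real rate sits below this ceiling, but the millijoule-scale per-inference energy is what makes the chip interesting for battery-powered edge devices.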

...

Several of the edge processors offer on-chip retraining in real time. This enables retraining of networks without having to send sensitive data to the cloud, thus increasing security and privacy. Intel’s Loihi 2 and BrainChip’s Akida processor can be retrained on local data for personalized applications and faster response rates.
 
  • Like
  • Love
  • Thinking
Reactions: 41 users

Frangipani

Top 20
This isn’t the actual article I was searching for, but it directs you to sooo many new possibilities.
The bionics market is setting up to take advantage, with huge generational improvements for recipients.
Feeling and pressure sensations, as well as hot/cold, through bionics are at the forefront of groundbreaking advancements.

Another space for Akida

Cheers Frangipani

By chance, I stumbled upon the following post about USC graduate Natalie Fung, who was recently awarded a Master in Communications Data Science - with her resilience and can-do attitude, she truly is an inspiration:


How heart-warming that USC Viterbi even awarded her service dog a certificate of graduation at the commencement ceremony in May! 😍


The May 2024 USC website article, which is linked in the first post, mentions the name of her lab:


“Since fall 2021, I’ve been involved with accessibility awareness on campus through the Viterbi Graduate Student Association, from partnering with the Graduate Student Government to put on a disability resource and awareness fair, to creating content on acquiring accommodations at USC.

I’m also a lab manager and research assistant in the Valero Lab under Prof. Francisco Valero-Cuevas, where I previously helped him plan a conference held at USC in conjunction with the National Science Foundation. Four published papers on disability and rehabilitation engineering resulted from the conference!

I’m now being funded by NSF to see how a neuromorphic arm developed in the lab can be directly translated into real life.”




0106FED9-A078-468B-9928-57D4C2627D15.jpeg



The USC Brain-Body Dynamics Lab is led by Francisco Valero-Cuevas, Professor of Biomedical Engineering, Aerospace and Mechanical Engineering, Electrical and Computer Engineering, Computer Science, and Biokinesiology and Physical Therapy (try to fit all this onto one business card!), who has been fascinated with the biomechanics of the human hand for years (see the 2008 article below) and was already talking about utilising neuromorphic chips in robots three years ago, in a ‘research update’ video recorded on June 17, 2021:



“But at the same time we are building physical robots that have what are called ‘neuromorphic circuits’. These are computer chips that are emulating populations of spiking neurons that then feed motors and amplifiers and the like that actually produce manipulation and locomotion.” (from 2:56 min)


D1383C77-443E-4A88-9524-3416F92ACE3D.jpeg


Given that a number of USC Viterbi School of Engineering faculty members are evidently favourably aware of BrainChip (see below) - plus our CTO happens to be a USC School of Engineering alumnus and possibly still has some ties to his alma mater - I wouldn’t be surprised if Valero Lab researchers were also experimenting with Akida.

58597214-60EB-4C0D-BFA9-B95CA5BC188C.jpeg


E0B256FE-A27C-46DC-9F56-EAD2B0BA644E.jpeg

Remember this June 2023 This is our Mission podcast?


After introducing his guest, who also serves as the Executive Vice Dean at the USC Viterbi School of Engineering, Nandan Nayampally says “You know, we go back a long way … in fact, we had common alma maters.” (03:14 min)

Gaurav Sukatme:
From 25:32 min: “I think the partnership between industry and academia is crucial here to make progress.”

From 27:13 min: “You know, companies like yours, like Brainchip, what you are doing with the University Accelerator Program, I like very much - in fact, we’re looking into it, as you know, we’ll be having a phone [?] conversation about exploring that further. I think programs like that are unique and can really make the nexus between a leading company and academia sort of be tighter and be stronger.”

At the end of the podcast, Nandan Nayampally thanks his guest for sharing his insights and closes with the words “…and hopefully we’ll work together much closer soon.” (35:15 min)

Which makes Brainchip’s involvement in CONCRETE (“Center of Neuromorphic Computing and Extreme Environment”), well, not concrete, but certainly more likely… 😊
Another USC professor very much aware of Brainchip & Akida:

View attachment 56327

View attachment 56328

(The Hughes Aircraft Electrical Engineering Center houses the Ming Hsieh Department of Electrical and Computer Engineering-Systems, cf.
https://viterbi.usc.edu/news/news/2012/hughes-aircraft-electrical.htm - sections of the now defunct aerospace and defense contractor that gifted its name to the building live on in Raytheon and Boeing.)



View attachment 56321
View attachment 56333


Unfortunately, I don’t have any login credentials, so someone else needs to find out what the authors say about Akida in 17.4.

The preview includes the book’s preface, though, in which our company gets a mention, too.

View attachment 56334

View attachment 56314
Meanwhile, yet another university is encouraging their students to apply for a summer internship at BrainChip:



View attachment 63100


I guess it is just a question of time before USC will be added to the BrainChip University AI Accelerator Program, although Nandan Nayampally is sadly no longer with our company…
 

Attachments

  • 3B031D78-2EE9-4C9A-BAC8-D91BD79211C0.jpeg
    3B031D78-2EE9-4C9A-BAC8-D91BD79211C0.jpeg
    215.3 KB · Views: 59
  • Like
  • Love
  • Thinking
Reactions: 31 users

Satchmo25

Member

Samsung is already investing in the startup Axelera for AI chips. Hope Akida will be part of this smartphone war!
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

MrNick

Regular
  • Like
  • Fire
Reactions: 6 users

Frangipani

Top 20
Hi FJ-215,

the article you linked to refers to a different Fraunhofer Institute, Fraunhofer IPMS in Dresden, whereas the Fraunhofer Institute shown in the video is Fraunhofer HHI (Heinrich-Hertz-Institut) in Berlin. (There are 76 Fraunhofer Institutes in total.)

At the very end of the video, there is a reference to a research paper that I posted about a few weeks ago:

View attachment 63664



https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417987

View attachment 63666
View attachment 63667



View attachment 63665

And thanks to the video we now know what neuromorphic hardware the researchers used, even though they didn’t reveal it in their paper! 😍


Our friends at Fraunhofer HHI 👆🏻are part of the ongoing Berlin 6G Conference (July 2 - 4, 2024). While three of the above paper’s co-authors are session chairs or speakers, Yuzhen Ke and Mehdi Heshmati are giving live demos of Spiky Spot’s akidaesque gesture recognition skills at the conference expo.

493DA070-688F-4BC0-81B9-E5B527CC2B5B.jpeg



E83A0822-78D1-4FFE-BAA1-91E0AA263A24.jpeg



The Berlin-based researchers have been rather busy conference-hopping in recent weeks: Stockholm (Best Demonstration Award), Antwerp, Denver, Berlin (2x).

I am just not sure whether they have been letting curious visitors to their booth in on the secret behind their obedient robotic dog... 🤔

AE899472-BF5C-4793-AC83-6D8FF13DD685.jpeg


0084B4E9-D2EB-4CA3-B347-B61A8291E57B.jpeg

Some of the Fraunhofer Heinrich Hertz Institute researchers from Berlin who appear to have used Akida in their research, as evidenced by their recent video titled Neuromorphic Wireless Cognition for Connected Intelligence (encouragingly, there has been no denial in the comment section so far), are currently attending the IEEE International Conference on Communications in Denver, where they are presenting their demo video at the German 6G Research and Innovation Cluster booth.


One of their Fraunhofer HHI colleagues was promoting the demo video on LinkedIn earlier today, while at the same time cozying up to Ericsson at the booth opposite them by praising Ericsson’s Head of Research Magnus Frodigh for addressing their common interest in neuromorphic processing and SNNs for 6G in his keynote speech.

After this virtual wave across the aisle, researchers on both sides will surely find the time to have a little chat even closer up, and in the case of Fraunhofer HHI possibly also with others who may have only become aware of the German 6G research delegation’s booth because of Ericsson and Magnus Frodigh getting mentioned in that LinkedIn post.




View attachment 64688
View attachment 64684


View attachment 64679




View attachment 64683

5919EA55-6D42-4350-80E9-7855AA28A20B.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 32 users

Frangipani

Top 20
I like the way Alexandre Mandl from São Paulo, Brazil is asking this legitimate question without dissing NVIDIA:

“Is the next big thing in AI not about raw power but about doing more with less juice?”




D431DF97-4CB9-4F65-82AF-79B9AF0214C6.jpeg


7612CCC7-260B-4901-A3F5-FD18888DAB31.jpeg


4FA8B65D-CE2B-49AA-BA11-926EDAF8EA06.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Frangipani

Top 20

Attachments

  • CDCDE31E-A9DA-454C-A1BA-1A7FDFD87230.jpeg
    CDCDE31E-A9DA-454C-A1BA-1A7FDFD87230.jpeg
    192.6 KB · Views: 46
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 18 users

Frangipani

Top 20
Speaking of BAE Systems:




Robots.Jobs


Digital Microelectronics Technology Development Lead

PUBLISHED: JULY 3, 2024 ONSITE ARLINGTON, VIRGINIA FULL TIME BUSINESS & FINANCEMECHANICAL ENGINEERING

Description

Job Description

BAE Systems FAST Labs Microelectronics Science and Technology (MS&T) area architects novel and beyond-next-generation RF, mixed-signal, and digital integrated circuits and chipsets, and delivers those new custom microelectronics capabilities into systems. With ongoing growth, MS&T is seeking a technology leader to help drive its strategy forward.
The candidate for this position will lead Digital Microelectronics research and development pursuits, programs, and strategies. The trajectory of this portfolio includes algorithmic hardware accelerators, digital signal processor SOCs and ASICs, radiation hard microelectronics, non-traditional compute microelectronics architectures, and new high-risk/high-reward novel compute technologies.

The candidate will lead engagement with external and internal customers to build, win, and execute programs that support the research and development of digital microelectronics technology. They are expected to leverage their position, reputation, and technical expertise to maintain and expand relationships with government R&D organizations such as the Department of Commerce, ONR, AFRL, OUSD(R&E), OUSD(A&S), DARPA, etc. Through their internal and external network, the candidate is expected to advance the digital microelectronics technical and programmatic roadmaps while maintaining alignment across the broader MS&T area.

This job can be hybrid (on-site >=50% of time) or fully on-site.
In this job role, qualified candidates can expect to:
  • Lead relevant research programs of 3-50 people as principal investigator
  • Provide business and execution oversight of programs valued at $1-50M
  • Lead pursuits and proposals
  • Expand and maintain strategic relationships with government agencies, external companies, and other BAE Systems groups and business areas
  • Implement a programmatic strategy to fund technology development
  • Pursue personal and portfolio growth so as to support a broad team of scientists, researchers, and technology developers
  • This position can be based out of our Merrimack, NH; Nashua, NH; Burlington, MA; Lexington, MA; Manassas, VA; or Arlington, VA facilities, though it will require collaboration with staff across our business areas and facilities in the Northeast.
Required Education, Experience, & Skills
The ideal candidate will possess the following:
  • Excellent written and oral communication skills
  • Experience capturing and leading projects within the defense microelectronics community, at the cutting edge of technology
  • Prior experience as principal investigator on DoD R&D programs
  • Track record of technical innovation as evidenced by journal and conference publications or patent filings
  • Demonstrable entrepreneurial drive
  • Experience in at least one of the following areas:
  • Digital Signal Processing
  • AI/ML Hardware Acceleration
  • Digital system architecture
  • Neural networks or neuromorphic engineering
  • In-memory compute processors
  • Ability to obtain clearance at the Secret level or higher
Preferred Education, Experience, & Skills
  • Masters or PhD in electrical engineering or related field
Pay Information
Full-Time Salary Range: $140,690 – $239,140

Please note: This range is based on our market pay structures. However, individual salaries are determined by a variety of factors including, but not limited to: business considerations, local market conditions, and internal equity, as well as candidate qualifications, such as skills, education, and experience.

Employee Benefits: At BAE Systems, we support our employees in all aspects of their life, including their health and financial well-being. Regular employees scheduled to work 20 hours per week are offered: health, dental, and vision insurance; health savings accounts; a 401(k) savings plan; disability coverage; and life and accident insurance. We also have an employee assistance program, a legal plan, and other perks including discounts on things like home, auto, and pet insurance. Our leave programs include paid time off, paid holidays, as well as other types of leave, including paid parental, military, bereavement, and any applicable federal and state sick leave. Employees may participate in the company recognition program to receive monetary or non-monetary recognition awards. Other incentives may be available based on position level and/or job specifics.

Digital Microelectronics Technology Development Lead
103227BR

EEO Career Site. Equal Opportunity Employer: minorities / females / veterans / individuals with disabilities / sexual orientation / gender identity / gender expression.
 
  • Like
  • Fire
Reactions: 13 users
Top Bottom