BRN Discussion Ongoing

https://arxiv.org/html/2401.06911v1

"Furthermore, due to the runtime limitation of 20 minutes for jobs on Intelā€™s cloud, on-chip training was not possible, restricting the exploration of this important capability of Loihi".

Akida can solve that.
Hi JB

This is an important find. I have taken the following extract to cover the facts you have revealed:

  • Superiority of SNN and Loihi 2: The results highlight that, in all cases, Intel's Loihi 2 performs better than the CNN implemented in Xilinx's VCK5000. However, it is worth noting that, as the batch size increases, the advantage of Loihi became less pronounced as the performance points move closer to the EDP line, although it still outperforms the Xilinx's VCK5000 implementation.
  • Interference detection as a promising use case: Among the considered scenarios, it seems that the interference detection and classification benefited the most from the implementation on Intel's Loihi chipset. Even though the time ratio remained generally higher than one, the energy savings achieved with Loihi were significant. In some cases, the energy ratio reached values as high as 10⁵.
  • SNN encoding: Fig. 3 compares the impact of FFT Rate and FFT TEM encoding for the ID scenario. Interestingly, the type of coding used did not have a significant impact on this comparison.
  • RRM's Performance: While RRM did not achieve energy savings as pronounced as in the ID scenario, it consistently achieved an energy ratio exceeding 10².
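An aside from me, not from the paper: to make the time ratio, energy ratio and energy-delay product (EDP) comparisons in those bullet points concrete, here is a minimal Python sketch with invented per-sample figures. The ratio definitions below are just my reading of the extract (a time ratio above one meaning the SNN is slower per sample; a large energy ratio meaning it is far cheaper in energy) and may not match the paper's exact definitions.

```python
# Toy illustration of the time/energy/EDP comparison described in the extract above.
# All figures below are invented for illustration; they are NOT the paper's measurements.

def ratios(t_ref_s, e_ref_j, t_snn_s, e_snn_j):
    """Return (time_ratio, energy_ratio, edp_ratio) for a reference
    implementation (e.g. a CNN on an FPGA card) versus an SNN implementation
    (e.g. on a neuromorphic chip), per processed sample."""
    time_ratio = t_snn_s / t_ref_s                           # > 1: the SNN is slower per sample
    energy_ratio = e_ref_j / e_snn_j                         # > 1: the SNN saves energy
    edp_ratio = (t_ref_s * e_ref_j) / (t_snn_s * e_snn_j)    # > 1: the SNN wins on energy-delay product
    return time_ratio, energy_ratio, edp_ratio

# Hypothetical per-sample figures at batch size 1:
cnn_time_s, cnn_energy_j = 2e-3, 0.5      # 2 ms and 0.5 J on the reference accelerator
snn_time_s, snn_energy_j = 6e-3, 5e-6     # 6 ms and 5 uJ on the neuromorphic chip

tr, er, edpr = ratios(cnn_time_s, cnn_energy_j, snn_time_s, snn_energy_j)
print(f"time ratio   = {tr:.1f}")
print(f"energy ratio = {er:.0e}")
print(f"EDP ratio    = {edpr:.0e}")
```

With these made-up numbers the SNN is three times slower per sample but five orders of magnitude cheaper in energy, which is the shape of the trade-off the bullets describe; presumably a larger batch size mainly amortises the conventional accelerator's overheads, which is why the points drift back toward the EDP line.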
Regarding the digital beamforming performance for fast-moving users, we compared the conventional LASSO solution provided by CVX [13] running on Matlab with the solution of S-LCA on Intel's Lava simulator. Firstly, it is worth highlighting that the proposed beamforming formulation yielded sparse beamforming vectors, with both solutions being able to turn off up to 60% of the RF chains without compromising the resulting beampatterns. Regarding performance comparisons between the two solutions, both generated satisfactory beampatterns with the main lobe pointing toward the aircraft. For a numerical comparison, the beamformer's average output power was considered as key performance indicator to assess the beamforming capabilities to mitigate the effects of noise and interference while focusing on the desired signal direction. In this context, the S-LCA solution was able to reach lower levels of beamformer's average output power, around 19% below the value reached by the CVX solution, but with a much higher spreading of values, around 4 times higher than the CVX solution, when comparing the lower and upper quartiles of beamformer's average output power.
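Also not from the paper: the sparse beamforming described above is the classic LASSO effect, where an L1 penalty on the beamforming weights drives many of them to exactly zero, so the corresponding RF chains can be switched off. The sketch below is a generic ISTA (soft-thresholding) solver on made-up real-valued data, just to show the mechanism; it is not the authors' CVX or S-LCA implementation, and every size and parameter in it is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes and values invented): N antennas / RF chains, M sampled
# directions. 'A' stands in for an array response matrix and 'd' for a desired
# beampattern; neither comes from the paper.
N, M = 16, 64
A = rng.standard_normal((M, N))
true_w = rng.standard_normal(N) * (rng.random(N) < 0.4)   # a sparse "ground truth" beamformer
d = A @ true_w

lam = 2.0                                  # L1 weight: larger -> sparser solution
step = 1.0 / np.linalg.norm(A, 2) ** 2     # ISTA step size (1 / Lipschitz constant of the gradient)

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# ISTA iterations for: minimise 0.5 * ||A w - d||^2 + lam * ||w||_1
w = np.zeros(N)
for _ in range(500):
    grad = A.T @ (A @ w - d)
    w = soft_threshold(w - step * grad, step * lam)

off = int(np.sum(w == 0.0))
print(f"{off}/{N} weights are exactly zero -> those RF chains could be switched off")
```

Raising lam switches off more chains at the cost of a poorer fit to the desired beampattern, which mirrors the sparsity-versus-beampattern trade-off discussed above.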

V-A Remarks about the results obtained with Intel's Loihi 2

The results presented in this article were conducted with Loihi 2 but using the remote access from Intel's research cloud. Although Intel offers the possibility of shipping physical chips to INRC partners' premises, at the moment of developing these results the primary access to Loihi 2 was through the Intel's neuromorphic research cloud. Obviously, the remote access introduced some additional limitations as it was not possible to control for concurrent usage by other users, which could lead to delays and increased power consumption. Additionally, the specific interface used by Intel on their cloud was not disclosed, potentially resulting in differences when conducting measurements with future releases of Loihi 2. Furthermore, due to the runtime limitation of 20 minutes for jobs on Intel's cloud, on-chip training was not possible, restricting the exploration of this important capability of Loihi.
The cloud interface plays a key role, as it impacts the transfer of spiking signals to the chipset. High input size may span long per-step processing time. For example, in the flexible payload use case, the execution time per example increased from approximately 5 ms to 100 ms when the input size went from 252 neurons to 299 neurons.
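One more illustrative aside, again not from the paper: the "FFT Rate" encoding mentioned in the bullet list above, and the reason input size matters for per-step cost, can be pictured as one input neuron per FFT bin, with each neuron's spike probability scaling with its bin magnitude; more input neurons simply means more spikes to push through the cloud interface at every step. The sketch below is a generic rate encoder whose function name, parameters and scaling are all assumptions, not the authors' encoder.

```python
import numpy as np

rng = np.random.default_rng(1)

def fft_rate_encode(signal, n_bins, n_steps, max_rate=0.5):
    """Rate-encode the magnitude spectrum of `signal` into a spike raster.

    One input neuron per FFT bin; at each of `n_steps` time steps a neuron
    fires with probability proportional to its normalised bin magnitude.
    Purely illustrative; not the encoding pipeline used in the paper.
    """
    spectrum = np.abs(np.fft.rfft(signal, n=2 * (n_bins - 1)))   # exactly n_bins magnitudes
    rates = max_rate * spectrum / (spectrum.max() + 1e-12)       # per-step spike probabilities
    return (rng.random((n_steps, n_bins)) < rates).astype(np.uint8)

# Example: a noisy two-tone signal encoded with 252 input neurons over 32 time steps
# (252 matches the flexible-payload input size quoted above; everything else is invented).
t = np.arange(1024) / 1024.0
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)
raster = fft_rate_encode(x, n_bins=252, n_steps=32)
print(raster.shape, "total spikes:", int(raster.sum()))   # more input neurons -> more data to stream per step
```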

VI Conclusion

While we enter the era of AI, it becomes evident that energy consumption is a limiting factor when training and implementing neural networks with significant number of neurons. This issue becomes particularly relevant for nonterrestrial communication devices, such as satellite payloads, which are in need of more efficient hardware components in order to benefit from the potential of AI techniques.
Neuromorphic processors, such as Intel's Loihi 2, have shown to be more efficient when processing individual data samples and are, therefore, a better fit for use cases where real world data arrives to the chip and it needs to be processed right away. In this article, we verified this hypothesis using real standard processor solutions, such as the Xilinx's VCK5000 and the Intel's Loihi 2 chipset.

Acknowledgments

This work has been supported by the European Space Agency (ESA) funded under Contract No. 4000137378/22/UK/ND - The Application of Neuromorphic Processors to Satcom Applications. Please note that the views of the authors of this paper do not necessarily reflect the views of ESA. Furthermore, this work was partially supported by the Luxembourg National Research Fund (FNR) under the project SmartSpace (C21/IS/16193290)."

The first Fact is that it confirms that Loihi 1 & 2 are most definitely still only research platforms.

The second Fact is that Intel's unwillingness to disclose the cloud interface it uses would, as the paper suggests, be a significant limitation for commercial adoption.

The third Fact disclosed, that inference time on Loihi 2 increased as the number of neurons in use grew, is a huge issue for Intel, because from everything I have read about AKIDA, the more nodes, and hence neurons, in play, the faster the inference times. (Perhaps I miss the point here, so this needs a Diogenese ruler run over my conclusion.)

Fact four: even Loihi 2, despite its limitations, is far more efficient than Xilinx's VCK5000 running CNNs.

Finally, Fact five: using neuromorphic computing for cognitive communication is clearly the way forward.

My opinion only DYOR
Fact Finder
 
  • Like
  • Love
  • Fire
Reactions: 51 users

IloveLamp

Top 20
Screenshot_20240208_131202_LinkedIn.jpg
 
  • Like
  • Fire
  • Love
Reactions: 32 users

IloveLamp

Top 20
Screenshot_20240208_131738_LinkedIn.jpg
 
  • Like
  • Fire
Reactions: 10 users
Someone could produce a graph showing Loihi and Akida performance except we wouldn't be on the same page.

SC
 
  • Haha
  • Like
  • Love
Reactions: 17 users
You have to double it.
This company isn't an information company for the stock market. I feel that ever since the Mercedes announcement the current management have put a lid on everything. Watch the Financials and the Partnerships.
 
  • Like
  • Thinking
Reactions: 5 users

Diogenese

Top 20
https://arxiv.org/html/2401.06911v1

"Furthermore, due to the runtime limitation of 20 minutes for jobs on Intelā€™s cloud, on-chip training was not possible, restricting the exploration of this important capability of Loihi".

Akida can solve that.
If one were to criticize the article, one would have to say that Flor Ortiz cocked up badly in:

[8] Flor Ortiz et al., "Onboard processing in satellite communications using AI accelerators," Aerospace, vol. 10, no. 2, 2023.

because this was the basis of the choice of the Xilinx VCK5000 portable toaster.

"A detailed overview of commercial off-the-shelf (COTS) AI-capable chipsets was presented in [8]. Based on that, and on the current availability and lead times, we selected the Xilinx VCK5000 for the evaluation of the machine learning models described in this article."


VCK5000 Versal Development Card (xilinx.com)



1707365145814.png





You've got to think that, somehow or other, they hadn't heard of Akida when they were planning the tests and relying on a 2023 survey by one of the authors. Navel gazing!

It's a bit like comparing grapes and watermelons.

Still, it only costs ~ $13,000.00 and made Loihi look good.

(OK, so maybe they thought Akida was only available as IP?)
 
Last edited:
  • Like
  • Haha
  • Fire
Reactions: 26 users

Tony Coles

Regular
Wow! Love fact five, FF sounds good to me.
 
  • Like
  • Fire
Reactions: 9 users

jtardif999

Regular

There's quite a bit mentioned about AI inferencing for AMD's Versal line-up of SoCs... any idea how this compares to Akida? Xilinx went for around $50 billion when AMD acquired them recently... so the Versal edge AI must have something going for it? I'd always hoped that AMD would use Akida IP... seems like they are developing their own tech though.
Yeah, it runs on 30 watts of power, making it 30 to 30,000 times more power hungry - really great for reduced power at the edge!! BrainChip had a previous relationship with Xilinx in 2017. It seems like that hasn't translated into any meaningful advantage to AMD yet.
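For what it's worth, the 30 to 30,000 range is just arithmetic: taking the ~30 W figure at face value and dividing it by an assumed edge-inference power budget somewhere between 1 mW and 1 W. Both endpoints of that budget are my assumption, not a published Akida figure.

```python
vck5000_w = 30.0                 # board power figure quoted in the post above (taken at face value)
edge_budget_w = (1e-3, 1.0)      # assumed edge-inference power range; NOT a published Akida spec
print([round(vck5000_w / p) for p in reversed(edge_budget_w)])   # -> [30, 30000]
```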
 
  • Like
  • Thinking
  • Fire
Reactions: 9 users

7für7

Regular
Afternoon Chippers ,

On the three day chart @ 1 , 2 & 3 min time duration on volume, thinking we should see some decent volume within next hour .... i shall refrain from a price guess today .

:whistle: .

Regards,
Esq
😂 when volume?
 
  • Haha
Reactions: 3 users

Diogenese

Top 20
Isn't it time Reddit found out that Akida is the SNN in ARM's ecosystem?
 
  • Like
  • Fire
  • Haha
Reactions: 33 users
Isn't it time Reddit found out that Akida is the SNN in ARM's ecosystem?
Sshhh, you don't want everyone knowing. You will have long-lost relatives turning up wanting a loan or handout, and your life will be ruined just like the mega lottery winners. 😂🤣🤔☹
 
  • Haha
  • Like
  • Love
Reactions: 25 users

Ian

Founding Member
  • Like
  • Wow
  • Love
Reactions: 10 users

IloveLamp

Top 20


Europ Assistance is part of the Generali Group. 2022 turnover was a measly 81.5 billion euros, according to Wikipedia.

Screenshot_20240208_174541_LinkedIn.jpg
Screenshot_20240208_174607_Chrome.jpg
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Esq.111

Fascinatingly Intuitive.
  • Haha
  • Like
Reactions: 6 users
Last edited:
  • Haha
  • Like
  • Thinking
Reactions: 6 users

rgupta

Regular
Yeah, it runs on 30 watts of power, making it 30 to 30,000 times more power hungry - really great for reduced power at the edge!! BrainChip had a previous relationship with Xilinx in 2017. It seems like that hasn't translated into any meaningful advantage to AMD yet.
Just a bit of confusion:
1. I cannot believe AMD is not aware of Akida.
2. Why did they choose a product which is so power hungry?
3. Is it the case that Akida is much more difficult to incorporate?
4. Is there a possibility that someone will beat AMD by miles by adopting Akida as a sensor processor?
5. Is AMD's management so bad that it cannot see the future?
 
  • Like
  • Fire
Reactions: 8 users

rgupta

Regular
Just a bit of confusion:
1. I cannot believe AMD is not aware of Akida.
2. Why did they choose a product which is so power hungry?
3. Is it the case that Akida is much more difficult to incorporate?
4. Is there a possibility that someone will beat AMD by miles by adopting Akida as a sensor processor?
5. Is AMD's management so bad that it cannot see the future?
I forgot to add another scenario:
6. Is someone blocking BrainChip from partnering with AMD with a better and exclusive agreement, e.g. Apple or Nvidia?
 
  • Like
  • Thinking
  • Fire
Reactions: 7 users

"Influential space programs

Microchip is involved in the High-Performance Spaceflight Computing (HPSC) processor project of the U.S. National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory in La Cañada Flintridge, Calif. Microchip is developing a space processor that will provide at least 100 times the computational capacity of current spaceflight computers.
Microchip will build the HPSC processor over three years, with the goal of employing the processor on future lunar and planetary exploration missions. Microchip's processor architecture will improve the overall computing efficiency for these missions by enabling computing power to be scalable, based on mission needs. The work is under a $50 million contract, with Microchip contributing significant research and development costs to complete the project.
"We are making a joint investment with NASA on a new trusted and transformative compute platform that will deliver comprehensive Ethernet networking, advanced artificial intelligence and machine learning processing, and connectivity support while offering unprecedented performance gain, fault-tolerance, and security architecture at low power consumption," says Babak Samimi, corporate vice president for Microchip's Communications business unit.
"We will foster an industrywide ecosystem of single-board computer partners anchored on the HPSC processor and Microchip's complementary space-qualified

Microchip is developing the NASA High-Performance Spaceflight Computing (HPSC) processor that will provide at least 100 times the computational capacity of current spaceflight computers.

Microchip is developing the NASA High-Performance Spaceflight Computing (HPSC) processor that will provide at least 100 times the computational capacity of current spaceflight computers.​
total system solutions to benefit a new generation of mission-critical edge compute designs optimized for size, weight, and power," he says.
Current space-qualified computing technology is designed to address the most computationally intensive part of a mission, which leads to overdesigning and inefficient use of computing power. Microchip's new processor will enable the device's processing power to ebb and flow depending on current operational requirements. Certain processing functions can also be turned off when not in use to reduce power consumption.
"Our current spaceflight computers were developed almost 30 years ago," says Wesley Powell, NASA's principal technologist for advanced avionics. "While they have served past missions well, future NASA missions demand significantly increased onboard computing capabilities and reliability. The new computing processor will provide the advances required in performance, fault tolerance, and flexibility to meet these future mission needs."
The U.S. Space Force has kicked off a program to design next-generation radiation-hardened non-volatile memory chips for future military applications in space in the Advanced Next Generation Strategic Radiation hardened Memory (ANGSTRM) project. The U.S. Air Force Research Laboratory's Space Vehicles Directorate at Kirtland Air Force Base, N.M., issued an ANGSTRM solicitation on behalf of the Space Force last January to develop a strategic rad-hard non-volatile memory device with near-commercial state-of-the-art performance by using advanced packaging and radiation-hardening techniques with state-of-the-art commercial technology."


More info on what Microchip is involved in, as shown in the table in the link.

 
Last edited:
  • Like
  • Wow
Reactions: 6 users