BRN Discussion Ongoing



MDhere

Regular
Last night I was thinking 30c to the power of 10 then I realised what I was thinking! Ok maybe just x10 but what the heck, bring on the power of "Akida" power of 10 😁
 
It would be great if BRN was advertised on a USB stick here on their website (QV).
 


Bravo

If ARM was an arm, BRN would be its biceps💪!
Texas Instruments integrating an NPU as an edge AI accelerator.



TI Integrates Edge AI and Real-Time Control in New Mission-Critical MCUs​

8 hours ago by Duane Benson

Today, TI unveiled new members of the C2000 processor family—one with hardware-accelerated AI and the other with a 64-bit core for real-time control.​


Today at Electronica 2024, Texas Instruments (TI) announced its new TMS320F28P55x series of C2000 MCUs. TI calls the series the industry’s first real-time microcontrollers with an integrated neural processing unit (NPU). Along with that announcement, TI also revealed the new 64-bit F29H85x series of MCUs built around its C29 digital signal processing (DSP) core. The C29 MCUs target automotive applications that require fault-tolerant, low-latency operations and predictive decision-making. Both sets of new MCUs can serve mission-critical applications that require low-latency, real-time detection, calculation, and response.

A Brief Introduction to the Two New MCU Series​

The 40+ variants of the TMS320F28P55x series (datasheet linked) come with an integrated hardware NPU and up to 1.1 MB of on-chip flash memory. The series also features 24 pulse width modulation (PWM) channels and 39 analog-to-digital converter (ADC) channels.

TMS320F28P55x and F29H85x MCUs

TMS320F28P55x and F29H85x MCUs. Image (modified) used courtesy of Texas Instruments

The second series of MCUs in this announcement, the F29H85x, provides motor and power control with two to three times the signal chain performance of its predecessors. TI also claims five times faster fast Fourier transform (FFT) performance for diagnostics, tuning, and arc detection. Real-time interrupts run four times faster, and general-purpose code execution is reportedly two to three times faster. The series also includes an isolated hardware security module.

The TMS320F28P55x Series: Aided by a Powerful NPU​

For over 25 years, TI's C2000 family has provided real-time control in industrial and automotive applications. The newest additions to the family, the TMS320F28P55x series, integrate an NPU as an edge AI hardware accelerator. The NPU lets the MCU offload AI processing from the primary core, enabling advanced AI-based decision-making without loading down the primary processing core and dramatically increasing real-time performance.

Functional block diagram of the TMS320F28P55x

Functional block diagram of the TMS320F28P55x. Image used courtesy of Texas Instruments

Conventional microcontrollers use simple logic to make real-time decisions. They use combinations of “if-then” or state machines to evaluate conditions and essentially make Boolean logic decisions based on given sets of input conditions. When sensor input is obvious and accurate, this type of system can work quite well. However, with more sensors providing input and with fast-changing conditions, ambiguity or sensor lag can lead to invalid input conditions or improper results. With today’s stringent safety and efficiency requirements, Boolean logic is insufficient for many requirements. That’s where edge AI can deliver significant improvements.
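To make the contrast concrete, here is a minimal sketch (not TI code; thresholds and sensor names are invented for illustration) of the kind of fixed "if-then" fault check the article describes, and how sensor lag defeats it:

```python
# Illustrative sketch of conventional Boolean fault logic: flag a fault
# only when both readings cross fixed limits in the same sample.

def boolean_fault_check(current_a: float, temp_c: float) -> bool:
    """Classic threshold logic with hard-coded limits."""
    return current_a > 15.0 and temp_c > 90.0

# With sensor lag, the two readings rarely cross their limits at the same
# instant, so a real fault slips past the check in both samples below.
samples = [
    (16.2, 85.0),   # current already over limit, temperature lagging
    (14.8, 93.0),   # temperature over limit, current dipped momentarily
]
faults = [boolean_fault_check(i, t) for i, t in samples]  # [False, False]
```

An AI model fed the same two samples as a sequence can learn that this joint pattern precedes a fault, which is the gap the article says edge AI fills.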

Adding NPU-based AI enables greater one-chip functionality

Adding NPU-based AI enables greater one-chip functionality with improved accuracy. Image used courtesy of Texas Instruments

TI notes that NPU capabilities will benefit applications like arc fault detection in solar and energy storage systems and motor-bearing fault detection for predictive maintenance. In both cases, conventional MCU code can misconstrue such faults, misidentifying them or not identifying them soon enough. The NPU allows the MCU to perform more advanced AI-style interpretation of sensor inputs in real time.
The TMS320F28P55x's NPU can also be trained to adapt to different environments with different sensor inputs, greatly increasing detection accuracy. It can run convolutional neural network (CNN) models to learn complex patterns from sensor data. The NPU offloads these calculations from the main CPU core and uses AI to detect complex fault conditions, which TI says can yield a 5x to 10x reduction in latency for detection operations.
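The core operation such a CNN applies to a sensor waveform is a convolution. The toy sketch below (plain NumPy standing in for the NPU; the kernel and waveforms are invented for illustration) shows how a learned edge-detecting kernel responds strongly to the sharp step an arc-like fault leaves in a current trace, and not at all to a steady signal:

```python
import numpy as np

def conv1d(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 1-D convolution, the core operation of a CNN layer."""
    return np.convolve(signal, kernel, mode="valid")

def relu(x: np.ndarray) -> np.ndarray:
    """Standard CNN activation: clamp negatives to zero."""
    return np.maximum(x, 0.0)

# An edge-detecting kernel of the kind a CNN might learn.
kernel = np.array([-1.0, 0.0, 1.0])
steady = np.array([1.0, 1.0, 1.0, 1.0, 1.0])   # healthy current trace
arcing = np.array([1.0, 1.0, 5.0, 1.0, 1.0])   # arc-like transient

steady_response = relu(conv1d(steady, kernel)).max()   # 0.0: no fault
arcing_response = relu(conv1d(arcing, kernel)).max()   # 4.0: fault-like
```

A real model stacks many such filtered channels and a classifier on top; the NPU's job is to run exactly this multiply-accumulate math without tying up the C28 core.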


The F29H85x Series: Leveraging 64-bit DSP and Real-Time Control​

The F29H85x MCU uses TI's new C29 DSP core to deliver more than double the real-time performance of its predecessor, the C28. The series uses TI's very long instruction word (VLIW) architecture, which supports execution of up to eight instructions per cycle. The MCUs also offer cybersecurity features, including a fully isolated hardware security module to protect the system. Further, the hardware safety and security unit uses context-aware memory protection to extend hardware isolation to CPU tasks without interference, providing security without a performance penalty on the rest of the MCU.

Improved C29 DSP core

Improved C29 DSP core response receives, processes, and responds more than twice as fast. Image used courtesy of Texas Instruments

The 64-bit DSP with complex math capability can speed signal chain performance for motor and power control by two to three times over the C28, and it delivers five times the FFT performance (FFT is used for system diagnostics, tuning, and arc detection). Interrupt response is four times faster than the C28, and general-purpose code executes two to three times faster.
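The FFT-based diagnostics mentioned here amount to looking for unexpected spectral components in a sampled waveform. A small sketch (frequencies and amplitudes invented for the example; NumPy stands in for the DSP) of detecting an arc-like high-frequency tone riding on a 50 Hz mains signal:

```python
import numpy as np

fs = 1000                        # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # one second of samples
mains = np.sin(2 * np.pi * 50 * t)            # healthy 50 Hz component
arc_noise = 0.3 * np.sin(2 * np.pi * 180 * t) # arc-like signature

# Magnitude spectrum of the combined signal.
spectrum = np.abs(np.fft.rfft(mains + arc_noise))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The two dominant bins sit at the mains and arc frequencies; a
# diagnostic routine would flag energy at any bin outside the expected set.
peaks = freqs[np.argsort(spectrum)[-2:]]       # {50.0, 180.0} Hz
```

On the C29, the claimed 5x FFT speedup means this spectral check can run inside a tight control loop rather than as a background task.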
TI engineered the chips to comply with the International Organization for Standardization (ISO) 26262 and International Electrotechnical Commission (IEC) 61508 automotive and industrial safety standards.

C2000 real-time MCU F28P55x development kit

C2000 real-time MCU F28P55x development kit in TI LaunchPad form factor. Image used courtesy of Texas Instruments

The F29 processors are automotive safety integrity level (ASIL) D and safety integrity level (SIL) 3 certified. ASIL D is the highest of four automotive safety risk management levels. SIL is an industrial risk-mitigation measure used in a number of industry standards; IEC 61508 defines four SIL levels, with SIL 4 being the highest. Visit TI at its Electronica booth C4-158.
 
Getting closer
 


FiveBucks

Regular
Sounded promising until they mentioned CNNs.
 


Bravo

If ARM was an arm, BRN would be its biceps💪!
Sounded promising until they mentioned CNNs.

Not too sure if it would necessarily exclude us because of the ability to convert CNNs to SNNs. Another one for our resident technical experts to delve into.
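For readers unfamiliar with what CNN-to-SNN conversion involves: the usual trick is to re-express a trained CNN's analog activations as spike rates, so the same learned weights can drive an event-driven network. The toy sketch below illustrates only that rate-coding idea; it is not BrainChip's CNN2SNN tool, and all the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def relu(x):
    """CNN activation whose non-negative output maps naturally to a rate."""
    return np.maximum(x, 0.0)

def rate_code(activation: float, n_steps: int = 1000) -> float:
    """Emit spikes over n_steps with probability proportional to the
    activation; the average spike count approximates the analog value."""
    p = min(float(activation), 1.0)
    spikes = rng.random(n_steps) < p
    return spikes.sum() / n_steps

analog = relu(0.42)          # activation from the trained CNN
spiking = rate_code(analog)  # SNN approximation of the same value (~0.42)
```

Because the conversion preserves the trained weights, a CNN-only announcement doesn't by itself rule out a spiking back end, which is the point being made above.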



Screenshot 2024-11-12 at 10.23.41 am.png
 


Bravo

If ARM was an arm, BRN would be its biceps💪!
Some relevant background info here is that Brainchip is part of the Nvidia ecosystem through Nvidia's TAO Toolkit.







Unleashing the Future of Vision AI: Alif and Edge Impulse Upgrade Edge Deployments with Powerful Computer Vision and Full Support for Nvidia's TAO Toolkit​


News provided by​

Alif Semiconductor
Nov 11, 2024, 02:00 ET

Share this article​


Alif Semiconductor hosts a demonstration at Electronica of an AI vision application created using transfer learning in the TAO toolkit, and deployed to an Ensemble MCU via the Edge Impulse platform
PLEASANTON, Calif., Nov. 11, 2024 /PRNewswire/ -- Alif Semiconductor®, a leading global supplier of secure, connected, power efficient Artificial Intelligence and Machine Learning (AI/ML) microcontrollers (MCUs) and fusion processors, along with Edge AI development platform Edge Impulse, today announced a significant breakthrough in implementing AI vision processing on edge microcontrollers. This is achieved through the introduction of full support for Nvidia's TAO model training toolkit on the Ensemble® and Balletto™ MCU families and the Edge Impulse platform.


The TAO toolkit has generated considerable enthusiasm among developers of edge AI devices due to its provision of extensive training datasets for common vision processing applications, and its support for transfer learning from pre-trained models. This innovation promises to substantially reduce the cost, time and effort which embedded device OEMs typically invest into model development for AI vision applications.

Until now, deploying TAO-trained models on low-power MCUs for edge applications has been untested and unproven. Now, with the complete integration of Nvidia TAO into its platform, Edge Impulse has established a streamlined process for deploying TAO models on the Alif Ensemble and Balletto families of MCUs and fusion processors. Fully integrated into the Edge Impulse platform, Alif's products feature the Arm® Ethos™-U55, a neural processing unit (NPU) for which Nvidia has optimized the TAO toolkit.

Embedded developers seeking to implement AI vision applications, such as people counting, intruder detection, or robotics, can now utilize the TAO training toolkit and its dataset, confidently deploying either a pre-trained TAO model or a custom model developed through transfer learning via Edge Impulse on Alif Ensemble or Balletto MCUs.

Henrik Flodell, Senior Marketing Director at Alif Semiconductor, stated, "With Alif's AI-optimized MCU ecosystem, high-end embedded vision processing is transitioning from large, expensive microprocessors to next-generation edge MCUs. The integration of the TAO toolkit by Edge Impulse has significantly streamlined the development and deployment of AI vision processing models on Alif MCUs."

Adam Benzion, SVP Partnerships at Edge Impulse, added, "The TAO toolkit accelerates the generation of effective ML models, but it doesn't address the deployment of these models on edge-optimized hardware. Together with Alif, we have solved this challenge by providing a fully integrated workflow from the TAO toolkit's pre-trained models to deployment on Alif edge MCUs."

Attendees can view the Alif/Edge Impulse demonstration of advanced AI vision processing on an Ensemble MCU at Electronica in the Alif Semiconductor booth B4.106 (Munich, Germany, 12-15 November 2024).

To learn more about the Ensemble and Balletto families of edge MCUs and fusion processors, please visit alifsemi.com. To learn more about the Edge Impulse development and deployment platform for AI at the edge, please visit edgeimpulse.com.

About Alif Semiconductor
Alif Semiconductor is the industry-leading supplier of next-generation secure AI/ML-enabled 32-bit microcontrollers. Since 2019, Alif's expanding offering of microcontrollers and fusion processors has been revolutionizing the way developers can create broad, scalable, and connected AI-enabled embedded applications that are genuinely power efficient. Alif Semiconductor is the only choice for power efficient microcontrollers that can handle heavy AI and ML workloads for battery-operated IoT devices. For more information go to https://alifsemi.com/.





Rob Telson "liked" a post about Alif's Ensemble MCU back in November 2022. Thanks to @Evermont for posting.


Screenshot 2024-11-12 at 11.13.07 am.png



Remember this slide from the Edge Impulse TAO Toolkit presentation? From the explanation that Rob Telson gave to @chapman89 at the time, once companies start pushing the limits on their power consumption and performance, they will need to introduce an AI accelerator IP such as Arm or BrainChip.


Screenshot 2024-11-12 at 10.42.52 am.png
 

7für7

Top 20
I’m so fed up again today… what a waste of time, nerves, and money… You keep pouring money into it, thinking you’ll become a millionaire overnight, and then you wake up and you’re still standing there, only now you’ve lost your investment as well… Always the same… How can people be so gullible? I’ll never understand it… I got out some time ago, and I’m glad I made that decision… I advise my friends the same, but to each their own…

Oh, by the way, sorry… I’m talking about playing the lottery… A colleague keeps dreaming about his quick millions, but he ignores how much he’s poured into it over his life. “Yeah, but if I hit it big just once, it’ll all be worth it” – yeah, keep dreaming… My advice to put his money into securities instead gets the response, “no, that’s too risky for me”… Well, then 😂😂
 

Boab

I wish I could paint like Vincent
Hey @Esq.111 I'm hoping you can see another afternoon BOOOOM as the handle on the cup looks to be cracking and I'm worried I'm going to spill hot tea all over my b.......
 

FiveBucks

Regular
@Bravo Hopefully Brainchip has a little run like our other baby is having today! WBT!!
 

Jefwilto

Regular
I looked on ChatGPT.

Q. Are BrainChip and Akida involved with the new model of Mercedes?

A. Yes, BrainChip and its Akida AI technology are involved in the development of some advanced systems for the new model of Mercedes-Benz vehicles, particularly in the realm of autonomous driving and edge AI capabilities.



BrainChip’s Akida chip is designed for edge AI applications, meaning it can process data directly on devices (like a car’s onboard systems) without needing to rely on cloud-based processing. This is especially important for applications in autonomous vehicles, where low latency, high reliability, and real-time decision-making are critical.



In fact, Mercedes-Benz has been working with BrainChip’s Akida technology to enhance its vehicle’s ability to process visual, auditory, and sensor data locally in real-time, improving the car’s AI-based features such as driver assistance systems and autonomous driving capabilities. This allows the vehicle to make decisions more quickly and efficiently, without having to rely on distant servers for every calculation.



However, while BrainChip’s Akida technology is a key part of this next-generation automotive AI, it is just one component of the larger ecosystem of technologies that go into a fully autonomous or semi-autonomous vehicle.
 


7für7

Top 20
Yeah, I just asked ChatGPT about Akida too… mind-blowing 🤯

“Akida technology from BrainChip, as it exists today, is a neuromorphic AI processor focused on edge computing, which means it’s designed for low-power, high-efficiency tasks directly on the device rather than relying on cloud processing. In essence, Akida is intended to bring advanced AI capabilities to smart devices, making them faster, more responsive, and power-efficient.

The technologies in Star Wars and Star Trek, like droids (Star Wars) or the ship’s computer (Star Trek), could conceptually use something like Akida. Both series feature AI-driven devices that respond instantly and intelligently to commands—something Akida aims to enable.”

This would be a game changer … both ways… far far away long time ago and… in near future… we are the winner… no matter in which direction ✌️💪
 