BRN Discussion Ongoing

  • Like
  • Fire
  • Love
Reactions: 7 users

MegaportX

Regular
Good post. At the moment ARM are totally cloud focussed. They are well aware that cloudless AI at the Edge is going to see exponential growth.
While there is room for both cloud and cloudless (AKIDA), ARM will no doubt hedge its bets and utilise AKIDA to take advantage of the expected growth.
Will ARM buy BRN? 🤔🤔
 
  • Thinking
  • Fire
Reactions: 8 users
  • Like
  • Love
  • Fire
Reactions: 7 users

Boab

I wish I could paint like Vincent
Will ARM buy BRN? 🤔🤔
Who knows?
But one thing is certain: one of ARM's big claims/aims is to save power, and I reckon we fit into this category very nicely.
https://www.linkedin.com/advice/0/what-current-future-trends-challenges-power-management
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Diogenese

Top 20
This article seems to indicate further progression of Masayoshi Son's project codenamed Izanagi, which we discussed in Feb 2024. If realised, this would be a MASSIVE move on the part of Arm and SoftBank! As we know, Arm makes most of its money through the royalties it collects from its customers each time they make a chip using its designs. If it launches its own product next year, Arm will be competing with the very same companies it currently services, while at the same time planning to reduce its reliance on NVIDIA and compete with the likes of Amazon Web Services and Microsoft by using the new AI chips to build its own data centres.

This leads me to believe that Arm must have something very special up its sleeve, 😝 no pun intended! 🥳

Correct me if I'm wrong, but I don't think Arm has ever commented on any other partner technology within its ecosystem being capable of making its own products perform BETTER!

PS: I guess if they intend to use our IP in chips that will be produced as early as next year, they're going to have to get a wriggle on and sign on the dotted line fairly soon, one would imagine.


UPDATED 17:43 EDT / MAY 12 2024

Arm reportedly set to enter AI chip market with first product next year​


BY DUNCAN RILEY
Computer chip designer Arm Holdings Plc is reportedly set to enter the artificial intelligence chip market and is seeking to launch its first product next year.
The decision by Arm to develop its own AI chips is said to be part of a move by parent company SoftBank Group Corp. to transform the group into a sprawling AI powerhouse. Although Arm has been publicly listed since its initial public offering in September, SoftBank still owns approximately 90% of the company.
Nikkei Asia reported today that Arm will set up an AI chip division and is aiming to build a prototype by the northern spring of 2025. Mass production, which will be contracted out to chip manufacturers, including Taiwan Semiconductor Manufacturing Company Ltd., is expected to start later the same year.
Arm will reportedly shoulder the initial development costs, expected to be in the hundreds of billions of yen – 100 billion yen at the current exchange rate is $642 million. SoftBank is also expected to contribute funds to assist. According to Nikkei, once a mass-production system is established, the AI chip business could be spun off and placed under SoftBank.
Arm already supplies circuit architecture for processors used in smartphones and graphic processor units, but the move to design and then subcontract manufacture AI chips would be a first for the company. Currently, Arm makes most of its money through the royalties it collects every time a company makes a chip using its designs. Now, if its vision is realized, Arm will compete with those same companies.
Visions are not rare when it comes to SoftBank Chief Executive Officer Masayoshi Son, who is said to be spearheading the push. Son has a vision of an “AI revolution,” with SoftBank aiming to expand into data centers and robots. His vision includes bringing together AI, semiconductor and robotics technologies to spur innovation in various industries.
SoftBank plans to build data centers equipped with homegrown Arm chips in the U.S., Europe, Asia and the Middle East as early as 2026.
Because Son has never been shy about thinking big, SoftBank also plans to branch out into power generation through windmills and solar power farms, with an eye on next-generation fusion technology to power its data centers.
While ambitious, Son’s plans are certainly achievable, but what remains unknown is whether Arm will make its designs available to its existing customers or how those customers will respond to Arm entering the AI chip market. SoftBank’s plans to build data centers using its own AI chips will also see it compete against the likes of Amazon Web Services Inc. and Microsoft Corp., both of which currently license Arm circuit architecture for processors.



This is what features in Arm's Partner Ecosystem catalogue.

View attachment 62738


ARM think that NNs use MACs:

Note: This patent addresses the issue of "attention" which enables the processor to refer back to earlier inputs for context - used in natural language processing.

US2024028877A1 NEURAL PROCESSING UNIT FOR ATTENTION-BASED INFERENCE 20220721

[0063] The neural processing unit ( 106 ) includes a central control element ( 110 ), a direct memory access element ( 112 ), an activation output element ( 114 ), a multiplication accumulation engine ( 116 ), a shared buffer ( 118 ), and a weight decoder ( 120 ).



[0066] At stage S 11 , the direct memory access element ( 112 ) of the NPU ( 106 ) fetches compressed projection matrices WQ , WK , and WV from the flash memory ( 102 ). The weight decoder ( 120 ) decodes the compressed matrices. The MAC engine ( 116 ) calculates query matrix Q, key matrix K, and value matrix V by multiplying the query projection matrix WQ , the key projection matrix WK , and the value projection matrix WV , by input matrix X.
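
For anyone wondering why that makes the design so MAC-hungry, here is a rough NumPy sketch of the Q/K/V projection and attention step the patent is describing (my own illustrative code, not Arm's; the shapes, names and values are made up):

```python
import numpy as np

def attention_layer(X, WQ, WK, WV):
    """Minimal scaled dot-product attention of the kind the patent's
    MAC engine is built to accelerate. Names and shapes are illustrative."""
    # The projection step described in [0066]: Q, K and V are plain
    # matrix multiplications, i.e. long runs of multiply-accumulates.
    Q = X @ WQ  # query matrix
    K = X @ WK  # key matrix
    V = X @ WV  # value matrix

    # Attention scores are what let each token "refer back" to earlier inputs.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

# Toy example: 4 tokens, embedding width 8
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
WQ, WK, WV = (rng.standard_normal((8, 8)) for _ in range(3))
print(attention_layer(X, WQ, WK, WV).shape)  # (4, 8)
```

Every one of those matrix products is just a pile of multiply-accumulate operations, which is exactly the workload the MAC engine (116) exists to churn through.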

They also think SNNs are analog:
WO2024018231A2 IMPROVED SPIKING NEURAL NETWORK APPARATUS 20220721



Figure 2 is a schematic diagram showing the structure of a synaptic delay path.

Figure 3 is a schematic diagram showing the structure of a neuron within a spiking neural network apparatus.

A spiking neural network is described that comprises a plurality of neurons in a first layer connected to at least one neuron in a second layer, each neuron in the first layer being connected to the at least one neuron in the second layer via a respective variable delay path. The at least one neuron in the second layer comprises one or more logic components configured to generate an output signal in dependence upon signals received along the variable delay paths from the plurality of neurons in the first layer. A timing component is configured to determine a timing value in response to receiving the output signal from the one or more logic components, and an accumulate component is configured to accumulate a value based on timing values from the timing component. A neuron fires in a case that a value accumulated at the accumulate component reaches a threshold value.
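
To make that abstract a little more concrete, here is a toy Python sketch of how I read the delay-coded neuron it describes (my interpretation only, not Arm's implementation; the delays, timing rule and threshold are invented):

```python
import numpy as np

class DelayCodedNeuron:
    """Toy second-layer neuron in the spirit of the abstract: inputs arrive
    via per-connection delay paths, a logic rule produces an output signal,
    a timing value is taken from when that signal appears, and timing values
    are accumulated until a firing threshold is reached.
    My interpretation only, not Arm's design; all numbers are invented."""

    def __init__(self, delays, threshold):
        self.delays = np.asarray(delays, dtype=float)  # one variable delay per input path
        self.threshold = threshold
        self.accumulated = 0.0

    def step(self, spike_times):
        # Each first-layer spike is shifted by its variable delay path.
        arrivals = np.asarray(spike_times, dtype=float) + self.delays

        # "Logic component": an AND-like rule; the output signal appears
        # once all delayed inputs have arrived.
        output_time = arrivals.max()

        # "Timing component": derive a value from the output timing
        # (earlier arrival -> larger value), so information sits in timing.
        timing_value = max(0.0, 10.0 - output_time)

        # "Accumulate component": integrate timing values; fire at threshold.
        self.accumulated += timing_value
        if self.accumulated >= self.threshold:
            self.accumulated = 0.0
            return True  # the neuron fires
        return False

neuron = DelayCodedNeuron(delays=[1.0, 2.5, 0.5], threshold=15.0)
for t in range(5):
    print(t, neuron.step(spike_times=[t, t, t]))
```

The point to note is that the information is carried in when the combined signal arrives along the delay paths, which the timing and accumulate components then turn into a firing decision.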
 
  • Like
  • Thinking
  • Fire
Reactions: 15 users

Tothemoon24

Top 20
IMG_8920.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 25 users





Top Neuromorphic Computing Stocks for 2024: The Next AI Frontier​

  • Last updated: April 29, 2024

The quest for smarter, faster AI is relentless. Our world is generating a staggering amount of data, and traditional computers are struggling—and guzzling energy to keep up. Neuromorphic computing offers a novel solution. By emulating the brain’s network of neurons and synapses, the goal is to unleash lightning-fast AI that’s also energy efficient. This bleeding-edge technology could be critical for growing markets like edge computing. In this article, we’ll explore the top neuromorphic computing stocks for 2024, ranked by pure-play focus.
Neuromorphic computing chips aim to mimic the brain’s structure. Credit: ArtemisDiana/Adobe

Tier 1: Pure-Play Neuromorphic Computing Stocks​

These companies have placed their bets squarely on the neuromorphic computing revolution. Investors seeking the most direct exposure to this technology should focus on these specialized neuromorphic computing stocks.

BrainChip Holdings (BRN): The Akida Advantage

BrainChip Holdings (BRN) focuses on ultra-efficient, adaptable neuromorphic chips, designed to bring AI to power-sensitive devices.
BrainChip Holdings is all-in on neuromorphic computing, making them a true pure-play contender in this space. Their focus lies with their Akida AKD1000 neuromorphic processor, designed for low-power, event-based AI applications. Here’s where BrainChip stands out:
  • The Efficiency Crown: Akida is remarkably efficient. It offers a claimed 5-10x improvement in performance-per-watt over traditional AI accelerators, making it ideal for battery-powered devices, edge computing, and in-sensor intelligence – expanding the applications of AI where it wasn’t feasible before.
  • Adaptability is Key: Akida isn’t limited to pre-programmed solutions. It’s designed with on-chip learning capabilities, enabling AI systems to adapt to changing environments and data without needing to communicate with the cloud.
  • Proven Partnerships: BrainChip has actively partnered with companies like Mercedes-Benz (for in-car AI functions like driver attention and gesture control) and space technology leader Vorago Technologies for radiation-hardened solutions in space. These partnerships show their commitment to real-world deployment.
Investor Takeaway: BrainChip’s focus on specialized neuromorphic chips puts them squarely at the forefront of this technological revolution. Their emphasis on efficiency and adaptability hints at the potential to unlock innovative AI use cases, making them a fascinating, albeit inherently riskier, choice for investors seeking pure-play neuromorphic computing stocks.
Neuromorphic chips are ideal for machine vision, edge AI, and smart cities. Credit: Alexander/Adobe.

Tier 2: Tech Giants with Neuromorphic Ambitions​

Tech titans are taking notice of this technology’s potential. These neuromorphic computing stocks offer a chance to invest in established players who have neuromorphic research or pilot projects, though it may not be their core business focus.

Intel (INTC): The Chip Giant’s Neuromorphic Gambit​

Intel (INTC) is exploring the potential of neuromorphic computing with its Loihi research chip, potentially integrating this technology into their future products.
Intel, the established leader in traditional computing, has taken notice of the potential in neuromorphic technology. The centerpiece of their neuromorphic endeavors is Loihi, a research chip designed to mimic the brain’s spiking neural networks. Here’s what makes Intel’s approach noteworthy:
  • Research Powerhouse: Intel isn’t simply dipping a toe in the water. They’ve established the Intel Neuromorphic Research Community (INRC), which boasts over 150 global members. This collaborative effort and focus on research could yield breakthroughs with wide-ranging applications.
  • Loihi’s Learning Curve: Loihi isn’t designed for immediate commercialization. Instead, it serves as a platform for exploration. Intel has demonstrated Loihi’s ability to learn and solve complex problems, including gesture recognition, odor classification, and even learning to play the game “Lava”.
  • Scaling Up: Intel recently unveiled Pohoiki Springs, a large-scale neuromorphic system featuring 768 Loihi chips. This system offers a glimpse into Intel’s ambition for scaling up neuromorphic computing power for future research and development.
Investor Takeaway: Intel’s neuromorphic play carries the weight of a major industry player. While their current focus seems to be on research, positive developments with Loihi could lead to intriguing possibilities down the line. Investors should view Intel’s position in neuromorphic computing stocks as a calculated, long-term bet on the potential of this technology, rather than an immediate payoff.

IBM (IBM): TrueNorth – A Neuromorphic Pioneer​

IBM (IBM) was an early player in neuromorphic computing with its TrueNorth chip, but its commercialization strategy remains unclear.
IBM has been a long-time player in the neuromorphic computing space. Their TrueNorth neurosynaptic processor demonstrates their ambition to tackle AI challenges with brain-inspired hardware. Here’s what distinguishes IBM’s approach:
  • Historic Vision: IBM’s neuromorphic exploration stretches back years, showcasing their enduring commitment. TrueNorth, initially unveiled in 2014, offered a very early glimpse into what neuromorphic chips could achieve.
  • Extreme Efficiency: TrueNorth is engineered for remarkably low power consumption. Boasting a mere 70 milliwatts of power draw while running workloads, its promise was to process data using a fraction of the energy required by traditional computers, making it ideal for edge applications.
  • Beyond the Hype: Despite TrueNorth’s initial buzz, widespread commercialization hasn’t materialized yet. This highlights the challenges of taking a bleeding-edge technology from lab to market.
Investor Takeaway: IBM’s neuromorphic venture represents a calculated gamble by the established tech giant. While they hold valuable intellectual property in this domain (including designs that could potentially scale to a million neurons), investors should be aware it remains unclear how IBM will leverage this in the competitive AI landscape. They might be more likely to integrate neuromorphic concepts into their wider portfolio, rather than focusing on pure-play neuromorphic products.
IBM is one of the early pioneers in neuromorphic computing. Credit: IBM Research

Qualcomm (QCOM): Neuromorphic Potential on Mobile and the Edge​

Qualcomm (QCOM) has powerful mobile chipsets that make them a potential powerhouse for low-power neuromorphic AI acceleration in everyday devices.
Qualcomm, best known for its dominance in mobile chipsets, is showing interest in the potential of neuromorphic computing. Their focus on mobile devices and edge computing hints at intriguing possibilities for applying neuromorphic concepts, where low power consumption could be a game-changer. Here’s how they stand out:
  • Smartphone Smarts: Qualcomm’s AI capabilities are already in millions of hands via their Snapdragon chips, with their latest Snapdragon 8 Gen 2 platform boasting significant AI performance gains. This focus on optimizing their AI Engine for on-device performance lays a foundation for potential neuromorphic advancements.
  • Not Just Phones: Qualcomm’s presence in areas like automotive (with their Snapdragon Ride Platform) and IoT devices opens up opportunities to apply neuromorphic acceleration in power-sensitive scenarios, such as always-on sensors or self-driving car applications.
  • The Road Ahead: Qualcomm hasn’t explicitly revealed concrete plans for specialized neuromorphic chips. However, their active AI research teams and constant iteration in the AI space suggest they’re keeping a keen eye on neuromorphic developments.
Investor Takeaway: Qualcomm’s position in neuromorphic computing stocks is less defined than pure-play companies. Investors should see their interest as a potential signal for the future direction of edge AI. If Qualcomm successfully integrates efficient neuromorphic capabilities, it could provide a significant competitive advantage across their wide range of products.

Tier 3: Chipmakers with Hybrid Neuromorphic Potential​

Investors looking for a more measured approach should consider these neuromorphic computing stocks. As we’ve explained elsewhere, one of the earliest ways neuromorphic chips could be used is in hybrid systems alongside classical CPUs and GPUs. Chipmakers in this tier are in prime position to provide those hybrid “bridge” solutions.

Advanced Micro Devices (AMD): Neuromorphic Synergies​

Advanced Micro Devices (AMD) is exploring how to integrate neuromorphic capabilities into existing chip designs, offering a potential hybrid solution.
AMD, a major competitor in the chip market, is acknowledging the potential of neuromorphic computing. Their acquisition of Xilinx, with its Versal AI Edge series of adaptive chips, signals their intent to enter this space. Here’s what sets AMD’s approach apart:
  • Hybrid Potential: AMD might not be developing standalone neuromorphic processors, but their focus could lie in integrating neuromorphic capabilities into their existing chip designs. The adaptive nature of Xilinx’s Versal chips, with their mix of programmable logic, AI Engines, and powerful ARM processors, could be key to this strategy.
  • Real-World Focus: AMD has demonstrated interest in real-world applications of AI, including areas like machine vision for manufacturing and data center acceleration. Integrating neuromorphic efficiency into these domains could offer them a competitive edge.
  • The Long Game: AMD’s neuromorphic ambitions might be less immediate than those of pure-play companies. Investors should be aware that their path may involve gradual integration of technologies rather than the rapid development of entirely new chipsets.
Investor Takeaway: AMD’s position in neuromorphic computing stocks is characterized by a cautious, but calculated approach. While they’re unlikely to disrupt the market with a purely neuromorphic product in the near future, their commitment to AI innovation and the potential of their Versal AI Edge platform makes them a company to watch in this evolving space.

Private Neuromorphic Companies to Watch​

Neuromorphic computing is an incredibly young field, and many of the most promising startups are still private. In addition to BrainChip (which is already public), we cover five other innovative startups—SynSense, GrAI Matter Labs, Prophesee, Innatera, and MemComputing—on this page.

What we see today with neuromorphic computing is just the tip of the iceberg. The most compelling applications likely haven’t even been thought of yet. Investing in neuromorphic computing stocks is a bet on the ingenuity of researchers and companies who are unlocking a new language for computing – one with the potential to rewrite the rules of what we think AI can do.
 
  • Like
  • Love
  • Fire
Reactions: 61 users
ARM think that NNs use MACs:

Note: This patent addresses the issue of "attention" which enables the processor to refer back to earlier inputs for context - used in natural language processing.

US2024028877A1 NEURAL PROCESSING UNIT FOR ATTENTION-BASED INFERENCE 20220721

[0063] The neural processing unit ( 106 ) includes a central control element ( 110 ), a direct memory access element ( 112 ), an activation output element ( 114 ), a multiplication accumulation engine ( 116 ), a shared buffer ( 118 ), and a weight decoder ( 120 ).

View attachment 62759

[0066] At stage S 11 , the direct memory access element ( 112 ) of the NPU ( 106 ) fetches compressed projection matrices WQ , WK , and WV from the flash memory ( 102 ). The weight decoder ( 120 ) decodes the compressed matrices. The MAC engine ( 116 ) calculates query matrix Q, key matrix K, and value matrix V by multiplying the query projection matrix WQ , the key projection matrix WK , and the value projection matrix WV , by input matrix X.

They also think SNNs are analog:
WO2024018231A2 IMPROVED SPIKING NEURAL NETWORK APPARATUS 20220721

View attachment 62760



View attachment 62761

View attachment 62762

Figure 2 is a schematic diagram showing the structure of a synaptic delay path.

Figure 3 is a schematic diagram showing the structure of a neuron within a spiking neural network apparatus.

A spiking neural network is described that comprises a plurality of neurons in a first layer connected to at least one neuron in a second layer, each neuron in the first layer being connected to the at least one neuron in the second layer via a respective variable delay path. The at least one neuron in the second layer comprises one or more logic components configured to generate an output signal in dependence upon signals received along the variable delay paths from the plurality of neurons in the first layer. A timing component is configured to determine a timing value in response to receiving the output signal from the one or more logic components, and an accumulate component is configured to accumulate a value based on timing values from the timing component. A neuron fires in a case that a value accumulated at the accumulate component reaches a threshold value.
Well, natural SNNs are analog...

We've been competing against "good enough" conventional solutions and now instead of playing ball with us, many are trying their hands at "good enough" neuromorphic solutions.

That's business in a cut throat World and it's a bloody big pie.

But we need to start carving out our slices.
 
  • Like
  • Fire
Reactions: 12 users

The Pope

Regular
Good post. At the moment ARM are totally cloud focussed. They are well aware that cloudless AI at the Edge is going to see exponential growth.
While there is room for both cloud and cloudless (AKIDA), ARM will no doubt hedge its bets and utilise AKIDA to take advantage of the expected growth.
Just a few thoughts based on your comments that it appears ARM are still totally cloud focused.
Who is the one company that ARM relies on most for revenue (though not exclusively), and isn't that company still promoting nearly everything linked to the cloud rather than the edge?
Also linked to other recent news that ARM is looking to start building its own AI chips in 2025?

Now, with Akida 2.0 IP, BRN is holding off developing any reference chips (unlike Akida 1000) because BRN management announced (as I recall) that they are not progressing any further with producing chips, as it may upset a key competitor (don't want to step on any toes). Who might that be? Lol

Unfortunately BRN is playing with the big boys, and at least a couple of them don't want AI at the edge as quickly as BRN management's timelines suggest because of their existing investments (i.e. hardware and other infrastructure). So they are milking as much revenue as they can out of their current investments before possibly moving to BRN AI at the edge, buying an IP licence and/or generating royalties through products etc. for BRN shareholders.

I suggest at least a couple of the big boys are dictating all terms of engagement with BRN, which has upset the sales tactics/progression of BRN management/the BOD, and thus a couple of BRN staff recently left. Did anyone from the BRN sales team or management upset one of the big boys in the negotiations towards an IP licence, so that BRN has had to pivot for now?

Interesting times but Akida will get there. Just like Thomas the tank engine getting up the hill.

Wishing you all the very best with your investment in BRN.

Cheers
The Pope
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 20 users
  • Like
  • Love
  • Fire
Reactions: 34 users

Diogenese

Top 20
Just a few thoughts based on your comments that it appears ARM are still totally cloud focused.
Who is the one company that ARM relies on most for revenue (though not exclusively), and isn't that company still promoting nearly everything linked to the cloud rather than the edge?
Also linked to other recent news that ARM is looking to start building its own AI chips in 2025?

Now, with Akida 2.0 IP, BRN is holding off developing any reference chips (unlike Akida 1000) because BRN management announced (as I recall) that they are not progressing any further with producing chips, as it may upset a key competitor (don't want to step on any toes). Who might that be? Lol

Unfortunately BRN is playing with the big boys, and at least a couple of them don't want AI at the edge as quickly as BRN management's timelines suggest because of their existing investments (i.e. hardware and other infrastructure). So they are milking as much revenue as they can out of their current investments before possibly moving to BRN AI at the edge, buying an IP licence and/or generating royalties through products etc. for BRN shareholders.

I suggest at least a couple of the big boys are dictating all terms of engagement with BRN, which has upset the sales tactics/progression of BRN management/the BOD, and thus a couple of BRN staff recently left. Did anyone from the BRN sales team or management upset one of the big boys in the negotiations towards an IP licence, so that BRN has had to pivot for now?

Interesting times but Akida will get there. Just like Thomas the tank engine getting up the hill.

Wishing you all the very best with your investment in BRN.

Cheers
The Pope
It's the old infallibility thing again- I heard key "customer" not key "competitor" - but I'm damned if I can find it.
 
  • Like
Reactions: 7 users

The Pope

Regular
It's the old infallibility thing again- I heard key "customer" not key "competitor" - but I'm damned if I can find it.
Lmfao. If I change it to customer, does that mean you are OK with the rest of my comments, Dio?
Others have said NVIDIA may be considered more like a friend than a competitor, or along those lines. Did that comment come from Sean?
Don’t really care if I’m wrong or not but a few on here can get quickly upset…..that is a given. I do laugh that some put people on ignore regardless of their point of view. Still haven’t put anyone on ignore but have gone close. I notice DK6161 hasn’t been on for a while. lol
 
Last edited:
  • Like
  • Love
Reactions: 5 users

manny100

Regular
Lmfao. If I change it to customer, does that mean you are OK with the rest of my comments, Dio?
Others have said NVIDIA may be considered more like a friend than a competitor, or along those lines. Did that comment come from Sean?
Don’t really care if I’m wrong or not but a few on here can get quickly upset…..that is a given. I do laugh that some put people on ignore regardless of their point of view. Still haven’t put anyone on ignore but have gone close. I notice DK6161 hasn’t been on for a while. lol
It was Sean. See attached slide from an investment presentation last year.
Sean said there were two distinct approaches. The classical approach consists of ARM, NVIDIA, Intel and the others listed. These companies operate via the cloud.
The other approach is Neuromorphic, which is Cloudless. BRN is the only commercial option currently available.
Sean was quite clear that there is room for both cloud-based and cloudless operators, with some data being more suited to one than the other. Cloudless AI at the Edge is estimated to have huge growth.
I cannot see ARM totally ignoring the cloudless approach. In fact, AKIDA has already been integrated with the Arm® Cortex®-M85 processor.
If ARM do not seize the AKIDA opportunity, someone else will. I do not think ARM would telegraph its intentions at this early stage.
 

Attachments

  • BRN COMPETITORS.pdf
    650.1 KB · Views: 106
  • Like
  • Love
  • Fire
Reactions: 25 users

AI chip startup Deepx raises $80m, receives $529m valuation​

Funding round was led by SkyLake Equity Partners

South Korean AI chip startup Deepx has raised $80 million in a Series C funding round, leading to the company receiving a valuation of $529 million.
The investment round was led by SkyLake Equity Partners, a South Korean private equity firm, and included participation from AJU IB and previous backer Timefolio Asset Management.

DeepX


Founded in 2018 by former Apple and Cisco engineer Lokwon Kim, Deepx produces an 'All-in-4 AI solution' that includes the company’s DX-V1 and DX-V3 processors for use in consumer electronics, and its DX-M1 and DX-H1 chips, which have been designed for AI computing boxes and AI servers.

In comments to TechCrunch, Kim said that Deepx would use the funding to mass produce the company’s four existing AI chips, in addition to supporting the development and launch of its next generation of large language model (LLM) on-device solutions.

“Nvidia’s GPU-based solutions are the most cost-effective for large language model services like ChatGPT; the total power consumed by GPUs in operation has reached levels exceeding the electrical energy of an entire country,” Kim told TechCrunch. “This collaborative operation technology between server-scale AI and on-device large AI models is expected to reduce energy consumption and costs a lot compared to relying solely on data centers.”

Deepx currently employs around 65 people but does not yet have any customers, although Kim said the company was working with more than 100 potential clients and strategic partners. The company also has more than 259 patents pending in the US, China, and South Korea.

Hey no customers... we have something in common!
 
  • Haha
Reactions: 4 users
Unbelievable Wilziy123, you really are a piece of work, aren't you. I refrain from commenting often on this forum, and when I do it's generally to praise a member's comments or thoughts. However, your attitude is very aloof and condescending, your response to Guzzi62 was totally unnecessary, and you are very unpleasant! I base this on your years of history, not just this particular comment; your memes reflect this, and I am amazed at the 14 supportive replies you received. I apologize to the rest of the forum for this being totally unrelated, but I felt I had to say something. Sorry, and thank you to those who chose to read. (Ok, back to normal transmission)

The ignore button is a wonderful feature.
 
  • Like
  • Love
Reactions: 4 users

Sirod69

bavarian girl ;-)
Attending Embedded Vision Summit 2024? Join BrainChip’s Chris Jones on the final day of the conference, May 23, at 2:05 pm PT for his presentation, "Temporal Event Neural Networks: A More Efficient Alternative to the Transformer" and discover how TENNs revolutionize edge #AI with groundbreaking efficiency and performance. There's still time to register: https://lnkd.in/g-PE4Sqe

 
  • Like
  • Fire
  • Love
Reactions: 25 users

Kachoo

Regular
We need more edge box sales!
 
Last edited:
  • Like
  • Fire
Reactions: 9 users

manny100

Regular
AI at the Edge via the cloud is like watching a movie when the sound is out of synch with the actors talking.
Nothing more annoying.
 
  • Like
  • Fire
  • Haha
Reactions: 10 users

IloveLamp

Top 20
  • Like
  • Love
  • Thinking
Reactions: 14 users
  • Like
  • Love
  • Fire
Reactions: 27 users