BRN Discussion Ongoing

miaeffect

Oat latte lover
Hi Fact Finder, that's OK, they're edible sneakers.


banana/mango flavour for your breakfast
pizza flavour for your lunch 😋
 
  • Haha
  • Like
Reactions: 12 users

Murphy

Life is not a dress rehearsal!
Hi Bravo
It is the shoe eating bit based on hope alone that worries me most.

Fact Finder
If there is one person here that I DON'T want to eat their shoes, it's Bravo!!😻🙃

If you don't have dreams, you can't have dreams come true!
 
  • Haha
  • Fire
Reactions: 8 users

FJ-215

Regular
There was a mention of SoundHound in connection with Mercedes.
https://www.soundhound.com/voice-ai...e-control-revolutionizes-the-user-experience/
...

Overcoming the mindset of rigid voice commands​

Developing a voice assistant for any product or application is not an easy task. With the help of our Houndify engineers, the team at Mercedes-Benz was able to extend the voice assistant to the cloud while helping their users change ingrained behaviors around addressing and interacting with voice assistants in a very rigid way.

Changing how we talk to voice assistants—shifting from barking a specific set of commands to having a natural conversation—was a challenge. And the change needed to start within the engineering team.

In the podcast, "Hey Mercedes!" will be there.
Will have to wait for Bloomberg to put up the replay but automotive was mentioned (no names given though)
 
  • Like
  • Fire
Reactions: 3 users

7für7

Top 20
  • Haha
  • Thinking
  • Fire
Reactions: 5 users

Diogenese

Top 20
And the latest, dunno if they’ve perhaps upgraded the DRP with some extra sauce…


Renesas Unveils Powerful Single-Chip RZ/V2H MPU for Next-Gen Robotics with Vision AI and Real-Time Control

New Generation AI Accelerator with 10 TOPS/W Power Efficiency Delivers AI Inference Performance of up to 80 TOPS Without Cooling Fan


February 29, 2024, 8:00 AM Eastern Standard Time


TOKYO--(BUSINESS WIRE)--Renesas Electronics Corporation (TSE:6723), a premier supplier of advanced semiconductor solutions, has expanded its popular RZ Family of microprocessors (MPUs) with a new device targeting high-performance robotics applications.


This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240229979997/en/


Offering the highest levels of performance within the family, the RZ/V2H enables both vision AI and real-time control capabilities.


The device comes with a new generation of Renesas proprietary AI accelerator, DRP (Dynamically Reconfigurable Processor)-AI3, delivering 10 TOPS/W power efficiency, an impressive 10-fold improvement over previous models. Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency, boosting AI inference performance up to 80 TOPS. This performance boost allows engineers to process vision AI applications directly at edge AI devices without relying on cloud computing platforms. The details of the new DRP-AI3 acceleration technology were recently presented at the International Solid-State Circuits Conference (ISSCC 2024) in San Francisco.


The RZ/V2H incorporates four Arm® Cortex®-A55 CPU cores with a maximum operating frequency of 1.8 GHz for Linux application processing, two Cortex-R8 cores running at 800 MHz for high-performance real-time processing, and one Cortex-M33 as a sub core. By integrating these cores into a single chip, the device can effectively manage both vision AI and real-time control tasks, making it ideal for demanding robotics applications of the future. Since the RZ/V2H consumes less power, it eliminates the need for cooling fans and other heat-dissipating components. This means engineers can design systems that are smaller in size, less expensive, and more reliable.


“As a market leader in motor control microprocessors, Renesas is ready to take on the next challenge to drive the advancement of the robotics market with AI technology,” said Daryl Khoo, Vice President of the Embedded Processing 1st Business Division at Renesas. “The RZ/V2H will facilitate the development of next-generation autonomous robots with vision AI capabilities, that have the ability to think independently and control movements in real time."


Renesas has applied its proprietary DRP technology to develop the OpenCV Accelerator that speeds up the processing of OpenCV, an open-source industry standard library for computer vision processing. The resulting speed improvement is up to 16 times faster compared to CPU processing. The combination of the DRP-AI3 and the OpenCV Accelerator enhances both AI computing and image processing algorithms, enabling the power-efficient, real-time execution of Visual SLAM(Note 1) used in applications such as robot vacuum cleaners.


To accelerate development, Renesas also released AI Applications, a library of pre-trained models for various use cases, and the AI SDK (Software Development Kit) for rapid development of AI applications. By running this software on the RZ/V2H's evaluation board, engineers can evaluate AI applications easily and earlier in the design process, even if they do not have extensive knowledge of AI.


“We are thrilled to be part of the launch of the RZ/V2H, which combines AI technology with real-time control,” says Rolf Segger, founder of SEGGER Microcontroller GmbH. “SEGGER’s J-Link debug probe, widely adopted by numerous embedded development projects globally, will provide the support needed for the RZ/V2H, helping accelerate the development of next-generation robotic innovations. We look forward to this next phase in our multi-decade long partnership with Renesas."


Winning Combinations


Renesas has developed the "Visual Detection Single Board Computer" that uses camera images to identify its surroundings, and to determine and control its movements in real-time. This solution combines the RZ/V2H with power management ICs and VersaClock programmable clock generators to support power-efficient industrial robots and machinery. Its efficient design eliminates the requirement for an additional cooling fan, keeping the solution BOM and size down. These Winning Combinations are technically vetted system architectures from mutually compatible devices that work together seamlessly to bring an optimized, low-risk design for faster time to market. Renesas offers more than 400 Winning Combinations with a wide range of products from the Renesas portfolio to enable customers to speed up the design process and bring their products to market more quickly. They can be found at renesas.com/win.


Availability


The RZ/V2H is available today, along with the evaluation board and the AI SDK. More information about the device and development tools is available at: https://www.renesas.com/rzv2h.


About Renesas Electronics Corporation


Renesas Electronics Corporation (TSE: 6723) empowers a safer, smarter and more sustainable future where technology helps make our lives easier. The leading global provider of microcontrollers, Renesas combines our expertise in embedded processing, analog, power and connectivity to deliver complete semiconductor solutions. These Winning Combinations accelerate time to market for automotive, industrial, infrastructure and IoT applications, enabling billions of connected, intelligent devices that enhance the way people work and live. Learn more at renesas.com. Follow us on LinkedIn, Facebook, X, YouTube, and Instagram.


(Note 1) Visual SLAM (Simultaneous Localization and Mapping) is a technology that analyzes images captured by on-board cameras on robots and drones and estimates their own position while simultaneously creating detailed maps of their surroundings.


(Remarks) This DRP-AI technology uses a part of the results of work commissioned by the New Energy and Industrial Technology Development Organization (NEDO). Arm, Arm Cortex are trademarks or registered trademarks of Arm Limited in the EU and other countries. All names of products or services mentioned in this press release are trademarks or registered trademarks of their respective owners.



When the Renesas licence was issued, they stated that they would use DRP-AI for the heavy lifting, and Akida for the fiddly bits in a context which I understood to mean in different products.

Their N:M pruning is significant as it acts as a dynamically selectable compression of the weights, allowing Renesas to choose a level of accuracy balanced against speed and power.
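For anyone new to the term: N:M structured pruning keeps only the N largest-magnitude weights in every block of M consecutive weights, so the accelerator can skip a fixed fraction of multiplies. A minimal numpy sketch of the general idea (my illustration only, not Renesas's actual DRP-AI3 implementation):

```python
import numpy as np

def nm_prune(weights, n, m):
    """Keep the n largest-magnitude weights in every block of m consecutive
    weights and zero the rest (N:M structured sparsity)."""
    w = weights.reshape(-1, m)                       # group into blocks of m
    drop = np.argsort(np.abs(w), axis=1)[:, :m - n]  # indices of the m-n smallest per block
    pruned = w.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)     # zero the dropped weights
    return pruned.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.8, -0.3, 0.01])
print(nm_prune(w, 2, 4))  # 2:4 sparsity: half of each block of four becomes zero
```

Choosing a smaller N (more zeros per block) trades accuracy for speed and power, which is the selectable balance described above.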
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Jeez… you have hairy legs… are you a “woke” type of woman?

You think my legs are hairy? You should see my toes!
 
  • Haha
  • Love
  • Like
Reactions: 11 users

Getupthere

Regular

TSMC founder says unnamed customers want 10 new fabs to build AI chips​

News
By Anton Shilov
published 1 day ago
More AI requires more silicon.

TSMC founder and industry icon Morris Chang says that customers have approached him to build up to ten new fabs for AI processors, an incredible request that speaks to an insatiable demand. The request isn't entirely surprising, as the demand for processors used for AI applications is booming, and it is well known that market leader Nvidia cannot satisfy it. Meanwhile, the amount of AI compute performance available to companies like OpenAI appears insufficient, which is why companies are demanding more processors from existing suppliers, and some are even planning to build their own silicon.

"They are not talking about tens of thousands of wafers," said Morris Chang, the founder of TSMC, at a conference in Japan, reports Nikkei. "They are talking about fabs, [saying] 'We need so many fabs. We need three fabs, five fabs, 10 fabs.' Well, I can hardly believe that one." The report says Chang predicts demand for AI processors to be in the middle, "between tens of thousands of wafers and tens of fabs."


TSMC is one of a few companies on the planet that builds semiconductor manufacturing facilities that have a production capacity of around 100,000 wafer starts per month. These 'Gigafabs' tend to produce processors using a variety of advanced process technologies. Running such large plants allows TSMC to reuse expensive wafer fab equipment for different process nodes, which greatly optimizes utilization rates and costs.


But a Gigafab costs a lot of money: a large 3nm-capable fab may cost well over $20 billion when fully built and equipped, and it requires years of construction. Meanwhile, TSMC's 2024 capital expenditure (CapEx) budget is between $28 billion and $32 billion, so the company isn't building multiple Gigafabs every year. Building ten leading-edge fabs would cost well over $200 billion, and that does not include the cost of supporting the supply chain.
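The $200 billion figure follows directly from the article's own numbers; a quick back-of-the-envelope check (rounded assumptions flagged in the comments, not TSMC disclosures):

```python
# Rough sanity check of the article's figures
fab_cost = 20e9     # "well over $20 billion" per leading-edge Gigafab
fabs = 10           # the ten fabs customers reportedly asked for
capex_2024 = 30e9   # midpoint of TSMC's stated $28B-$32B 2024 budget

total = fab_cost * fabs
print(f"${total / 1e9:.0f}B+, roughly {total / capex_2024:.1f}x TSMC's 2024 CapEx")
```

So even at the low end, the request is more than six years of TSMC's entire capital budget.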
 
  • Like
  • Fire
  • Wow
Reactions: 14 users

7für7

Top 20
You think my legs are hairy? You should see my toes!
as Marty McFly would say….

You are far ahead of your time... but your kids... will love it.
 
  • Haha
  • Love
  • Like
Reactions: 4 users

Diogenese

Top 20
  • Haha
  • Fire
  • Like
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Fujitsu Launches Advanced AI Applications to Simplify 5G+ Network Challenges​


Fujitsu Network Communications introduced Virtuora® IA, a collection of network applications powered by artificial intelligence (AI) that use network-focused machine learning (ML) models and inherent telecommunications expertise to significantly improve mobile network operators’ (MNOs) network performance with drastically simplified operations.

Fujitsu’s AI-powered network applications employ neural network modeling to provide practical network optimization benefits that offer real competitive advantage in today’s 5G market and beyond.

As pressure to reduce operational expenditures intensifies and network complexity escalates, MNOs require new tools to minimize total cost of ownership (TCO). With the power of AI, Fujitsu’s Virtuora Intelligent Applications enable MNOs to streamline complex network operations, improve performance, reduce costs, and speed service delivery, helping generate additional revenue streams and boost profitability.

Fujitsu’s AI-powered network applications leverage unique neural network modeling plus pre-trained ML models with inherent networking expertise to vastly improve efficiency. With a software architecture compatible with large language model (LLM) technology and generative AI, the applications provide continuous learning and unleash self-healing multivendor networks. This allows greater network automation to reduce problem resolution times, eliminate repetitive tasks, and enable operations to make faster, more accurate decisions.

Fujitsu’s new collection of intelligent network applications helps MNOs quickly investigate network problems and service disruptions. Virtuora IA applies real-time inference to develop a precise analysis of problems, creating multivendor, multidomain ML models that use a neural network to reveal contextual network insights. With the ability to adapt to evolving network behavior, this intelligence translates into proactive action including automated notification processes and remediation. In a recent trial, a Tier 1 MNO was able to sort and understand data from hundreds of thousands of network nodes and end points, reducing thousands of network anomalies into root causes and their locations within minutes.
Greg Manganello, senior vice president of the network software and integration business unit at Fujitsu

Our innovative new AI-powered network applications are part of an expanding portfolio of intelligent software solutions designed to help network operators realize light-touch operations for greater agility, lower TCO and unrivalled performance. We have combined decades of network operations experience with our extensive AI research to deliver immediate business value within a trusted multivendor network management ecosystem.

 
  • Like
  • Love
  • Wow
Reactions: 15 users

Tothemoon24

Top 20
Akida written all over this, without being written 🙃


NEWS HIGHLIGHTS



  • Intel announced its future Intel® Xeon® processor with built-in AI acceleration, code-named Granite Rapids-D, and highlighted first-call validation with key partners.
  • Intel, in collaboration with 5G core software suppliers, previewed its next-gen Intel Xeon processor for 5G core, code-named Sierra Forest, demonstrating a 2.7x performance per rack improvement.
  • Intel announced Intel’s Edge Platform, a modular and open software platform that enables enterprises to build, deploy, run, manage and scale edge and AI solutions on standard hardware with cloud-like simplicity.
  • Intel will announce extended benefits of the AI PC to commercial designs with the new Intel vPro® platform, on day two of MWC 2024.


BARCELONA, Spain, Feb. 26, 2024 – At MWC 2024, Intel announced new platforms, solutions and services spanning network and edge AI, Intel® Core™ Ultra processors and the AI PC, and more.

In an era where technological advancements are integral to staying competitive, Intel is delivering products and solutions for its customers, partners and expansive ecosystem to capitalize on the emerging opportunities of artificial intelligence and built-in automation, to improve total cost of ownership (TCO) and operational efficiency, and to deliver new innovations and services.

More: Intel Announces New Edge Platform for Scaling AI Applications | Intel Unleashes 2.7x Performance per Rack Improvement for 5G Core | Intel at MWC Barcelona 2024(Press Kit)

Across today’s announcements, Intel is focused on empowering the industry to further modernize and monetize 5G, edge and enterprise infrastructures and investments, and to take advantage of bringing AI Everywhere. For more than a decade, and alongside Intel’s customers and partners, the company has been transforming today’s network infrastructure from fixed-function to a software-defined platform and driving success at the edge with more than 90,000 real-world deployments.

“Intel is delivering innovations for our partners and their customers across network, edge and enterprises to modernize their networks, monetize new services at the edge, and bring AI everywhere,” said Sachin Katti, senior vice president and general manager of the Network and Edge Group at Intel. “Intel’s network- and edge-optimized SOC strategy uniquely integrates general purpose compute and acceleration for networking, AI and vRAN workloads, and we are announcing market-leading next generation products for 5G core with Sierra Forest and 5G vRAN with Granite Rapids-D.”

Utilizing Built-in AI Acceleration, Intel Spearheads the Future of Modern Network Innovation

Announced last year, 4th Gen Intel® Xeon® processors with Intel® vRAN Boost (code-named Sapphire Rapids EE) deliver up to twice the capacity for virtual radio access network (vRAN) workloads compared with the previous generation. The capacity increase allows operators to double their number of cell sites or subscribers while providing an additional 20% reduction in vRAN compute power consumption by removing the need for external acceleration to reduce system complexity and costs.

Further extending its vRAN leadership, while driving down vRAN costs and power consumption and delivering it at a global scale, Intel announced its future Xeon processor Granite Rapids-D, featuring the latest generation of P-cores. This future processor will deliver significant gains in performance and power efficiency utilizing improved Intel AVX for vRAN and integrated Intel vRAN Boost acceleration alongside other architectural and feature enhancements. Silicon is currently sampling. Samsung has demonstrated a first-call at their research and development lab in Suwon, South Korea. Ericsson has also demonstrated a first-call validation in the Ericsson-Intel joint lab in Santa Clara, California. These accomplishments underscore the ease of gen-over-gen software portability and ecosystem readiness when the product launches. Intel is also working with Dell Technologies, Hewlett Packard Enterprise (HPE), Lenovo, Mavenir, Red Hat, Wind River and other leading ecosystem partners to ensure market readiness. Granite Rapids-D is planned to launch in 2025, following the launch of Granite Rapids server CPUs in 2024.

Artificial intelligence will play a pivotal role in helping operators optimize the performance, efficiency and intelligent management of resources in the evolving vRAN environment. To help operators and developers build, train, optimize and deploy AI models for vRAN use cases on general purpose servers in their existing network footprint, Intel is introducing early availability of the Intel® vRAN AI Development Kit to select partners. Built on Intel AI-optimized libraries, frameworks and tools, the optimized AI models in the development kit, when combined with 4th Gen Intel Xeon processors’ built-in AI acceleration, power management and enhanced telemetry capabilities, offers potential for operators to reconfigure their network dynamically to conceivably save costs, extract more value from infrastructure and support new revenue streams. Intel is working with AT&T, Deutsche Telekom, SK Telecom and Vodafone to showcase the benefits AI can bring to the RAN.

Innovating in 5G Core Performance and Power Savings 

Intel architecture is the backbone of cloud-native, software-defined core networks around the world, with most virtualized network servers running on Intel CPUs. As the primary choice for operators, equipment builders and software providers, Intel Xeon platforms have set the bar in commercial deployments for 5G core performance with superior TCO and comprehensive power management – all delivered via a world-class ecosystem.

For operators, the company today previewed its next-gen Intel Xeon processor Sierra Forest that will launch later this year to expand Intel’s CPU roadmap by offering up to 288 Efficient-cores (E-cores) on a single chip. It is well-suited for 5G core workloads to advance network core performance and power savings. By utilizing Intel’s latest E-core technology, operators will recognize greater energy and cost savings, driving to a 2.7x performance per rack improvement and industry-leading performance per rack for 5G core workloads.

Operators and ecosystem partners – including BT Group, Dell Technologies, Ericsson, HPE, KDDI, Lenovo, and SK Telecom – are also showing interest in this ground-breaking next-gen platform, which is optimized for high performance per watt, core density and throughput.

For additional power savings and energy efficiency, Intel announced broad availability and industry adoption for the Intel® Infrastructure Power Manager software for 5G core, with Casa Systems, NEC, Nokia, and Samsung planning to deliver in 2024. Intel Infrastructure Power Manager enables operators to take advantage of the built-in telemetry of Intel Xeon processors to reduce CPU power by an average of 30% while maintaining key telco performance metrics. Multiple operators are exploring lab trials for delivering carbon offset and TCO savings.

The Right Platform is Everything for Scaling AI and Edge Solutions

At the edge, enterprises want to innovate, be efficient and improve time to market by delivering new intelligent services. They are starting to leverage the tremendous amount of data they generate at the edge to achieve enhanced customer experience, scale operations through automation while being price-competitive, and dealing with the impacts of labor shortages. This is driving a tremendous new opportunity for edge AI.

Intel is leveraging its expansive installed base and deep expertise from more than 90,000 edge deployments today with a footprint spanning more than 200 million processors sold in the past 10 years to help customers quickly and efficiently take advantage of the edge AI opportunity.

Announced today, Intel’s Edge Platform has unique capabilities, including support for heterogeneous components for lower TCO and zero-touch, policy-based management of infrastructure and applications, and AI across a fleet of edge nodes with a single pane of glass. Additionally, AI runtime with OpenVINO™ inference is built-in to enable real-time AI inferencing optimization and dynamic workload placement within the infrastructure software for application deployment.

As an evolution of the solution first introduced at Intel Innovation 2023 under the code-name Project Strata, Intel’s Edge Platform will be generally available later this quarter, with some partners and end users already taking advantage of its offerings. In support of Intel’s Edge Platform, Intel is working across the ecosystem and with industry leaders such as Amazon Web Services, Capgemini, Lenovo, L&T Technology Services, SAP, Red Hat, Vericast, Verizon Business and Wipro.

Delivering the Best AI PC Experience for Businesses of All Sizes

On day two of MWC 2024, Intel and Microsoft will host an AI PC industry reception at the Intel booth. Watch the Intel Newsroom for details.

Delivering Choice in Acceleration

For emerging spaces where protocols and use cases are still being defined – like vRAN, OpenRAN, 6G and AI – FPGAs enable first-to-market advantages and deliver maximum flexibility with dynamic, low-power, low-latency, high-throughput solutions. Intel’s Programmable Solutions Group (PSG) will launch two new radio macro and mMIMO Enablement Packages as well as Intel® Precision Time Protocol Servo, which allows customers to implement any timing configuration based on the 1588 timing precision protocol to synchronize devices in the Radio Access Network.

Additionally, the now standalone Programmable Solutions Group will conduct a business vision and strategy webinar on Feb. 29. Watch the Intel Newsroom for details.

Visit the Intel Booth at MWC 2024 (Hall 3, Booth 3E31), and don’t miss the Technology Showcase to see firsthand the latest partner innovations, including:



  • Creating modern networks of the future to deliver peak performance and power savings.
  • Scaling AI across vertical industries to drive better business outcomes.
  • Delivering the AI PC with new features and manageability for organizations of all sizes.
 
  • Like
  • Fire
  • Love
Reactions: 46 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

AI's Next Phase With Qualcomm CEO Cristiano Amon - 1 day ago​


 
  • Like
  • Fire
Reactions: 8 users
Morning DB,

We aren't out of the woods just yet. One more hurdle to get over this month: the amended agreement with LDA Capital.

BrainChip Holdings Announces Third Amendment to Financial Instrument

"Under the terms of the agreement, the company will issue 40 million Collateral Shares by the earlier of the next Capital Call or 31 March 2024. Any issuance of Shares by the Company will be done so under the Company's Listing Rule 7.1 placement capacity and will be subject to the Company's available placement capacity at that time."


Need to see what the market makes of this when it happens. It's a known known that seems to have been forgotten.
Hey FJ, yeah I knew about that, but...

"Under this Third Amendment, the company has agreed to an additional Minimum Drawdown Amount of $12M to be drawn no later than 31 December 2024.
Under the terms of the agreement, the company will issue 40 million Collateral Shares by the earlier of the next Capital Call or 31 March 2024. Any issuance of Shares by the Company will be done so under the Company’s Listing Rule 7.1 placement capacity and will be subject to the Company’s available placement capacity at that time".


The shares are just being issued then (there may not even be an announcement for that, as it's already been said).
That's not the date of the Capital Call, which I think the Company will choose to do at an opportune time, before the end of the year.
 
  • Like
  • Fire
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 15 users
Hi Bravo
Also very suspicious that they use terms like ‘chip’, ‘silicon’, ‘neural’ and the ones that convince me are ‘customers’, ‘partners’, and ‘ecosystems’.😂🤣🤡😂

We do need to be careful with our wild speculation as the amazing volumes we have seen must mean some new eyes are coming here to find out what the fuss is all about.

We don’t need to push speculation so hard when we have close to 60 known engagements, products being released with partners, five-year technology leads, and we're about to be the FIRST to run unconnected optimised LLMs on the Edge with AKIDA 2.0, with work proceeding to unleash the power of 200 AKIDA nodes on data centres.😎

My opinion only DYOR
Fact Finder
 
  • Like
  • Fire
  • Love
Reactions: 47 users

Beebo

Regular
Earlier charts prior to about 2018 showed Qualcomm as a competitor. I've often wondered whether that was because they were no longer considered a competitor after this point. But so far we can't find anything in their tech to indicate that we are working with them.

AI's Next Phase With Qualcomm CEO Cristiano Amon - 1 day ago​



Don’t be surprised if Qualcomm is actually our competition on the Top 2-3 leaderboard for the edge AI market… the leaderboard that Sean expects us to be on.
 
  • Like
  • Thinking
Reactions: 5 users

MDhere

Regular

AI's Next Phase With Qualcomm CEO Cristiano Amon - 1 day ago​



Nice find Bravo, love the 25sec mark, 40sec mark and 4min 17 to 4min 42 mark. And I may like the rest too, but I narrowed it down :)
 
  • Like
  • Haha
  • Fire
Reactions: 11 users

7für7

Top 20
I think we'll be trading sideways for a while now. Since it's Friday, I don't expect any major miracles, and I've put the idea of price-sensitive news out of my mind for now. The fact that they have clarified, for the time being, that we are positioned for the future and making progress is fine by me. I find all the contributions interesting, and the idea that BrainChip has something to do with Apple, for example, is tempting. However, it remains speculative wishful thinking. At some point, when the next quarterly report or the one after comes out, we'll be wiser. Unfortunately, I'm not technically savvy, so I rely on clear messages rather than having to interpret too much. That's the downside of being a simple investor who simply shares the company's vision!

DYOR ok?
 
  • Like
  • Fire
  • Love
Reactions: 12 users
When the Renesas licence was issued, they stated that they would use DRP-AI for the heavy lifting, and Akida for the fiddly bits in a context which I understood to mean in different products.

Their N:M pruning is significant as it acts as a dynamically selectable compression of the weights, allowing Renesas to choose a level of accuracy balanced against speed and power.
So it sounds like they may have used Akida's N-of-M coding as inspiration for their N:M pruning, assuming JAST was the first to use this method?
 
  • Sad
  • Like
  • Thinking
Reactions: 3 users