BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Not hardware though. No way 100b Arm chips are going to be made + installed + sold between now and end 2025 (1.5 years).



Arm to roll out 100 billion devices for AI by 2025​



Jun 3, 2024

https://www.gizmochina.com/2024/06/03/arm-to-roll-out-100-biliion-devices-for-ai-by-2025/
Arm Holdings is aiming to roll out 100 billion Arm devices in 2025. This initiative is part of a broader strategy to establish Arm as a key player in the rapidly expanding AI chip market, projected to grow from $30 billion in 2024 to over $200 billion by 2032.
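A quick back-of-the-envelope check on those market figures (a Python sketch; the function is just the standard compound-annual-growth-rate formula, and the names are mine):

```python
# Growth rate implied by the quoted AI chip market figures:
# $30 billion in 2024 growing to over $200 billion by 2032.
def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start/end value pair."""
    return (end_value / start_value) ** (1 / years) - 1

growth = implied_cagr(30e9, 200e9, 2032 - 2024)
print(f"Implied CAGR: {growth:.1%}")  # roughly 27% per year
```

At roughly 27% compounded annually, the quoted endpoints are at least internally consistent.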
To achieve this, Arm will establish a dedicated AI chip division. The division aims to produce prototypes by spring 2025 and begin mass production by the fall of the same year. The manufacturing will be managed by contract manufacturers, including potential collaborations with Taiwan Semiconductor Manufacturing Corp (TSMC) and others. The development costs for these AI chips are expected to reach hundreds of billions of yen, with financial backing from Arm and contributions from SoftBank.
Strategically, there is a possibility that the AI chip business could be spun off and integrated under SoftBank once mass production is underway. This move aligns with SoftBank CEO Masayoshi Son’s vision of transforming SoftBank into an AI powerhouse. Son has earmarked a $64 billion investment to drive innovations across various sectors, including data centers, robotics, and power generation. He envisions integrating AI, semiconductor, and robotics technologies to revolutionize industries such as shipping, pharmaceuticals, finance, manufacturing, and logistics.
image-4.png

At the Computex forum in Taipei, Arm highlighted their target of having 100 billion Arm devices worldwide by the end of 2025, emphasizing the role of AI-ready devices. Furthermore, Arm unveiled new AI chip designs and software tools aimed at enhancing AI capabilities in smartphones. These new designs are expected to improve compute and graphics performance by over 30% and accelerate AI inference by 59% for machine learning and computer vision tasks.
Arm’s strategy includes providing ready-for-manufacturing blueprints, a departure from their previous approach of offering abstract designs. This shift aims to expedite the development process for their partners, such as Samsung and TSMC. Samsung is already collaborating with Arm on 3nm process technology to meet the growing demand for generative AI in mobile devices. TSMC is working with Arm to enhance AI performance and efficiency through their Open Innovation Platform ecosystem.

With these initiatives, Arm is not positioning itself as a competitor but as an enabler, helping its customers bring AI-driven chips and devices to market more swiftly. Chris Bergey, Arm’s General Manager, expressed the company’s vision of combining a platform for tightly coupled accelerators with customer Neural Processing Units (NPUs) to foster the rapid development of powerful AI chips and devices.
As Arm moves forward with its plans, it aims to capture a significant share of the burgeoning AI chip market, capitalizing on the unmet demand currently dominated by Nvidia. The company’s focus on AI chip development is a pivotal step in SoftBank’s broader strategy to lead the AI revolution and transform various industries through cutting-edge technology.





Extract
Screenshot 2024-06-04 at 9.38.09 am.png



 
  • Like
  • Love
  • Fire
Reactions: 53 users

IMG_1896.jpeg
 
  • Like
  • Fire
Reactions: 10 users

Damo4

Regular


I didn't think I'd enjoy this Podcast as much as I did, with fantastic discussion between 2 clearly switched on people.
I love the way he explains Neuromorphic benefits, in a way most can understand, not overly technical but not too vague either.
Great to hear his thoughts on Edge AI rebounding back to influence Cloud-based technology too, and Sean confirming it's how compute has occurred for a while now.

Possibly one of the most reassuring listens yet.
Goes to show being flexible in an emerging market is likely the ticket to success. Imagine if BRN had just stuck to Version 1 and cared only about specific use cases?
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Diogenese

Top 20



Oh! By the way, someone said they heard a rumour that BRN had put production of Akida 2 + TeNNs SoC on hold so as not to upset a customer's applecart. I wonder if that was an arm's length deal?

PS: "Applecart" - Freudian slip re mobile phones?
 
  • Like
  • Haha
  • Thinking
Reactions: 46 users

toasty

Regular
Oh! By the way, someone said they heard a rumour that BRN had put production of Akida 2 + TeNNs SoC on hold so as not to upset a customer's applecart. I wonder if that was an arm's length deal?

PS: "Applecart" - Freudian slip re mobile phones?
Imagine for a moment if this did include our IP. Even at a measly $1 per item............................... :cool:
 
  • Like
  • Fire
Reactions: 6 users

rgupta

Regular
Imagine for a moment if this did include our IP. Even at a measly $1 per item............................... :cool:
Even Arm does not get that much royalty from those devices. $100 billion of royalties would make Arm at least a one-trillion-dollar company.
DYOR
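For what it's worth, the arithmetic behind the posts above is simple enough to sketch (hypothetical figures only; a flat $1-per-device royalty is illustrative, not anything Arm or BRN has disclosed):

```python
# Illustrative only: a flat $1 royalty across the quoted 100 billion devices.
devices = 100e9
royalty_per_device = 1.00  # hypothetical figure from the posts above

total_royalty = devices * royalty_per_device
print(f"${total_royalty / 1e9:,.0f}B in royalties")  # $100B
```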
 
  • Like
  • Fire
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
46f0b54b55d6e528dc33fbe184bbad0090d06920_hq.gif


Screenshot 2024-06-04 at 12.13.56 pm.png



AMD CEO: Pricier Ryzens will probably get more powerful NPUs​

But will you see Ryzens with more than sixteen cores? It doesn't sound that likely, CEO Lisa Su says.
Mark Hachman

By Mark Hachman
Senior Editor JUN 2, 2024 10:39 PM PDT
Lisa Su, AMD

Image: Mark Hachman / IDG


You’d expect that more expensive PC processors would feature faster clock speeds, more cache, and more powerful integrated graphics. Future Ryzens will probably scale NPU TOPS, too.
“AI will be everywhere.” If there was a theme to Dr. Lisa Su’s post-keynote press conference at the Computex 2024 show in Taipei, it was that. Su touched on the theme time and time again. And AI will be everywhere across AMD’s CPU lineup — but not necessarily with the same potency.

On stage, Su joked with Microsoft’s Windows and devices chief Pavan Davuluri that an NPU’s TOPS don’t come for free, echoing the old adage that whatever hardware a chipmaker builds, software will suck it up.
“That’s why I was sort of kidding with Pavan on stage,” Su said, in response to my question. “Nothing is for free, when you look at these products both from an overall power standpoint, as well as this overall cost standpoint.

“I think what we’re seeing is AI will truly be everywhere. Our expectation is that the current Copilot+ and (AMD Ryzen AI 300 Copilot+ PCs, or “Strix Point,”) at 50+ TOPS will start more at the higher end of the stack,” Su added. “But we would expect that you will see AI throughout our entire stack as we go forward.
“You’re going to see at the top end that we’re going to continue to scale the TOPS because we are big, big believers in the more local TOPS you have, the more capable your AI PCs are going to be,” Su concluded. “We believe people are going to value that and so it’s worth it, to put it on chip locally.”

Su didn’t describe how AMD will differentiate various Ryzens with NPU capabilities. But there’s a history here: In 2021, AMD mixed and matched parts from various Zen generations under the Ryzen 5000 name. AMD could conceivably do the same with future Ryzens, taking older NPUs and combining them with various CPUs and GPUs.

But that’s not to say we could see just an NPU, either. In response to another question about whether AMD could develop a neuromorphic chip like Intel’s Loihi, Su seemed open to the possibility. “I think as we go forward, we always look at some specific types of what’s called new acceleration technologies,” she said. “I think we could see some of these going forward.”

But could Ryzens get bigger, with more cores? Su demurred while answering a question about whether AMD would ever go beyond the current 16-core count. “There’s no physical reason that we couldn’t go more than 16 cores,” she said.
Su pointed out that software developers don’t always use all the cores AMD already provides. “I think the key is just going at the pace that the software guys can utilize them,” she said.
 
  • Like
  • Love
  • Haha
Reactions: 31 users
Long-term shareholders deserve nothing less than BRN in ARM phones
 
  • Like
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Screenshot 2024-06-04 at 12.26.03 pm.png
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Wow! The competition is really heating up!

Come on Qualcomm! Just admit it. We know you want us. Sorry, I mean "NEED" us!


 
  • Like
  • Love
  • Haha
Reactions: 20 users

miaeffect

Oat latte lover
my-blood-pressure-is-spiking-high-blood-pressure.gif
 
  • Haha
  • Like
Reactions: 9 users

IloveLamp

Top 20




Hey thar...............

1000016206.jpg
 
Last edited:
  • Haha
  • Thinking
Reactions: 5 users

Diogenese

Top 20
"the design of MB.OS demands a different approach because we are decoupling the hardware and software innovation cycles and integration steps. This will make software development and integration much faster, and it also facilitates the constant flow of innovation into the vehicle, resulting in better products for our customers."
 
  • Like
  • Thinking
  • Fire
Reactions: 22 users

Learning

Learning to the Top 🕵‍♂️

Learning 🪴
 
  • Like
  • Fire
  • Love
Reactions: 11 users

7für7

Top 20
  • Like
  • Fire
Reactions: 3 users

Diogenese

Top 20
... and now the good news you've all been waiting for ...


alcohol prevents DVT.




Alcohol, in low to moderate amounts, thins the blood, reducing the risk of clots. But moderation is key - and doctors don't recommend drinking alcohol to protect against DVT. The relationship between alcohol and deep vein thrombosis may depend on what, and how much, you pour in your glass.

DVT and Alcohol - WebMD

www.webmd.com/dvt/dvt-alcohol

 
Last edited:
  • Haha
  • Love
  • Like
Reactions: 14 users

toasty

Regular
  • Haha
  • Like
Reactions: 4 users

Perhaps this sweet will have a bit of a spike in it?


1717479331553.png


Unigen Corporation Achieves Industry First with Introduction of New AI Module​

  • JUNE 3, 2024

“Biscotti” E1.S AI Modules Demonstrate Breakthrough Video Processing Performance In An Air-Cooled Environment​

Unigen Corporation has set a new benchmark with its Biscotti Artificial Intelligence (AI) E1.S Module. When integrated with an AMD Genoa server running the latest Video Management System (VMS) AI software, Biscotti will allow big box stores, warehouses, smart cities, transportation systems, and factories to gather hundreds of video streams from IP security cameras and process them in a single server, either on the premises or in a co-location data center.
For the first time ever, an air-cooled AI server has processed 64 streams of the standard YOLOv4 model in 720p resolution at 25 frames per second with 80-class object detection. In plain language, this means decoding and processing live AI video from 64 IP cameras while utilizing less than 50% of the CPU or AI capacity. This has been accomplished all without the expense, weight and power use of liquid cooling.
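Taken at face value, the quoted workload implies the following aggregate inference rate (a sketch; the function name is mine):

```python
# Aggregate detection throughput implied by the quoted demo:
# 64 IP camera streams, each decoded and inferred at 25 fps.
def aggregate_inference_rate(streams: int, fps: int) -> int:
    return streams * fps

print(aggregate_inference_rate(64, 25))  # 1600 YOLOv4 frames per second
```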
“Unigen’s innovative approach to integrating our Hailo-8 AI accelerator has changed how we approach the server market,” said Orr Danon, CEO of Hailo. “By using an E1.S to deliver a full 52 TOPS at a low power consumption, we enable servers to run cooler and deliver faster AI processing.”
“AIC is proud to have collaborated with Unigen to be the first server OEM to deliver this innovative AI architecture,” said Michael Liang, CEO of AIC. “Our EB202-CB-UG server can support 8 Unigen Biscottis with a total power consumption of under 500 watts, delivering 21,500 frames per second on the resnet_v1_50 neural network benchmark, all at a low TCO.”
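The AIC server figures quoted above reduce to a simple efficiency number (a sketch, assuming the sub-500-watt draw and the 21,500 fps benchmark figure hold simultaneously):

```python
# Efficiency implied by the quoted EB202-CB-UG figures:
# 21,500 ResNet-50 frames/s at under 500 W for 8 Biscotti modules.
def frames_per_watt(fps: float, watts: float) -> float:
    return fps / watts

print(f"{frames_per_watt(21_500, 500):.0f}+ fps/W")  # at least 43 fps per watt
```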
“Unigen has addressed the global need to reduce the power footprint of AI inference data centers,” said Jennifer Cooke, analyst at IDC. “The Biscotti architecture is a compelling offering for organizations that require high-performance systems yet are conscious of the need to operate in a manner consistent with their corporate environmental sustainability goals.”
High Performance, Low Power: The Biscotti E1.S AI Module provides 52 TOPS from as little as 10 Watts. By integrating two Hailo-8 Edge AI processors, each featuring up to 26 tera-operations per second (TOPS), Biscotti provides exceptional performance in the realm of edge processor modules. The advanced architecture harnesses the core properties of neural networks, allowing edge devices to run deep learning applications at full scale more efficiently, effectively, and sustainably than other AI chips and solutions. By targeting an E1.S standard form factor, it becomes feasible to power both AI processors, resulting in performance that excels in power efficiency.
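The headline efficiency claim in that paragraph is simple arithmetic (a sketch; the variable names are mine, the figures are from the release):

```python
# Module-level figures from the release: two Hailo-8 processors at
# 26 TOPS each, drawing "as little as 10 Watts" combined.
hailo8_tops = 26
processors_per_module = 2
module_watts = 10

module_tops = hailo8_tops * processors_per_module  # 52 TOPS, as quoted
efficiency = module_tops / module_watts            # TOPS per watt
print(f"{module_tops} TOPS at {efficiency:.1f} TOPS/W")
```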
Plug-and-Play for Servers and Edge Devices: Biscotti can be inserted directly into E1.S slots, typically used by SSDs, to instantly enhance server configurations with AI capabilities. It supports multiple parallel Neural Networks from a large array of camera inputs, or can be integrated into a single Large Language Model (LLM) array to solve complex AI cases. With significantly lower power than GPU modules or Add-In-Cards, a solution using Biscotti can change the game for a data center’s power envelope.
Neural Network Models & Application Support: The integrated Hailo AI processors on Biscotti have a robust software suite that supports state-of-the-art deep learning models and applications. Additionally, it is equipped with a comprehensive dataflow compiler that enables customers to port their neural network models easily and quickly. Biscotti supports various AI frameworks, including TensorFlow, TensorFlow Lite, Keras, PyTorch, and ONNX, making it ideal for edge neural networks today and generative AI in the future.
Unigen is excited to announce its participation at Computex in Taipei, taking place from June 4 to 7, 2024. Attendees are cordially invited to visit the AIC (N0806) and Network Optix (N0314) booths, where Unigen will showcase the Biscotti AI E1.S Modules in action.
About Unigen
Unigen, founded in 1991, is an established global leader in the design and manufacture of original and custom SSD, DRAM, NVDIMM modules and Enterprise IO solutions. Headquartered in Newark, California, the company operates state of the art manufacturing facilities (ISO-9001/14001/13485 and IATF 16949) in the Silicon Valley Bay Area of California and near Hanoi Vietnam, along with five additional engineering and support facilities located around the globe. Unigen markets its products to both enterprise and client OEMs worldwide focused on embedded, industrial, networking, server, telecommunications, imaging, automotive and medical device industries. Unigen also offers best in class electronics manufacturing services (EMS), including new product introduction and volume production, supply chain management, assembly & test, TaaS (Test-as-a-Service) and post-sales support. Learn more about Unigen’s products and services at unigen.com.
 
  • Like
  • Haha
  • Fire
Reactions: 7 users
Top Bottom