BRN Discussion Ongoing

Frangipani

Regular
Oops! Pardon my punctuation - it should, of course, be: "bloated shorter's bots, and manipulators"

So you reckon there is just one massive shorter that controls them all? đŸ˜± Kind of like Sauron and his army of Orcs? Mumbling away in the Dark Tongue of Mordor


"Ash nazg durbatulûk, ash nazg gimbatul, ash nazg thrakatulûk, agh burzum-ishi krimpatul."




Nah, seems like a conspiracy theory to me.
I’d simply go with swapping the ’ and the s

 
  • Haha
  • Like
  • Wow
Reactions: 6 users

DK6161

Regular
Apple - we should be in with all the 3-year lead hype. Or are we a lemon? One would assume that if Apple are announcing in June, then we should have some kind of agreement with someone by now.
Yep, sorry didn't read the article.
But I think it is definitely us.

Definitely not advice though.
Cheers
 
  • Haha
  • Like
Reactions: 2 users

Earlyrelease

Regular
Tech
Just to clarify, the Perth Crew drinks is not a boys' event. We have many female shareholders, most shareholders bring their partners, and there's even a tiny little one (Sera, hope Bub is well), so it's a broad group altogether.
So yes, let's hope the tone is like normal, which is to enjoy each other's company and share the dreams of what may be one day.

Any SH in Perth tomorrow after work, PM me, as there is room for a few more in the numbers made for the booking.
 
  • Like
  • Love
  • Fire
Reactions: 23 users
I can’t help but think of one of the biggest mistakes I have made while owning stocks,
and that was selling too bloody early, thinking that a company would do something within a period of some years,
then selling out of that company only to see it climb to great heights.
Time can be your friend, but having patience is a skill.

I am even more excited for the next few weeks, wondering if something interesting will happen.
Not long now until the AGM.
 
  • Like
  • Love
  • Fire
Reactions: 39 users

Rach2512

Regular
Tech
Just to clarify, the Perth Crew drinks is not a boys' event. We have many female shareholders, most shareholders bring their partners, and there's even a tiny little one (Sera, hope Bub is well), so it's a broad group altogether.
So yes, let's hope the tone is like normal, which is to enjoy each other's company and share the dreams of what may be one day.

Any SH in Perth tomorrow after work, PM me, as there is room for a few more in the numbers made for the booking.
Count me in.
 
  • Like
Reactions: 2 users

charles2

Regular

Media Alert: BrainChip CEO Addresses Latest Company Updates on Investor Podcast


Business Wire
Tue, Apr 30, 2024 at 9:00 AM PDT · 3 min read


LAGUNA HILLS, Calif., April 30, 2024--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, invites current and potential investors on May 1 at 3:00pm PT to join CEO Sean Hehir as he answers the most frequently asked questions that shareholders have raised during the past quarter.

In this fifth episode of BrainChip’s Quarterly Investor Podcast, Director of Global Investor Relations Tony Dawe presents questions to Hehir across a range of topics including company updates, product demand, IP sales status, and various other matters of shareholder interest.
Among the topics covered in this podcast are:
  • Significant milestone achieved with the launch of Akidaℱ into space onboard the Ant-61 spacecraft.
  • Recap of company reception at Embedded World Conference.
  • Update on the Akida Edge AI Box.
  • Shareholder insight regarding proxy advisors, put option agreements and investor engagement, among others.
"From the expansion of our space heritage to increasing commercial prospects through well-attended trade shows, this quarter served as an important building block for BrainChip’s continued success," said Hehir. "We share these updates with the investor community through our quarterly BrainChip Investor Podcast – giving status updates, answering questions that have been posed to us, and presenting an important real-time look at where the company stands and the opportunities that lay before us.
BrainChip’s Quarterly Investor Podcast is in addition to the company’s popular monthly "This is Our Mission" podcast series, which provides AI industry insight to listeners including users, developers, analysts, technical and financial press, and investors. Past episodes of BrainChip podcasts are available at https://brainchip.com/podcast.
About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY) BrainChip is the worldwide leader in Edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akidaℱ, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables Edge learning local to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. Akida Neural processor IP, which can be integrated into SoCs on any process technology, has shown substantial benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like TensorFlow/Keras. In enabling effective Edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.
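For context on the "standard AI workflows" the release mentions, here is a minimal sketch of the kind of TensorFlow/Keras model a developer would create and tune before mapping it to Akida. The conversion step is noted only in comments as an assumption about BrainChip's MetaTF tooling, since exact APIs vary by release.

```python
# Minimal sketch: a standard TensorFlow/Keras model of the sort the release says
# developers can create and tune before deploying to Akida. Assumes TensorFlow
# is installed; the model and shapes are illustrative, not a BrainChip reference.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),          # e.g. a small vision input
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # feature extraction
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10-class classifier head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

# Assumption: BrainChip's MetaTF tooling (the cnn2snn package) converts a
# quantized Keras model into an event-based Akida model, roughly:
#     from cnn2snn import convert
#     akida_model = convert(quantized_model)
# Exact quantization/conversion APIs vary by MetaTF release; see brainchip.com.
```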

 
  • Like
  • Fire
  • Love
Reactions: 45 users

MDhere

Regular
I don't think Sean does stay as a director
 his position on the board is spilled like all the rest, unless he's the MD?? I know he's the CEO and a director, but is he the MD??
I'm MD đŸ€ŁđŸ€Ł
 
  • Haha
  • Like
Reactions: 13 users

TECH

Regular
Tech
Just to clarify, the Perth Crew drinks is not a boys' event. We have many female shareholders, most shareholders bring their partners, and there's even a tiny little one (Sera, hope Bub is well), so it's a broad group altogether.
So yes, let's hope the tone is like normal, which is to enjoy each other's company and share the dreams of what may be one day.

Any SH in Perth tomorrow after work, PM me, as there is room for a few more in the numbers made for the booking.

Excellent, a real family affair, have a great evening. (y)
 
  • Like
  • Love
Reactions: 6 users

rgupta

Regular
Apple - we should be in with all the 3-year lead hype. Or are we a lemon? One would assume that if Apple are announcing in June, then we should have some kind of agreement with someone by now.
To me, processing on the edge is a concept, and up to a couple of years ago no one was a taker of that idea. BrainChip was the numero uno on that front. But now there is a lot of talk about it, which to me is a victory for our vision. To me, competition is good for any market, and I do not mind even if Apple is our competitor, because at this price the valuation is so low that the pressure is minimal.
DYOR
 
  • Like
  • Fire
Reactions: 8 users

IloveLamp

Top 20
đŸ€”
 
  • Like
  • Thinking
Reactions: 3 users

DK6161

Regular
Morning fellow chippers,
Now that the latest 4C is out, let's move on and look to the future.
Any idea what the next 4C revenue will be? What would our next target achievements be? They would be good KPIs for the company and its key people.
Cheers
 
  • Haha
Reactions: 2 users

IloveLamp

Top 20
Morning fellow chippers,
Now that the latest 4C is out, let's move on and look to the future.
Any idea what the next 4C revenue will be? What would our next target achievements be? They would be good KPIs for the company and its key people.
Cheers
Closed our short position, have we? đŸ€š lol

 
Last edited:
  • Haha
  • Like
  • Love
Reactions: 14 users

7fĂŒr7

Top 20
I'm just curious how many people here are still confident and trust the management! It would be nice if you liked this comment if you are a "yes" person! ✌
 
  • Like
  • Love
Reactions: 34 users

TheDon

Regular
Sean has a 3-to-5-year plan which has the approval of the board. If you can't wait that long, in my opinion, sell and move on. I'm happy to keep buying while the price is so low, and even happier when it goes up.

DYOR
TheDon
 
  • Like
  • Fire
  • Love
Reactions: 31 users
Morning fellow chippers,
Now that the latest 4C is out, let's move on and look to the future.
Any idea what the next 4C revenue will be? What would our next target achievements be? They would be good KPIs for the company and its key people.
Cheers
 
  • Haha
  • Love
Reactions: 6 users

toasty

Regular
Sean has a 3-to-5-year plan which has the approval of the board. If you can't wait that long, in my opinion, sell and move on. I'm happy to keep buying while the price is so low, and even happier when it goes up.

DYOR
TheDon
If there had not been so many comments by management indicating imminent commercial success, I would be less annoyed than I find myself today. As someone posted the other day, whatever happened to under-promising and over-delivering? My view is that Sean and the board did not, and may not now, have any real idea of the timelines involved in seeing AKIDA through to commercial success. That, and the lack of transparency around activities, is very frustrating and frankly quite worrying. Blaming NDAs seems increasingly like a smoke screen. I REALLY hope that the next announcement will justify the way they have conducted themselves to date

 
  • Like
  • Love
Reactions: 20 users

7fĂŒr7

Top 20
For my part, I'm satisfied because we're on schedule. I understand that such a new technology is useless without a corresponding new end product. The team is constantly showing us what Akida is capable of, and what it could do if only one of the potential customers could finally manage to bring out a product that can handle this performance. The box is not a mass-produced item... that should also be clear. And we will only see new licenses when a customer sees the application scope for their new product. Until then, Akida is being tested on a partnership basis. However, things are getting tighter for our potential customers, because everyone is working flat out to bring out a new product that can use this technology. And that also works in our favour. Just my opinion!
 
  • Like
  • Love
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps đŸ’Ș!
Check this out Brain Fam!

The alarm over power-hungry AI chips is growing. Talk about being in the right place at the right time! Our technology can solve many of the issues raised in this article!

TSMC CEO CC Wei in the company’s latest earnings call put it this way: “Almost all the AI innovators are working with TSMC to address the insatiable AI-related demand for energy-efficient computing power.”





Power-hungry AI chips face a reckoning, as chipmakers promise ‘efficiency’

By Matt Hamblen | Apr 30, 2024 8:38am



Nvidia's new Blackwell accelerator GPU (the bigger of the two held by CEO Jensen Huang) is a megachip that can pull 1,200 watts in the largest configuration. The energy draw from modern accelerators like Blackwell is 'not sustainable' says one industry insider, but new chip designs will improve efficiency and promise to relieve electricity demand on data centers. (Screenshot)

Nvidia’s newest megachip, Blackwell, is by all accounts a modern-day miracle. It has 200 billion transistors and promises enough processing power to handle the largest AI models when thousands of these GPUs are ganged together in a mega data center.
But Blackwell and other powerful accelerator chips coming to market are making people nervous-- especially data center operators and electric utilities, even regulators around the globe. One version of a single Blackwell chip for data center use draws 1,200 watts of electricity, an insane amount of power compared to just a few years ago. Largely as a result of accelerator chip growth, some data centers are building their own power plants to handle the load while regulators in Amsterdam and other cities in Europe are telling data centers they cannot expand due to limited electric supply.

It’s not just Nvidia’s GPUs that are gargantuan. Blackwell is part of a trend ranging across all chip design firms. Even hyperscalers and carmakers like Tesla are designing their own custom chips, often pushing the laws of physics to increase energy efficiency with 3D designs and chiplets. Tesla’s Dojo chip has 25 chiplets. These chip design approaches are helping increase power efficiency, but data centers meanwhile are still growing to support AI, including GenAI. Currently, 1.5% to 2% of the world’s electricity is used by data centers, and the vast majority of that energy is used by chips and the circuit boards that support them. Growth in data center energy consumption is a hockey stick.



“The trend is not sustainable”
“The chip industry has been on a trend that’s not sustainable,” said long-time chip industry insider Henri Richard, president of Rapidus in the Americas. The company is erecting a 2nm process node chip fab in northern Japan with billions in support from the Japanese government.
“Years ago, we were saying you can’t go up to 150 watts, and now we’re at 1,200 watts! Something needs to change. If you think about taking that growth curve and projecting into the future, we just can’t have 3-kilowatt chips,” Richard said in an interview with Fierce Electronics from his US office in Santa Clara, Calif.

Shrinking chip process nodes from 10nm to 5nm to 2nm is part of the solution, he said. With Moore’s Law’s decreasing benefits, however, “there’s a need to architect the systems and chips in a different way that deals with the concentration of power and deals with the amount of cooling you can do,” he added. “Even immersion cooling makes it hard to feed the chips with electricity. Chiplets will be one way to balance between the front end and back end.”

In a blog that woke up some elements of the AI-fixated world, Arm CEO Rene Haas wrote recently about future AI workloads becoming larger, pressing the need for more compute and more power. “Finding ways to reduce the power requirements for these large data centers is paramount to achieving the societal breakthroughs and realizing the AI promise,” he said. “In other words, no electricity, no AI.”


What data centers face with power consuming chips
In a data center with thousands of Blackwell chips and other processors, the electricity load becomes enormous, sending engineers scurrying to find available power in locales where there isn’t enough juice readily available, even with the help of renewables from solar, wind, hydroelectric or geothermal. Once there is enough power pumped to developable land in an area like Loudoun County, Va., west of Washington, D.C., the anxiety is compounded over what happens inside dozens of hot server racks. Engineers are proposing new ways to keep the circuit boards and chips cool enough to keep from catching fire or melting down, causing a catastrophe for vital data, expensive equipment and corporate bottom lines.
An entire industry has emerged to cool data centers to guard against the heat generated by servers and their power-hungry chips. Liquid cooling of server racks has become an art form; one of the latest approaches is immersion of entire data centers, prompting the delicate proposition of how a data center connects electricity underwater with humans around. Meanwhile, hyperscalers are planning ways to build small nuclear reactors or other power generators near their data center hubs to ensure a reliable and plentiful energy supply.
Investors are going bonkers for more power for data centers: OpenAI CEO Sam Altman just invested $20 million in Exowatt, an energy startup focusing on AI data centers. Keeping chips cool enough to operate optimally also may require air-cooling technology that gulps down more power, amplifying the problem. Even so, as a rule of thumb, half the electricity needed by a data center goes to light up the processors--from GPUs to CPUs to NPUs, and whatever becomes the next chip TLA. Related circuits and boards raise the energy draw.
Nvidia’s Jensen Huang defines the long view for AI accelerators
Nvidia CEO Jensen Huang and many other semiconductor leaders justify, perhaps rightly so, the power mongering of modern accelerator chips like Blackwell when matched against the enormous compute power of AI and GenAI and the impact such technologies will have on future generations of companies and customers with the creation of new pharmaceuticals, climate analysis, autonomous vehicles and robots and more. He and his engineering teams speak often about the Laws of Physics and recognize what metals and other materials and chip architectures can distribute heat generated from electricity traversing a server rack, and then, across acres of server racks.
Modern chip designs from Nvidia, Intel, AMD, Qualcomm, cloud providers and a growing army of smaller design firms are adding enormous density to circuit boards, so that servers and server racks take up less floor space while cranking out many times more teraflops per server than just a year ago. The performance-per-watt metric is usually expressed as TFLOPS/watt to make it easy to compare systems and chips from different vendors.
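To make the TFLOPS/watt comparison concrete, here is a minimal sketch of that arithmetic. The chip names and figures below are illustrative placeholders, not vendor specifications.

```python
# Minimal sketch of the performance-per-watt comparison described above.
# The chip names and figures are illustrative placeholders, not vendor specs.
chips = {
    "dense_accelerator": {"tflops": 1000.0, "watts": 1200.0},  # hypothetical
    "midrange_part":     {"tflops": 400.0,  "watts": 650.0},   # hypothetical
}

for name, spec in chips.items():
    efficiency = spec["tflops"] / spec["watts"]  # TFLOPS per watt
    print(f"{name}: {efficiency:.2f} TFLOPS/watt")
```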
Huang’s CadenceLIVE discourse on longitudinality
Huang talked about this density and its related power draw at CadenceLIVE Silicon Valley in April, speaking in the abstract about how this computing density is justified by the advantages of AI across an entire population of users. “Remember, you design a chip once, but you ship it a trillion times,” he said in a fireside chat. “You design a data center once, but you save 6% power 
 that is enjoyed by a billion people.” Huang was, of course, speaking about the entire ecosystem, far beyond the wattage of a single Blackwell or other GPU used in a broader category of accelerated computing. He took a few sentences to make his point, but it is worth a read:
“The power usage of accelerated computing is incredibly high because the computers are incredibly dense,” Huang said. “Whatever optimization we can do for power utilization translates directly into more performance, measured as more productivity, generating revenue, or directly into savings. For the same amount of performance you could get something smaller. Power management in accelerated computing directly translates into all the things you care about.
“Accelerated computing took tens of thousands of general-purpose servers that consumed 10x, 20x more cost and 20x, 30x more energy and reduced it into something that is incredibly dense. So the density of accelerated computing is the reason why people will think it’s power hungry and costs a lot of money. But if you look at it from an ISO [an international standard] measure of work done or throughput, in fact you save an enormous amount of money. That’s the reason why it is essential, as CPU scaling has slowed, that we have to move towards accelerated computing, because you’re not going to continue to scale out the traditional way anyway. Accelerated computing is essential.”
Later in the same conversation with Cadence CEO Anirudh Devgan, Huang added: “AI actually helps people save energy. How would we have found 6% more savings [in one example from Cadence] or 10x more savings that wasn’t possible without AI? So you invest in the training of the model once, and then millions of engineers can benefit from it, and billions of people across decades will get to enjoy the savings.
“That’s the way to think about cost and investments, not just on an instance-by-instance basis but, in healthcare speak, longitudinally. You have to look at money savings and energy savings longitudinally, across the entire span of not just the products you are building, but the way you are designing the products, the products you build and the impact of the products being felt. When you look at it longitudinally like that, AI is going to be utterly transformative in helping us with climate change, using less power, being more energy efficient and so on.”
Voices outside of Nvidia
Other luminaries than Huang in chip design and production of chips have also recently weighed in. TSMC CEO CC Wei in the company’s latest earnings call put it this way: “Almost all the AI innovators are working with TSMC to address the insatiable AI-related demand for energy-efficient computing power.” The key word: “insatiable.”
Cadence CEO Devgan noted in his onstage conversation with Huang that AI models can have 1 trillion parameters, which compares to 100 trillion synapses, or connections, in the human brain. He projected that it is only a matter of time before somebody builds an AI model that is very big, on the order of the human brain. Doing so will require “a huge amount of software compute, the whole data search infrastructure and the whole energy infrastructure,” he said.
Cadence makes and supports a number of ways to improve designs for energy efficiency for accelerators (which Nvidia used to develop Blackwell) and has developed a digital twin system to help data centers design their operations more efficiently.
Over at AMD, the company has set a goal of delivering a 30x increase in the energy efficiency of its products by 2025, based on a 2020 accelerated compute node baseline. Last year’s introduction of the MI300X accelerator put the company even closer to that goal. A blog posted last year by AMD’s Sam Naffziger, senior vice president and product technology architect, describes the progress.
Naffziger warned that the industry can’t rely solely on smaller transistors, and needs a holistic design perspective that includes packaging, architecture, memory, software and more.
Intel’s neuromorphic push
Intel has also made an aggressive push into energy efficiency, most recently announcing it has built the world’s largest neuromorphic system to enable sustainable AI. Code-named Hala Point, it uses Intel’s Loihi 2 processor and can support up to 20 quadrillion operations per second, rivaling GPUs and CPUs. Its application is clearly for research so far.

Intel’s description of Hala Point claims the entire system consumes a maximum of 2,600 watts of power, little more than double that of Nvidia’s Blackwell: “Hala Point packages 1,152 Loihi 2 processors produced on Intel 4 process node in a six-rack-unit data center chassis the size of a microwave oven. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming a maximum of 2,600 watts of power. It also includes over 2,300 embedded x86 processors for ancillary computations.”
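As a quick sanity check on those Hala Point figures, here is a small worked calculation of per-processor power and per-core density, using only the numbers quoted above.

```python
# Worked check of the Hala Point figures quoted above.
loihi2_processors = 1152        # Loihi 2 chips in the chassis
cores = 140_544                 # neuromorphic processing cores
neurons = 1.15e9                # supported neurons
synapses = 128e9                # supported synapses
max_watts = 2600.0              # maximum system power

print(f"Power per Loihi 2 processor: {max_watts / loihi2_processors:.2f} W")  # ~2.26 W
print(f"Neurons per core:  {neurons / cores:,.0f}")    # ~8,182
print(f"Synapses per core: {synapses / cores:,.0f}")   # ~910,748
```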
Jennifer Huffstetler, chief product sustainability officer at Intel, told Fierce Electronics via email, “Intel is looking at future computing technologies as a solution for AI workloads, namely neuromorphic, that promise to deliver greater computing performance at much lower power consumption. Computing demands are only increasing, especially with new AI workloads. To deliver on the performance desired, the power consumption of GPUs and CPUs is also increasing.”
Intel already has a three-pronged approach to greater efficiency that includes optimization of AI models, software and hardware. With hardware, Intel innovations saved 1,000 terawatt-hours from 2010 to 2020, Huffstetler estimated. Gaudi accelerators provide about a doubling in energy efficiency, while Xeon Scalable processors provide a 2.2x increase in energy efficiency. (Xeons are designed for data center, edge and workstation workloads.) The upcoming Gaudi 3 accelerators deliver on average 50% better inference and 40% better inference power efficiency, she claimed. Intel is also in the liquid cooling business, which can provide a 30% improvement in energy savings over air cooling inside the data center.
Yes, greater 'efficiency,' but 

Despite all the efforts of major chip designers, the power dilemma is still real. Yes, a data center might have fewer racks with the latest accelerators, resulting in lower power draw, but growth in AI means companies will only seek to expand compute capabilities—more servers, more racks, more energy suck. “Newer chips have more performance per watt, yes, but the AI models are also growing, so it’s not clear that the overall requirement for power is going down all that much,” said Jack Gold, founding analyst at J. Gold Associates.
While Blackwell in the GB200 form factor with liquid-cooled racks sucks down 1,200 watts per chip, Gold noted that a typical AI chip uses just over half that: 650 watts of power. He tallied up the energy draw this way: add in memory, interconnect and a CPU controller, and that figure can jump to 1 kilowatt for each module. In the recent example of Meta, which at one point deployed 10,000 modules (with many more to come), that amount alone would require 10 megawatts of power. A city the size of Cleveland with 3 million people uses about 5,000 megawatts, so in essence a single data center of that Meta size would take 0.2% of the city’s power. A typical power plant might generate about 500 megawatts.
“The bottom line is that AI data centers are indeed [facing problems] in trying to find areas where there is enough power and power that is low cost enough to provide for their needed consumption,” Gold said. The cost of power is the single biggest expense in a data center after the capital cost for equipment.
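Gold's tally is easy to reproduce. Below is the same back-of-the-envelope arithmetic as a short sketch, using only the figures he gives; note that 10 MW is 0.2% of the city's 5,000 MW, though it is 2% of a single 500 MW plant's output.

```python
# Reproducing Jack Gold's back-of-the-envelope tally from the paragraphs above.
watts_per_module = 1_000    # GPU + memory + interconnect + CPU controller
modules = 10_000            # Meta's deployment at one point
city_mw = 5_000             # a city the size of Cleveland
plant_mw = 500              # a typical power plant

dc_mw = watts_per_module * modules / 1e6   # total draw in megawatts -> 10 MW
print(f"Data center draw: {dc_mw:.0f} MW")
print(f"Share of the city's power: {dc_mw / city_mw:.1%}")           # 0.2%
print(f"Share of one power plant's output: {dc_mw / plant_mw:.0%}")  # 2%
```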
Bob O’Donnell, founding analyst at Technalysis, said he somewhat understands Huang’s “longitudinal” argument in favor of power consumption for AI chips laid out at the Cadence event. “Accelerator chips do take more power, but in the long run have more positive benefits for the environment, pharma and other areas because of all you learn,” he told Fierce. “They are extraordinarily dense, but compared to other options they are more power efficient.”
“The summary is that power for AI chips is getting a huge amount of focus and attention by a lot of different players. It’s not going to be solved or go away with an enormous demand for more power. But the capabilities of GenAI are so great that people feel a need to pursue it.”


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 51 users

toasty

Regular
For my part, I'm satisfied because we're on schedule. I understand that such a new technology is useless without a corresponding new end product. The team is constantly showing us what Akida is capable of, and what it could do if only one of the potential customers could finally manage to bring out a product that can handle this performance. The box is not a mass-produced item... that should also be clear. And we will only see new licenses when a customer sees the application scope for their new product. Until then, Akida is being tested on a partnership basis. However, things are getting tighter for our potential customers, because everyone is working flat out to bring out a new product that can use this technology. And that also works in our favour. Just my opinion!
What schedule? Timelines keep getting pushed back, and the company provides no transparency around timeframes. Not sure how you can say "we're on schedule".
 
  • Like
  • Fire
  • Love
Reactions: 12 users

FiveBucks

Regular
Check this out Brain Fam! 


[Bravo's full post, including the Fierce Electronics article "Power-hungry AI chips face a reckoning, as chipmakers promise 'efficiency'", is quoted in full above.]

This is a fantastic find. Bravo Bravo!
 
  • Like
  • Love
  • Sad
Reactions: 19 users