Samsung, Hyundai back AI startup Tenstorrent: Everyone wants competition to Nvidia, says CEO Keller
Nvidia's GPUs are not the end-game for AI, says Keller. 'When the aliens land, I don't think they'll be asking us did we invent CUDA.'
By Tiernan Ray, Contributing Writer on Aug. 3, 2023
Chip giant Nvidia is the most powerful force in artificial intelligence today, more powerful than Microsoft, Google, or OpenAI. Its GPU chips are the dominant form of computing in the industry for programs such as ChatGPT. A raft of startups have failed to stem that dominance despite years of trying.
And yet, the world still hungers for competition, and Nvidia may be vulnerable because, some believe, the economics of Nvidia's dominance cannot be sustained.
"Nvidia has monopoly [profit margins]," said Jim Keller, CEO of AI chip startup Tenstorrent, in an exclusive interview with ZDNET. "If you want to go build a high-performance solution with AI inside of it, Nvidia will command most of the margin in the product. The problem with the winner-take-all strategy is it generates an economic environment where people really want an alternative."
Keller is a rockstar of the computer-chip world, known for a long string of chip successes, from turning around Advanced Micro Devices' processor business to creating the basis of Apple's custom processor business to building Tesla's Autopilot chip platform. He believes the industry's frustration with Nvidia's control, and the emerging technology of RISC-V, have opened a path for alternatives.
Hyundai Motor Group and Samsung's Catalyst Fund are leading Tenstorrent's latest funding round, and Keller sees both as natural backers.
"They are a technology leadership company," said Keller of Hyundai. "They are making money and investing it in technology because they see a path to building next-generation products with AI."
Also: How to achieve hyper-personalization using generative AI platforms
More to the point, Hyundai wants "to build their own products and to hit the cost points and performance points," said Keller. "You can't give 60% gross margin to Nvidia for a standard product, to be honest, and they're looking for options."
Hyundai's executive vice president and head of the global strategy office, Heung-soo Kim, said in prepared remarks, "Tenstorrent's high growth potential and high-performance AI semiconductors will help the Group secure competitive technologies for future mobilities."
"With this investment, the Group expects to develop optimized but differentiated semiconductor technology that will aid future mobilities and strengthen internal capabilities in AI technology development."
Keller and Hyundai Motor Co. Executive Vice President and Head of the Global Strategy Office, Heung-soo Kim. (Image: Tenstorrent)
Samsung's investment makes particular sense for one of the world's largest contract makers of semiconductors. The company has produced many of the chips for which Keller is famous, including the Tesla Autopilot. Samsung knows that from tiny acorns come mighty oaks, and today's startup could be tomorrow's big customer for chip-making.
Also: These tiny mushroom-based chips could power your devices and help save our planet
The head of Samsung's Semiconductor Innovation Center, Marco Chisari, said in prepared remarks, "Tenstorrent's industry-leading technology, executive leadership, and aggressive roadmap motivated us to co-lead this funding round," adding, "We are excited by the opportunity to work with Tenstorrent to accelerate AI and compute innovations."
Keller lauded both companies in prepared remarks announcing the funding, stating, "The trust in Tenstorrent shown by Hyundai Motor Group and Samsung Catalyst Fund leading our round is truly humbling."
For Keller, who has many times built the world's fastest chips, the argument is principally economic, but also heavily technological.
"I don't believe this is the end game for AI at all," he said, meaning, "GPUs running CUDA and PyTorch."
"When the aliens land, I don't think they'll be asking us, Did we invent CUDA?" quips Keller, referring to Nvidia's software platform for running those neural networks.
"While GPUs have success today, they're not the obvious best answer, they're more like the good-enough answer that was available," said Keller of the dominance of Nvidia chips such as the H100 "Hopper" GPU, which is Nvidia's leading product for running neural networks.
Also: Generative AI and the fourth why: Building trust with your customer
The more-advanced generative AI models, especially those coming from the open-source software community, will lead to a profound change in the field's distinction between training and inference, Keller believes. "I think the AI engine of the future… will have a fairly diverse set of capabilities that won't look like inference versus training," he said, but more like a fusion of the two.
He conceded that Nvidia built an incredible lead in AI thanks to co-founder and CEO Jensen Huang's wise decision to focus the company's efforts on software very early.
"The AI software challenge is harder than anybody thought, to be honest," observed Keller. "Most of the AI startups were started by hardware guys." Nvidia, he noted, "had a longer investment in that software stack, partly because they invested in HPC," high-performance computing, which is for complex scientific workloads, "when nobody wanted to," said Keller. That required special programming frameworks to be developed for the GPU. "They invested and they got some stuff to work."
But, said Keller, the world is changing. The rise of open-source alternatives to CUDA, of AI frameworks such as TensorFlow and PyTorch, and of open-source models created by companies such as Stability.ai and MosaicML and hosted by hubs such as Hugging Face, is promising, he said. "The intriguing thing is the amount of open source collaboration that's happening on the software front, which we need to match on the hardware front."
To match that open-source effort in hardware, Keller is betting on RISC-V, the open-source chip instruction set developed over a decade ago at the University of California at Berkeley by a peer of Keller's, the renowned chip pioneer Dr. David Patterson, and his colleagues.
Also: AI will change the role of developers forever. Here's why that's good news
For Keller, who is an astute problem solver on a grand scale, there is something crucial that is coming together on the economic front and the technological front. It seems akin to other moments in technology when Keller made a profound impact, often against the prevailing wisdom in the industry.
"I like to explore the space and understand it and then do something," he said.
At the legendary Digital Equipment Corp., in the 1980s and 1990s, he built the world's fastest chip at the time. One of Keller's former startups, P.A. Semi Inc., was bought by Apple in 2008 and became the basis for the "A-series" silicon that now powers all Apple devices, an unlikely break from Intel. Tesla was "just a small engineering company" turning out no more than a quarter-million cars when Keller led a team to develop the hardware for Tesla's Autopilot, now in every car.
He resuscitated AMD's moribund chip development at a time when "everybody told me AMD was going to go bankrupt," Keller recalled. His efforts laid the foundation that not only brought the company back from the brink but turned it into a chip powerhouse.
In Tenstorrent, founded in 2016, Keller saw something intriguing. The company has focused on the chip opportunity created by the explosion in size of deep-learning AI models such as OpenAI's GPT, programs that demand greater and greater performance.
Also: GPT-3.5 vs GPT-4: Is ChatGPT Plus worth its subscription fee?
Keller, who had been an angel investor in Tenstorrent when he was still at Tesla, knew founder Ljubisa Bajic, who had worked for Keller at AMD. "I had a chance to look at a whole bunch of proposals for AI engines, and I thought what he was doing was quite interesting," recalled Keller.
He was interested enough to take the top spot at Tenstorrent in January of 2021. "It [Tenstorrent] was on some level a research project, and I felt that we were starting to figure out what the research project was," he explained.
What is clear, said Keller, is that the arrival of AI is merging with the arrival of RISC-V and the economic pressure of Nvidia's dominance.
"Computation will be dominated by AI" moving forward, said Keller. The generative neural networks that Tenstorrent was built to address, as they increase in scale, are demanding more and more silicon horsepower, and so they are coming to dominate all chip design.
As in his past efforts, Keller didn't simply adopt the playbook as he found it at Tenstorrent. He made the surprising decision not only to continue with the dedicated AI chips but also to build a general-purpose CPU to handle the management of those chips.
"We decided to build a RISC-V processor to be a general-purpose computing companion to the AI processor because general-purpose computing and AI are gonna work together, and they need to be tightly embedded."
To do so, "I hired some of the best designers from AMD, Apple, and Nvidia," said Keller, who describes himself as being into "the team adventure." "We have a great CPU team; I'm really, really excited about it."
Also: AMD vs. Intel: The top gaming CPUs for pros, creators, and casual players
At one time, ARM, the privately held unit of SoftBank Group, which is preparing for an initial public offering, was a potential savior for a chip industry caught between Intel's and Nvidia's dominance. That has changed, said Keller. "I talked to ARM quite a bit, and ARM had two big problems," he observed.
"One is they're way too expensive now," he said. What had been a workable economic situation for companies licensing ARM's technology has turned into a matter of constantly raising prices, demanding a higher and higher percentage of the products customers built with ARM's technology.
The other problem, he said, is that ARM wouldn't make changes to the fundamental instructions to accommodate the new forms of data handling that AI requires. "AI is changing fast," Keller observed. He turned to Silicon Valley startup SiFive, which has made a business of licensing CPU designs based on RISC-V. "They [ARM] didn't wanna make the modifications I needed on my next chip; SiFive said, 'Sure.'"
With RISC-V, both Tenstorrent and its customers can have control, Keller emphasized, unlike dealing with a monopolist. "Another design mission was, how do you build great technology where people have the rights to license?" he said.
As a result of the openness of RISC-V, "Slowly RISC-V is gonna replace everything," said Keller, meaning, ARM, Nvidia's own instruction set, and the legacy x86 code on which the Intel empire is built.
In addition to the new RISC-V-based CPU, the Tenstorrent AI accelerator is undergoing a massive change to embed RISC-V capability inside of it. "Our AI engine has a large matrix multiplier, a tensor processor, a vector processor, but it also has five little RISC-V processors that basically issue the AI instruction stream," explained Keller.
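Keller's description amounts to a dispatch model: small general-purpose cores assemble the instruction stream while dedicated matrix and vector engines do the heavy math. As a purely illustrative sketch of that division of labor, and not of Tenstorrent's actual hardware or software stack, the following Python snippet has a hypothetical control core enqueue work for hypothetical matrix and vector engines; every name in it is invented for the example.

    import numpy as np
    from queue import Queue

    # Hypothetical compute engines: a matrix multiplier and a vector unit.
    def matmul_engine(a, b):
        return a @ b                  # stands in for the large matrix multiplier

    def vector_engine(x):
        return np.maximum(x, 0.0)     # stands in for a vector op (here, ReLU)

    # The "control core" builds the instruction stream rather than doing the math itself.
    def control_core(weights, activations, stream):
        stream.put(("matmul", weights, activations))
        stream.put(("vector", None, None))
        stream.put(("halt", None, None))

    # The accelerator drains the stream and routes each instruction to an engine.
    def run(stream):
        result = None
        while True:
            op, a, b = stream.get()
            if op == "matmul":
                result = matmul_engine(a, b)
            elif op == "vector":
                result = vector_engine(result)
            elif op == "halt":
                return result

    q = Queue()
    control_core(np.random.randn(4, 8), np.random.randn(8, 2), q)
    print(run(q).shape)  # (4, 2)

In real silicon the stream would be hardware queues and the engines fixed-function blocks, but the split between small control cores and big compute engines is the idea Keller describes.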
The company is just starting to sell its first two generations of chips, is rapidly moving toward first silicon of a third generation, and is working on the fourth.
The Tenstorrent business will have several avenues to make money. The general-purpose CPU being designed is a "high-end processor" that "has licensing value," and the AI accelerator is both a part the company can sell and "also a RISC-V AI engine that other people could license."
There should be plenty of takers, given that the industry is chafing at the Nvidia tax, as one might call the high prices of the H100 and other parts.
"I've talked to a lot of people who it's not so much that they dislike Nvidia as a technology, or they dislike Jensen [Huang]," he said.
It's purely a business matter.
Also: 4 ways to detect generative AI hype from reality
"I've talked to power-supply companies, microcontroller companies, autonomous driving startups, data center edge server makers," he said. "If you make your own chip and you want an AI engine in there, you can't put a $2,000 GPU in it," he observed. "I talked to two robotics companies who basically had the same line: I can't put a $10,000 GPU in a $10,000 robot."
Of course, Nvidia seems these days like a company with no competition. It routinely dominates benchmark tests of chip performance such as MLPerf. Its data center business, which contains the AI chips, towers above the AI sales of Intel and Advanced Micro Devices. The unit may double in revenue terms this year, to $31 billion, almost two-thirds of all of Intel's annual revenue.
A raft of startups packed with incredibly talented engineers, such as Cerebras Systems, Graphcore, and SambaNova Systems, have failed to make a dent in Nvidia, despite the widespread desire for an alternative.
None of that fazes Keller, who has fought and won many battles in a lifetime of chip design. For one thing, those companies haven't leveraged RISC-V, which is a game-changer in his view. "If we had come up with the open-source RISC-V AI engine five years ago" that Tenstorrent is now building, said Keller, "then 50 startups could have been innovating on that rather than solving the same problem 15 different ways, 50 different times," as Cerebras and others have been doing.
On a simpler level, people always assume the status quo will stay in place, and that's never the case.
"The war for computation has been over many times," mused Keller. "Mainframes won it, and then mini-computers won it, and then workstations won it, and then PCs won it, and then mobile won it -- the war has been won, so let's start the next battle!"
On an even simpler level, "I think computers are an adventure," he said. "I like to design computers; I'm into the adventure."