Frangipani
This is the body of the email I just sent Tony Dawe.
I will let the forum know if there is a response, or information I can post, regarding the subject:
"You may have noticed there has been some discussion on TSEx about Dr Tony Lewis's LinkedIn comment concerning small LMs and neuromorphic hardware.
His comments, in my opinion, were ambiguous.
He said that both neuromorphic hardware running them and the VSLMs themselves "hold promise".
He also stated that, to his knowledge, BrainChip would be the first to implement this at the Edge.
To me, that reads as though they are still working on it, but it could also be that he, being commercially minded, was saying:
"We've done it, but AKD2000 doesn't exist yet, so it's still just in simulation".
In FactFinder's post about the November 6th private shareholder meeting, he stated the following:
"It has been confirmed by the CEO Sean Hehir in person to myself and others that Brainchip has succeeded in developing and running a number of Large Language Models, LLMs, on AKIDA 2.0 and that AKIDA 3.0 is on track and that its primary purpose will be to focus on running LLM's as its point of technology difference".
Straight off the bat, FactFinder refers to LLMs, not the "very small" or "tiny" LMs that Dr Tony says hold promise.
So either he misquoted Sean, or Sean referenced LLMs.
From the above meeting, it appears that, at the very least, very small LMs are successfully running in simulation on AKIDA 2.0, and this is now public knowledge?
My questions are:
Are VSLMs running successfully on AKIDA 2.0, or are they still ironing out the bugs, making this more of an AKIDA 3.0-focused game?
If they are running successfully, this would be considered quite a huge achievement (being a world first, and considering the current hunger for such technology).
Why hasn't the Company made a proper statement/Tweet, or something, when the information is apparently public knowledge?
It seems to me that it would be in the Company's best interest (as well as ours as shareholders, of course) to at least "tap" the drum and not have to rely on Chinese whispers, etc.?"
Nothing to share yet, unless I want to get nasty..
No, I'm not a Janet Jackson fan
Hi @DingoBorat,
It is a shame your email appears not to have been answered satisfactorily by our company. Hopefully someone at BrainChip will follow up soon, though, with the content of the article below surely sending shock waves through the neuromorphic hardware community right now!
The tech is way over my head, but it looks as if researchers at the Korea Advanced Institute of Science and Technology (KAIST) have found a way to run LLMs on edge devices after all and were also the first in the world to publicly demonstrate and announce their success.
2024-03-06 16:31
KAIST develops human brain-like AI chip
Yoo Hoi-jun, center, a KAIST professor, and Kim Sang-yeob, left, a member of Yoo's research team, demonstrate a neuromorphic AI semiconductor that uses computing technology mimicking the behavior of the human brain at the ICT ministry's headquarters in Sejong, Wednesday. Yonhap
By Baek Byung-yeul
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed an AI semiconductor capable of processing large language model (LLM) data at ultra-high speeds while significantly reducing power consumption, according to the Ministry of Science and ICT.
The ICT ministry said Wednesday that a research team led by professor Yoo Hoi-jun at KAIST's processing-in-memory research center developed the world's first complementary-transformer AI chip using Samsung Electronics' 28-nanometer manufacturing process.
The complementary-transformer AI chip is a neuromorphic computing system that mimics the structure and function of the human brain. Utilizing a deep learning model commonly used in visual data processing, the research team successfully implemented this transformer function, gaining insights into how neurons process information.
This technology, which learns context and meaning by tracking relationships within data, such as words in a sentence, is a source technology for generative AI services like ChatGPT, the ministry said.
The research team demonstrated the functionality of the complementary-transformer AI chip at the ICT ministry's headquarters in Sejong on Wednesday.
Kim Sang-yeob, a member of the research team, conducted various tasks such as sentence summarization, translation and question-and-answer tasks using OpenAI's LLM, GPT-2, on a laptop equipped with a built-in complementary-transformer AI chip, all without requiring an internet connection. As a result, the performance was notably enhanced, with the tasks completed at least three times faster, and in some cases up to nine times faster, compared to running GPT-2 on an internet-connected laptop.
To implement the LLMs typically used in generative AI tasks, a substantial number of graphics processing units (GPUs) and around 250 watts of power are required. However, the KAIST research team managed to implement the language model on a compact AI chip measuring just 4.5 millimeters by 4.5 millimeters.
"Neuromorphic computing is a technology that even companies like IBM and Intel have not been able to implement, and we are proud to be the first in the world to run the LLM with a low-power neuromorphic accelerator," Yoo said.
He predicted this technology could emerge as a core component for on-device AI, facilitating AI functions to be executed within a device even without requiring an internet connection. Due to its capacity to process information within devices, on-device AI offers faster operating speed and lower power consumption compared to cloud-based AI services that rely on network connectivity.
"Recently, with the emergence of generative AI services like ChatGPT and the need for on-device AI, demand and performance requirements for AI chips are rapidly increasing. Our main goal is to develop innovative AI semiconductor solutions that meet these changing market needs. In particular, we aim to focus on research that identifies and provides solutions to additional problems that may arise during the commercialization process," Yoo added.
The research team said this semiconductor uses only 1/625 of the power and is only 1/41 the size of Nvidia's GPU for the same tasks.
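For what it's worth, the article's power claim can be sanity-checked with a quick back-of-the-envelope calculation. This is just a minimal sketch using the figures quoted above (the ~250 W GPU baseline and the 1/625 ratio are the article's claims, not independent measurements):

```python
# Back-of-the-envelope check of the article's power figures:
# ~250 W cited for a GPU setup, and the KAIST chip claimed to use 1/625 of that.
gpu_power_w = 250.0        # GPU power draw cited in the article
power_ratio = 1 / 625      # chip's claimed fraction of GPU power
chip_power_w = gpu_power_w * power_ratio

print(f"Implied chip power: {chip_power_w * 1000:.0f} mW")  # → 400 mW
```

So if both figures hold, the chip would be running LLM inference on roughly 0.4 W, which is the kind of budget edge and battery-powered devices actually have.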
baekby@koreatimes.co.kr