BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Like some of us, me included, said before getting absolutely bashed in this forum for stating simple facts:
If we're not involved in Samsung's 2024 product roadmap, then we're screwed.
No more lead for us. The biggest company that designs and uses its own chipsets is doing neuromorphic now.
Can't wait for the comments. Is someone gonna write a 3-pager haunting and ridiculing me again, or will it just be reported?

Hi @DerAktienDude,

From the articles I've read so far, it would imply that KAIST are using neuromorphic technology, which doesn't necessarily mean they developed that component of it.

Here are three examples.

1.

Screenshot 2024-03-08 at 9.09.36 am.png



2.
Screenshot 2024-03-08 at 9.18.16 am.png

3.

Screenshot 2024-03-08 at 9.35.02 am.png




And then there's also Tony Lewis's previous comment on Linkedin.

Screenshot 2024-03-08 at 9.16.28 am.png
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 36 users

Evermont

Stealth Mode
  • Like
  • Thinking
  • Love
Reactions: 15 users

AARONASX

Holding onto what I've got
From the 9:45 min mark they talk about the new KAIST chip, still calling it "The World's First".



This video was posted just over 20min ago.


Hard to hear and fully understand what he was saying, but from what I took in, KAIST are using some kind of "compression techniques"... mentioning that spiking neural networks have poor accuracy and are only good for toys... obviously calling them poor is just them trying to make their alternative look and sound better, and they don't hold the patents needed for the accuracy they claim is lacking. (Maybe jealous, lol.)

Competition is good for the market. Do they have a chip? Yes, great, whoop-de-do! Do they sell IP to a wider market and work to integrate their technology with others? Probably not yet! Do they have multiple foundries, partners, etc.? I don't think so.

IMO
 
  • Like
  • Fire
Reactions: 16 users

IloveLamp

Top 20
1000013990.jpg
 
  • Like
  • Fire
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hard to hear and fully understand what he was saying, but from what I took in, KAIST are using some kind of "compression techniques"... mentioning that spiking neural networks have poor accuracy and are only good for toys... obviously calling them poor is just them trying to make their alternative look and sound better, and they don't hold the patents needed for the accuracy they claim is lacking. (Maybe jealous, lol.)

Competition is good for the market. Do they have a chip? Yes, great, whoop-de-do! Do they sell IP to a wider market and work to integrate their technology with others? Probably not yet! Do they have multiple foundries, partners, etc.? I don't think so.

IMO
Hi @AARONASX, I think he may be saying "efficient data compression techniques" which just sounds like techniques employed to make the LLM more compact.
 
Last edited:
  • Like
Reactions: 9 users

AARONASX

Holding onto what I've got
Hi @AARONASX, I think he may be saying "efficient data compression techniques", which just sounds like techniques employed to make the LLM more compact.
Thanks Bravo :-D
 
  • Like
Reactions: 8 users
Nice to see Circle8 have us on their site now.



Screenshot_2024-03-08-08-30-12-46_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
  • Like
  • Fire
  • Love
Reactions: 51 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Bummer! This article does seem to indicate that KAIST have developed a neuromorphic system different to ours.

Amongst other things it states: "The research team developed a unique DNN-to-SNN equivalent conversion technique to solve this problem. It precisely controls the spike-firing threshold to improve the accuracy of converting an existing deep neural network (DNN) structure into a spiking neural network (SNN). Regarding this, the research team stated, 'We were able to achieve accuracy at the level of a deep neural network (DNN) while maintaining the energy efficiency of a spiking neural network (SNN).'"



KAIST develops AI semiconductor that resembles the human brain... “The core of ultra-low power and high-speed technology”

Digital Daily | Published 2024-03-06 14:43:34

Sejong = Reporter Chae Seong-oh
Professor Hoi-Jun Yoo of the KAIST Department of Electrical and Electronic Engineering explains the complementary-transformer technology at the Sejong Government Complex on the 6th. [ⓒ Digital Daily]

[Digital Daily, Reporter Chae Seong-oh] The Korea Advanced Institute of Science and Technology (KAIST) announced on the 6th that its researchers have developed, for the first time in the world, an artificial intelligence (AI) semiconductor, the 'complementary-transformer', that can process large language models at ultra-high speed (0.4 seconds) while consuming ultra-low power (400 milliwatts). It was fabricated on Samsung Electronics' 28-nanometer process.

The research team of Professor Hoi-Jun Yoo at the KAIST PIM Semiconductor Research Center and Graduate School of AI Semiconductor succeeded in implementing large language models (LLMs) such as GPT, which normally run on large numbers of GPUs drawing around 250 watts, on a small 4.5 mm AI semiconductor chip at ultra-low power.

In particular, the chip implements the transformer operation using a spiking neural network (SNN), a neuromorphic computing technology that mimics the operation of the human brain. The research, with Dr. Sang-yeop Kim as first author, was presented and demonstrated at the International Solid-State Circuits Conference (ISSCC) held in San Francisco from the 19th to the 23rd of last month.

Existing neuromorphic computing technology is inaccurate compared with convolutional neural networks (CNNs) and has mainly been limited to simple image-classification tasks. The research team raised the accuracy of neuromorphic computing to the level of a CNN and proposed a complementary deep neural network (C-DNN) that can be applied to a variety of applications beyond simple image classification.

Complementary deep neural network technology mixes deep neural networks (DNNs) and spiking neural networks (SNNs) and minimizes power by allocating input data to the different neural networks depending on its magnitude.

Just as the human brain consumes a lot of energy when there is a lot to think about and less energy when there is little to think about, a spiking neural network (SNN) that mimics the brain consumes a lot of power when the input values are large and less power when they are small.

This study demonstrated that ultra-low-power, high-performance on-device AI is possible by applying last year's complementary deep neural network technology to an LLM, and it is significant as the world's first implementation, in the form of an AI semiconductor, of work that had previously been limited to theoretical research.

In particular, the research team focused on the practical scalability of neuromorphic computing and studied whether it could successfully perform advanced language-processing tasks such as sentence generation, translation, and summarization. The biggest challenge in this process is achieving high accuracy in the neuromorphic network. In general, neuromorphic systems have high energy efficiency, but limitations in their learning algorithms tend to make them less accurate on complex tasks, which is a major obstacle for tasks that require high precision and performance, such as large language models.

The research team developed a unique DNN-to-SNN equivalent conversion technique to solve this problem. It precisely controls the spike-firing threshold to improve the accuracy of converting an existing deep neural network (DNN) structure into a spiking neural network (SNN). Regarding this, the research team stated, 'We were able to achieve accuracy at the level of a deep neural network (DNN) while maintaining the energy efficiency of a spiking neural network (SNN).'

The research team said that in the future they plan to expand neuromorphic computing research into application fields beyond language models, while also identifying and resolving issues related to commercialization.

Professor Hoi-Jun Yoo of the KAIST Department of Electrical and Electronic Engineering said, "This research is significant in that it not only solves the power-consumption problem of existing AI semiconductors but also successfully runs an actual giant language model such as GPT-2. Neuromorphic computing is a core technology for the ultra-low-power, high-performance on-device AI that is essential in the AI era, so we will continue related research in the future."

Jeon Young-soo, Director of Information and Communication Industry Policy at the Ministry of Science and ICT, said, "This research outcome is significant in that it confirms that AI semiconductors can develop into neuromorphic computing beyond NPU and PIM. As the importance of AI semiconductors continues to be emphasized, we will actively support the team so that it can continue to produce world-class research results."
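
For anyone trying to picture what "precisely controlling the spike occurrence threshold" actually means, here's a rough toy sketch of rate-coded DNN-to-SNN conversion. To be clear, this is my own illustration under my own assumptions (a ReLU toy layer, a reset-by-subtraction neuron, arbitrary threshold values), not KAIST's published method and not Akida code. It just shows why the firing threshold decides how closely spike counts can approximate the original DNN activations.

Code:
# Illustrative only: toy rate-coded DNN-to-SNN conversion, NOT KAIST's actual method.
# An integrate-and-fire neuron driven by a constant input fires at a rate roughly
# proportional to input/threshold, so tuning the threshold lets spike counts
# approximate the original ReLU activations.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def snn_rate(x, threshold, timesteps=100):
    """Simulate integrate-and-fire neurons for `timesteps` and return rescaled firing rates."""
    v = np.zeros_like(x)               # membrane potentials
    spikes = np.zeros_like(x)          # spike counts
    for _ in range(timesteps):
        v += relu(x)                   # constant input current each step
        fired = v >= threshold
        spikes += fired
        v[fired] -= threshold          # reset by subtraction, common in conversion schemes
    return spikes / timesteps * threshold   # rescale rates back to activation units

rng = np.random.default_rng(0)
activations = rng.normal(size=1000)

# Naive threshold vs. one matched to the largest activation (simple "threshold balancing").
for threshold in (1.0, relu(activations).max()):
    approx = snn_rate(activations, threshold)
    err = np.abs(approx - relu(activations)).mean()
    print(f"threshold={threshold:.2f}  mean |SNN - DNN| gap={err:.4f}")

The matched threshold gives the smaller gap, which is the intuition behind why controlling it carefully matters for accuracy.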


 
  • Like
  • Sad
  • Thinking
Reactions: 23 users

Boab

I wish I could paint like Vincent
Bummer! This article does seem to indicate that KAIST have developed a neuromorphic system different to ours.

Amongst other things it states: "The research team developed a unique DNN-to-SNN equivalent conversion technique to solve this problem. It precisely controls the spike-firing threshold to improve the accuracy of converting an existing deep neural network (DNN) structure into a spiking neural network (SNN). Regarding this, the research team stated, 'We were able to achieve accuracy at the level of a deep neural network (DNN) while maintaining the energy efficiency of a spiking neural network (SNN).'"



KAIST develops AI semiconductor that resembles the human brain... “The core of ultra-low power and high-speed technology”


They've done well but appear to have a long way to go to catch us.
The research team said that in the future they plan to expand neuromorphic computing research into application fields beyond language models, while also identifying and resolving issues related to commercialization.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Diogenese

Top 20
They've been dissecting this thing for years.....
I reckon we are in the S24.... or I'm a monkey's uncle. If I'm wrong, it still validates neuromorphic on mobiles for competitors either way. Great news 🚀🚀🚀
I suppose, without the DNA evidence, we'll just have to wait for the royalties.
 
  • Like
  • Haha
  • Fire
Reactions: 19 users

hotty4040

Regular
I suppose, without the DNA evidence, we'll just have to wait for the royalties.
Dodgy, have you got your head around this KAIST interloper at all, as of this moment in time, TIA

Akida Ballista

>>>>> KAIST, what's that all about for goodness sake <<<<<

hotty...
 
  • Like
  • Haha
Reactions: 5 users

While I love this as an amazing statement for our chipper, I want something more on this post: in brackets or an asterisk below, a notable credit from a peer of the industry, or official recognition of some independent, certified assessment... give me kudos with backed-up credentials as verification. I know this group's research is second to none on the topic, but in my industry there's a common problem with some companies where you start believing your own BS... where are the testimonials, please? 🙏

Qualified rant over….. !

IMG_1291.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

While I love this as an amazing statement for our chipper, I want something more on this post: in brackets or an asterisk below, a notable credit from a peer of the industry, or official recognition of some independent, certified assessment... give me kudos with backed-up credentials as verification. I know this group's research is second to none on the topic, but in my industry there's a common problem with some companies where you start believing your own BS... where are the testimonials, please? 🙏

Qualified rant over….. !

View attachment 58686

Screenshot 2024-03-08 at 12.55.09 pm.png

Screenshot 2024-03-08 at 12.55.37 pm.png
Screenshot 2024-03-08 at 12.55.44 pm.png
Screenshot 2024-03-08 at 12.55.30 pm.png

Screenshot 2024-03-08 at 12.55.57 pm.png
Screenshot 2024-03-08 at 12.56.04 pm.png
Screenshot 2024-03-08 at 12.56.10 pm.png
Screenshot 2024-03-08 at 12.56.17 pm.png
Screenshot 2024-03-08 at 12.56.22 pm.png
Screenshot 2024-03-08 at 12.55.50 pm.png
 
  • Like
  • Love
  • Fire
Reactions: 65 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Screenshot 2024-03-08 at 12.56.37 pm.png
Screenshot 2024-03-08 at 12.57.51 pm.png
Screenshot 2024-03-08 at 12.56.31 pm.png
 
  • Like
  • Love
  • Fire
Reactions: 53 users

HopalongPetrovski

I'm Spartacus!
Yeah, but...🤣

 
  • Haha
  • Like
  • Love
Reactions: 4 users
Whilst this is from Sep 23, I don't recall seeing or reading it.

Worth a read through imo.




Disruptive’s Substack



BrainChip: The Cloud-Free Future is Here
28 SEPT 2023


Introduction
Comprehending a revolutionary concept is not easy. Too often we get intimidated by jargon, struggle to see the core concept, or simply pretend that we are too busy.
That happened to me when I became curious about BrainChip (OTC: BRCHF, ASX: BRN). It doesn't help that Elon Musk's Neuralink is inserting chips into the brains of monkeys or pigs. Such images come to mind when we hear of BrainChip. But no, here you won't learn about such brain-computer interfaces. BrainChip is a company that commercialized a new semiconductor architecture. So far, this might cause some readers to yawn, stop, and search for other distractions. However, I will explain to you in plain language why this matters for you, as a long-term investor.
Let's start with something you have experienced. Recall the last time you purchased a computer. You were asked to choose between different processor brands and versions, then there was the decision to select a graphic card and finally the question on RAM. All these are chips; semiconductors, right? Apart from the sleek marketing message and promises toward the capabilities that come with selecting one over the other, did you really care about comprehending what was in front of you?
What Is the Problem that BrainChip Has Solved?
In its simplest form, here is the elevator pitch: BrainChip invented and patented a semiconductor architecture that lowers chips' energy consumption by orders of magnitude.
At this stage, I will not try to cover all the disclaimers and conditions that come naturally with such a revolutionary statement. Just let it sink in. Does it matter to me if the energy consumption of my PC gets lowered significantly? Probably not. But when we leave our home turf and think of other concepts like the cloud or data centers, we arrive where the action is. Did you know that these boiler rooms of the Internet consume more than 1% of all energy worldwide¹? And finally, mentioning the elephant in the room, AI: it costs a company about 100x more to answer a question you type into ChatGPT or Bard in comparison to a Google search.
Now, lifting off and getting the helicopter view: there are huge efficiency and performance gains to be had if we can find a way to realize what BrainChip's new chip architecture is promising.
The Details
And here’s where most will get lost: technical jargon. One needs to expand their vocabulary to understand where the investment thesis is. These are concepts not visible or under-appreciated in our daily lives. So, let's dive in!
This above-mentioned boiler room of the internet is not running on steam anymore. We are electrified. Electricity is needed to allow semiconductors to come up with endless sequences of zeros and ones. The plumbing, to stay with the analogy, is what makes electrons take different pathways through the circuitry.
Chip giants like Intel, AMD, and NVIDIA have optimized their designs to achieve a truly astonishing number of computations. Progress has been relentless and it seems no law of physics can stop them. The most advanced chips used in today's desktops or data centers have one thing in common: they must conduct an incomprehensible number of calculations. Electrons race through their circuitry and generate so much heat that data centers spend more on air-conditioning than on the actual semiconductor hardware.¹
Here you might say that over the last years, your phone has seldom grown warm and can often do similar calculations on a smaller scale. This is the achievement of a small UK company, now famous and known on the NASDAQ by the ticker ARM. More than 30 years ago, they came up with a novel energy-efficient chip architecture. Today, virtually all phones use this arm design.²
Vocabulary
At this stage, let's get accustomed to a few technical terms required to understand the investment case I am preparing for BrainChip:
- CPU: Central Processing Unit, used for most day-to-day calculations
- GPU: Graphic Processing Unit, used to render graphics and A.I.-type calculations
- Instructions Set: Commands (vocabulary) the processor understands
- Architecture: Circuitry depending on the complexity of this set of instructions
- X86: Complex instruction set, practically unchanged since the 1970s
- Sequential Processing: A CPU starts and finishes a calculation in sequential order
- Parallel Processing: Runs calculations in parallel
- Core: A unit that conducts the individual calculation
Transition
Why is this transition still ongoing, considering the obvious advantages coming with a more energy-efficient architecture? Processors require a set of instructions to do their calculations. Software and hardware need to speak the same language to execute these instructions. Based on a specific task, like running a spreadsheet (CPU) or generating a 3D animation (GPU), different chips and sets of instructions give the best results.
We will always remain in a certain flux: one technology finds more adoption, and the other gets scaled down. Initially, we fine-tuned the X86 architecture and added more and more cores. This kept up with the demands to a degree. Data centers expanded with parallel computing designs. This became more and more unsustainable and expensive. With the widespread introduction of generative AI (text, images, music, and code), we are experiencing a watershed moment right now.
For home computers, Apple is leading the field in bringing arm architecture into our homes. Data centers can't change their hardware as nimbly as we consumers can; for the near future they are stuck with an expensive fleet of legacy X86-style hardware.


Data
Don’t you sometimes wonder where all this data resides? Sure, we can see a memory stick or a hard drive. But the bulk of the world's data resides in the cloud, aka data centers.
The amount of data generated by humans writing a message, filling out a form or saving a picture is less than the data volume generated when machines have exchanges with each other. The gap is growing exponentially.³ You might wonder why machines decide to generate data. No, they are not sentient – not so far at least. This data volume is generated by sensors, as well as from simulations, machine learning, and blockchain. Obvious sensors like temperature probes or traffic cameras might come to mind. But we're getting side-tracked by attempting to understand each of these autonomous data sources: our world is awash in non-human-generated synthetic data. All this data is backed up to central servers where programs run operations to make sense of it.

The Edge
This non-human data is generated to a large degree "on the edge". Devices/sensors generate data, flowing to the cloud for interpretation.⁴ This is what is causing the increase in data, inundating our internet with traffic and increasing the size of data centers.

View attachment 58660

BrainChip
Here comes our company. A micro-cap. A company with just a hundred employees and no turnover to speak of. Investor circles call these companies “story stocks.” As they have no turnover to show for it, they convince via their story. That should give the context. Nothing is certain in this domain. Risk is abundant. Success can take a generation.
Arm proved its design & functionality 20 – 30 years ago without instant success. It required the widespread adoption of mobile phone-computer hybrids, commonly referred to now just as phones. I want to provide the reasoning why this will repeat with BrainChip: we are currently experiencing an equivalent catalytic event, as evidenced by the exponential generation of non-human data volumes.
Their Secret Sauce
Understanding how this start-up-like company can find a solution to the global data dilemma requires additional vocabulary.
- Neuromorphic: Brain-like semiconductor architecture
- IoT: Internet of Things, the non-human data tsunami minions
- Event-Based: Running a calculation when needed, when an event occurs
- Latency: The time it takes between exchanges
- Neural Network: A machine-learning model simulating how our brain works
- Spike: Data burst that occurs when an event is recorded
- Convolution: mathematical operation to extract features from images or signals
You could say that they patented the operating model of our brain. This is a lot to take in. We are talking about a set of instructions. BrainChip is not selling a synthetic brain with neurons and synapses. Their founder was an early proponent of what is now called neuromorphic engineering. For many years the concept was too abstract to warrant commercial attention.
Intriguing? Here is the catch: at this stage, most potential investors will get lost further researching what has been achieved. To a large degree, this is the reason why the community surrounding BrainChip is regarded with suspicion. It is simply difficult to wrap your head/brain around this concept.
In Simple Terms
Our brain operates on a meager 20 watts, a light bulb's worth of energy consumption. This is the case when playing chess or daydreaming. Evolutionarily, we have achieved something that has not been replicated by any commercial chip architecture. It’s because our brain works in an event-based manner. Example needed?
Imagine looking at a blank sheet of A4 paper that has a dot in its center. Humans have no difficulty identifying this dot. In comparison, a camera combined with image analysis will analyze each pixel, line by line, to determine that a certain location has a higher density of contrast. Calculations will then determine the event (localization of the dot). The software can’t differentiate between the data (the white part of the sheet of paper) and the significance of the dot (the black part). It is just stoically analyzing the data from top to bottom.
The neurons of our brain will only pass on information when an event occurs. This way, it can remain in a certain way dormant and only consume energy during an event!
This concept has fascinated scientists and culminated in a neuromorphic design logic. The founder of BrainChip noticed the commercial value and patented these advances. They run what’s called a spiking neural network (SNN) on their chip.
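
To make the dot-on-paper example concrete, here's a small sketch of my own (generic Python/NumPy, not Akida code) contrasting a frame-based scan that touches every pixel with an event-based approach that only processes the pixels that actually changed.

Code:
# Illustration only: frame-based vs. event-based processing of a mostly blank image.
# The point is simply that event-based systems do work proportional to activity,
# not to resolution. The resolution and pixel values are made up for the example.
import numpy as np

H, W = 480, 640
frame_prev = np.zeros((H, W), dtype=np.uint8)    # the blank sheet of "paper"
frame_now = frame_prev.copy()
frame_now[240, 320] = 255                         # a single dot appears

# Frame-based: examine every pixel of the new frame to find the dot.
ops_frame_based = frame_now.size                  # 307,200 pixel reads

# Event-based: only pixels whose value changed emit an "event" (a spike).
events = np.argwhere(frame_now != frame_prev)     # -> [[240, 320]]
ops_event_based = len(events)                     # 1 event to process

print(f"frame-based work: {ops_frame_based} pixel operations")
print(f"event-based work: {ops_event_based} event(s) at {events.tolist()}")

When nothing changes, the event-based side does essentially no work at all, which is the "dormant until an event occurs" behaviour described above.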
Why Now?
Investors wait until there is a need to fund new innovations. This inflection point is now. We are surrounded by sensors in cars, outdoors, homes, and a multitude of smart devices. Examples needed?
- Driver-Alertness: Detects if a driver is losing attention
- Crowd Management: Build-up of city traffic, or crowds during an event
- Biometric Recognition: Border control and traveling
- Alexa/Siri: Low latency keyword detection to complex questions
- Hearing Aids: Discern and amplify sounds selectively to understand what was said
- Vital Signs: Wearables support monitoring and preventive medicine
- Industrial Predictive Maintenance: Alerts get sent before equipment or infrastructure breaks down
These advances have already been implemented. And we will soon have fully autonomous, self-driving cars/taxis. The amount of additional data that will be transferred with these applications will inundate the internet. Unless we can stop these sensors from communicating their data to the cloud.
Neural Networks
This data (consisting of video/images, sound, and other measurable sensor results) needs to be classified and converted into a neural network. Currently, these data streams are analyzed in a process called convolution, resulting in a Convolutional Neural Network (CNN). This is achieved in a central location utilizing top-of-the-range GPUs. The process can take a year, cost millions, and is reliant on high-quality human-screened data. Once completed, the neural network is installed on the final device (car, hearing aid, or sensor). But things stop there. This neural network is a one-trick pony: it can't learn from its observations. If something changes, everything needs to be recalibrated at HQ.
BrainChip also generates a neural network, but it is a Spiking Neural Network (SNN). It can be trained quickly on a much smaller number of lower-quality, non-human-validated samples. On top of that, once the SNN has been established, it keeps learning and continuously updates its model. How is this possible? SNNs are not living, right? Unlike a CNN's, BrainChip's SNN model parameters (called weights and biases) are not fixed. These values get changed in the chip's memory when the SNN "learns."
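
As a rough picture of what "weights that keep changing in the chip's memory" could look like, here's a tiny leaky integrate-and-fire neuron with a simple spike-driven weight update. This is a generic, textbook-style sketch under my own assumptions (the learning rate, threshold, leak and Hebbian-style rule are all made up), not BrainChip's proprietary on-chip learning algorithm.

Code:
# Generic sketch of event-driven, on-device learning: a leaky integrate-and-fire (LIF)
# neuron whose input weights are nudged whenever it fires together with active inputs.
# Purely illustrative; NOT BrainChip's actual learning rule.
import numpy as np

rng = np.random.default_rng(1)
n_inputs = 8
weights = rng.uniform(0.0, 0.5, n_inputs)    # synaptic weights stored "on chip"
v, threshold, leak, lr = 0.0, 1.0, 0.9, 0.05

# A repeating input pattern: inputs 0-3 tend to spike together, the rest stay silent.
pattern = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)

for step in range(50):
    in_spikes = pattern * (rng.random(n_inputs) < 0.8)   # noisy presentation of the pattern
    v = leak * v + weights @ in_spikes                    # integrate weighted input spikes
    if v >= threshold:                                    # the neuron fires an output spike
        v = 0.0
        # Hebbian-style update: strengthen synapses that contributed, weaken the rest.
        weights = np.clip(weights + lr * (in_spikes - 0.2), 0.0, 1.0)

print("learned weights:", np.round(weights, 2))
# Inputs 0-3 (the repeated pattern) end up with clearly larger weights than inputs 4-7,
# and the update happened entirely "on device", one spike event at a time.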
How Can Brainchip Make Money?
They are the only company with a commercial neuromorphic chip architecture and corresponding patent-protected intellectual property (IP). Like arm’s revenue model, BrainChip sells its IP to anyone in the business of chip design. They are already a:
- member of arm’s A.I. partnership program
- partner with Intel’s foundry services
- selected company by PROPHESEE, the global leader in IoT machine vision
- selected company by SiFive: open-source AI chip design
Additionally, they work with enablement partners to provide a vertically complete solution to simplify evaluation and implementation. Completing the offering are integration partners offering ready-to-use system-on-chip (SoC) products.
As their business is essentially licensing software, they have very high margins. Currently, arm is valued at about 50 billion dollars. I see it as entirely feasible that BrainChip will reach a comparable valuation when its design architecture gets adopted. They are currently valued at about 200 million dollars. Where else can you get such a 250x potential?
View attachment 58661
The Solution
With BrainChip’s processor, called Akida, all computations are performed on the chip. It requires no internet connection at all.
- It operates at a fraction of the 20 Watts that our brain requires. So, no cooling or main power sources are required; a simple battery will do!
- All data collected remains where it was observed/generated. No more privacy worries about hacked cloud servers. The data does not leave the sensor chip.
In Summary
Data processing will remain in constant flux and hardware updates are costly. Even so, we presently experience the confluence of multiple evolutions:
- X86 architecture will be phased out as arm’s processors have proven faster and more energy efficient. Soon, every computer will run on less complex arm-like instruction architecture
- Chronological data from IoT devices, demand forecasting or image processing will be enabled by BrainChip’s Akida in real-time and cloud-independent
- Governments won’t allow Internet giants to amass data or cross-border transfer of data to central server farms. Regulation will benefit technologies that can function without the need for data transfers
I haven't seen this article before. It is hands down one of the best descriptions of the problem BrainChip solves, how it solves it, and why we are at the inflection point where the problem must be solved. Might have to send it to my reluctant family members, to whom I have so far been unsuccessfully trying to explain BrainChip's potential.
Thank you for sharing this @Fullmoonfever it is a fantastic find 🤩
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Boab

I wish I could paint like Vincent
Re my suggestion to have a YouTube channel aimed at electronics enthusiasts, Tony Dawe said: "Our products are not intended for sale to a broad market of retail customers. They are intended to be sold primarily to product developers, tech companies and chip manufacturers, who typically have a fairly good understanding of Edge AI applications and the requirements for ultra-low power consumption along with improved performance at the Edge."

So it's not part of the business plan, which is kind of a shame, and my urgings to get Silicon Chip Magazine on board won't happen. With the release of the Edge Box I thought a broad market would open up. I don't know, but maybe BRN wishes to keep control of usage and collaboration for the present, given our capacity to cope and our desire to be an IP licensing seller.

I don't suppose we will ever see sets of Akida cufflinks or earrings for sale at the next AGM.:cry:
The Edge Box could still be a good little earner, as I think it was Sean who said customers wanting more than 5-10 boxes would be directed to VVDN for their purchases.
 
  • Like
  • Fire
  • Thinking
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers ,

Got a whiff of this announcement from poster Neomax at the other site.

AMERICAN GOV guaranteeing chip space & production capacity for the military (protect the boundaries)... and domestic dildonics (keep the population titillated).

Mar 6, 2024

Intel Stands to Win $3.5 Billion to Produce Chips for Military

Mackenzie Hawkins, Bloomberg News

(Bloomberg) -- The US government is poised to invest $3.5 billion in Intel Corp. so the chipmaker can produce advanced semiconductors for military and intelligence programs, according to congressional aides.
The money, tucked into a fast-moving spending bill the House passed on Wednesday, would establish Intel as a dominant domestic player in the lucrative defense market.
The funding, which would run over three years, is for the “secure enclave” program. It comes from a broader $39 billion Chips and Science Act grant pool that’s designed to convince chipmakers to produce semiconductors in the US. More than 600 companies have expressed interest in the funding.
The Wall Street Journal reported in November that Intel was in talks for between $3 billion and $4 billion in government subsidies from the program.
Intel is set to receive a total Chips Act incentive package of more than $10 billion that includes both grants and loans, Bloomberg has reported. The company declined to comment on the pending $3.5 billion investment.

Read More: Intel in Talks for Over $10 Billion in Chips Act Incentives
“We are still reviewing the effect of the appropriations text on the program,” the Commerce Department said in a statement. “The department looks forward to continuing to work with Congress on implementing the Chips and Science Act in a manner that promotes our economic and national security.”
The Senate is expected to pass the legislation by a Saturday deadline.
The funding comes as Commerce prepares to announce multi-billion-dollar awards to advanced chipmakers like Intel and Asian rivals Taiwan Semiconductor Manufacturing Co. and Samsung Electronics Co., all with the goal of building domestic manufacturing capabilities.
The agency has already announced three grants, including a smaller national-security focused award to the American subsidiary of BAE Systems Plc and a $1.5 billion grant to GlobalFoundries, which produces older-generation semiconductors.
Senators Maria Cantwell, who chairs the Commerce Committee, and Roger Wicker and Jack Reed, the top Republican and Democrat on the Armed Services Committee, raised concerns last year about the decision to provide an award to one company to build a secure enclave at a higher cost than what might otherwise be required to secure those chips, the aides said.
The initiative is separate from an existing Defense Department program that identifies secure facilities to supply military chips, including from firms like GlobalFoundries and IBM. The Pentagon has also separately awarded $238 million to eight regional technology hubs focused on semiconductors with defense applications.
©2024 Bloomberg L.P.

Regards ,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 31 users

skutza

Regular
Last edited:
  • Like
  • Sad
  • Fire
Reactions: 5 users