BRN Discussion Ongoing

Frangipani

Regular
This is the body of the email I just sent Tony Dawe.
I will let the forum know if there is a response or information I can post regarding the subject 👍

"You may have noticed, there has been some discussion on TSEx about Dr Tony Lewis's LinkedIn comment, concerning small LMs and neuromorphic hardware.

His comments, in my opinion, were ambiguous.
He said both, neuromorphic hardware using them and the VSLMs themselves "hold promise".

Also stating, to his knowledge, BrainChip would be the first to implement this, at the Edge.

To me, he said they are still working on it, but it could also be, that he, being commercially minded, was saying..
"We've done it, but AKD2000 doesn't exist yet, so it's still just in simulation".


In FactFinder's post about the November 6th private shareholder meeting, he stated the following.

"It has been confirmed by the CEO Sean Hehir in person to myself and others that Brainchip has succeeded in developing and running a number of Large Language Models, LLMs, on AKIDA 2.0 and that AKIDA 3.0 is on track and that its primary purpose will be to focus on running LLM's as its point of technology difference".

Straight off the bat, FactFinder refers to LLMs, not the "very small" or "tiny" LMs that Dr Tony says hold promise.
So either he misquoted Sean, or Sean referenced LLMs.

From the above meeting, it appears that, at the very least, very small LMs are successfully running in simulation on AKIDA 2.0, and that this is now public knowledge?

My questions are..

Are VSLMs running successfully on AKIDA 2.0, or are they still ironing out the bugs, making this more of an AKIDA 3.0 game?

If they are running successfully, this would be considered quite a huge achievement (being a world first, and considering the current hunger for such technology).
Why hasn't the Company made a proper statement, tweet or something, when the information is apparently public knowledge?

It seems to me that it would be in the Company's best interest (as well as us shareholders', of course) to at least "tap" the drum, rather than have to rely on Chinese whispers etc.?"


Nothing to share yet, unless I want to get nasty..



No, I'm not a Janet Jackson fan 😛





Hi @DingoBorat,

It is a shame your email appears not to have been responded to satisfactorily by our company. Hopefully someone at Brainchip will follow up soon, though, with the content of the article below surely sending shock waves through the neuromorphic hardware community right now!

The tech is way over my head, but it looks as if researchers at the Korea Advanced Institute of Science and Technology (KAIST) have found a way to run LLMs on edge devices after all and were also the first in the world to publicly demonstrate and announce their success.



Business

2024-03-06 16:31

KAIST develops human brain-like AI chip

Yoo Hoi-jun, center, a KAIST professor, and Kim Sang-yeob, left, a member of Yoo's research team, demonstrate a neuromorphic AI semiconductor that uses computing technology mimicking the behavior of the human brain at the ICT ministry's headquarters in Sejong, Wednesday. Yonhap



By Baek Byung-yeul

Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed an AI semiconductor capable of processing large language model (LLM) data at ultra-high speeds while significantly reducing power consumption, according to the Ministry of Science and ICT.

The ICT ministry said Wednesday that a research team led by professor Yoo Hoi-jun at KAIST's processing-in-memory research center developed the world's first complementary-transformer AI chip using Samsung Electronics' 28-nanometer manufacturing process.


The complementary-transformer AI chip is a neuromorphic computing system that mimics the structure and function of the human brain. Utilizing a deep learning model commonly used in visual data processing, the research team successfully implemented this transformer function, gaining insights into how neurons process information.

This technology, which learns context and meaning by tracking relationships within data, such as words in a sentence, is a source technology for generative AI services like ChatGPT, the ministry said.

The research team demonstrated the functionality of the complementary-transformer AI chip at the ICT ministry's headquarters in Sejong on Wednesday.

Kim Sang-yeob, a member of the research team, conducted various tasks such as sentence summarization, translation and question-and-answer tasks using OpenAI's LLM, GPT-2, on a laptop equipped with a built-in complementary-transformer AI chip, all without requiring an internet connection. As a result, the performance was notably enhanced, with the tasks completed at least three times faster, and in some cases up to nine times faster, compared to running GPT-2 on an internet-connected laptop.

To implement LLMs typically utilized in generative AI tasks, a substantial number of graphic processing units (GPUs) and 250 watts of power are typically required. However, the KAIST research team managed to implement the language model using a compact AI chip measuring just 4.5 millimeters by 4.5 millimeters.

"Neuromorphic computing is a technology that even companies like IBM and Intel have not been able to implement, and we are proud to be the first in the world to run the LLM with a low-power neuromorphic accelerator," Yoo said.

He predicted this technology could emerge as a core component for on-device AI, facilitating AI functions to be executed within a device even without requiring an internet connection. Due to its capacity to process information within devices, on-device AI offers faster operating speed and lower power consumption compared to cloud-based AI services that rely on network connectivity.

"Recently, with the emergence of generative AI services like ChatGPT and the need for on-device AI, demand and performance requirements for AI chips are rapidly increasing. Our main goal is to develop innovative AI semiconductor solutions that meet these changing market needs. In particular, we aim to focus on research that identifies and provides solutions to additional problems that may arise during the commercialization process," Yoo added.

The research team said this semiconductor uses only 1/625 of the power and is only 1/41 the size of Nvidia's GPU for the same tasks.
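
(Taking the article's figures at face value, and assuming the 1/625 ratio applies to the 250-watt GPU baseline quoted above, the arithmetic works out to 250 W / 625 = 0.4 W for the KAIST chip on the same tasks.)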

Baek Byung-yeul
baekby@koreatimes.co.kr

 
  • Like
  • Thinking
  • Sad
Reactions: 20 users

Diogenese

Top 20
  • Haha
Reactions: 6 users


"Neuromorphic computing is a technology that even companies like IBM and Intel have not been able to implement, and we are proud to be the first in the world to run the LLM with a low-power neuromorphic accelerator," Yoo said.

What's significant here is that they are talking about running GPT-2 (which, although not the current GPT-4, is still a big deal) and not the small-to-tiny language models that Tony has said we are working on..

They obviously already have a chip, but they're saying it's still research and we don't know what their commercialisation plans are..

Their claim is bold, and one BrainChip should refute, if they are in a position to do so.
 
  • Like
  • Thinking
  • Fire
Reactions: 11 users
I guess he doesn't think BrainChip has reached the "Tipping point" yet then? 🤔

And he has an unexplainable "new" ethos?



He will do much better for us over there, if he can garner the support and following he once had.

There are many more eyes there that will notice, if he can manage "Top Rated" posts.

It's a good thing he's changed his approach too, as the smarter trolls there will tear shreds out of his commentary if he uses information the way he did here.

However, personally, I think BrainChip is now beyond the influence of share forums and has reached the tipping point, as far as global recognition is concerned anyway..

We are on the world stage now and are rapidly moving from speculative grade to investment grade.

A couple of new IP deals would seal that opinion..

Any potential investors who find BRN an attractive "punt", influenced by reading share-forum posts, are just fortunate, in my opinion.
 
  • Like
  • Love
  • Thinking
Reactions: 16 users

skutza

Regular
Now I'm not sure if this has been discussed before, as the thesis is a little old, but it was a very interesting read and gave an insight into how AKIDA can be used.

(I also liked this part; HP certainly know about us :)) "Finally, our work on AKIDA was the only project selected among 400 summer intern projects at Hewlett Packard Labs to win the 2020 Best-in-Class award."

 
  • Like
  • Love
  • Fire
Reactions: 11 users

Frangipani

Regular
What's significant here is that they are talking about running GPT-2 (which, although not the current GPT-4, is still a big deal) and not the small-to-tiny language models that Tony has said we are working on..

They obviously already have a chip, but they're saying it's still research and we don't know what their commercialisation plans are..

Their claim is bold, and one BrainChip should refute, if they are in a position to do so.

Exactly.
And finally answer your legitimate question of whether Sean Hehir did indeed tell select participants of that by-invitation-only shareholder meeting in Sydney in November that Brainchip had succeeded in developing and running a number of LLMs on Akida 2.0 (as claimed by a poster here on TSE), and, if true, why the remaining shareholders (the vast majority) have still not been informed about this amazing breakthrough via official channels four months later.

I also wonder why none of the other attendees of that Sydney gathering have so far shared their recollection of what Sean Hehir actually said. 🤔
 
  • Like
Reactions: 6 users

Diogenese

Top 20
"Neuromorphic computing is a technology that even companies like IBM and Intel have not been able to implement, and we are proud to be the first in the world to run the LLM with a low-power neuromorphic accelerator," Yoo said.

What's significant here is that they are talking about running GPT-2 (which, although not the current GPT-4, is still a big deal) and not the small-to-tiny language models that Tony has said we are working on..

They obviously already have a chip, but they're saying it's still research and we don't know what their commercialisation plans are..

Their claim is bold, and one BrainChip should refute, if they are in a position to do so.

Just to keep things in perspective:


A Short History Of ChatGPT: How We Got To Where We Are Today (forbes.com)

https://www.forbes.com/sites/bernar...we-got-to-where-we-are-today/?sh=2e2b79c2674f

GPT-1, the model that was introduced in June 2018, was the first iteration of the GPT (generative pre-trained transformer) series and consisted of 117 million parameters. This set the foundational architecture for ChatGPT as we know it today. GPT-1 demonstrated the power of unsupervised learning in language understanding tasks, using books as training data to predict the next word in a sentence.

GPT-2, which was released in February 2019, represented a significant upgrade with 1.5 billion parameters. It showcased a dramatic improvement in text generation capabilities and produced coherent, multi-paragraph text. But due to its potential misuse, GPT-2 wasn't initially released to the public. The model was eventually launched in November 2019 after OpenAI conducted a staged rollout to study and mitigate potential risks.

GPT-3 was a huge leap forward in June 2020. This model was trained on a staggering 175 billion parameters. Its advanced text-generation capabilities led to widespread use in various applications, from drafting emails and writing articles to creating poetry and even generating programming code. It also demonstrated an ability to answer factual questions and translate between languages.

When GPT-3 launched, it marked a pivotal moment when the world started acknowledging this groundbreaking technology. Although the models had been in existence for a few years, it was with GPT-3 that individuals had the opportunity to interact with ChatGPT directly, ask it questions, and receive comprehensive and practical responses. When people were able to interact directly with the LLM like this, it became clear just how impactful this technology would become.

GPT-4, the latest iteration, continues this trend of exponential improvement, with changes like:
● Improved model alignment — the ability to follow user intention
● Lower likelihood of generating offensive or dangerous output
● Increased factual accuracy
● Better steerability — the ability to change behavior according to user requests
● Internet connectivity – the latest feature includes the ability to search the Internet in real-time

Each milestone brings us closer to a future where AI seamlessly integrates into our daily lives, enhancing our productivity, creativity, and communication.

Here are a couple of KAIST patent applications:

US2023098672A1 ENERGY-EFFICIENT RETRAINING METHOD OF GENERATIVE NEURAL NETWORK FOR DOMAIN-SPECIFIC OPTIMIZATION (filed 2021-09-24)

YOO HOI JUN [KR]; KIM SO YEON [KR]



the present invention provides an energy-efficient retraining method of a generative neural network for domain-specific optimization capable of selecting only some layers of a previously trained generative neural network, i.e. only layers that play a key role in improving retraining performance, at the time of retraining of the generative neural network, and retraining only the selected layers, whereby it is possible to greatly reduce operation burden while maintaining the existing performance.

the mobile device maintains original weights without weight update for unselected layers, after selecting the k continuous layers, does not perform even back propagation for unselected layers before a first one of the selected layers, performs forward propagation in only a first epoch of retraining, and reuses a result of the forward propagation of the first epoch in repeated retraining epochs thereafter.
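
As a rough illustration of what that claim seems to describe (a hypothetical PyTorch sketch, not KAIST's code; the toy model, layer choice and loss are invented for illustration):

import torch
import torch.nn as nn

# Stand-in "pretrained" network of 8 layers; only k contiguous layers get retrained.
model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(8)])
k, start = 2, 5                               # the "selected" layers: 5 and 6
for i, layer in enumerate(model):
    selected = start <= i < start + k
    for p in layer.parameters():
        p.requires_grad = selected            # unselected layers keep their original weights

# Back-propagation never reaches the layers before the first selected one,
# so their forward output can be computed once and reused in every later epoch.
prefix, head = model[:start], model[start:]
x = torch.randn(32, 64)                       # dummy retraining batch
with torch.no_grad():
    cached = prefix(x)                        # first-epoch forward pass, computed once

opt = torch.optim.SGD([p for p in head.parameters() if p.requires_grad], lr=1e-2)
for epoch in range(3):
    out = head(cached)                        # reuses the cached forward result
    loss = out.pow(2).mean()                  # placeholder loss for the sketch
    opt.zero_grad()
    loss.backward()                           # gradients flow only into the k selected layers
    opt.step()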



US2023072432A1 APPARATUS AND METHOD FOR ACCELERATING DEEP NEURAL NETWORK LEARNING FOR DEEP REINFORCEMENT LEARNING (filed 2021-08-31)



Provided is a deep neural network (DNN) learning accelerating apparatus for deep reinforcement learning, the apparatus including: a DNN operation core configured to perform DNN learning for the deep reinforcement learning; and a weight training unit configured to train a weight parameter to accelerate the DNN learning and transmit it to the DNN operation core, the weight training unit including: a neural network weight memory storing the weight parameter; a neural network pruning unit configured to store a sparse weight pattern generated as a result of performing the weight pruning based on the weight parameter; and a weight prefetcher configured to select/align only pieces of weight data of which values are not zero (0) from the neural network weight memory using the sparse weight pattern and transmit the pieces of weight data of which the values are not zero to the DNN operation core.
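
The zero-skipping idea in that claim can be shown in plain Python (again a toy illustration only, assuming nothing about the actual hardware):

import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 8))
w[np.abs(w) < 1.0] = 0.0                      # "pruning": small weights become zero
pattern = w != 0                              # the stored sparse weight pattern

def prefetch(weights, pattern):
    # Select/align only the nonzero weights and their positions,
    # which is roughly what the claimed weight prefetcher does.
    idx = np.argwhere(pattern)
    return idx, weights[pattern]

idx, vals = prefetch(w, pattern)
x = rng.standard_normal(8)
y = np.zeros(8)
for (r, c), v in zip(idx, vals):              # the compute core sees only nonzero terms
    y[r] += v * x[c]
assert np.allclose(y, w @ x)                  # same result as the dense product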

While this is an impressive achievement by KAIST,

A. GPT-2 is (relatively) small beer.
B. I didn't see any of our secret sauce.
 
  • Like
  • Fire
  • Wow
Reactions: 26 users
So it sounds like you don't think their tech is/will be much chop in other areas then..

And calling GPT-2 an LLM isn't quite accurate, in the context that GPT-3 alone has over a hundred times the parameters (1.5 billion vs. 175 billion).

The big question, which we may not get the answer to...

Is...
How do the state-of-the-art "small/tiny" language models that BrainChip have been working on, and are "possibly" running in AKIDA 2.0 simulation, compare to GPT-2?
 
  • Like
  • Thinking
  • Fire
Reactions: 7 users

BrainShit

Regular
  • Like
  • Fire
Reactions: 13 users

Frangipani

Regular
  • Like
  • Fire
Reactions: 9 users

BrainShit

Regular
BrainChip's #neuromorphic tech in Lower Orbit! Akida's journey unfolds, bringing groundbreaking possibilities on earth and beyond. Stay tuned as the story continues to unfold!
 

  • Like
  • Love
  • Fire
Reactions: 49 users
Now we know why Peter retired.

Hopkins engineers collaborate with ChatGPT4 to design brain-inspired chips | Hub (jhu.edu)

https://hub.jhu.edu/2024/03/04/chatgpt4-brain-inspired-chips/

HOPKINS ENGINEERS COLLABORATE WITH CHATGPT4 TO DESIGN BRAIN-INSPIRED CHIPS

Systems could power energy-efficient, real-time machine intelligence for next-generation autonomous vehicles, robots (2024-03-03)



Through step-by-step prompts to ChatGPT4, starting with mimicking a single biological neuron and then linking more to form a network, they generated a full chip design that could be fabricated.
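
For anyone curious what "mimicking a single biological neuron and then linking more to form a network" might look like in code, here is a minimal leaky integrate-and-fire sketch (purely illustrative; nothing here is from the JHU work, and all parameters are invented):

import numpy as np

class LIFNeuron:
    # Leaky integrate-and-fire: the membrane potential decays toward rest,
    # integrates input current, and emits a spike on crossing the threshold.
    def __init__(self, tau=20.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
        self.tau, self.v_rest, self.v_thresh, self.dt = tau, v_rest, v_thresh, dt
        self.v = v_rest

    def step(self, current):
        self.v += (self.v_rest - self.v + current) * self.dt / self.tau
        if self.v >= self.v_thresh:
            self.v = self.v_rest              # reset after firing
            return 1
        return 0

# "Linking more to form a network": spikes from a small input layer become
# weighted current into a single output neuron.
rng = np.random.default_rng(0)
layer = [LIFNeuron() for _ in range(4)]
w_in = rng.uniform(0.8, 1.6, size=4)          # drive strengths for the input layer
w_out = rng.uniform(2.0, 4.0, size=4)         # chosen large so the output neuron can fire
out = LIFNeuron(tau=10.0)
counts, out_count = [0] * 4, 0
for t in range(200):
    spikes = [n.step(w * 1.2) for n, w in zip(layer, w_in)]
    counts = [c + s for c, s in zip(counts, spikes)]
    out_count += out.step(float(np.dot(w_out, spikes)))
print(counts, out_count)                      # stronger drive -> higher firing rate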
I actually sent an email to the Company a couple or so months ago (after someone here posted an article about a company that had developed an enhanced electric motor design through the use of generative A.I.), asking whether they were making use of generative A.I. to advance developments and problem-solve (fearing that pride might make them avoid, or not consider, this).

The reply was along the lines that they were using whatever "tools" were available.

One of the things that concerns me is that generative A.I., when appropriately directed at a specific task, can achieve things that our brightest minds may not have actually "thought" of.

AlphaGo, the A.I. developed to play Go against human players (considered one of the hardest and oldest board games), came up with game strategies that humans hadn't come up with in the roughly 4,000 years the game had been played.
(It has been said that there are more possible combinations of moves than atoms in the Universe.)

If you look at the way Sora (OpenAI) is able to generate visuals from text, it is just incredible.
Imagine future versions, where whole books by your favourite author can be reproduced as a film, trimmed to whatever length you like.
Want to watch it again, but with maybe a "darker" theme? Just prompt it.
In fact, just redoing it is likely to produce a different result (watch the same film again, but not..).
Turn the Texas Chainsaw Massacre into a children's fairytale? Easy.. (well, maybe 🤔..)

It doesn't surprise me that these tools can produce this kind of thing; even patents can be easily circumvented.

Sorry, that's my rant for this early morning..

That's why sealing deals is so important now.

The Technological Clock's hands are spinning faster than ever before.
At some point, the hands will become irrelevant.
 
  • Like
  • Thinking
  • Love
Reactions: 18 users
It is accurate, nevertheless, as all members of OpenAI's GPT family qualify as LLMs.



Fair enough, but I still disagree 😛

The definition of something with changing relative context cannot remain constant, in my opinion.

Not even sure if that made sense, but it's past even my bedtime...
 
  • Haha
  • Like
Reactions: 5 users

Frangipani

Regular
BrainChip's #neuromorphic tech in Lower Orbit! Akida's journey unfolds, bringing groundbreaking possibilities on earth and beyond. Stay tuned as the story continues to unfold!

Hi BrainShit,

I am just reposting the picture from the X post you shared, since it only appeared as a thumbnail attachment and people who don't click on it may not even notice that the Brainchip synapse symbol in it is actually stylised as a satellite! Love it! 😍

Whoever thought of this, and of the catchy phrase "The sky is no longer the limit." (already featured in the rocket liftoff images posted shortly after the Transporter-10 rideshare mission launch), deserves an extra round of applause! Awesome! 👏 👏 👏

 
  • Like
  • Love
  • Fire
Reactions: 69 users

IloveLamp

Top 20
This is hit and run too. Fact Finder just put up a post on HC at 6:02 PM this evening, post #72778782, regarding Ericsson 6G zero energy and Akida.

I am having problems getting links and images up on here at present, hence no charts lately, and being time poor means someone else needs to go over to HC, copy this very interesting post, and put it up here.
Hi McHale,

I had the same trouble a while ago. Have you tried pressing this icon?

 
  • Like
Reactions: 3 users

TECH

Regular
BrainChip's #neuromorphic tech in Lower Orbit! Akida's journey unfolds, bringing groundbreaking possibilities on earth and beyond. Stay tuned as the story continues to unfold!

Is that a UAP? Oh no, sorry, it's Peter's Spiking Neuron, one of BrainChip's trademarks.... great promo article.

Tech.
 
  • Like
  • Love
Reactions: 14 users

IloveLamp

Top 20
With no news about a NEW IP agreement, is it possible that a Company took up an IP agreement but specified an NDA? Could this be happening❓
I believe the answer to your question is yes and no.....

It could be happening in the early stages, but non-disclosure becomes illegal at the point where the company "knows" it's material in nature..... (my interpretation, DYOR)......

.....There's a lot of grey in there to delay announcing things, to give customers the advantage they so desperately seek.....

I also would not be surprised to learn that the U.S. government had a hand in keeping us under wraps just a little longer too, considering our military and government links......

Pure speculation....... but is it impossible?
 
  • Like
  • Thinking
  • Fire
Reactions: 20 users

Andy38

The hope of potential generational wealth is real
Hi Tech.
Just wondering if you were up for a catch-up, as I am currently staying in Manganui (pub) for a few days on the way up to the 90 Mile next week.
I haven't met anyone who is into Brainchip other than friends and family, whom I coerced into joining me.
I do like your thoughts and input, very positive.

Cheers
Great spot!
 
  • Like
  • Fire
  • Love
Reactions: 42 users