BRN Discussion Ongoing

miaeffect

Oat latte lover
Our mates at EdgX are also a member of this cluster, along with quite a few other entities.

Obviously this gives Akida additional exposure through these connections.




Edge artificial intelligence in space​

Imagine a future with thousands of interconnected satellites, powered by brain-inspired AI technology, delivering real-time data connectivity to Earth.
EDGX is pioneering the next generation of onboard AI technology for space.
We envision a future where intelligent space systems bring huge economic and societal benefits to humanity. Our mission is to transform the space industry by making next-generation spacecraft much more intelligent, adaptive and efficient.
EDGX is designing an innovative new data processing unit inspired by the human brain, bringing onboard learning, modularity, energy efficiency, low latency and neuromorphic computing to the next generation of satellites.

EDGX​

Groeneweg 17
9320 Aalst
T +32 472 43 95 42

I guess we could be heading back to the ASX 200 next month. On top of what will hopefully be a good quarterly report next week, I can see a very big short squeeze coming up.




rgupta

Regular
I didn't say I didn't believe it, I just said it hadn't been stated, I can't say I know it as a fact.

If it is true and small LMs have been run independently of the cloud on AKIDA 2.0 IP and it is free and public knowledge, don't you think an actual statement by the Company, that they have achieved this, is "newsworthy" even if it's "just" through social media?

How can I mislead others with "my interpretation" of what Dr Tony Lewis has said?
I'm using his exact words: "hold promise" and BrainChip "will" be the first.

Your interpretation is that he's saying we have, which does not align with the words he used.

I would love to see a public confirmation that BrainChip has achieved this. If it's not a secret, in a world that is going nuts about language models, why hasn't there been one?
Your doubts may be valid, but by the same token it proves BrainChip is seriously working on that front. It also proves we are the front runner and about to reach the destination, if not already there.
For an investor, an assurance from a person like Dr Tony Lewis is a big thing.
We all know everyone is looking for small, customisable models like ChatGPT, and if BrainChip can deliver them at a fraction of the cost, memory, energy and latency, that will be a game changer for the whole world.
At the end of the day, investing in a startup is always speculative, but at these prices I believe the risk is minimal even as a speculation.
 

Tothemoon24

Top 20

AI has a large and growing carbon footprint, but there are potential solutions on the horizon


Spiking neural networks​



Published: February 17, 2024 3.07am AEDT
Shirin Dora, Loughborough University

Given the huge problem-solving potential of artificial intelligence (AI), it wouldn’t be far-fetched to think that AI could also help us in tackling the climate crisis. However, when we consider the energy needs of AI models, it becomes clear that the technology is as much a part of the climate problem as a solution.
The emissions come from the infrastructure associated with AI, such as building and running the data centres that handle the large amounts of information required to sustain these systems.
But different technological approaches to how we build AI systems could help reduce its carbon footprint. Two technologies in particular hold promise for doing this: spiking neural networks and lifelong learning.
The lifetime of an AI system can be split into two phases: training and inference. During training, a relevant dataset is used to build and tune – improve – the system. In inference, the trained system generates predictions on previously unseen data.

You can trust this article because it’s written by academics.​

About us
For example, training an AI that’s to be used in self-driving cars would require a dataset of many different driving scenarios and decisions taken by human drivers.
After the training phase, the AI system will predict effective manoeuvres for a self-driving car. Artificial neural networks (ANNs) are an underlying technology used in most current AI systems.
They have many different elements to them, called parameters, whose values are adjusted during the training phase of the AI system. These parameters can run to more than 100 billion in total.
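As a back-of-the-envelope illustration of where those parameter counts come from, here is a minimal sketch of counting the weights and biases in a plain fully connected network (the layer sizes are illustrative choices, not figures from the article):

```python
def dense_param_count(layer_sizes):
    """Count parameters in a fully connected network: for each pair of
    adjacent layers, a weight matrix (n_in * n_out) plus one bias per
    output unit."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A small example network, 784 -> 128 -> 10:
dense_param_count([784, 128, 10])  # 784*128+128 + 128*10+10 = 101770
```

Scaling the same arithmetic up to the layer widths and depths used in large language models is how totals beyond 100 billion parameters arise.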
While large numbers of parameters improve the capabilities of ANNs, they also make training and inference resource-intensive processes. To put things in perspective, training GPT-3 (the precursor AI system to the current ChatGPT) generated 502 metric tonnes of carbon, which is equivalent to driving 112 petrol powered cars for a year.
GPT-3 further emits 8.4 tonnes of CO₂ annually due to inference. Since the AI boom started in the early 2010s, the energy requirements of AI systems known as large language models (LLMs) – the type of technology that’s behind ChatGPT – have gone up by a factor of 300,000.
With the increasing ubiquity and complexity of AI models, this trend is going to continue, potentially making AI a significant contributor of CO₂ emissions. In fact, our current estimates could be lower than AI’s actual carbon footprint due to a lack of standard and accurate techniques for measuring AI-related emissions.
Chimneys at a power station. (Leonid Sorokin / Shutterstock)

Spiking neural networks​

The previously mentioned new technologies, spiking neural networks (SNNs) and lifelong learning (L2), have the potential to lower AI’s ever-increasing carbon footprint, with SNNs acting as an energy-efficient alternative to ANNs.
ANNs work by processing and learning patterns from data, enabling them to make predictions. They work with decimal numbers. To make accurate calculations, especially when multiplying numbers with decimal points together, the computer needs to be very precise. It is because of these decimal numbers that ANNs require lots of computing power, memory and time.
This means ANNs become more energy-intensive as the networks get larger and more complex. Both ANNs and SNNs are inspired by the brain, which contains billions of neurons (nerve cells) connected to each other via synapses.
Like the brain, ANNs and SNNs also have components which researchers call neurons, although these are artificial, not biological ones. The key difference between the two types of neural networks is in the way individual neurons transmit information to each other.
Neurons in the human brain communicate with each other by transmitting intermittent electrical signals called spikes. The spikes themselves do not contain information. Instead, the information lies in the timing of these spikes. This binary, all-or-none characteristic of spikes (usually represented as 0 or 1) implies that neurons are active when they spike and inactive otherwise.
This is one of the reasons for energy efficient processing in the brain.
Just as Morse code uses specific sequences of dots and dashes to convey messages, SNNs use patterns or timings of spikes to process and transmit information. So, while the artificial neurons in ANNs are always active, SNNs consume energy only when a spike occurs.
Otherwise, they have closer to zero energy requirements. SNNs can be up to 280 times more energy efficient than ANNs.
My colleagues and I are developing learning algorithms for SNNs that may bring them even closer to the energy efficiency exhibited by the brain. The lower computational requirements also imply that SNNs might be able to make decisions more quickly.
These properties render SNNs useful for a broad range of applications, including space exploration, defence and self-driving cars, because of the limited energy sources available in these scenarios.
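The all-or-none, event-driven behaviour described above can be sketched with a toy leaky integrate-and-fire neuron. This is an illustrative model only – the threshold, leak factor and input values below are arbitrary choices, not any particular SNN implementation:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate a toy leaky integrate-and-fire neuron over discrete steps.

    Returns the binary spike train (0 = silent, 1 = spike) and the number
    of spike events, illustrating why an SNN's energy cost scales with
    spike count rather than with every time step.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # all-or-none spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # inactive: ~zero energy this step
    return spikes, sum(spikes)

# A mostly quiet input: the neuron only "pays" energy on the few spikes.
spike_train, energy_events = lif_neuron(
    [0.1, 0.1, 0.9, 0.1, 0.0, 0.8, 0.9, 0.0])
```

Here only two of the eight steps produce a spike, so an event-driven chip would do work on just those two steps – a conventional ANN, by contrast, multiplies decimal activations at every step regardless of input.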

Lifelong learning​

L2, which we are also working on, is another strategy for reducing the overall energy requirements of ANNs over the course of their lifetime.
Training ANNs sequentially (where the systems learn from sequences of data) on new problems causes them to forget their previous knowledge while learning new tasks. ANNs require retraining from scratch when their operating environment changes, further increasing AI-related emissions.
L2 is a collection of algorithms that enable AI models to be trained sequentially on multiple tasks with little or no forgetting. L2 enables models to learn throughout their lifetime by building on their existing knowledge without having to retrain them from scratch.
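One common family of such algorithms uses "rehearsal": keep a small memory of past examples and blend them into each new task's training batch, so old knowledge keeps being revisited instead of overwritten. A minimal sketch follows; the buffer capacity, the mixing ratio and the (elided) training step are illustrative placeholders, not the specific method described in the article:

```python
import random

class RehearsalBuffer:
    """Toy rehearsal-based lifelong learning loop: a bounded memory of
    past examples is mixed into each new task's batch, so earlier tasks
    are revisited rather than forgotten."""

    def __init__(self, capacity=20, seed=0):
        self.memory = []
        self.capacity = capacity
        self.rng = random.Random(seed)

    def batch_for_task(self, task_data):
        # Blend new-task examples with a random sample of stored old ones.
        old = self.rng.sample(self.memory,
                              min(len(self.memory), len(task_data)))
        batch = list(task_data) + old
        # ... a fine-tuning pass over `batch` would run here ...
        # Store some of the new examples, keeping memory bounded by
        # randomly overwriting old slots once the buffer is full.
        for example in task_data:
            if len(self.memory) < self.capacity:
                self.memory.append(example)
            else:
                self.memory[self.rng.randrange(self.capacity)] = example
        return batch
```

Because each batch contains a sample of earlier tasks, the model never needs a full retrain from scratch when its environment changes, which is where the emissions saving comes from.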
The field of AI is growing fast and other potential advancements are emerging that can mitigate the energy demands of this technology. For instance, building smaller AI models that exhibit the same predictive capabilities as those of a larger model.
Advances in quantum computing – a different approach to building computers that harnesses phenomena from the world of quantum physics – would also enable faster training and inference using ANNs and SNNs. The superior computing capabilities offered by quantum computing could allow us to find energy-efficient solutions for AI at a much larger scale.
The climate change challenge requires that we try to find solutions for rapidly advancing areas such as AI before their carbon footprint becomes too large.
 

TopCat

Regular

We’re certainly in the right spot at the right time. Going to be an exciting future watching this unfold!!
 

Tothemoon24

Top 20
Nothing like a Sunday morning rumour to get the blood pumping.


It would be the first NPU core-count increase since 2020


Rumor mill: Apple might have been slow to jump onto the generative AI bandwagon, but the company is starting to go all-in on artificial intelligence. According to a new rumor, nowhere will that be more apparent than in the iPhone 16, which is said to come with a massively upgraded Neural Engine for on-device AI tasks.


Apple's future generation of iPhone, iPad, and MacBook chips, the M4 and A18, will have an increased number of cores in their improved Neural Engines, writes Taiwanese publication Economic Daily News.


Apple first introduced its dual-core Neural Engine in the A11 Bionic SoC found in the iPhone 8/8 Plus and iPhone X, which was released in 2017. The company bumped the Neural Engine's core count to 8 in the A13 that launched in the iPhone 11 series in 2019, doubling it to 16 in the A14 that debuted in the iPhone 12 and 10th-gen iPads a year later.


Apple has stuck with 16 cores in its iPhone's Neural Engines since 2020, though the component's performance has still improved with each generation – Cupertino says the iPhone 15 Pro's A17 Pro chip's Neural Engine is twice as fast as the one in the iPhone 14 Pro. It sounds as if the A18 could double the core count to 32, which would match the Mac Studio and the Mac Pro that are configured with an M1 Ultra or M2 Ultra SoC.

The latest rumor follows reports that Apple's future products will likely run generative AI models using built-in hardware instead of cloud services.

There are plenty of advantages to using on-device silicon for generative AI tasks rather than relying on remote servers and cloud platforms. Google made a big deal about the AI processing abilities of its Tensor G3 chip, which it claims pushes the boundaries of on-device machine learning, bringing the latest in Google AI research directly to the phone. That statement was put under scrutiny when YouTube channel Mrwhosetheboss found that most of the Pixel 8 Pro's new generative AI features need to be processed in the cloud, meaning a constant internet connection is required.

Earlier this month, Apple CEO Tim Cook confirmed that the company would announce new generative AI features for its products later this year.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!

Worker122

Regular
Hi All
If you are interested in trying to understand what markets are now all about, I recommend reading the linked interview with David Einhorn. The following is a brief introduction to the interview:

“This week, we speak with David Einhorn, president of Greenlight Capital. He launched the value-oriented fund in 1996. Since inception, Greenlight has generated about 13% annually, and ~2900% total return versus the S&P 500’s 1117% total and 9.5% annual returns.

He famously shorted Allied Capital in the 2000’s and Lehman Brothers about a year before it collapsed into bankruptcy in 2008. Time magazine named him to their “100 most influential people in the world” in 2013.

In our wide-ranging discussion, Einhorn stated that “Market structures are broken and value investing is dead.”


It is quite a detailed interview, but it does provide a possible way to understand how algorithmic trading could have driven BrainChip to well below its true value, and why algorithmic trading might well take BrainChip well past true value.

My opinion only DYOR
Fact Finder
Interesting read FF,
That led me to another term I had not been aware of, but suspected was a tactic used by shorters in collusion: a bear raid.
In a typical bear raid, short sellers may conspire beforehand to quietly establish large short positions in the target stock. Since the short interest in the stock increases the risk of a short squeeze that can inflict substantial losses on the shorts, the short sellers cannot afford to wait patiently for months until their short strategy works out.


The next step in the bear raid is akin to a smear campaign, with whispers and rumors about the company spread by unknown sources. These rumors can be anything that portrays the target company in a negative light, such as allegations of accounting fraud, an SEC investigation, an earnings miss, financial difficulties, and so on. The rumors may cause nervous investors to exit the stock in droves, driving the price down further and giving the short sellers the profit they are looking for.
 

TopCat

Regular
“There have been a lot of reports about Apple's dive into generative AI recently. We heard last week that in order to avoid many of the problems associated with the technology, such as hallucinations and massive energy usage, Apple is working on AI models that run on its devices rather than the cloud.”

 

Bravo

If ARM was an arm, BRN would be its biceps💪!

How cool is that! Even the academics are starting to understand why we’ve all been so excited! 😝
 
“There have been a lot of reports about Apple's dive into generative AI recently. We heard last week that in order to avoid many of the problems associated with the technology, such as hallucinations and massive energy usage, Apple is working on AI models that run on its devices rather than the cloud.”

Quite a few jobs going at Apple for AI/ML currently; not sure any are of relevance.

 

IloveLamp

Top 20

Tothemoon24

Top 20
An interesting valuation 🙏

 
A light read

 

Quatrojos

Regular
How cool is that! Even the academics are starting to understand why we’ve all been so excited! 😝
I’ve never heard an academic say ‘you can trust me because I’m an academic.’
 

IloveLamp

Top 20
It's worth following the link to OpenAI's Sora demonstrations...

Very impressive and accurate text-to-imagery, some with quite long descriptions, faithfully generated.

Future iterations of this could convert an entire novel to a film.

How many times have you heard that the film didn't capture the essence of the book?

The future film/entertainment/games sector is not only going to be completely different, but fully customisable.
 