Your doubts may be valid, but by the same token it proves BrainChip is seriously working on that end. It also proves we are the front runner and about to reach the destination, if not already at it.

I didn't say I didn't believe it, I just said it hadn't been stated; I can't say I know it as a fact.
If it is true and small LMs have been run independently of the cloud on AKIDA 2.0 IP and it is free and public knowledge, don't you think an actual statement by the Company, that they have achieved this, is "newsworthy" even if it's "just" through social media?
How can I mislead others with "my interpretation" of what Dr Tony Lewis has said?
I'm using his exact words, of "hold promise" and BrainChip "will" be the first.
Your interpretation is that he's saying we have, which does not align with the words he used.
I would love to see a public confirmation that BrainChip has achieved this. If it's not a secret, in a world that is going nuts about language models, why hasn't there been one?
We’re certainly in the right spot at the right time. Going to be an exciting future watching this unfold!!
AI has a large and growing carbon footprint, but there are potential solutions on the horizon
Technological approaches could help reduce the carbon impact of artificial intelligence systems. (theconversation.com)
Published: February 17, 2024 3.07am AEDT
Shirin Dora, Loughborough University
Given the huge problem-solving potential of artificial intelligence (AI), it wouldn’t be far-fetched to think that AI could also help us in tackling the climate crisis. However, when we consider the energy needs of AI models, it becomes clear that the technology is as much a part of the climate problem as a solution.
The emissions come from the infrastructure associated with AI, such as building and running the data centres that handle the large amounts of information required to sustain these systems.
But different technological approaches to how we build AI systems could help reduce its carbon footprint. Two technologies in particular hold promise for doing this: spiking neural networks and lifelong learning.
The lifetime of an AI system can be split into two phases: training and inference. During training, a relevant dataset is used to build and tune – improve – the system. In inference, the trained system generates predictions on previously unseen data.
You can trust this article because it’s written by academics.
About us
For example, training an AI that’s to be used in self-driving cars would require a dataset of many different driving scenarios and decisions taken by human drivers.
After the training phase, the AI system will predict effective manoeuvres for a self-driving car. Artificial neural networks (ANNs) are an underlying technology used in most current AI systems.
They have many different elements to them, called parameters, whose values are adjusted during the training phase of the AI system. These parameters can run to more than 100 billion in total.
While large numbers of parameters improve the capabilities of ANNs, they also make training and inference resource-intensive processes. To put things in perspective, training GPT-3 (the precursor AI system to the current ChatGPT) generated 502 metric tonnes of carbon, which is equivalent to driving 112 petrol powered cars for a year.
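As a sanity check on that comparison, the arithmetic works out using the article's own figures; the resulting per-car number matches the common ballpark of roughly 4.5 tonnes of CO₂ per petrol car per year (that ballpark is my assumption, not stated in the article):

```python
# Sanity-checking the article's comparison: 502 tonnes of CO2 for
# training GPT-3, spread across 112 petrol cars driven for a year,
# implies each car emits about 4.5 tonnes of CO2 per year.
training_emissions_t = 502   # tonnes CO2, GPT-3 training (per the article)
cars = 112                   # petrol cars driven for a year (per the article)
per_car = training_emissions_t / cars
print(round(per_car, 2))     # -> 4.48 tonnes CO2 per car per year
```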
GPT-3 further emits 8.4 tonnes of CO₂ annually due to inference. Since the AI boom started in the early 2010s, the energy requirements of AI systems known as large language models (LLMs) – the type of technology behind ChatGPT – have gone up by a factor of 300,000.
With the increasing ubiquity and complexity of AI models, this trend is going to continue, potentially making AI a significant contributor of CO₂ emissions. In fact, our current estimates could be lower than AI's actual carbon footprint due to a lack of standard and accurate techniques for measuring AI-related emissions.
Spiking neural networks
The previously mentioned new technologies, spiking neural networks (SNNs) and lifelong learning (L2), have the potential to lower AI’s ever-increasing carbon footprint, with SNNs acting as an energy-efficient alternative to ANNs.
ANNs work by processing and learning patterns from data, enabling them to make predictions. They work with decimal numbers. To make accurate calculations, especially when multiplying numbers with decimal points together, the computer needs to be very precise. It is because of these decimal numbers that ANNs require lots of computing power, memory and time.
This means ANNs become more energy-intensive as the networks get larger and more complex. Both ANNs and SNNs are inspired by the brain, which contains billions of neurons (nerve cells) connected to each other via synapses.
Like the brain, ANNs and SNNs also have components which researchers call neurons, although these are artificial, not biological ones. The key difference between the two types of neural networks is in the way individual neurons transmit information to each other.
Neurons in the human brain communicate with each other by transmitting intermittent electrical signals called spikes. The spikes themselves do not contain information. Instead, the information lies in the timing of these spikes. This binary, all-or-none characteristic of spikes (usually represented as 0 or 1) implies that neurons are active when they spike and inactive otherwise.
This is one of the reasons for energy efficient processing in the brain.
Just as Morse code uses specific sequences of dots and dashes to convey messages, SNNs use patterns or timings of spikes to process and transmit information. So, while the artificial neurons in ANNs are always active, SNNs consume energy only when a spike occurs.
Otherwise, they have close to zero energy requirements. SNNs can be up to 280 times more energy efficient than ANNs.
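The event-driven behaviour described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron – a generic textbook spiking-neuron model, not any particular chip's or lab's implementation; the parameter values are arbitrary choices for the sketch:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) spiking neuron.
# The neuron integrates weighted input over time; only when its membrane
# potential crosses a threshold does it emit a binary spike (1) and reset.
# Silent timesteps (0) are the cheap, near-zero-energy case the article
# describes.
def lif_neuron(inputs, weight=1.0, threshold=1.0, leak=0.9):
    """Return the 0/1 spike train produced by a stream of input values."""
    v = 0.0                         # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + weight * x   # leaky integration of the input
        if v >= threshold:
            spikes.append(1)        # spike: the only "expensive" event
            v = 0.0                 # reset after firing
        else:
            spikes.append(0)        # silent: (near-)zero energy cost
    return spikes

# A build-up of input eventually produces a single spike; weak or absent
# inputs simply leak away without any activity.
print(lif_neuron([0.3, 0.3, 0.8, 0.0, 0.0]))  # -> [0, 0, 1, 0, 0]
```

The information is carried by *when* the 1s occur, which is the Morse-code analogy from the article.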
My colleagues and I are developing learning algorithms for SNNs that may bring them even closer to the energy efficiency exhibited by the brain. The lower computational requirements also imply that SNNs might be able to make decisions more quickly.
These properties render SNNs useful for a broad range of applications, including space exploration, defence and self-driving cars, because of the limited energy sources available in these scenarios.
Lifelong learning
L2, which we are also working on, is another strategy for reducing the overall energy requirements of ANNs over the course of their lifetime.
Training ANNs sequentially (where the systems learn from sequences of data) on new problems causes them to forget their previous knowledge while learning new tasks. ANNs require retraining from scratch when their operating environment changes, further increasing AI-related emissions.
L2 is a collection of algorithms that enable AI models to be trained sequentially on multiple tasks with little or no forgetting. L2 enables models to learn throughout their lifetime by building on their existing knowledge without having to retrain them from scratch.
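One common lifelong-learning technique is experience replay (rehearsal), sketched below. This is a generic illustration of the idea, not the specific algorithms the author's group is developing; the `model_update` callback and `replay_fraction` parameter are hypothetical names invented for the sketch:

```python
# Illustrative sketch of experience replay (rehearsal), one common
# lifelong-learning strategy: while training on each new task, a fraction
# of stored examples from earlier tasks is mixed back in, so old knowledge
# is rehearsed rather than overwritten.
import random

def train_sequentially(model_update, tasks, replay_fraction=0.5):
    """Train on tasks one after another, replaying stored old examples.

    model_update: caller-supplied function applying one training step
                  to a single example (e.g. a gradient update).
    tasks:        list of datasets, presented in sequence.
    """
    replay_buffer = []
    for task_data in tasks:
        n_old = int(len(task_data) * replay_fraction)
        rehearsal = random.sample(replay_buffer,
                                  min(n_old, len(replay_buffer)))
        for example in task_data + rehearsal:
            model_update(example)        # train on new + rehearsed examples
        replay_buffer.extend(task_data)  # remember this task for later ones
    return replay_buffer
```

The key point is that the model is never retrained from scratch: each new task builds on the buffer of what was learned before.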
The field of AI is growing fast, and other potential advancements are emerging that could mitigate its energy demands – for instance, building smaller AI models that exhibit the same predictive capabilities as a larger model.
Advances in quantum computing – a different approach to building computers that harnesses phenomena from the world of quantum physics – would also enable faster training and inference using ANNs and SNNs. The superior computing capabilities offered by quantum computing could allow us to find energy-efficient solutions for AI at a much larger scale.
The climate change challenge requires that we try to find solutions for rapidly advancing areas such as AI before their carbon footprint becomes too large.
Interesting read FF.

Hi All
If you are interested in trying to understand what markets are now all about I recommend reading the linked interview with DAVID EINHORN. The following is a brief introduction to the interview:
“This week, we speak with David Einhorn, president of Greenlight Capital. He launched the value-oriented fund in 1996. Since inception, Greenlight has generated about 13% annually, and ~2900% total return versus the S&P 500's 1117% total and 9.5% annual returns.
He famously shorted Allied Capital in the 2000’s and Lehman Brothers about a year before it collapsed into bankruptcy in 2008. Time magazine named him to their “100 most influential people in the world” in 2013.
In our wide-ranging discussion, Einhorn stated that “Market structures are broken and value investing is dead.”
Transcript: David Einhorn, Greenlight Capital - The Big Picture
The transcript from this week's MiB: David Einhorn, Greenlight Capital, is below. You can stream and download our full conversation, including any podcast extras, on Apple Podcasts, Spotify, YouTube, and Bloomberg. All of our earlier podcasts on your favorite pod hosts can be found here. (ritholtz.com)
It is quite a detailed interview but it does provide a possible way to understand how algorithmic trading could have been the cause of Brainchip being driven to well below true value and why algorithmic trading might well take Brainchip well past true value.
My opinion only DYOR
Fact Finder
“There have been a lot of reports about Apple's dive into generative AI recently. We heard last week that in order to avoid many of the problems associated with the technology, such as hallucinations and massive energy usage, Apple is working on AI models that run on its devices rather than the cloud.”

Nothing like a Sunday morning rumour to get the blood pumping.
It would be the first NPU core-count increase since 2020
Rumor mill: Apple might have been slow to jump onto the generative AI bandwagon, but the company is starting to go all-in on artificial intelligence. According to a new rumor, nowhere will that be more apparent than in the iPhone 16, which is said to come with a massively upgraded Neural Engine for on-device AI tasks.
Apple's future generation of iPhone, iPad, and MacBook chips, the M4 and A18, will have an increased number of cores in their improved Neural Engines, writes Taiwanese publication Economic Daily News.
Apple first introduced its dual-core Neural Engine in the A11 Bionic SoC found in the iPhone 8/8 Plus and iPhone X, which released in 2017. The company bumped the Neural Engine's cores to 8 in the A13 that launched in the iPhone 11 series in 2019, doubling the count to 16 in the A14 that debuted in the iPhone 12/10th-gen iPads a year later.
Apple has stuck with 16 cores in its iPhone's Neural Engines since 2020, though the component's performance has still improved with each generation – Cupertino says the iPhone 15 Pro's A17 Pro chip's Neural Engine is twice as fast as the one in the iPhone 14 Pro. It sounds as if the A18 could double the core count to 32, which would match the Mac Studio and the Mac Pro that are configured with an M1 Ultra or M2 Ultra SoC.
The latest rumor follows reports that Apple's future products will likely run generative AI models using built-in hardware instead of cloud services.
There are plenty of advantages to using on-device silicon for generative AI tasks rather than relying on remote servers and cloud platforms. Google made a big deal about the AI processing abilities of its Tensor G3 chip, which it claims pushes the boundaries of on-device machine learning, bringing the latest in Google AI research directly to the phone. That statement was put under scrutiny when YouTube channel Mrwhosetheboss found that most of the Pixel 8 Pro's new generative AI features need to be processed in the cloud, meaning a constant internet connection is required.
Earlier this month, Apple CEO Tim Cook confirmed that the company would announce new generative AI features for its products later this year.
How cool is that! Even the academics are starting to understand why we’ve all been so excited!
Quite a few jobs going at Apple for AI/ML currently; not sure any are of relevance.
Tim Cook reveals Apple's shift towards generative AI, says announcements are arriving this year
During a call with analysts after Apple reported its fiscal first-quarter earnings, Cook talked about the company's Vision Pro mixed reality headset and its AI plans. "As...www.techspot.com
I’ve never heard an academic say ‘you can trust me because I’m an academic.’
It's worth following the link to OpenAI's Sora demonstrations...

Tejas Kulkarni on LinkedIn: Sora | OpenAI
OpenAI's new video model is unbelievable - https://openai.com/sora Do we still think generative AI is not going to disrupt production workflows in image… (www.linkedin.com)
You're praying for BRN to be worth minus AUD57.70?

An interesting valuation
BRN.AX DCF Valuation | Brainchip Holdings Ltd (BRN.AX)
The Discounted Cash Flow (DCF) valuation of Brainchip Holdings Ltd (BRN.AX) is (1,123,339,091.34) AUD. With the latest stock price at 0.23 AUD, the upside of Brainchip Holdings Ltd (BRN.AX) is -499261818473.5%. (valueinvesting.io)
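For readers unfamiliar with the method: a DCF simply discounts forecast free cash flows back to today, so persistently negative forecast cash flows produce a negative "value". A minimal generic sketch of the textbook formula (not valueinvesting.io's actual model; the cash flows and discount rate below are made-up numbers):

```python
# Minimal sketch of a Discounted Cash Flow (DCF) valuation: each forecast
# year's free cash flow is divided by (1 + r)^year to express it in
# today's money, and the present values are summed. If the forecast cash
# flows are mostly negative, the resulting "value" is negative too.
def dcf_value(cash_flows, discount_rate):
    """Present value of a list of yearly free cash flows."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Two loss-making years followed by a profitable one, discounted at 10%.
print(round(dcf_value([-50.0, -30.0, 120.0], 0.10), 2))  # -> 19.91
```

This is why a pre-revenue company's DCF is so sensitive to the forecast: the sign and size of the output are driven almost entirely by the assumed future cash flows.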
I was reading it as the -16128.1 being what the SP is today, compared to what they think it could be, 57.70, in x amount of years.
I'm not sure anyone here, can afford that kind of loss...