BRN Discussion Ongoing

Hi SG

It is looking likely.

Might end up being easier to have a list of those companies not engaging with Brainchip if this keeps up. 😂🤣😂

My opinion only DYOR
Fact Finder
Hi SG
This probably means it is happening:

Zero Waste: What is it, Why do it?

Published: January 15, 2019 | Author: Jenna Bieller




Zero Waste strategy

Whether your organization is guided by financial drivers, mandatory legislation, reputation or even altruism, corporate waste management practice has become an important subject for sustainability professionals.
Broadly speaking, waste management in an organization includes a wide variety of processes and resources engaged to: handle in-house generated waste; organize and minimize product-related and customer-related waste; organize corporate waste disposal methods and end-of-life; comply with environmental legislation and health and safety standards.
There is a growing movement among global organizations focused on implementing certifiable Zero Waste to Landfill solutions. Leading organizations, such as Royal Canin, are moving beyond the Zero Waste to Landfill concept, aiming to establish a complete and comprehensive circular economy, a cradle-to-cradle solution.
Some well-known, impactful waste reporting and certifying guidelines and frameworks include GRI (306), the Carbon Trust Standard and the UN Sustainable Development Goals (12, 13, 17). Mandatory directives that support these frameworks include the EU WEEE directive on electronic waste management and the EU Non-Financial Reporting Directive.

Waste management techniques vary and impact organizations in many ways:​

  • Traditional waste management focused on landfills only. Today, we understand the environmental hazards of this practice. Decaying waste dumped on a landfill creates methane, a greenhouse gas (GHG) that has much higher warming potential than carbon dioxide. Toxic liquids formed at a landfill also threaten to penetrate soils and underground water sources. Countries such as Austria are completely phasing out landfills and focusing on waste incineration facilities.
  • Recycling means taking materials that are no longer of use, processing them and giving them a new life through a new product and purpose. Reusing or repurposing is done by either extending the longevity of a product or repairing it to avoid the need to manufacture a new product from virgin materials.
  • Reducing means using less and wasting less. By reducing the materials we use, we not only reduce waste but also the demand for primary products, making a double positive effect on the environment.
  • Refusing is eliminating non-recyclable materials entirely from products, plastic wrapping for example, to decrease the total amount of waste.
  • Composting recycles organic materials regarded as waste into nutrient-rich compost soil.

Benefits of a Zero Waste strategy

  1. Yield financial savings from waste management
If for no other reason, implementing waste management saves organizations money in terms of waste costs. Inefficient operations that result in additional waste bring additional costs to the organization; costs that can be avoided by good management practice. Knowing and understanding waste flows helps organizations understand business operations, ensuring resource and energy cost optimization, efficient use of labor and final product cost cuts.
  2. Build waste into the risk management strategy
In the EU, waste management plays an important role in risk management. The legislation surrounding waste management and compliance requirements are stringent. If your company operates in the EU, ensure you are on top of the requirements, explore pan-European waste obligations and benchmark how your organization performs against these.
  3. Protect your brand reputation
Companies are often scrutinized for unsound environmental practices. Being proactive in implementing innovative waste management strategies, before being called out or required to do so, not only creates positive brand value that contributes to a good reputation, but also sets a path for other companies to emulate. Being seen as a leader in environmental practices further increases a company’s impact.
  4. Manage data centrally
Use Zero Waste as an initiative to empower your organization to aggregate and visualize all of its cross-enterprise energy and sustainability information, improving corporate transparency and transforming data into action. Schneider Electric can support companies in their active waste management efforts through a customer-tailored experience in EcoStruxure Resource Advisor, a careful examination of industry- and client-specific waste circumstances, and strategy development focused on optimizing and minimizing waste.
Contact us if you are considering improving your waste management practice and looking into savings opportunities across your global operations with EcoStruxure Resource Advisor.

Contributed by Jana K. Pataky, Sustainability Consultant at Schneider Electric​


My opinion only DYOR
Fact Finder
 
Reactions: 9 users
Hi DB
Okay, your prerogative not to believe, but if you were interested you could have sought confirmation from investor relations before judging it as incorrect and potentially misleading others with your interpretation of what Dr. Lewis stated.

Actually, all oral evidence is hearsay. It just means someone heard someone say something.

In the present circumstances this hearsay is admissible if it goes to a fact in issue. Here it can reasonably be said to go to the fact in issue, but as I say, all of this has no relevance when it is open to you to confirm or refute my report.

I note that others who attended the meeting did not challenge my recollection.

My opinion only DYOR
Fact Finder
I didn't say I didn't believe it, I just said it hadn't been stated, I can't say I know it as a fact.

If it is true and small LMs have been run independently of the cloud on AKIDA 2.0 IP and it is free and public knowledge, don't you think an actual statement by the Company, that they have achieved this, is "newsworthy" even if it's "just" through social media?

How can I mislead others, with "my interpretation" of what Dr Tony Lewis has said?
I'm using his exact words, of "hold promise" and BrainChip "will" be the first.

Your interpretation is that he’s saying we have, which does not align with the words he used.

I would love to see a public confirmation that BrainChip has achieved this. If it’s not a secret, in a world that is going nuts about language models, why hasn’t there been one?
 
Last edited:
Reactions: 24 users
Good to see our Uni accelerator program seems to be working.

A couple of CMU representatives made the 2024 finalists for North America.


Screenshot_2024-02-17-20-58-04-58_4641ebc0df1485bf6b47ebd018b5ee76.jpg
Screenshot_2024-02-17-20-54-56-29_4641ebc0df1485bf6b47ebd018b5ee76.jpg
Screenshot_2024-02-17-20-56-15-30_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
Reactions: 27 users

IloveLamp

Top 20
1000013394.jpg
 
Reactions: 11 users
Our mates at EdgX are also a member of this cluster, along with quite a few other entities.

Obviously gives Akida additional exposure through these connections.



Screenshot_2024-02-17-21-05-21-40_4641ebc0df1485bf6b47ebd018b5ee76.jpg


Edge artificial intelligence in space​

Imagine a future with thousands of interconnected satellites, powered by brain-inspired AI technology, delivering real-time data connectivity to earth.
EDGX is pioneering the next generation of onboard AI technology for space.
We envision a future where intelligent space systems bring huge economic and societal benefits to humanity. Our mission is to transform the space industry by making next-generation spacecraft much more intelligent, adaptive and efficient.
EDGX is designing an innovative new data processing unit inspired by the human brain, delivering onboard learning, modularity, energy efficiency, low latency and neuromorphic computing to the next generation of satellites.
EDGX_square-250x250.png

EDGX​

EDGX
Groeneweg 17
9320 Aalst
T +32 472 43 95 42​

 
Reactions: 25 users

IloveLamp

Top 20
1000013396.jpg
 
Reactions: 10 users

JB49

Regular
https://www.infineon.com/cms/dresden/en/Development-Center/

Artificial intelligence is a key topic at the DC: Smart chips with embedded AI, intuitive sensor solutions and AI accelerators with extremely low power consumption. Edge-AI enables data processing with artificial intelligence close to the sensor without communicating with a cloud. This makes applications more energy-efficient, faster and safer.

The first Infineon chip with edge AI is being developed in Dresden. The DC is also researching novel neuromorphic processor architectures for ultra-low-power object detection and classification for autonomous driving. Neuromorphic hardware improves energy efficiency by a factor of 100. This enables new embedded sensor and data processor chip solutions for future edge-AI applications.
 
Reactions: 45 users
Our mates at EdgX are also a member of this cluster, along with quite a few other entities. […]
20240218_023407.jpg
 
Reactions: 6 users
Reactions: 12 users
I guess we could be heading back to the ASX 200 next month. On top of a hopefully good quarterly report next week, I can see a very big short squeeze coming up.



1708198892124.gif
 
Last edited:
Reactions: 11 users

rgupta

Regular
I didn't say I didn't believe it, I just said it hadn't been stated, I can't say I know it as a fact. […]
Your doubts may be valid, but at the same time it proves BrainChip is seriously working on that end. It also proves we are the front runner and about to reach the destination, if not at the destination.
For an investor, an assurance from a person like Dr. Tony Lewis is a big thing.
We all know everyone is looking for small customisable models of ChatGPT, and if BrainChip can bring them at a fraction of the cost, memory, energy and latency, that will be a game changer for the whole world.
At the end of the day, investment in a startup is always speculative, but at these prices I believe there is the least amount of risk even as a speculation.
 
Reactions: 13 users

Tothemoon24

Top 20

AI has a large and growing carbon footprint, but there are potential solutions on the horizon


Spiking neural networks​



Published: February 17, 2024 3.07am AEDT
Shirin Dora, Loughborough University

Given the huge problem-solving potential of artificial intelligence (AI), it wouldn’t be far-fetched to think that AI could also help us in tackling the climate crisis. However, when we consider the energy needs of AI models, it becomes clear that the technology is as much a part of the climate problem as a solution.
The emissions come from the infrastructure associated with AI, such as building and running the data centres that handle the large amounts of information required to sustain these systems.
But different technological approaches to how we build AI systems could help reduce its carbon footprint. Two technologies in particular hold promise for doing this: spiking neural networks and lifelong learning.
The lifetime of an AI system can be split into two phases: training and inference. During training, a relevant dataset is used to build and tune – improve – the system. In inference, the trained system generates predictions on previously unseen data.

For example, training an AI that’s to be used in self-driving cars would require a dataset of many different driving scenarios and decisions taken by human drivers.
After the training phase, the AI system will predict effective manoeuvres for a self-driving car. Artificial neural networks (ANNs) are an underlying technology used in most current AI systems.
They have many different elements to them, called parameters, whose values are adjusted during the training phase of the AI system. These parameters can run to more than 100 billion in total.
While large numbers of parameters improve the capabilities of ANNs, they also make training and inference resource-intensive processes. To put things in perspective, training GPT-3 (the precursor AI system to the current ChatGPT) generated 502 metric tonnes of carbon, which is equivalent to driving 112 petrol powered cars for a year.
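(As a quick check of those figures: 502 tonnes over 112 cars works out to about 4.5 tonnes of CO₂ per car per year, which is in line with commonly cited averages for a petrol passenger vehicle.)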
GPT-3 further emits 8.4 tonnes of CO₂ annually due to inference. Since the AI boom started in the early 2010s, the energy requirements of AI systems known as large language models (LLMs) – the type of technology that’s behind ChatGPT – have gone up by a factor of 300,000.
With the increasing ubiquity and complexity of AI models, this trend is going to continue, potentially making AI a significant contributor of CO₂ emissions. In fact, our current estimates could be lower than AI’s actual carbon footprint due to a lack of standard and accurate techniques for measuring AI-related emissions.
[Image: Chimneys at a power station. Leonid Sorokin / Shutterstock]

Spiking neural networks​

The previously mentioned new technologies, spiking neural networks (SNNs) and lifelong learning (L2), have the potential to lower AI’s ever-increasing carbon footprint, with SNNs acting as an energy-efficient alternative to ANNs.
ANNs work by processing and learning patterns from data, enabling them to make predictions. They work with decimal numbers. To make accurate calculations, especially when multiplying numbers with decimal points together, the computer needs to be very precise. It is because of these decimal numbers that ANNs require lots of computing power, memory and time.
This means ANNs become more energy-intensive as the networks get larger and more complex. Both ANNs and SNNs are inspired by the brain, which contains billions of neurons (nerve cells) connected to each other via synapses.
Like the brain, ANNs and SNNs also have components which researchers call neurons, although these are artificial, not biological ones. The key difference between the two types of neural networks is in the way individual neurons transmit information to each other.
Neurons in the human brain communicate with each other by transmitting intermittent electrical signals called spikes. The spikes themselves do not contain information. Instead, the information lies in the timing of these spikes. This binary, all-or-none characteristic of spikes (usually represented as 0 or 1) implies that neurons are active when they spike and inactive otherwise.
This is one of the reasons for energy efficient processing in the brain.
Just as Morse code uses specific sequences of dots and dashes to convey messages, SNNs use patterns or timings of spikes to process and transmit information. So, while the artificial neurons in ANNs are always active, SNNs consume energy only when a spike occurs.
Otherwise, they have closer to zero energy requirements. SNNs can be up to 280 times more energy efficient than ANNs.
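To make the all-or-none spiking idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the standard textbook model of an SNN unit. It is purely illustrative – the threshold, leak factor and input range below are arbitrary assumptions, not any particular chip's or paper's implementation:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. All constants here
# (threshold, leak, input range) are illustrative assumptions.
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    v = 0.0                                # membrane potential
    spikes = np.zeros(len(input_current))  # all-or-none output: 0 or 1
    for t, i in enumerate(input_current):
        v = leak * v + i                   # leak stored charge, integrate input
        if v >= threshold:                 # fire only when threshold is crossed
            spikes[t] = 1.0
            v = 0.0                        # reset after spiking
    return spikes

# Weak random input produces only sparse spikes, so downstream neurons
# (and, in neuromorphic hardware, energy) are engaged only at those events.
rng = np.random.default_rng(0)
out = lif_neuron(rng.uniform(0.0, 0.3, size=100))
print(f"spiked on {int(out.sum())} of 100 timesteps")
```

The `if` branch is the whole story energy-wise: between spikes the neuron emits nothing, which is what lets neuromorphic hardware sit idle instead of multiplying decimal numbers on every step.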
My colleagues and I are developing learning algorithms for SNNs that may bring them even closer to the energy efficiency exhibited by the brain. The lower computational requirements also imply that SNNs might be able to make decisions more quickly.
These properties render SNNs useful for a broad range of applications, including space exploration, defence and self-driving cars, because of the limited energy sources available in these scenarios.

Lifelong learning​

L2, which we are also working on, is another strategy for reducing the overall energy requirements of ANNs over the course of their lifetime.
Training ANNs sequentially (where the systems learn from sequences of data) on new problems causes them to forget their previous knowledge while learning new tasks. ANNs require retraining from scratch when their operating environment changes, further increasing AI-related emissions.
L2 is a collection of algorithms that enable AI models to be trained sequentially on multiple tasks with little or no forgetting. L2 enables models to learn throughout their lifetime by building on their existing knowledge without having to retrain them from scratch.
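As a rough sketch of how such algorithms achieve this, here is the core idea of one well-known approach, an elastic weight consolidation (EWC) style penalty. This is a generic illustration under my own assumptions, not necessarily the specific algorithms the author's group is developing, and the names are made up for the example:

```python
import numpy as np

# EWC-style loss for learning task B without forgetting task A:
# instead of retraining from scratch, penalise moving the weights
# that mattered most for the old task.
def lifelong_loss(task_b_loss, weights, old_weights, importance, lam=1.0):
    # `importance` scores how much each weight mattered on task A
    # (in real EWC it is estimated from the Fisher information).
    penalty = np.sum(importance * (weights - old_weights) ** 2)
    return task_b_loss + lam * penalty
```

Important old weights are anchored in place while unimportant ones stay free to adapt, which is what "building on existing knowledge without retraining from scratch" looks like in practice.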
The field of AI is growing fast, and other potential advancements are emerging that can mitigate the energy demands of this technology – for instance, building smaller AI models that exhibit the same predictive capabilities as those of a larger model.
Advances in quantum computing – a different approach to building computers that harnesses phenomena from the world of quantum physics – would also enable faster training and inference using ANNs and SNNs. The superior computing capabilities offered by quantum computing could allow us to find energy-efficient solutions for AI at a much larger scale.
The climate change challenge requires that we try to find solutions for rapidly advancing areas such as AI before their carbon footprint becomes too large.
 
Reactions: 52 users

TopCat

Regular

AI has a large and growing carbon footprint, but there are potential solutions on the horizon […]
We’re certainly in the right spot at the right time. Going to be an exciting future watching this unfold!!
 
Reactions: 34 users

Tothemoon24

Top 20
Nothing like a Sunday morning rumour to get the blood pumping

IMG_8402.png

It would be the first NPU core-count increase since 2020


Rumor mill: Apple might have been slow to jump onto the generative AI bandwagon, but the company is starting to go all-in on artificial intelligence. According to a new rumor, nowhere will that be more apparent than in the iPhone 16, which is said to come with a massively upgraded Neural Engine for on-device AI tasks.


Apple's future generation of iPhone, iPad, and MacBook chips, the M4 and A18, will have an increased number of cores in their improved Neural Engines, writes Taiwanese publication Economic Daily News.


Apple first introduced its dual-core Neural Engine in the A11 Bionic SoC found in the iPhone 8/8 Plus and iPhone X, which were released in 2017. The company bumped the Neural Engine's cores to 8 in the A13 that launched in the iPhone 11 series in 2019, doubling the count to 16 in the A14 that debuted in the iPhone 12/10th-gen iPads a year later.


Apple has stuck with 16 cores in its iPhone's Neural Engines since 2020, though the component's performance has still improved with each generation – Cupertino says the iPhone 15 Pro's A17 Pro chip's Neural Engine is twice as fast as the one in the iPhone 14 Pro. It sounds as if the A18 could double the core count to 32, which would match the Mac Studio and the Mac Pro that are configured with an M1 Ultra or M2 Ultra SoC.

The latest rumor follows reports that Apple's future products will likely run generative AI models using built-in hardware instead of cloud services.

There are plenty of advantages to using on-device silicon for generative AI tasks rather than relying on remote servers and cloud platforms. Google made a big deal about the AI processing abilities of its Tensor G3 chip, which it claims pushes the boundaries of on-device machine learning, bringing the latest in Google AI research directly to the phone. That statement was put under scrutiny when YouTube channel Mrwhosetheboss found that most of the Pixel 8 Pro's new generative AI features need to be processed in the cloud, meaning a constant internet connection is required.

Earlier this month, Apple CEO Tim Cook confirmed that the company would announce new generative AI features for its products later this year.
 
Reactions: 46 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Reactions: 12 users

Worker122

Regular
Hi All
If you are interested in trying to understand what markets are now all about I recommend reading the linked interview with DAVID EINHORN. The following is a brief introduction to the interview:

“This week, we speak with David Einhorn, president of Greenlight Capital. He launched the value-oriented fund in 1996. Since inception, Greenlight has generated about 13% annually, and ~2900% total return versus the S&P 500’s 1117% total and 9.5% annual returns.

He famously shorted Allied Capital in the 2000’s and Lehman Brothers about a year before it collapsed into bankruptcy in 2008. Time magazine named him to their “100 most influential people in the world” in 2013.

In our wide-ranging discussion, Einhorn stated that “Market structures are broken and value investing is dead.”


It is quite a detailed interview but it does provide a possible way to understand how algorithmic trading could have been the cause of Brainchip being driven to well below true value and why algorithmic trading might well take Brainchip well past true value.

My opinion only DYOR
Fact Finder
Interesting read FF,
That led me to another term I had not been aware of but suspected was a tactic used by shorters in collusion.
A Bear Raid.
In a typical bear raid, short sellers may conspire beforehand to quietly establish large short positions in the target stock. Since the short interest in the stock increases the risk of a short squeeze that can inflict substantial losses on the shorts, the short sellers cannot afford to wait patiently for months until their short strategy works out.


The next step in the bear raid is akin to a smear campaign, with whispers and rumors about the company spread by unknown sources. These rumors can be anything that portrays the target company in a negative light, such as allegations of accounting fraud, an SEC investigation, an earnings miss, financial difficulties, and so on. The rumors may cause nervous investors to exit the stock in droves, driving the price down further and giving the short sellers the profit they are looking for.
 
Reactions: 20 users

TopCat

Regular
Nothing like a Sunday morning rumour to get the blood pumping […]
“There have been a lot of reports about Apple's dive into generative AI recently. We heard last week that in order to avoid many of the problems associated with the technology, such as hallucinations and massive energy usage, Apple is working on AI models that run on its devices rather than the cloud.”

 
Reactions: 26 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

AI has a large and growing carbon footprint, but there are potential solutions on the horizon […]
How cool is that! Even the academics are starting to understand why we’ve all been so excited! 😝
 
Reactions: 21 users
“There have been a lot of reports about Apple's dive into generative AI recently. We heard last week that in order to avoid many of the problems associated with the technology, such as hallucinations and massive energy usage, Apple is working on AI models that run on its devices rather than the cloud.”

Quite a few jobs going at Apple for AI/ML currently; not sure any are of relevance.

 
Reactions: 4 users