BRN Discussion Ongoing

The Pope

Regular
Rob seems like a very nice guy. I liked him. Sad to see him go. Hoping that his replacement is equally likeable, and more importantly can get us some IP deals!
Can you weave that into your next downramping MF article? Take care and hope to see you at the AGM in Sydney.
 
  • Like
  • Haha
  • Fire
Reactions: 6 users

Diogenese

Top 20
WoW, I'm thrilled, what beautiful music! You really have good taste.
...Before I read your post, I had heard on a science podcast about a University of Innsbruck study, published in Scientific Reports, on how song lyrics have become more and more simple-minded over the last 40 years. Better to dispense with words in these times and let the music flow.
Thanks again, I will listen to it more closely; I didn't know him before.

Sorry, what's your point? Tony should be telepathic and BrainChip is somehow responsible for Rob leaving to join another company that isn't doing so well?

I suppose I should blame BrainChip because I stubbed my toe on a rock this morning? Sheesh!
I'd blame your dance thongs.
 
  • Haha
  • Like
Reactions: 5 users

Guzzi62

Regular
Haven't had the time to watch it yet, but here is a video of that talk, or rather of a talk with the same name, that was uploaded to YouTube just over a week prior to the Design&Reuse IP-SoC Silicon Valley 2024 Day event:





View attachment 62100

Very good presentation by Steve Thorne, thanks for the link.

He explains each slide very nicely and clearly.

Sadly only 60 views as I write this; 100,000+ would be much better.

I gave it a "like"
 
  • Like
  • Fire
Reactions: 11 users

Diogenese

Top 20
Hi TTM, @Tothemoon24

A few years ago Apple bought XNOR which had a few AI patents, so

Apple’s unique combination of seamless hardware, software and services integration, groundbreaking Apple silicon with our industry leading neural engines,

could refer to XNOR technology.

For example,

US11651192B2 Compressed convolutional neural network models 20190212 includes an inventor named Rastegari nee Xnor.

systems and processes for building a compressed CNN model suitable for deployment on different types of computing platforms having different processing, power and memory capabilities.

They are building edge-adapted LLMs.

Apple's Neural Engine is built on MACs:

US2022222510A1 MULTI-OPERATIONAL MODES OF NEURAL ENGINE CIRCUIT 20210113
1714744514273.png
 
  • Like
  • Sad
  • Fire
Reactions: 16 users

Frangipani

Regular
Given the previously published paper stating the use of Akida, and the suggestions that BRN have been working with a communications company, the following Ericsson blog from the end of April about their MWC24 booth imo appears to show some additional prototype work, different from what was in my original Christmas eve post (link below) on:

Towards 6G Zero-Energy Internet of Things:
Standards, Trends, and Recent Results
  • December 2023

https://www.researchgate.net/public...of_Things_Standards_Trends_and_Recent_Results




6G straight from the Ericsson labs

There were many questions for our experts who worked on the 6G demos in the “6G – straight from our labs” area at the Mobile World Congress (MWC) 2024. Among the most commonly-asked questions were: "Is it not too early to talk 6G?”, “What is new for 6G?”, “How does 6G relate to 5G?” and all sorts of questions on spectrum – new, existing, co-existing, reuse, coverage and more. So, let’s take the questions one-by-one… This blog will show how we take steps towards addressing some of these questions by describing what was on display at MWC 2024.

APR 30, 2024 | 7 min.

Marie Hogan
6G Portfolio Strategy, Business Area Networks

Johan Lundsjö
Strategic Research Communication Director

What is new for 6G?

6G will build on 5G Standalone and 5G-Advanced, evolving from today’s network towards the needs of 2030 and beyond. In other words, 6G will be a mix of both new and evolved concepts and use cases.

A selection of new concepts was on display straight from our 6G labs on the MWC floor.

Ultra-Low Power AI

This concept addresses two major areas of interest - AI and Energy Efficiency - topics that might seem mutually exclusive! There is rapid growth both in interest and usage of AI in mobile network operations, solutions and applications. There’s a risk that the massive amounts of data processing demanded in many AI scenarios lead to high power consumption in parallel. With increased usage of AI expected in the networks, it is important to have solutions to address this.

This live prototype for ultra-low power AI used a novel neuromorphic-AI-based approach for radio channel estimation and showcased the feasibility of low-compute and low-energy AI using AI-based radio receiver use-cases. The trick here is that in a neuromorphic neural network (like our human brain) only the neurons detecting a change are active, whereas no computations are needed for neurons in remember state. The fraction of inactive neurons translates directly to an energy efficiency gain as compared to a traditional deep neural network where computations are always needed for all neurons. The live demo showed how neural activity in the channel estimation computations varied with changes in the radio channel and how energy consumption could be reduced when less or no computations were ongoing. Radio channel estimation is only one of many areas where this exciting AI technology can be used. Keep an eye out for upcoming blogs on this topic from Ericsson soon!
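
To make the mechanism described above concrete, here is a minimal toy sketch (not Ericsson's or BrainChip's code; the layer sizes and the ~10% activity rate are made up) of why compute in an event-driven layer scales with the fraction of active inputs, while a conventional dense layer performs every multiply-accumulate regardless:

```python
# Toy illustration of the claim above: in an event-driven (spiking) layer,
# work scales with the fraction of inputs that actually fire, whereas a
# dense layer always performs every MAC. Sizes and activity are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 256, 256
W = rng.normal(size=(n_out, n_in))

def dense_macs(x):
    """Conventional layer: every weight is used, so MACs = n_out * n_in."""
    _ = W @ x
    return W.size

def event_driven_macs(spikes):
    """Event-driven layer: only columns for active (spiking) inputs are touched."""
    active = np.flatnonzero(spikes)
    _ = W[:, active] @ spikes[active]
    return W.shape[0] * active.size

x = rng.normal(size=n_in)
spikes = (rng.random(n_in) < 0.1).astype(float)   # ~10% of inputs active

print("dense MACs:       ", dense_macs(x))
print("event-driven MACs:", event_driven_macs(spikes))
# With ~10% activity, the event-driven count is roughly 10% of the dense one,
# which is the "fraction of inactive neurons = efficiency gain" point above.
```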


View attachment 62102


View attachment 62103

Prev paper:

View attachment 62104
View attachment 62105

Hi Fullmoonfever,

the radio receiver algorithm prototype Ericsson demoed at MWC 2024 was unfortunately not implemented on Akida.


See my post dated April 18:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-419427


Or in short here:



“Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications artificial intelligence (AI) models to optimize telecom architecture. Ericsson currently uses AI-based network performance diagnostics to analyze communications service providers’ radio access networks (RANs) to resolve network issues efficiently and provide specific parameter change recommendations. At Mobile World Congress (MWC) Barcelona 2024, Ericsson Research demoed a radio receiver algorithm prototype targeted for Intel’s Loihi 2 neuromorphic research AI accelerator, demonstrating a significant reduction in computational cost to improve signals across the RAN
(…)
Ericsson Research’s working prototype of a radio receiver algorithm was implemented in Lava for Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded.”

Have a good weekend!

Frangipani
 
  • Like
  • Thinking
  • Fire
Reactions: 9 users

cassip

Regular
Hi Fullmoonfever,

the radio receiver algorithm prototype Ericsson demoed at MWC 2024 was unfortunately not implemented on Akida.


See my post dated April 18:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-419427


Or in short here:



“Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications artificial intelligence (AI) models to optimize telecom architecture. Ericsson currently uses AI-based network performance diagnostics to analyze communications service providers’ radio access networks (RANs) to resolve network issues efficiently and provide specific parameter change recommendations. At Mobile World Congress (MWC) Barcelona 2024, Ericsson Research demoed a radio receiver algorithm prototype targeted for Intel’s Loihi 2 neuromorphic research AI accelerator, demonstrating a significant reduction in computational cost to improve signals across the RAN
(…)
Ericsson Research’s working prototype of a radio receiver algorithm was implemented in Lava for Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded.”

Have a good weekend!

Frangipani

Could it be that they demonstrated their prototype on a Loihi 2 base, but are also working on or doing further research with Akida besides?

... maybe they swapped Loihi 2 for Akida 2.


"About Hala Point: Loihi 2 neuromorphic processors, which form the basis for Hala Point, apply brain-inspired computing principles, such as asynchronous, event-based spiking neural networks (SNNs), integrated memory and computing, and sparse and continuously changing connections to achieve orders-of-magnitude gains in energy consumption and performance. Neurons communicate directly with one another rather than communicating through memory, reducing overall power consumption.


Hala Point packages 1,152 Loihi 2 processors produced on Intel 4 process node in a six-rack-unit data center chassis the size of a microwave oven. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming a maximum of 2,600 watts of power. It also includes over 2,300 embedded x86 processors for ancillary computations.


Hala Point integrates processing, memory, and communication channels in a massively parallelized fabric, providing a total of 16 petabytes per second (PB/s) of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 terabytes per second (TB/s) of inter-chip communication bandwidth. The system can process over 380 trillion 8-bit synapses and over 240 trillion neuron operations per second.


Applied to bio-inspired spiking neural network models, the system can execute its full capacity of 1.15 billion neurons 20 times faster than a human brain and up to 200 times faster rates at lower capacity. While Hala Point is not intended for neuroscience modeling, its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.


Loihi-based systems can perform AI inference and solve optimization problems using 100 times less energy at speeds as much as 50 times faster than conventional CPU and GPU architectures1. By exploiting up to 10:1 sparse connectivity and event-driven activity, early results on Hala Point show the system can achieve deep neural network efficiencies as high as 15 TOPS/W2 without requiring input data to be collected into batches, a common optimization for GPUs that significantly delays the processing of data arriving in real-time, such as video from cameras. While still in research, future neuromorphic LLMs capable of continuous learning could result in gigawatt-hours of energy savings by eliminating the need for periodic re-training with ever-growing datasets."
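
As a quick back-of-envelope check on the quoted figures (plain arithmetic on the numbers above, not official per-chip specs from Intel), dividing the Hala Point totals down to a per-chip level gives roughly:

```python
# Per-chip averages derived from the Hala Point system totals quoted above.
chips     = 1152
neurons   = 1.15e9
synapses  = 128e9
cores     = 140_544
watts_max = 2600

print(f"neurons per chip:    ~{neurons / chips:,.0f}")    # ~1 million
print(f"synapses per neuron: ~{synapses / neurons:.0f}")  # ~111
print(f"cores per chip:      {cores // chips}")           # 122
print(f"max power per chip:  ~{watts_max / chips:.2f} W") # ~2.26 W
```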



"Intel’s neuromorphic systems, such as Pohoiki Springs, are still in the research phase and are not intended to replace conventional computing systems. Instead, they provide a tool for researchers to develop and characterize new neuro-inspired algorithms for real-time processing, problem solving, adaptation and learning."
 
  • Like
  • Fire
Reactions: 5 users

Frangipani

Regular
Although it would be grossly negligent for the Ericsson Team to test only the Intel chip (which according to BRN is still in research) when there are others available.
I very much doubt they would not be aware of BRN. I would also be very surprised if Ericsson and other telcos have not had high level contact from BRN.
The word slipped that we were tied with Mercedes. After that you can bet that every Auto is testing AKIDA. No one wants to be caught short.
The use for AKIDA in Autos is obvious.
For the 'layman' it's a little harder to identify Telco company uses.
BRN has worked hard to set up an ecosystem, and together with enormous industry exposure it's hard to imagine any decent research department of a big business would be unaware of BRN. It would be just a matter of how, if at all, they see AKIDA improving their business.

Hi manny100,

please reread my posts re Ericsson.
I never said they are not aware of BrainChip - they obviously are, since a number of senior Ericsson researchers have experimented with Akida, as evidenced by that December 2023 paper on the AI-enabled ZeroEnergy-IoT prototype. But we don’t have any proof to date that this development and testing of a prototype (that our company may not even have been aware of before the paper was published) led to more than just this publication. We can hope so, but we simply don’t know.

What I did say was this: While it is possible that they are hiding behind an NDA, we do not know this for a fact. We do know for a fact, though, that they are currently collaborating with Intel in various areas, including neuromorphic technology.

That’s why my suggestion was to hold off putting Ericsson on the list of companies BrainChip is verifiably engaged with, until such engagement is publicly disclosed by at least one of the parties.

Regards
Frangipani
 
  • Like
  • Fire
  • Thinking
Reactions: 19 users

Slade

Top 20
  • Like
  • Love
  • Fire
Reactions: 34 users
Hi Fullmoonfever,

the radio receiver algorithm prototype Ericsson demoed at MWC 2024 was unfortunately not implemented on Akida.


See my post dated April 18:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-419427


Or in short here:



“Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications artificial intelligence (AI) models to optimize telecom architecture. Ericsson currently uses AI-based network performance diagnostics to analyze communications service providers’ radio access networks (RANs) to resolve network issues efficiently and provide specific parameter change recommendations. At Mobile World Congress (MWC) Barcelona 2024, Ericsson Research demoed a radio receiver algorithm prototype targeted for Intel’s Loihi 2 neuromorphic research AI accelerator, demonstrating a significant reduction in computational cost to improve signals across the RAN
(…)
Ericsson Research’s working prototype of a radio receiver algorithm was implemented in Lava for Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded.”

Have a good weekend!

Frangipani
Cheers.

Wasn't aware of that, and given our existing prototype with them, it seemed a reasonable assumption that it may have been us.

They also demoed a few other prototypes at MWC.

Was trying to see where channel estimation fits within the RAN, as the Intel post speaks of noise and reflections (while still relating to the general radio receiver), whereas the Ericsson blog is more specific to channel estimation and makes no mention of noise and reflections.

Intel.

"...Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded."

Ericsson

"...The live demo showed how neural activity in the channel estimation computations varied with changes in the radio channel and how energy consumption could be reduced when less or no computations were ongoing."

Is it all one and the same... channels and environmental effects, or different components of the RAN process? I don't know, as it's a bit above my head at this point.

Will need to see how it all fits together.
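
FWIW, the two descriptions above are very likely the same receiver function seen from two angles: the "reflections and noise" are the radio channel plus noise, "channel estimation" is working out what that channel did to the signal (usually from known pilot symbols), and the receiver then reverses it via equalization. A minimal textbook-style sketch (generic OFDM-flavoured DSP, nothing Ericsson- or Intel-specific; the multipath taps and pilot spacing are made up):

```python
# Toy illustration of where channel estimation sits in a radio receiver
# (generic textbook DSP, not Ericsson's or Intel's algorithm).
# "Reflections and noise" = channel taps h and noise below;
# "channel estimation"    = estimating H from known pilots;
# "reversing the effects" = equalizing with that estimate.
import numpy as np

rng = np.random.default_rng(1)
n_sub = 64                                    # subcarriers
h = np.array([1.0, 0.5, 0.2])                 # made-up multipath taps (reflections)
H = np.fft.fft(h, n_sub)                      # channel frequency response

bits = rng.integers(0, 2, (2, n_sub)) * 2 - 1
x = (bits[0] + 1j * bits[1]) / np.sqrt(2)     # transmitted QPSK symbols
pilots = np.arange(0, n_sub, 8)               # every 8th subcarrier is a known pilot

noise = 0.05 * (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))
y = H * x + noise                             # what the receiver actually sees

# Channel estimation: least squares at the pilots, interpolated in between.
H_pilot = y[pilots] / x[pilots]
k = np.arange(n_sub)
H_est = np.interp(k, pilots, H_pilot.real) + 1j * np.interp(k, pilots, H_pilot.imag)

x_hat = y / H_est                             # equalization: undo the channel
print("mean symbol error after equalization:", np.mean(np.abs(x_hat - x)))
```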
 
  • Like
  • Fire
Reactions: 7 users

Frangipani

Regular
Cheers.

Wasn't aware of that, and given our existing prototype with them, it seemed a reasonable assumption that it may have been us.

They also demoed a few other prototypes at MWC.

Was trying to see where channel estimation fits within the RAN, as the Intel post speaks of noise and reflections (while still relating to the general radio receiver), whereas the Ericsson blog is more specific to channel estimation and makes no mention of noise and reflections.

Intel.

"...Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded."

Ericsson

"...The live demo showed how neural activity in the channel estimation computations varied with changes in the radio channel and how energy consumption could be reduced when less or no computations were ongoing."

Is it all one and the same... channels and environmental effects, or different components of the RAN process? I don't know, as it's a bit above my head at this point.

Will need to see how it all fits together.

I know what you mean. I had also wondered about the different terminology at first, but if you listen to what Ericsson's Strategic Research Communications Director Johan Lundsjö says from 29:28 min in the "Tour the demo floor" video, there seems to have been only one demo involving a neuromorphic radio receiver at MWC 2024 (although he doesn't mention Intel):



The solar-powered AI-enabled ZeroEnergy IoT prototype appears to be a completely different thing, as it also involves a low-power camera:

3C3A898C-7782-4036-8A7C-7D2B41A8BCD2.jpeg


Could it be that they demonstrated their prototype on a Loihi 2 base, but are also working on or doing further research with Akida besides?

... maybe they swapped Loihi 2 for Akida 2.


"About Hala Point: Loihi 2 neuromorphic processors, which form the basis for Hala Point, apply brain-inspired computing principles, such as asynchronous, event-based spiking neural networks (SNNs), integrated memory and computing, and sparse and continuously changing connections to achieve orders-of-magnitude gains in energy consumption and performance. Neurons communicate directly with one another rather than communicating through memory, reducing overall power consumption.


Hala Point packages 1,152 Loihi 2 processors produced on Intel 4 process node in a six-rack-unit data center chassis the size of a microwave oven. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming a maximum of 2,600 watts of power. It also includes over 2,300 embedded x86 processors for ancillary computations.


Hala Point integrates processing, memory, and communication channels in a massively parallelized fabric, providing a total of 16 petabytes per second (PB/s) of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 terabytes per second (TB/s) of inter-chip communication bandwidth. The system can process over 380 trillion 8-bit synapses and over 240 trillion neuron operations per second.


Applied to bio-inspired spiking neural network models, the system can execute its full capacity of 1.15 billion neurons 20 times faster than a human brain and up to 200 times faster rates at lower capacity. While Hala Point is not intended for neuroscience modeling, its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.


Loihi-based systems can perform AI inference and solve optimization problems using 100 times less energy at speeds as much as 50 times faster than conventional CPU and GPU architectures1. By exploiting up to 10:1 sparse connectivity and event-driven activity, early results on Hala Point show the system can achieve deep neural network efficiencies as high as 15 TOPS/W2 without requiring input data to be collected into batches, a common optimization for GPUs that significantly delays the processing of data arriving in real-time, such as video from cameras. While still in research, future neuromorphic LLMs capable of continuous learning could result in gigawatt-hours of energy savings by eliminating the need for periodic re-training with ever-growing datasets."



"Intel’s neuromorphic systems, such as Pohoiki Springs, are still in the research phase and are not intended to replace conventional computing systems. Instead, they provide a tool for researchers to develop and characterize new neuro-inspired algorithms for real-time processing, problem solving, adaptation and learning."

Hi cassip,

as I said earlier, it is not out of the question that Ericsson staff currently do research on both Loihi and Akida, but the Intel website article I shared in my reply to Fullmoonfever is dated April 17 and was co-authored by an Intel and an Ericsson researcher:

Philipp Stratmann is a research scientist at Intel Labs, where he explores new neural network architectures for Loihi, Intel’s neuromorphic research AI accelerator. Co-author Péter Hága is a master researcher at Ericsson Research, where he leads research activities focusing on the applicability of neuromorphic and AI technologies to telecommunication tasks.

In the aforementioned Ericsson video, Johan Lundsjö mentions that they see the first commercial deployments of 6G by 2030 - sounds to me as if they are not pressed for time regarding getting their hands on a commercially available chip.



As for you sharing the link about Hala Point - that is a single large-scale neuromorphic system consisting of 1152 Loihi 2 chips built for Sandia National Laboratories (one of three research labs of the U.S. Department of Energy’s National Nuclear Security Administration) and currently restricted to their own researchers:


Here is also a new article by Sally Ward-Foxton on it:


Pohoiki Springs is Hala Point’s predecessor, introduced in 2020.


Have a nice weekend
Frangipani
 
  • Like
  • Fire
  • Love
Reactions: 12 users

TECH

Regular
Haven't had the time to watch it yet, but here is a video of that talk, or rather of a talk with the same name, that was uploaded to YouTube just over a week prior to the Design&Reuse IP-SoC Silicon Valley 2024 Day event:





View attachment 62100


Thanks for posting, it was excellent. As this is the first time that I have heard Steve talk/present, I thought he was very clear and precise in his delivery, well done! HOW ABOUT THE 15,800 TIMES BETTER RESULT THAN GPT-2 BY COMPARISON!!!
 
  • Like
  • Fire
Reactions: 16 users

Justchilln

Regular
IMG_7588.png
 
  • Like
  • Fire
Reactions: 27 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 7 users
Could it be that they demonstrated their prototype on a Loihi 2 base, but are also working on or doing further research with Akida besides?

... maybe they swapped Loihi 2 for Akida 2.


"About Hala Point: Loihi 2 neuromorphic processors, which form the basis for Hala Point, apply brain-inspired computing principles, such as asynchronous, event-based spiking neural networks (SNNs), integrated memory and computing, and sparse and continuously changing connections to achieve orders-of-magnitude gains in energy consumption and performance. Neurons communicate directly with one another rather than communicating through memory, reducing overall power consumption.


Hala Point packages 1,152 Loihi 2 processors produced on Intel 4 process node in a six-rack-unit data center chassis the size of a microwave oven. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming a maximum of 2,600 watts of power. It also includes over 2,300 embedded x86 processors for ancillary computations.


Hala Point integrates processing, memory, and communication channels in a massively parallelized fabric, providing a total of 16 petabytes per second (PB/s) of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 terabytes per second (TB/s) of inter-chip communication bandwidth. The system can process over 380 trillion 8-bit synapses and over 240 trillion neuron operations per second.


Applied to bio-inspired spiking neural network models, the system can execute its full capacity of 1.15 billion neurons 20 times faster than a human brain and up to 200 times faster rates at lower capacity. While Hala Point is not intended for neuroscience modeling, its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.


Loihi-based systems can perform AI inference and solve optimization problems using 100 times less energy at speeds as much as 50 times faster than conventional CPU and GPU architectures1. By exploiting up to 10:1 sparse connectivity and event-driven activity, early results on Hala Point show the system can achieve deep neural network efficiencies as high as 15 TOPS/W2 without requiring input data to be collected into batches, a common optimization for GPUs that significantly delays the processing of data arriving in real-time, such as video from cameras. While still in research, future neuromorphic LLMs capable of continuous learning could result in gigawatt-hours of energy savings by eliminating the need for periodic re-training with ever-growing datasets."



"Intel’s neuromorphic systems, such as Pohoiki Springs, are still in the research phase and are not intended to replace conventional computing systems. Instead, they provide a tool for researchers to develop and characterize new neuro-inspired algorithms for real-time processing, problem solving, adaptation and learning."
 
  • Fire
  • Like
Reactions: 2 users

JB49

Regular
Given the previously published paper stating the use of Akida, and the suggestions that BRN have been working with a communications company, the following Ericsson blog from the end of April about their MWC24 booth imo appears to show some additional prototype work, different from what was in my original Christmas eve post (link below) on:

Towards 6G Zero-Energy Internet of Things:
Standards, Trends, and Recent Results
  • December 2023

https://www.researchgate.net/public...of_Things_Standards_Trends_and_Recent_Results




6G straight from the Ericsson labs

There were many questions for our experts who worked on the 6G demos in the “6G – straight from our labs” area at the Mobile World Congress (MWC) 2024. Among the most commonly-asked questions were: "Is it not too early to talk 6G?”, “What is new for 6G?”, “How does 6G relate to 5G?” and all sorts of questions on spectrum – new, existing, co-existing, reuse, coverage and more. So, let’s take the questions one-by-one… This blog will show how we take steps towards addressing some of these questions by describing what was on display at MWC 2024.

APR 30, 2024 | 7 min.

Marie Hogan
6G Portfolio Strategy, Business Area Networks

Johan Lundsjö
Strategic Research Communication Director

What is new for 6G?

6G will build on 5G Standalone and 5G-Advanced, evolving from today’s network towards the needs of 2030 and beyond. In other words, 6G will be a mix of both new and evolved concepts and use cases.

A selection of new concepts was on display straight from our 6G labs on the MWC floor.

Ultra-Low Power AI

This concept addresses two major areas of interest - AI and Energy Efficiency - topics that might seem mutually exclusive! There is rapid growth both in interest and usage of AI in mobile network operations, solutions and applications. There’s a risk that the massive amounts of data processing demanded in many AI scenarios lead to high power consumption in parallel. With increased usage of AI expected in the networks, it is important to have solutions to address this.

This live prototype for ultra-low power AI used a novel neuromorphic-AI-based approach for radio channel estimation and showcased the feasibility of low-compute and low-energy AI using AI-based radio receiver use-cases. The trick here is that in a neuromorphic neural network (like our human brain) only the neurons detecting a change are active, whereas no computations are needed for neurons in remember state. The fraction of inactive neurons translates directly to an energy efficiency gain as compared to a traditional deep neural network where computations are always needed for all neurons. The live demo showed how neural activity in the channel estimation computations varied with changes in the radio channel and how energy consumption could be reduced when less or no computations were ongoing. Radio channel estimation is only one of many areas where this exciting AI technology can be used. Keep an eye out for upcoming blogs on this topic from Ericsson soon!


View attachment 62102


View attachment 62103

Prev paper:

View attachment 62104
View attachment 62105
Ericsson switched to testing out Loihi 2 recently.
 
Last edited:
  • Like
Reactions: 1 users
A really good but long read about what tech companies are doing, and the toll it's taking on employees, in order to stay ahead of the game in the AI space:


"AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its "iPhone moment."

"Engineers and those with other roles in the field said an increasingly large part of their job was focused on satisfying investors and not falling behind the competition rather than solving actual problems for users. Some said they were switched over to AI teams to help support fast-paced rollouts without having adequate time to train or learn about AI, even if they are new to the technology."

"A common feeling they described is burnout from immense pressure, long hours and mandates that are constantly changing. Many said their employers are looking past surveillance concerns, AI's effect on the climate and other potential harms, all in the name of speed. Some said they or their colleagues were looking for other jobs or switching out of AI departments, due to an untenable pace."

"Microsoft Chief Financial Officer Amy Hood, on an earnings call earlier this year, said the software company is "repivoting our workforce toward the AI-first work we’re doing without adding material number of people to the workforce," and said Microsoft will continue to prioritize investing in AI as "the thing that’s going to shape the next decade."

"He also said AI accuracy, and testing in general, has taken a backseat to prioritize speed of product rollouts despite "motivational speeches" from managers about how their work will "revolutionize the industry."

"The biggest piece that’s missing is lacking the ability to work with domain experts on projects, and the ability to even evaluate them as stringently as they should be evaluated before release," Odubela said, regarding the current ethos in AI."
 
  • Like
  • Love
  • Fire
Reactions: 16 users
I want BRN in a mobile phone. Any ideas where I can get one?
 
  • Like
Reactions: 6 users