Must be the 1st time since I’ve been invested in BRN that the SP has fallen before a quarterly, thanks to LDA. Hopefully this drop in the SP continues, as my super should be ready some time next week.
Should we seriously believe Microsoft and OpenAI would talk specifically about BrainChip involvement in their $100 billion project? Could someone please, please, PLEEEEEEEAAAASE ask Steven Thorne or Rob Telson to get on the blower to Sam Altman ASAP?!
Didn't Sam Altman attend Intel's IFS Connect Forum in Feb earlier this year? If he did, then wouldn't he have seen our demo? And if he saw our demo, then he'd have to know that we're like the Betty Ford clinic for those suffering from crippling addictions to NVIDIA's expensive and energy-guzzling GPUs.
I'd call Sam myself, but I haven't got his phone number at present.
Thanks in advance to Steve or Rob! Do us proud lads!
View attachment 61091
Microsoft and OpenAI Will Spend $100 Billion to Wean Themselves Off Nvidia GPUs
The companies are working on an audacious data center for AI that's expected to be operational in 2028.
By Josh Norem April 18, 2024
Credit: Microsoft
Microsoft was the first big company to throw a few billion at ChatGPT-maker OpenAI. Now, a new report states the two companies are working together on a very ambitious AI project that will cost at least $100 billion. Both companies are currently huge Nvidia customers; Microsoft uses Nvidia hardware for its Azure cloud infrastructure, while OpenAI uses Nvidia GPUs for ChatGPT. However, the new data center will host an AI supercomputer codenamed "Stargate," which might not include any Nvidia hardware at all.
The news of the companies' plans to ditch Nvidia hardware comes from a variety of sources, as noted by Windows Central. The report details a five-phase plan developed by Microsoft and OpenAI to advance the two companies' AI ambitions through the end of the decade, with the fifth phase being the so-called Stargate AI supercomputer. This computer is expected to be operational by 2028 and will reportedly be outfitted with future versions of Microsoft's custom-made Arm Cobalt processors and Maia XPUs, all connected by Ethernet.
Microsoft is reportedly planning on using its custom-built Cobalt and Maia silicon to power its future AI ambitions. Credit: Microsoft
This future data center, which will house Stargate, will allow both companies to pursue their AI ambitions far into the future; reports say it will cost around $115 billion. That level of investment shows both companies have no plans to move their respective feet off the AI gas pedal any time soon and that they expect this market to continue to expand far into the future. TechRadar also notes that the amount required to get this supercomputer running is more than triple what Microsoft spent on CapEx last year, so the company is tripling down on AI, it seems.
What's also notable is at least one source says the data center itself will be the computer, as opposed to just housing it. Multiple data centers may link together, like Voltron, to form the supercomputer. This futuristic machine will reportedly push the boundaries of AI capabilities. Given how fast things are advancing in this field, it's impossible to imagine what that will even mean four years from now.
This situation, where massive companies abandon Nvidia for custom-made AI accelerators, will likely become a significant issue for Nvidia soon. Long wait times for Nvidia GPUs and exorbitant pricing have resulted in many companies reportedly beginning to look elsewhere to satisfy their AI hardware needs, which is why Nvidia is already looking to capture this market. OpenAI CEO Sam Altman is reportedly looking to build a global infrastructure of fabs and power sources to make custom silicon, so its plans with Microsoft might be aligned along this front.
How’s our 3-year lead going?
Up the shit.
Mmm, that person photographed next to Mischa looks like a young Pia. I wonder if they are related.

Here is a follow-up of my post on Ericsson (see above).
When I recently searched for info on Ericsson’s interest in neuromorphic technology besides the Dec 2023 paper, in which six Ericsson researchers described how they had built a prototype of an AI-enabled ZeroEnergy-IoT device utilising Akida, I came across not only an Ericsson Senior Researcher for Future Computing Platforms who was very much engaged with Intel’s Loihi (he even gave a presentation at the Intel Neuromorphic Research Community’s Virtual Summer 2023 Workshop), but also an Intel podcast with Ericsson’s VP of Emerging Technologies, Mischa Dohler.
I also spotted the following LinkedIn post by a Greek lady, who had had a successful career at Ericsson spanning more than 23 years before taking the plunge into self-employment two years ago:
View attachment 61042
View attachment 61043
Since Maria Boura concluded her post by sharing that very Intel podcast with Mischa Dohler mentioned earlier, my gut feeling was that those Ericsson 6G researchers she had talked to at MWC (Mobile World Congress) 2024 in Barcelona at the end of February had most likely been collaborating with Intel, but a quick Google search didn’t come up with any results at the time I first saw that post of hers back in March.
Then, last night, while reading an article on Intel’s newly revealed Hala Point (https://www.eejournal.com/industry_...morphic-system-to-enable-more-sustainable-ai/), there it was - the undeniable evidence that those Ericsson researchers had indeed been utilising Loihi 2:
“Advancing on its predecessor, Pohoiki Springs, with numerous improvements, Hala Point now brings neuromorphic performance and efficiency gains to mainstream conventional deep learning models, notably those processing real-time workloads such as video, speech and wireless communications. For example, Ericsson Research is applying Loihi 2 to optimize telecom infrastructure efficiency, as highlighted at this year’s Mobile World Congress.”
The blue link connects to the following article on the Intel website, published yesterday:
Ericsson Research Demonstrates How Intel Labs’ Neuromorphic AI Accelerator Reduces Compute Costs
Philipp_Stratmann (Employee), 04-17-2024, community.intel.com
Philipp Stratmann is a research scientist at Intel Labs, where he explores new neural network architectures for Loihi, Intel’s neuromorphic research AI accelerator. Co-author Péter Hága is a master researcher at Ericsson Research, where he leads research activities focusing on the applicability of neuromorphic and AI technologies to telecommunication tasks.
Highlights
- Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications AI models to optimize telecom architecture.
- Ericsson Research developed a radio receiver prototype for Intel’s Loihi 2 neuromorphic AI accelerator based on neuromorphic spiking neural networks, which reduced the data communication by 75 to 99% for energy efficient radio access networks (RANs).
- As a member of Intel’s Neuromorphic Research Community, Ericsson Research is searching for new AI technologies that provide energy efficiency and low latency inference in telecom systems.
Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications artificial intelligence (AI) models to optimize telecom architecture. Ericsson currently uses AI-based network performance diagnostics to analyze communications service providers’ radio access networks (RANs) to resolve network issues efficiently and provide specific parameter change recommendations. At Mobile World Congress (MWC) Barcelona 2024, Ericsson Research demoed a radio receiver algorithm prototype targeted for Intel’s Loihi 2 neuromorphic research AI accelerator, demonstrating a significant reduction in computational cost to improve signals across the RAN.
In 2021, Ericsson Research joined the Intel Neuromorphic Research Community (INRC), a collaborative research effort that brings together academic, government, and industry partners to work with Intel to drive advances in real-world commercial usages of neuromorphic computing.
Ericsson Research is actively searching for new AI technologies that provide low latency inference and energy efficiency in telecom systems. Telecom networks face many challenges, including tight latency constraints driven by the need for data to travel quickly over the network, and energy constraints due to mobile system battery limitations. AI will play a central role in future networks by optimizing, controlling, and even replacing key components across the telecom architecture. AI could provide more efficient resource utilization and network management as well as higher capacity.
Neuromorphic computing draws insights from neuroscience to create chips that function more like the biological brain instead of conventional computers. It can deliver orders of magnitude improvements in energy efficiency, speed of computation, and adaptability across a range of applications, including real-time optimization, planning, and decision-making from edge to data center systems. Intel's Loihi 2 comes with Lava, an open-source software framework for developing neuro-inspired applications.
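To give a concrete feel for what developing in Lava looks like, here is a minimal sketch that composes and runs a tiny two-layer spiking network using the open-source lava-nc package (its generic LIF and Dense processes and the CPU simulation backend). It is only an illustration of the framework, not the Ericsson radio receiver prototype described below.

```python
# Minimal Lava sketch (assumes the open-source lava-nc package).
# Illustrative only -- this is NOT the Ericsson radio receiver prototype.
import numpy as np
from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

lif_in = LIF(shape=(64,))                      # 64 leaky integrate-and-fire neurons
dense = Dense(weights=np.random.rand(32, 64))  # 64 -> 32 weighted projection
lif_out = LIF(shape=(32,))

# Spikes flow from the input layer, through the weights, into the output layer.
lif_in.s_out.connect(dense.s_in)
dense.a_out.connect(lif_out.a_in)

# Run 100 timesteps on the CPU simulation backend (Loihi 2 hardware needs INRC access).
lif_out.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
lif_out.stop()
```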
Radio Receiver Algorithm Prototype
Ericsson Research’s working prototype of a radio receiver algorithm was implemented in Lava for Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded.
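As a rough illustration of the problem described above (a toy model, not the demo's actual signal chain), the channel can be thought of as convolving the transmitted symbols with a short multipath impulse response and adding noise; the receiver-side network then has to approximately invert that distortion.

```python
# Toy numpy model of multipath distortion plus noise -- the effect the
# receiver network must reverse. Illustrative only, not the demo's signal chain.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=1000)      # BPSK symbols from the base station
channel = np.array([0.9, 0.4, 0.15])              # short multipath impulse response
noise = 0.05 * rng.standard_normal(symbols.size + channel.size - 1)

received = np.convolve(symbols, channel) + noise  # what the mobile actually sees
# The equalizer (here, the spiking network on Loihi 2) must recover `symbols`
# from `received`, i.e. undo the reflections and reject the noise.
```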
During training, researchers rewarded the model based on accuracy and the amount of communication between neurons. As a result, the neural communication was reduced, or sparsified, by 75 to 99% depending on the difficulty of the radio environment and the amount of work needed by the AI to correct the environmental effects on the signal.
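One plausible way to express that training objective, sketched here under the assumption of a PyTorch-style training loop (this is not Ericsson's actual code), is a task loss plus a penalty on how much the neurons communicate, so the optimiser is rewarded for accuracy and for silence at the same time.

```python
# Hedged sketch of a sparsity-aware objective: task accuracy plus a penalty on
# inter-neuron spike traffic. Illustrative only -- not Ericsson's training code.
import torch
import torch.nn.functional as F

def sparsity_aware_loss(logits, targets, spike_counts, lam=1e-3):
    """Cross-entropy task loss plus a penalty proportional to spike traffic."""
    task_loss = F.cross_entropy(logits, targets)
    comm_penalty = spike_counts.float().mean()  # average spikes per neuron per timestep
    return task_loss + lam * comm_penalty
```

Raising `lam` pushes the network toward sparser communication (at some cost in accuracy), which is one way the 75 to 99% reduction quoted above could be traded off per radio environment.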
Loihi 2 is built to leverage such sparse messaging and computation. With its asynchronous spike-based communication, neurons do not need to compute or communicate information when there is no change. Furthermore, Loihi 2 can compute with substantially less power due to its tight compute-memory integration. This reduces the energy and latency involved in moving data between the compute unit and the memory.
Like the human brain’s biological neural circuits that can intelligently process, respond to, and learn from real-world data at microwatt power levels and millisecond response times, neuromorphic computing can unlock orders of magnitude gains in efficiency and performance.
Neuromorphic computing AI solutions could address the computational power needed for future intelligent telecom networks. Complex telecom computation results must be produced in tight deadlines down to the millisecond range. Instead of using GPUs that draw substantial amounts of power, neuromorphic computing can provide faster processing and improved energy efficiency.
Emerging Technologies and Telecommunications
Learn more about emerging technologies and telecommunications in this episode of InTechnology. Host Camille Morhardt interviews Mischa Dohler, VP of Emerging Technologies at Ericsson, about neuromorphic computing, quantum computing, and more.
While Ericsson’s deep engagement with Intel, even in the area of neuromorphic research, doesn’t preclude them from also knocking on BrainChip’s door, this new reveal reaffirms my hesitation about adding Ericsson to our list of companies above the waterline, given the lack of any official acknowledgment by either party to date.
So to sum it up: Ericsson collaborating with Intel in neuromorphic research is a verifiable fact, while an NDA with BrainChip is merely speculation so far.
All this secrecy is running a little thin, especially when everyone else is discussing what everyone else is doing with whoever.
BrainChip selected by U.S. Air Force Research Laboratory to develop AI-based radar
2024-04-19 17:52
BrainChip, the world's first commercial producer of neuromorphic artificial intelligence chips and IP, today announced that Information Systems Laboratories (ISL) is developing an AI-based radar research solution for the U.S. Air Force Research Laboratory (AFRL) based on its Akida™ Neural Network Processor.
ISL is a specialist in expert research and complex analysis, software and systems engineering, advanced hardware design and development, and high-quality manufacturing for a variety of clients worldwide.
Military and Space are always leaders of new technologies
Hi BrainShit,
while it looks like hot-off-the-press news due to today’s publication date on the website, this press release is actually more than two years old:
Information Systems Labs Joins BrainChip Early Access Program
BrainChip announced that Information Systems Labs is developing an AI-based radar research solution for the AFRL based on Akida™ (brainchip.com)
Regards
Frangipani
Sorry Frangi, but I have to say that has to be a record shortest post from you, doesn't it?!
View attachment 61116
Although, it would be grossly negligent for the Ericsson team to test only the Intel chip (which, according to BRN, is still in research) when there are others available.
It means Apple have their own inferior in-house NN.
They have also purchased Xnor, which has tech that offers a trade-off between efficiency and accuracy:
US10691975B2 Lookup-based convolutional neural network 20170717
[0019] … Lookup-based convolutional neural network (LCNN) structures are described that encode convolutions by few lookups to a dictionary that is trained to cover the space of weights in convolutional neural networks. For example, training an LCNN may include jointly learning a dictionary and a small set of linear combinations. The size of the dictionary may naturally trace a spectrum of trade-offs between efficiency and accuracy.
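To make the patent language concrete, here is a hedged PyTorch sketch of the general lookup-based idea: each output filter is assembled from a few entries of a shared dictionary, so only the dictionary and the small combination coefficients need to be stored. This illustrates the concept only; it is not Apple's or Xnor's implementation.

```python
# Hedged sketch of a lookup-based convolution (the general idea behind
# US10691975B2): filters are few-lookup linear combinations of a shared
# dictionary. Illustrative only -- not Apple's/Xnor's actual implementation.
import torch
import torch.nn.functional as F

class LookupConv2d(torch.nn.Module):
    def __init__(self, in_ch, out_ch, k=3, dict_size=16, combo=4):
        super().__init__()
        # Shared dictionary of candidate filters.
        self.dictionary = torch.nn.Parameter(torch.randn(dict_size, in_ch, k, k))
        # Each output channel looks up `combo` dictionary entries...
        self.register_buffer("indices", torch.randint(dict_size, (out_ch, combo)))
        # ...and mixes them with a small set of learned coefficients.
        self.coeffs = torch.nn.Parameter(torch.randn(out_ch, combo))

    def forward(self, x):
        # Rebuild the effective (out_ch, in_ch, k, k) filters from the lookups.
        w = (self.dictionary[self.indices] * self.coeffs[..., None, None, None]).sum(dim=1)
        return F.conv2d(x, w, padding=1)

# e.g. LookupConv2d(3, 8)(torch.randn(1, 3, 32, 32)) -> tensor of shape (1, 8, 32, 32)
```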
They have also developed an LLM compression technique which can be applied to edge applications:
US11651192B2 Compressed convolutional neural network models 20190212 Rastegari nee Xnor:
[0019] As described in further detail below, the subject technology includes systems and processes for building a compressed CNN model suitable for deployment on different types of computing platforms having different processing, power and memory capabilities.
Thus Apple have in-house technology which, on the face of it, is capable of implementing GenAI at the edge. That presents a substantial obstacle to Akida getting a look in.
I wonder how long ISL had been playing with Akida before they got the Air Force radar SBIR?

Evening BrainShit,
1. If one could find a more salubrious forum name, that would be great.
But yes, truly sense & feel one's torment.
2. Not 100% certain, but pretty sure this is a rehashed announcement from some time ago, with the present-day date attached, purely to confuse.
Regards,
Esq.
Assuming Apple make the decision to pursue their own way and manage to cobble something together which is workable (which is highly likely, given their resources), what are the chances of them offering this tech to their competitors to also use?...
The problems Apple had with their flagship iPhone 15 overheating last year go to show they are maybe not as "hot shit" as they think they are...
Maybe they are literally...
Happy to change my opinion, if they come to their senses!
Diogenes, just a query concerning the original patent that expires in 2028.

If their potential customers have done their DD, they will be aware of Akida.
Using MACs makes the Apple system much less efficient and slower than Akida.
That would make it a different equation for the potential customers in not having the sunk costs of developing a second-rate in-house system.
As you point out, using the iPhone as a handwarmer did use up the battery quickly.