BRN Discussion Ongoing


 
Reactions: Like, Fire (17 users)

IloveLamp

Top 20
Reactions: Like, Fire (9 users)

jtardif999

Regular
AI threat keeps me awake at night, says Arm boss


Rene Haas believes the rapidly developing technology ‘will change everything we do’ within a decade


December 11 2023, The Times


The head of one of Britain’s most important technology companies has spoken of his fears that humans could lose control of artificial intelligence.


Rene Haas, chief executive of Arm Holdings, the Cambridge-based microchip designer, said the threat kept him up at night. “The thing I worry about most is humans losing capability [over the machines],” he told Bloomberg. “You need some override, some backdoor, some way that the system can be shut down.”


Arm creates the blueprint for energy-efficient microchips and licences these designs to companies such as Apple, Nvidia and Qualcomm. Its processors run virtually every smartphone on the planet, as well as other devices such as digital TVs and drones.


Haas estimated that 70 per cent of the world’s population have come into contact with Arm-designed products in some way. He said AI would be transformational for the company, which is trying to lessen its reliance on the smartphone sector.


“I think it will find its way into everything that we do, and every aspect of how we work, live, play,” he said. “It’s going to change everything over the next five to ten years.”


The company, which was valued at $54.5 billion at its New York stock market listing in September, employs about 6,400 people globally, 3,500 of them in the UK. The shares have since risen from $51 to $67.23.


Arm’s owner, the Japanese tech conglomerate SoftBank, chose the Nasdaq exchange even though the company was listed in London until 2016. The decision was regarded as a blow to the British technology scene, although Arm emphasised its commitment to the UK.


Haas said that access to talent, particularly in the UK, was another concern. “We were born here, we intend to stay here,” he added. “Please make it very easy for us to attract world-class talent and attract engineers to come and work for Arm.”
 
Reactions: Like, Fire, Love (17 users)

TheDrooben

Pretty Pretty Pretty Pretty Good
Here is the interview, less than 8 mins long



Happy as Larry
 
Reactions: Like, Love, Fire (16 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!


41 mins

12 Dec 2023 Bloomberg Talks
Arm CEO Rene Haas speaks exclusively with Bloomberg's Tom Mackenzie at the company's global headquarters in Cambridge, UK. The pair spoke about how Arm will prove essential to the generative AI revolution, and how the company is trying to lessen its dependence on the slowing smartphone industry by getting its technology into new areas such as personal computers, servers and electric vehicles. Haas also spoke about Arm's business in China and the challenges of attracting talent in the UK.
 
Reactions: Like, Fire, Love (11 users)

Deena

Regular

Wow. This video is very encouraging. Well worth a watch and listen. Good one Drooben. Power efficiency is so important ... and Brainchip delivers.
Deena
 
Reactions: Like, Love, Fire (17 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!

Neuromorphic roadmap: are brain-like processors the future of computing?​

Neuromorphic chips could reduce energy bills for AI developers as well as emit useful cybersecurity signals.
11 December 2023
Picturing the future of computing.


Rethinking chip design: brain-inspired asynchronous neuromorphic devices are gaining momentum as researchers report on progress.

• The future of computing might not look anything like computing as we know it.
• Neuromorphic chips would function much more like brains than the chips we have today.
• Neuromorphic chips and AI could be a combination that takes us much further – without the energy bills.

A flurry of new chips announced recently by Qualcomm, NVIDIA, and AMD has ramped up competition to build the ultimate PC processor. And while the next couple of years are shaping up to be good ones for consumers of laptops and other PC products, the future of computing could end up looking quite different to what we know right now.
Despite all of the advances in chipmaking, which have shrunk feature sizes and packed billions of transistors onto modern devices, the computing architecture remains a familiar one. General-purpose, all-electronic, digital PCs based on binary logic are, at their heart, so-called Von Neumann machines.

Von Neumann machines versus neuromorphic chips​

At its most basic, a Von Neumann computing machine features a memory store to hold instructions and data; control and logic units; plus input and output devices.
Demonstrated more than half a century ago, the architecture has stood the test of time. However, bottlenecks have emerged – provoked by growing application sizes and exponential amounts of data.

Processing units need to fetch their instructions and data from memory. And while on-chip caches help reduce latency, there’s a disparity between how fast the CPU can run and the rate at which information can be supplied.
What’s more, having to bus data and instructions between the memory and the processor not only affects chip performance, it drains energy too.
Chip designers have loaded up processors with multiple cores, clustered CPUs, and engineered other workarounds to squeeze as much performance as they can from Von Neumann machines. But this complexity adds cost and requires cooling.
It’s often said that the best solutions are the simplest, and today’s chips based on Von Neumann principles are starting to look mighty complicated. There are resource constraints too, made worse by the boom in generative AI, and these could steer the future of computing away from its Von Neumann origins.
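To get a feel for the scale of that energy penalty, here is a back-of-envelope Python sketch comparing the energy spent on arithmetic with the energy spent fetching weights from off-chip memory for one large matrix-vector multiply. The per-operation figures (roughly 1 pJ per multiply-accumulate, roughly 100 pJ per DRAM byte) are assumed ballpark values for illustration, not measurements of any particular chip.

```python
# Back-of-envelope sketch of the Von Neumann "energy tax": for a large
# matrix-vector multiply, compare the energy spent on arithmetic with the
# energy spent just moving weights in from off-chip DRAM. The per-operation
# energies below are rough assumed ballpark figures, not measured values.

PJ_PER_MAC = 1.0          # ~1 pJ per multiply-accumulate (assumed)
PJ_PER_DRAM_BYTE = 100.0  # ~100 pJ per byte fetched from DRAM (assumed)

def energy_breakdown(rows: int, cols: int, bytes_per_weight: int = 2) -> None:
    macs = rows * cols                      # one MAC per weight
    weight_bytes = macs * bytes_per_weight  # every weight crosses the bus once
    e_compute = macs * PJ_PER_MAC
    e_memory = weight_bytes * PJ_PER_DRAM_BYTE
    print(f"compute: {e_compute / 1e6:.1f} uJ, "
          f"memory: {e_memory / 1e6:.1f} uJ, "
          f"memory/compute ratio: {e_memory / e_compute:.0f}x")

# A single 4096x4096 layer with 16-bit weights:
energy_breakdown(4096, 4096)   # memory traffic dominates by ~200x
```

Under these assumptions the data movement, not the arithmetic, dominates by two orders of magnitude, which is exactly the bottleneck the workarounds above try to paper over.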

Neuromorphic chips and AI – a dream combination?​

Large language models (LLMs) have wowed the business world and enterprise software developers are racing to integrate LLMs developed by OpenAI, Google, Meta, and other big names into their products. And competition for computing resources is fierce.
OpenAI had to pause new subscriptions to its paid-for ChatGPT service as it couldn’t keep up with demand. Google, for the first time, is reportedly spending more on compute than it is on people – as access to high-performance chips becomes imperative to revenue growth.


Writing in a Roadmap for Unconventional Computing with Nanotechnology (available on arXiv and submitted to Nano Futures), experts highlight the fact that the computational need for artificial intelligence is growing at a rate 50 times faster than Moore’s law for electronics.
LLMs feature billions of parameters – essentially a very long list of decimal numbers – which have to be encoded in binary so that processors can interpret whether artificial neurons fire or not in response to their software inputs.
So-called ‘neural engines’ can help accelerate AI performance by hard-coding common instructions, but running LLMs on conventional computing architecture is resource-intensive.
Researchers estimate that data processing and transmission worldwide could be responsible for anywhere between 5 and 15% of global energy consumption. And this forecast was made before ChatGPT existed.
But what if developers could switch from modeling artificial neurons in software to building them directly in hardware instead? Our brains can perform all kinds of supercomputing magic using a few watts of power (orders of magnitude less than computers) and that’s thanks to physical neural networks and their synaptic connections.
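For a concrete picture of the kind of unit such hardware implements, here is a minimal leaky integrate-and-fire neuron in Python: the membrane potential integrates input current, leaks toward rest, and emits a binary spike when it crosses a threshold. All constants are illustrative; this does not model any specific neuromorphic device.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, accumulates input current, and emits a binary spike when it
# crosses threshold. Constants are illustrative, not taken from any chip.

def lif_run(input_current, tau=20.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
    v = v_rest
    spikes = []
    for i in input_current:
        v += dt * (-(v - v_rest) / tau + i)  # leak + integrate
        if v >= v_thresh:                    # fire...
            spikes.append(1)
            v = v_rest                       # ...and reset
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
drive = rng.uniform(0.0, 0.15, size=100)  # noisy input current
print("spike train:", "".join(map(str, lif_run(drive))))
```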


Rather than having to pay an energy penalty for shuffling computing instructions and data into a different location, calculations can be performed directly in memory. And developers are busy working on a variety of neuromorphic (brain-inspired) chip ideas to enable computing with small energy budgets, which brings a number of benefits.
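One common in-memory approach is the resistive crossbar, where weights are stored as conductances and a matrix-vector product falls out of Ohm's and Kirchhoff's laws as currents sum on the column wires. The toy numpy model below sketches that idea, including a crude stand-in for the device-to-device variation discussed later in the article; it is an illustration, not a model of any real array.

```python
import numpy as np

# Toy model of compute-in-memory on a resistive crossbar: weights live in the
# array as conductances G, input activations are applied as row voltages V,
# and each column wire physically sums currents (Ohm's law + Kirchhoff's
# current law), so the matrix-vector product I = G^T V happens in the memory
# itself. Purely illustrative; real devices add noise and limited precision.

rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 rows x 3 columns of conductances
V = rng.uniform(0.0, 0.5, size=4)        # input voltages on the 4 rows

ideal = G.T @ V                          # column currents, ideal crossbar

# Device-to-device variation (the yield problem discussed later in the
# article) modeled crudely as multiplicative noise on every conductance:
G_noisy = G * rng.normal(1.0, 0.05, size=G.shape)
measured = G_noisy.T @ V

print("ideal currents:   ", np.round(ideal, 3))
print("measured currents:", np.round(measured, 3))
```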

“It provides hardware security as well, which is very important for artificial intelligence,” comments Jean Anne Incorvia – who holds the Fellow of Advanced Micro Devices (AMD) Chair in Computer Engineering at The University of Texas at Austin, US – in the roadmap paper. “Because of the low power requirement, these architectures can be embedded in edge devices that have minimal contact with the cloud and are therefore somewhat insulated from cloud‐borne attacks.”

Neuromorphic chips emit cybersecurity signals​

What’s more, with neuromorphic computing devices consuming potentially tiny amounts of power, hardware attacks become much easier to detect due to the tell-tale increase in energy demand that would follow – something that would be noticeable through side-channel monitoring.
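A minimal sketch of that monitoring idea: watch the device's power trace and flag any reading that jumps well above a rolling baseline. The trace, window size, and threshold below are invented for illustration.

```python
from collections import deque

# Sketch of the side-channel idea above: if a neuromorphic device normally
# draws near-zero power, an attack that burns extra energy stands out
# against a rolling baseline. All numbers are made up for illustration.

def detect_anomalies(power_mw, window=5, factor=3.0):
    """Flag readings that exceed `factor` times the recent average draw."""
    recent = deque(maxlen=window)
    alerts = []
    for t, p in enumerate(power_mw):
        if recent and p > factor * (sum(recent) / len(recent)):
            alerts.append((t, p))
        recent.append(p)
    return alerts

trace = [0.2, 0.21, 0.19, 0.2, 0.22, 2.5, 0.2, 0.21]  # spike at t=5
print(detect_anomalies(trace))   # -> [(5, 2.5)]
```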
The future of computing could turn out to be one involving magnetic neural network crossbar arrays, redox memristors, 3D nanostructures, biomaterials and more, with designers of neuromorphic devices using brain functionality as a blueprint.
“Communication strength depends on the history of synapse activity, also known as plasticity,” writes Aida Todri‐Sanial – who leads the NanoComputing Research Lab at Eindhoven University of Technology (TU/e) in The Netherlands. “Short‐term plasticity facilitates computation, while long‐term plasticity is attributed to learning and memory.”
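A standard computational form of the timing-dependent plasticity Todri‐Sanial describes is spike-timing-dependent plasticity (STDP): a synapse strengthens when the presynaptic spike precedes the postsynaptic one and weakens otherwise, with an exponential dependence on the timing gap. A minimal sketch, with illustrative constants:

```python
import math

# Minimal spike-timing-dependent plasticity (STDP) rule, one standard way to
# model the synaptic plasticity described in the quote above. Constants are
# illustrative only.

A_PLUS, A_MINUS = 0.05, 0.055   # learning rates: potentiation / depression
TAU = 20.0                      # plasticity time window (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    dt = t_post - t_pre
    if dt >= 0:   # pre fired first -> strengthen (long-term potentiation)
        return A_PLUS * math.exp(-dt / TAU)
    else:         # post fired first -> weaken (long-term depression)
        return -A_MINUS * math.exp(dt / TAU)

for gap in (2.0, 10.0, -2.0, -10.0):
    print(f"t_post - t_pre = {gap:+5.1f} ms -> dw = {stdp_dw(0.0, gap):+.4f}")
```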


Neuromorphic computing is said to be much more forgiving of switching errors compared with Boolean logic. However, one issue holding back progress is the poor tolerance of device-to-device variations. Conventional chip makers have taken years to optimize their fabrication processes, so the future of computing may not happen overnight.
However, different ways of doing things may help side-step some hurdles. For example, researchers raise the prospect of being able to set model weights using an input waveform rather than having to read through billions of individual parameters.
Also, the more we learn about how the brain functions, the more designers of future computing devices can mimic those features in their architectures.

Giving a new meaning to sleep mode​

“During awake activity, sensory signals are processed through subcortical layers in the cortex and the refined outputs reach the hippocampus,” explain Jennifer Hasler and her collaborators, reflecting on what’s known about how the brain works. “During the sleep cycle, these memory events are replayed to the neocortex where sensory signals cannot disrupt the playback.”
Today, closing your laptop – putting the device to sleep – is mostly about power-saving. But perhaps the future of computing will see chips that utilize sleep more like the brain. With sensory signals blocked from disrupting memory events, sleeping provides a chance to strengthen synapses, encode new concepts, and expand learning mechanisms.
And if these ideas sound far-fetched, it’s worth checking out the computing capabilities of slime mold powered by just a few oat flakes. The future of computing doesn’t have to resemble a modern data center, and thinking differently could dramatically lower those energy bills.

 
Reactions: Like, Love, Fire (37 users)

7für7

Top 20


“why” neuromorphic chips “could” be the future? 😂 could? COULD???? someone didn’t hear the alarm clock I guess? Ok let’s try this “why washing machines could make your clothes cleaner” 😂 what the heck?
 
Reactions: Haha, Fire, Like (6 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!
Published: December 11, 2023 | Source: ResetEra | Author: Mark Campbell

Alleged PlayStation 5 Pro Specifications emerge – RDNA 3 graphics and an AI-focused NPU​


EXTRACT ONLY (article extracts attached as screenshots)

 
Reactions: Like, Wow, Love (23 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!
OpenAI could easily resolve this conflict-of-interest issue by purchasing neuromorphic chips from us instead of Rain. The loss of funding via Prosperity7 Ventures is likely to affect the timeline for the launch of Rain's neuromorphic chip. Meanwhile, we're already up and racing out the gate. I know what I'd do if I wanted to maintain the competitive edge!









EXTRACT ONLY

Why Is There a Conflict-of-Interest Debate?

Rain disclosed to investors that Sam Altman, reinstated as OpenAI CEO last month after a brief dismissal, personally invested over $1 million in the company. While Altman’s return to OpenAI mitigates potential conflicts, it’s noteworthy that during his tenure, OpenAI had committed to a $51 million investment in AI chips from Rain AI, a startup where Altman holds personal investments.


The ambiguity lies in whether Altman influenced OpenAI’s decision to engage with Rain AI for chip purchases.

Nevertheless, the scenario raises concerns about potential conflicts of interest, as the CEO of one company (OpenAI) could indirectly benefit financially from another (Rain AI) in which he has personal investments.

Altman Looks At the Middle East​

Earlier this year, Rain showcased its advancements to potential investors, projecting a milestone achievement this month with the potential ‘taping out’ of a test chip – a crucial step indicating readiness for fabrication. However, recent leadership reshuffles and investor changes occurred following a mandate from a U.S. government body overseeing national security risks. The body reportedly required Prosperity7 Ventures, a Saudi Arabia–affiliated fund that led a $25 million fundraising campaign for Rain in 2022, to divest its stake in the company.

This mandated removal of the fund, reported by Bloomberg, adds to Rain’s challenges in bringing its innovative chip technology to market. The forced change may potentially impact the timeline for fulfilling OpenAI’s $51 million advance order. Grep VC, based in Silicon Valley, acquired the shares.
 
Reactions: Like, Fire, Wow (12 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!
Wearables, and we're in with Renesas. These rings began selling on 1 September 2023. What does everyone think? 🤞





Ring From Muse, Renesas Lets Your Fingers Do the…Paying​

OPINION​

By Chandana Pairla, 12.11.2023


Health trackers, or “wearables,” have taken the fitness industry by storm over the last decade. The global wearables market in 2022 was valued at $138 billion, according to data from Precedence Research, and the market’s expected to approach $500 billion by 2032. Now, an Indian start-up called Muse Wearables is releasing a ring that’s far more than a fashion statement or even a health-monitoring ring: To be sure, Ring One’s a clinical-grade tracker that monitors heart rate and variability, blood pressure, respiratory rates, body temperature, and blood-oxygen levels. But, on top of all of those functions, a simple switch of operating mode turns Ring One into an NFC-enabled, electronic-payment device.
Prathyusha Kamarajugadda, KLN Sai Prasanth and Yathindra Ajay KA founded Muse Wearables in 2017, which was incubated at the prestigious Indian Institute of Technology Madras Incubation Cell program in Chennai, India. Two years later, Muse Wearables released its first product–the Muse Hybrid Smart Watch—with Bluetooth and activity tracking.


Chandana Pairla (Source: Renesas)
Things intensified during the Covid-19 outbreak in 2020 when Muse added blood-oxygen–monitoring tech. That was a game changer for players in the healthcare industry in India, which scooped up the watches to provide clinical-grade data that indicates precisely when a patient’s blood oxygen level drops enough to require use of a respirator.
Widespread adoption of the Muse Wearables smart watch allowed the company to collect millions of anonymous data points and refine the accuracy of its health-tracker algorithms. The experience also revealed that watches aren’t necessarily the most practical for patient vital-sign monitoring because they tend to rotate around the wrist, especially when patients are asleep.

Engineers at Muse realized that a finger is a better place to get a good signal and more accurate data.

Muse is confident that the Ring One will change how people engage with their fitness trackers and incentivize them to expand their use of e-payment options.
Outside of a hospital setting, for example, a typical user experience might begin with a sleep-quality score each morning based on overnight vital sign analysis, including important REM sleep periods. During the day, the ring tracks all activity, such as step counts, and issues an alert if the user is stationary for too long. The ring also tracks maximal oxygen consumption, which is a measure of how much oxygen a person uses during an intense workout.
For e-payments, turn Ring One to the right, then a simple wave of the hand engages any point-of-sale terminal without the need for a wallet or phone.

Expanding the feature set

The Ring One is a marvel of integration. In addition to a temperature sensor, 3D accelerometer and a gyroscope, the company came upon the novel concept of incorporating a single NFC antenna that enables both e-payments and device charging. The addition of a unique user interface complements a Bluetooth feature that allows ring data to be relayed to a smartphone.
The ring has a very small form factor, so it took a lot of thought to design the user interface. The engineers call it a “turn wheel” interface because a user rotates the outer shell of the ring to activate different modes. For example, a user rotates the shell left to start a workout, and then turns it to the right to enable payment mode. The ring also comes with a charging case, which has a battery of its own that will last a month.
Muse Wearables’ Ring One. (Source: Muse Wearables)
Ring One is the first in the world to pioneer a new era by combining wireless charging and NFC payments using a single antenna—and that allows for an extremely compact device. The ring’s innovative, turn-wheel user interface adds further sophistication to the functionality. The technology is underscored by the filing of multiple patents.
To perform both payments and charging using a single antenna, Muse Wearables turned to Panthronics, a unit of Renesas Electronics that specializes in NFC wireless charging tech. Due to the footprint constraints of the project, Renesas was the only solution able to provide NFC charging.
The Ring One uses a Renesas NFC transmitter, which transmits power to the ring from the charging case and provides the industry’s highest output power for faster NFC charging.
A powerful and efficient integrated NFC wireless charging listener SoC enables wireless charging applications together with data communication. Renesas worked closely with Muse Wearables in testing and evaluation.
Muse Wearables began selling Ring One in September.

 
Reactions: Like, Love, Fire (14 users)

jtardif999

Regular
I think the thing that bamboozles most of us about how long it’s taking BRN to get an actual break is how much news comes from all over the world that sounds like us, when most of the time it’s not us. I think the reason for that has more to do with how long it’s taken this first wave of AI products to actually hit the market; I reckon it’s been around 10 years. As Dio stated, Renesas is an example, with their DRP product costing them a bucket to develop over many years. So a lot of what’s coming out now was well and truly in development when BRN first started commercialising Akida. Many Fortune 500 companies may well be planning to incorporate Akida into their second wave of AI products, but it’s the first wave we are seeing now. This has been a difficult thing for most of us to swallow - the time we have endured watching the marvel that is Akida be born, then wondering where exactly it fits into these waves of development. We still have our Early Adopters - Valeo, Mercedes and Renesas, and of course NASA and the US military, as well as VVDN and the Edge Box - so hopefully 2024 will see our first break. I hope so. AIMO.
 
Reactions: Like, Fire, Love (47 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!
This article says...


(Article extracts attached as screenshots)


 
Reactions: Like, Fire, Love (12 users)

Getupthere

Regular
 
Reactions: Like, Fire, Love (6 users)
Reactions: Haha, Sad (12 users)

jtardif999

Regular
This article says...



Surely if that were us we would have heard about them (like we’ve heard about the eco bins). There would have been some news from the company, as an NDA or a licence fee wouldn’t apply.
 
Reactions: Like (1 user)

miaeffect

Oat latte lover
Reactions: Haha (6 users)


DeepMind has developed GNoME, a graph neural network that predicts material stability. GNoME has identified 2.2 million new materials, 380 thousand of which are deemed stable enough for use in developing computer chips, batteries, and solar panels.

Before the advent of GNoME (Graph Networks for Materials Exploration), only 48 thousand stable inorganic crystals were known; the model increased this number almost ninefold. DeepMind claims the model's output is equivalent to 800 years of researchers' work.
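For readers unfamiliar with the model family, the sketch below shows a single toy message-passing step of a graph neural network over a crystal-like graph, with a scalar readout standing in for a stability score. It is a cartoon of the GNN family GNoME belongs to, not DeepMind's actual architecture.

```python
import numpy as np

# Toy message-passing step of a graph neural network over a crystal graph:
# each atom (node) updates its feature vector from its neighbours' features,
# and a readout sums over nodes to score the structure. Illustrative only;
# not DeepMind's actual GNoME model.

rng = np.random.default_rng(42)
n_atoms, dim = 5, 8
feats = rng.normal(size=(n_atoms, dim))        # per-atom feature vectors
adj = np.zeros((n_atoms, n_atoms))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:  # ring-shaped "crystal"
    adj[i, j] = adj[j, i] = 1.0

W_msg = rng.normal(size=(dim, dim)) * 0.1      # message transform
W_upd = rng.normal(size=(dim, dim)) * 0.1      # update transform
w_out = rng.normal(size=dim) * 0.1             # readout weights

messages = adj @ feats @ W_msg                 # aggregate neighbour features
feats = np.tanh(feats @ W_upd + messages)      # update node states
stability_score = float(w_out @ feats.sum(axis=0))  # graph-level readout
print(f"predicted stability score: {stability_score:.3f}")
```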



This is the kind of thing that concerns me: the absolutely incredible rate of development now becoming possible through "unintelligent" A.I.

In the very near future, technologies will be made obsolete almost the second they are created (known).
 
Reactions: Thinking, Fire, Like (4 users)

This explains it with moving pictures.



From 40 seconds in, the next 13 seconds explain the gist of it, in a kind of dramatic way 😛
 
Reactions: Like, Fire, Haha (5 users)