
MediaTek's AI-Powered Chips Aim to Make Your Next Phone Very Personal
At the MediaTek Summit, the chipmaker announced new phone chips that use generative AI and new RedCap 5G modems, and teased a new augmented reality chip.
AI threat keeps me awake at night, says Arm boss
Rene Haas believes the rapidly developing technology ‘will change everything we do’ within a decade
December 11 2023, The Times
The head of one of Britain’s most important technology companies has spoken of his fears that humans could lose control of artificial intelligence.
Rene Haas, chief executive of Arm Holdings, the Cambridge-based microchip designer, said the threat kept him up at night. “The thing I worry about most is humans losing capability [over the machines],” he told Bloomberg. “You need some override, some backdoor, some way that the system can be shut down.”
Arm creates the blueprint for energy-efficient microchips and licences these designs to companies such as Apple, Nvidia and Qualcomm. Its processors run virtually every smartphone on the planet, as well as other devices such as digital TVs and drones.
Haas estimated that 70 per cent of the world’s population have come into contact with Arm-designed products in some way. He said AI would be transformational for the company, which is trying to lessen its reliance on the smartphone sector.
“I think it will find its way into everything that we do, and every aspect of how we work, live, play,” he said. “It’s going to change everything over the next five to ten years.”
The company, which was valued at $54.5 billion at its New York stock market listing in September, employs about 6,400 people globally, 3,500 of them in the UK. The shares have since risen from $51 to $67.23.
Arm’s owner, the Japanese tech conglomerate SoftBank, chose the Nasdaq exchange even though the company was listed in London until 2016. The decision was regarded as a blow to the British technology scene, although Arm emphasised its commitment to the UK.
Haas said that access to talent, particularly in the UK, was another concern. “We were born here, we intend to stay here,” he added. “Please make it very easy for us to attract world-class talent and attract engineers to come and work for Arm.”
Here is the interview, less than 8 mins long
Happy as Larry
Why neuromorphic chips “could” be the future?
Neuromorphic roadmap: are brain-like processors the future of computing?
Neuromorphic chips could reduce energy bills for AI developers as well as emit useful cybersecurity signals.
11 December 2023
Rethinking chip design: brain-inspired asynchronous neuromorphic devices are gaining momentum as researchers report on progress.
• The future of computing might not look anything like computing as we know it.
• Neuromorphic chips would function much more like brains than the chips we have today.
• Neuromorphic chips and AI could be a combination that takes us much further – without the energy bills.
A flurry of new chips announced recently by Qualcomm, NVIDIA, and AMD has ramped up competition to build the ultimate PC processor. And while the next couple of years are shaping up to be good ones for consumers of laptops and other PC products, the future of computing could end up looking quite different to what we know right now.
Despite all of the advances in chipmaking, which have shrunk feature sizes and packed billions of transistors onto modern devices, the computing architecture remains a familiar one. General-purpose, all-electronic, digital PCs based on binary logic are, at their heart, so-called Von Neumann machines.
Von Neumann machines versus neuromorphic chips
A basic Von Neumann computing machine features a memory store to hold instructions and data; control and logic units; plus input and output devices.
Demonstrated more than half a century ago, the architecture has stood the test of time. However, bottlenecks have emerged – provoked by growing application sizes and exponential amounts of data.
Processing units need to fetch their instructions and data from memory. And while on-chip caches help reduce latency, there’s a disparity between how fast the CPU can run and the rate at which information can be supplied.
What’s more, having to bus data and instructions between the memory and the processor not only affects chip performance, it drains energy too.
Chip designers have loaded up processors with multiple cores, clustered CPUs, and engineered other workarounds to squeeze as much performance as they can from Von Neumann machines. But this complexity adds cost and requires cooling.
It’s often said that the best solutions are the simplest, and today’s chips based on Von Neumann principles are starting to look mighty complicated. There are resource constraints too, made worse by the boom in generative AI, and these could steer the future of computing away from its Von Neumann origins.
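The energy penalty of the Von Neumann bottleneck can be caricatured with a toy model. A minimal sketch, with illustrative per-operation energies (the picojoule figures below are assumptions for the sake of the example, not measured numbers); the point it makes is the one in the text: for a memory-bound workload, moving data dominates the energy budget.

```python
# Toy energy model of the Von Neumann bottleneck. The per-op energies are
# illustrative assumptions; real figures vary by process node and memory type.

COMPUTE_PJ_PER_OP = 1.0    # assumed energy of one arithmetic op, in picojoules
DRAM_PJ_PER_WORD = 100.0   # assumed energy to fetch one word from off-chip DRAM

def vonneumann_energy_pj(num_ops: int, words_moved: int) -> dict:
    """Split total energy into compute vs. data movement."""
    compute = num_ops * COMPUTE_PJ_PER_OP
    movement = words_moved * DRAM_PJ_PER_WORD
    return {
        "compute_pj": compute,
        "movement_pj": movement,
        "movement_share": movement / (compute + movement),
    }

# A memory-bound kernel: one arithmetic op per word fetched (e.g. a vector sum).
result = vonneumann_energy_pj(num_ops=1_000_000, words_moved=1_000_000)
# In this toy model, data movement accounts for ~99% of the energy spent.
```

Under these assumed numbers, roughly 99 cents of every energy dollar goes to bussing data rather than computing on it, which is exactly the overhead in-memory architectures try to eliminate.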
Neuromorphic chips and AI – a dream combination?
Large language models (LLMs) have wowed the business world, and enterprise software developers are racing to integrate LLMs developed by OpenAI, Google, Meta, and other big names into their products. And competition for computing resources is fierce.
OpenAI had to pause new subscriptions to its paid-for ChatGPT service as it couldn’t keep up with demand. Google, for the first time, is reportedly spending more on compute than it is on people – as access to high-performance chips becomes imperative to revenue growth.
Writing in a Roadmap for Unconventional Computing with Nanotechnology (available on arXiv and submitted to Nano Futures), experts highlight the fact that the computational need for artificial intelligence is growing at a rate 50 times faster than Moore’s law for electronics.
LLMs feature billions of parameters – essentially a very long list of decimal numbers – which have to be encoded in binary so that processors can interpret whether artificial neurons fire or not in response to their software inputs.
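The firing decision described above can be sketched in a few lines. A minimal software model of one artificial neuron (the weights and inputs below are made-up illustrative values, not taken from any real model): the parameters are plain decimal numbers, and the processor decides whether the neuron "fires" by comparing a weighted sum against a threshold.

```python
# Minimal software model of an artificial neuron: weights are decimal numbers,
# and "firing" is a threshold test on the weighted sum of the inputs.
# All values here are illustrative, not from any particular LLM.

def neuron_fires(inputs, weights, bias, threshold=0.0):
    """Return True if the weighted sum of inputs (plus bias) exceeds threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation > threshold

# Two example input patterns against the same stored weights.
weights = [0.5, -0.25, 0.8]
print(neuron_fires([1.0, 1.0, 1.0], weights, bias=-0.5))  # 0.55 > 0  -> True
print(neuron_fires([0.0, 1.0, 0.0], weights, bias=-0.5))  # -0.75 > 0 -> False
```

Scale this up to billions of weights, each fetched from memory for every token processed, and the resource cost of running LLMs on conventional hardware becomes apparent.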
So-called ‘neural engines’ can help accelerate AI performance by hard-coding common instructions, but running LLMs on conventional computing architecture is resource-intensive.
Researchers estimate that data processing and transmission worldwide could be responsible for anywhere between 5 and 15% of global energy consumption. And this forecast was made before ChatGPT existed.
But what if developers could switch from modeling artificial neurons in software to building them directly in hardware instead? Our brains can perform all kinds of supercomputing magic using a few watts of power (orders of magnitude less than computers), and that's thanks to physical neural networks and their synaptic connections.
Rather than having to pay an energy penalty for shuffling computing instructions and data into a different location, calculations can be performed directly in memory. And developers are busy working on a variety of neuromorphic (brain-inspired) chip ideas to enable computing with small energy budgets, which brings a number of benefits.
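One common way to compute directly in memory is a resistive crossbar array: the weights stay put as device conductances, input voltages are applied to the rows, and Ohm's and Kirchhoff's laws perform the multiply-accumulate as currents summing down each column. A pure-Python sketch of the arithmetic such a crossbar carries out physically (the matrix and vector values are illustrative):

```python
# Sketch of the matrix-vector multiply a resistive crossbar performs in place:
# each column current is I_j = sum_i G[i][j] * V[i], with conductances G
# encoding the weights and row voltages V encoding the inputs.

def crossbar_mvm(conductances, voltages):
    """Column currents of a crossbar: the in-memory matrix-vector product."""
    n_rows, n_cols = len(conductances), len(conductances[0])
    return [
        sum(conductances[i][j] * voltages[i] for i in range(n_rows))
        for j in range(n_cols)
    ]

G = [[0.1, 0.4],   # conductances hold the weight matrix in place
     [0.3, 0.2]]
V = [1.0, 2.0]     # input vector applied as row voltages
currents = crossbar_mvm(G, V)  # column currents, approximately [0.7, 0.8]
```

In hardware, no instructions or weights are shuttled anywhere during the multiply: the physics of the array produces the result where the data already lives, which is the energy saving the text describes.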
“It provides hardware security as well, which is very important for artificial intelligence,” comments Jean Anne Incorvia – who holds the Fellow of Advanced Micro Devices (AMD) Chair in Computer Engineering at The University of Texas at Austin, US – in the roadmap paper. “Because of the low power requirement, these architectures can be embedded in edge devices that have minimal contact with the cloud and are therefore somewhat insulated from cloud‐borne attacks.”
Neuromorphic chips emit cybersecurity signals
What’s more, with neuromorphic computing devices consuming potentially tiny amounts of power, hardware attacks become much easier to detect due to the tell-tale increase in energy demand that would follow – something that would be noticeable through side-channel monitoring.
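The detection idea is simple enough to sketch: when the legitimate workload draws a tiny, stable amount of power, any injected activity stands out in the power trace. A toy side-channel monitor (the baseline, tolerance, and sample values below are assumptions for illustration):

```python
# Toy power side-channel monitor: flag samples that exceed the device's
# known idle baseline by more than a tolerance. Values are illustrative.

def flag_power_anomalies(samples_mw, baseline_mw, tolerance_mw=3.0):
    """Return indices of samples exceeding baseline by more than tolerance_mw."""
    return [
        i for i, p in enumerate(samples_mw)
        if p - baseline_mw > tolerance_mw
    ]

# Idle draw is ~5 mW; sample 3 spikes, as an injected hardware workload might.
trace = [5.1, 5.0, 4.9, 22.0, 5.2]
print(flag_power_anomalies(trace, baseline_mw=5.0))  # [3]
```

The narrower the device's normal power envelope, the smaller the attack that can be noticed this way, which is why ultra-low-power neuromorphic hardware is a favourable setting for it.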
The future of computing could turn out to be one involving magnetic neural network crossbar arrays, redox memristors, 3D nanostructures, biomaterials and more, with designers of neuromorphic devices using brain functionality as a blueprint.
“Communication strength depends on the history of synapse activity, also known as plasticity,” writes Aida Todri‐Sanial – who leads the NanoComputing Research Lab at Eindhoven University of Technology (TU/e) in The Netherlands. “Short‐term plasticity facilitates computation, while long‐term plasticity is attributed to learning and memory.”
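The two timescales Todri-Sanial distinguishes can be illustrated with a toy synapse: a fast component that decays between events (short-term plasticity, aiding computation) and a slow component that accumulates with activity (long-term plasticity, i.e. learning). The decay and learning rates below are illustrative assumptions, not a model from the roadmap paper:

```python
# Toy synapse with two plasticity timescales: a fast, decaying short-term
# component and a slow, accumulating long-term component. Rates are assumed
# for illustration only.

class Synapse:
    def __init__(self, decay=0.5, learn_rate=0.1):
        self.short = 0.0          # short-term facilitation, fades quickly
        self.long = 0.0           # long-term weight, accumulates slowly
        self.decay = decay
        self.learn_rate = learn_rate

    def activate(self):
        """One pre-synaptic event: boost the fast component, and let a
        fraction of the recent activity consolidate into the slow one."""
        self.short += 1.0
        self.long += self.learn_rate * self.short

    def rest(self):
        """Quiet time: only the fast component fades."""
        self.short *= self.decay

    @property
    def strength(self):
        return self.short + self.long

s = Synapse()
for _ in range(3):
    s.activate()
    s.rest()
# After the burst, the short-term facilitation has partly faded, but the
# long-term component persists: the synapse has "learned".
```

Communication strength thus depends on the history of activity, with the fast part shaping moment-to-moment computation and the slow part retaining it as memory.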
Neuromorphic computing is said to be much more forgiving of switching errors compared with Boolean logic. However, one issue holding back progress is the poor tolerance of device-to-device variations. Conventional chip makers have taken years to optimize their fabrication processes, so the future of computing may not happen overnight.
However, different ways of doing things may help side-step some hurdles. For example, researchers raise the prospect of being able to set model weights using an input waveform rather than having to read through billions of individual parameters.
Also, the more we learn about how the brain functions, the more designers of future computing devices can mimic those features in their architectures.
Giving a new meaning to sleep mode
“During awake activity, sensory signals are processed through subcortical layers in the cortex and the refined outputs reach the hippocampus,” write Jennifer Hasler and her collaborators, reflecting on what’s known about how the brain works. “During the sleep cycle, these memory events are replayed to the neocortex where sensory signals cannot disrupt the playback.”
Today, closing your laptop – putting the device to sleep – is mostly about power-saving. But perhaps the future of computing will see chips that utilize sleep more like the brain. With sensory signals blocked from disrupting memory events, sleeping provides a chance to strengthen synapses, encode new concepts, and expand learning mechanisms.
And if these ideas sound far-fetched, it’s worth checking out the computing capabilities of slime mold powered by just a few oat flakes. The future of computing doesn’t have to resemble a modern data center, and thinking differently could dramatically lower those energy bills.
Technology News | TechHQ | Latest Technology News & Analysis (techhq.com)
I think the thing that bamboozles most of us about how long it’s taking BRN to get an actual break is how much news comes from all over the world that sounds like us.. ?? Trouble is, most of the time it’s not us. I think the reason for that is more to do with how long it’s taken this first wave of AI products to actually hit the market. I reckon it’s been around 10 years. You know, as Dio stated, Renesas is an example with their DRP product costing them a bucket to develop over many years. So a lot of what’s coming out now was well and truly in development when BRN first started commercialising Akida. Many Fortune 500 companies may well be planning to incorporate Akida into their second wave of AI products, but it’s the first wave we are seeing now. This has been a difficult thing for most of us to swallow - the time we have endured seeing the marvel that is Akida be born, and then where exactly does it fit into these waves of development? We still have our Early Adopters - Valeo, Mercedes and Renesas, and of course NASA and the US military, as well as VVDN and the Edge Box - so hopefully 2024 will see our first break. I hope so. AIMO.
Surely if that was us we would have heard about them (like we’ve heard about the eco bins). There would have been some news from the company, as an NDA or licence fee wouldn’t apply.
Muse Wearables Raises $1 Million In 4 Weeks For Game-Changing Smart Ring ‘Ring One’
Muse Wearables, an Indian tech startup founded by graduates of IIT Madras and NIT Warangal, has achieved a remarkable feat by raising $1 million USD... (www.ubergizmo.com)
So he only lasted a minute. Still more than 7 mins too long, for the Pom..
This explains it with moving pictures.
DeepMind GNoME Discovered 2 Million New Materials
DeepMind GNoME is a graph neural network uncovering 2.2 million new materials, reshaping possibilities for computer technology, batteries, and solar energy. (neurohive.io)
DeepMind has developed the graph neural network GNoME, predicting material stability. GNoME has identified 2.2 million new materials, with 380 thousand deemed stable for application in developing computer chips, batteries, and solar panels.
Before the advent of GNoME (Graph Networks for Materials Exploration), only 48 thousand stable inorganic crystals were known. The model increased this number by almost 9 times. DeepMind claims the model’s output is equivalent to 800 years of researchers’ work.
This is the kind of thing that concerns me..
The absolutely incredible rate of development now becoming possible through "unintelligent" A.I.
Technologies will be made obsolete almost the second they are created (known), in the very near future.
From 40 seconds in, 13 more seconds explain the gist of it, in a kind of dramatic way.