BRN Discussion Ongoing

Justchilln

Regular
How about Numem as our mystery customer for the highlighted Customer SoC?!




Ultra-low-power MRAM-based SoC for sensors/AI


Technology News | May 30, 2024
By Jean-Pierre Joosting
MRAM AI RISC-V DSP SOC



Numem, a leader in high-performance memory IP cores and memory chips/chiplets based on its patented NuRAM (MRAM) and SmartMem technologies, and IC’ALPS, a leader in ASIC/SoC design and supply chain management, have pooled their expertise to meet the challenge of developing an ambitious integrated circuit with RISC-V processors, 2 MBytes of NuRAM, and a DSP/AI Custom Datapath Accelerator.


The custom SoC was developed in an advanced technology node. It has been designed and implemented to highlight Numem’s high-performance, low-power memory subsystem, paired with a RISC-V processor and AI accelerator, for ultra-low-power applications. It was developed through close collaboration between Numem and IC’ALPS.

The physical implementation of this integrated circuit was carried out in a secure environment (isolated location, network, and servers, with encrypted exchanges) to meet the stringent protection of sensitive data required by this program.

“We were pleased with the collaboration and quality of service provided by IC’ALPS, which made this on-time tape-out and first-time-functional silicon possible,” said Jack Guedj, CEO of Numem. “NuRAM with SmartMem is a high-performance memory subsystem that is 2-3x smaller and boasts significant power reduction over SRAM,” he added.

“The challenges were numerous, including architecture, power domains, protection of the sensitive data, run times pushing improvement of the EDA flow, and the pressure of the tape-out deadline.”

Numem and IC’Alps intend to extend their partnership to serve new SoC projects for customers.

www.numem.com
www.icalps.com
I think it’s going to be MegaChips. When we signed the license, they said that in the future they might start to make some of their own chips.
 
  • Like
  • Fire
  • Love
Reactions: 8 users

itsol4605

Regular
NaNose has been added to the Google AI Startup Fund. And we all know who has links to them…👍🏻
As we all know:
No Brainchip Akida inside NaNose products🤷‍♂️
 
  • Haha
Reactions: 1 users

IloveLamp

Top 20
As we all know:
No Brainchip Akida inside NaNose products🤷‍♂️
Dribble Dribble Dribble Dribble
Ignore time
 
  • Like
Reactions: 5 users


Bravo

If ARM was an arm, BRN would be its biceps💪!
Chinese Academy of Sciences + Synsense




Chinese scientists unveil low-power neuromorphic ‘brain-like’ chips
By GT staff reporters

Published: Jun 02, 2024 06:53 PM

An employee inspects a cellphone chip at an electronic product research and development company in Ningbo, East China's Zhejiang Province on February 22, 2024. The company's products are exported to more than 80 countries in Europe and Latin America, and its overseas order book is full through the second quarter of 2024. Photo: VCG

A Chinese scientific team has developed a new 'brain-like' chip that operates on reduced energy consumption, marking a significant advance in China's chip manufacturing technology.

Researchers from the Chinese Academy of Sciences, in collaboration with other scholars, have developed Speck, a low-power neuromorphic chip capable of dynamic computing. This system-level chip, integrating algorithm, software, and hardware design, demonstrates the inherent advantages of 'brain-like' computation in incorporating high-level brain mechanisms. The study was recently published online in the international journal Nature.

"The human brain is an incredibly complex neural network, consuming only 20 watts, far less than current AI systems," said Li Guoqi, a researcher at the Institute of Automation, Chinese Academy of Sciences reported by Xinhua.

He emphasized that as computational demands and energy consumption rise, mimicking the neurons and synapses of the human brain to develop new intelligent computing systems is a promising direction.

Human brains can dynamically allocate attention based on stimulus, a process known as the attention mechanism. This research proposes 'neuromorphic dynamic computing,' applying this principle to enhance neuromorphic chip designs, thereby unlocking greater performance and energy efficiency.

Speck combines a dynamic visual sensor and a neuromorphic chip on one chip, achieving remarkably low power use at rest. It can handle visual tasks with just 0.7 milliwatts, providing an energy-efficient, responsive, and low-power solution for AI applications, according to Li.

"The development of the neuromorphic chip concept is both a breakthrough in existing technology and a strategic response to US pressures, marking our pursuit of alternative development paths," Ma Jihua, a veteran telecom industry observer, told the Global Times on Sunday.

"China is leading the market in the brain-inspired chip sector," Ma told the Global Times, "Although this approach has been studied for a long time, transitioning from mathematical theory to mass manufacturing is challenging and requires extensive work," he added.

This kind of chip may help address fundamental challenges across the chip manufacturing industry, which is currently facing a bottleneck. The concept of neuromorphic computing presents a promising and viable research direction, according to Ma.






 
  • Like
  • Wow
  • Thinking
Reactions: 19 users
Something to look out for over the next few days:


Computex Taipei is Taiwan's largest tech event, with many of the largest tech companies attending. It'll run from 4-7 June.

AI is going to be the main focus.
 
  • Like
  • Love
Reactions: 15 users

MrNick

Regular
  • Like
  • Haha
  • Love
Reactions: 12 users

7für7

Top 20

At least not on Tradegate, which is Germany’s most important stock exchange regarding BRN. Low volume on Friday, though.


Now that you’ve addressed your own Angst after mentioning “German angst” three times last month in the context of share price volatility, will you by any chance be covering “Australian angst” the next time BRN closes red on the ASX?
Actually, I wanted to respond appropriately, but I decided to delete it. It seems you don't understand irony or satire. Wallow in your self-glorification and pseudo wannabe fact-finder posts. It's too exhausting for me to deal with. Have a nice day, Mr. or Mrs. "I insult a forum member because I have nothing better to do, Karen."
 
  • Like
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!


Why Intel is making big bets on Edge AI​

The chipmaker's corporate vice president, Pallavi Mahajan, talks about the growing need for Edge AI
May 29, 2024 | By Charlotte Trueman

As is the case with all things AI in recent history, Edge AI deployments have not been immune to exponential growth.
As the pendulum has swung from centralized to distributed deployments, AI has driven the majority of growth in Edge computing, with organizations increasingly looking to deploy AI algorithms and models onto local Edge devices, removing the need to constantly rely on cloud infrastructure.
As a result, research from Gartner shows that at least 50 percent of Edge deployments by the year 2026 will incorporate machine learning, a figure that sat at around five percent in the year 2022.
Pallavi Mahajan, corporate vice president of Intel's Edge group software – Intel

Edge is not the cloud​

Businesses want the Edge to bring in the same agility and flexibility as the cloud, said Pallavi Mahajan, corporate vice president of Intel’s network and Edge group software. But, she notes, it’s important to differentiate between Edge AI and cloud AI.
“Edge is not the cloud, it is very different from the cloud because it is heterogeneous,” she says. “You have different hardware, you have different servers, and you have different operating systems.”
Such devices can include anything from sensors and IoT devices to routers, integrated access devices (IAD), and wide area network (WAN) access devices.
One of the benefits of Edge AI is that by storing all your data in an Edge environment rather than a data center, even when large data sets are involved, it speeds up the decision-making and data analysis process, both of which are vital for AI applications that have been designed to provide real-time insights to organizations.
Another benefit borne out of the proliferation of generative AI is that, even though model training takes place in a centralized data center far away from users, inferencing – where the model applies its learned knowledge – can happen in an Edge environment, reducing the time required to send data to a centralized server and receive a response.
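The latency argument above reduces to simple arithmetic: cloud inference pays a network round trip on every request, while edge inference runs locally. A minimal sketch in Python; the 20 ms and 80 ms figures are illustrative assumptions, not numbers from the article.

```python
def cloud_latency_ms(inference_ms: float, round_trip_ms: float) -> float:
    """Total response time when the model runs in a distant data center."""
    return round_trip_ms + inference_ms

def edge_latency_ms(inference_ms: float) -> float:
    """Total response time when the model runs on the local edge device."""
    return inference_ms

# Example: a 20 ms model with an 80 ms round trip to the cloud.
cloud = cloud_latency_ms(20.0, 80.0)  # 100.0 ms
edge = edge_latency_ms(20.0)          # 20.0 ms
print(f"cloud: {cloud} ms, edge: {edge} ms")
```

The round-trip term dominates whenever the network is slower than the model itself, which is the usual case for real-time applications.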
Meanwhile, talent shortages, the growing need for efficiency, and the desire to improve time to market through the delivery of new services have all caused businesses to double down on automation.
Alluding to the aforementioned benefits of Edge computing, Mahajan said there are three things driving its growth right now: businesses looking for new and different ways to automate and innovate, which will in turn improve their profit margins; the growing need for real-time insights, which means data has to stay at the Edge; and new regulations around data privacy, which means companies have to be more mindful about where customer data is being stored.
Add to that the fact that AI has now become a ubiquitous workload, it's no surprise that organizations across all sectors are looking for ways to deploy AI at the Edge.
Almost every organization deploys smart devices to support their day-to-day business operations, be that MRI machines in hospitals, sensors in factories, or cameras in shops, all of which generate a lot of data that can deliver valuable real-time insights.
GE Healthcare is one Intel customer that uses Edge AI to support the real-time insights generated by its medical devices.
The American healthcare company wanted to use AI in advanced medical imaging to improve patient outcomes, so partnered with Intel to develop a set of AI algorithms that can detect critical findings on a chest X-ray.
Mahajan explains that, in real time, GE’s X-ray machines scan the images that are being taken and, using machine learning, automatically detect if there’s something wrong with a scan or if there’s an anomaly that needs further investigation.
While the patient is still at the hospital, the machine can also advise the physician to take more images, perhaps from different angles, to make sure nothing is being missed. The AI algorithm is embedded in the imaging device, instead of being on the cloud or a centralized server, meaning any potentially critical conditions can be identified and prioritized almost immediately.
“Experiences are changing,” Mahajan says. “How quickly you can consume the data and how quickly you can use the data to get real-time insights, that’s what Edge AI is all about.”

Intel brings AI to the Edge​

Mahajan joined Intel in 2022, having previously held software engineering roles at Juniper Networks and HPE. She explains she was hired specifically to help build Intel’s new Edge AI platform.
Unveiled at Mobile World Congress (MWC) in February 2024, the platform is an evolution of the solution codenamed Project Strata that Intel first announced at its Intel Innovation event last year.
“[Intel] has been working at the Edge for many, many years… and we felt there was a need for a platform for the Edge,” she explains. Intel says it has over 90,000 Edge deployments across 200 million processors sold in the last ten years.
Traditionally, businesses looking to deploy automation have had to do so in a very siloed way. In contrast, Mahajan explains that Intel’s new platform will enable customers to have one server that can host multiple solutions simultaneously.
The company has described its Edge AI offering as a “modular and open software platform that enables enterprises to build, deploy, run, manage and scale Edge and AI solutions on standard hardware.” The new platform has been designed to help customers take advantage of Edge AI opportunities and will include support for heterogeneous components in addition to providing lower total cost of ownership and zero-touch, policy-based management of infrastructure and applications, and AI across a fleet of Edge nodes with a single pane of glass.
The platform consists of three key components: the infrastructure layer, the AI application layer, and the industry solutions layer sitting on top. Intel provides the software, the infrastructure, and its silicon, and Intel’s customers then deploy their solutions directly on top of it.
“The infrastructure layer enables you to go out and securely onboard all of your devices,” Mahajan says. “It enables you to remotely manage these devices and abstracts the heterogeneity of the hardware that exists at the Edge. Then, on top of it, we have the AI application layer.”
This layer consists of a number of capabilities and tools, including application orchestration, low-code and high-code AI model and application development, and horizontal and industry-specific Edge services such as data thinning and annotation.
The final layer consists of the industry solutions and, to demonstrate the wide range of use cases the platform can support, it has been launched alongside an ecosystem of partners, including Amazon Web Services, Capgemini, Lenovo, L&T Technology Services, Red Hat, SAP, Vericast, Verizon Business, and Wipro.
Mahajan also lists some of the specific solutions Intel’s customers have already deployed on the platform, citing one manufacturer that is automatically detecting welding defects by training its AI tool on photos of good and bad welding jobs.
“What this platform enables you to do is build and deploy these Edge native applications which have AI in them, and then you can go out and manage, operate, and scale all these Edge devices in a very secure manner,” Mahajan says.
At the time of writing, a release date had not been confirmed for Intel’s Edge AI platform. However, during MWC, the company said it would be “later this quarter.”
There are three things driving Edge computing’s growth right now: businesses looking for new and different ways to automate and innovate; the growing need for real-time insights; and new regulations around data privacy.

AI 'everywhere'​

Although Gartner predicted in 2023 that Edge AI had two years before it hit its plateau, Intel is confident this is not the case, and has made the Edge AI platform a central part of its ‘AI Everywhere’ vision.
Alongside its Edge AI platform, Intel also previewed its Granite Rapids-D processor at MWC. Designed for Edge solutions, it has built-in AI acceleration and will feature the latest generation of Performance-cores (P-cores).
Writing on X, the social media platform previously known as Twitter, in October 2023, Intel’s CEO Pat Gelsinger said: “Our focus at Intel is to bring AI everywhere – making it more accessible to all, and easier to integrate at scale across the continuum of workloads, from client and Edge to the network and cloud.”
As demonstrated by the recent slew of announcements, Intel clearly believes that Edge AI has just reached its peak, with Mahajan stating that all industries go through what she described as “the S Curve of maturity.” Within this curve, the bottom of the ‘S’ represents those tentative first forays into exploring a new technology, where organizations run pilot programs and proof-of-concepts, while the top of the curve is the point at which the market has fully matured.
“This is where I think we are now,” she says, adding that she believes Intel was “the first to read the need for [an Edge AI] platform.” She continues: “This is the feedback that we got back from after the launch at MWC, that everybody was saying, ‘Yes, this market needs a platform.’
“I’m sure there will be more platforms to come but I'm glad that Intel has been a leader here.”

 
  • Like
  • Love
  • Fire
Reactions: 52 users
Pallavi Mahajan

The future of AI is bright, and I am excited to see our solutions empowering businesses to unlock the full potential of their data. A special thanks to Mauro Capo for walking on and sharing Accenture expertise in helping enterprises leverage the power of GenAI. Such collaborations fuel innovation and drive transformative change to shape the AI landscape.
 
  • Like
  • Fire
Reactions: 7 users

miaeffect

Oat latte lover


(quoting Bravo’s “Why Intel is making big bets on Edge AI” article above)

Loved it

If Intel can trigger the edge AI market quickly, BRN is one of the bullets for sure
 
  • Like
  • Fire
  • Love
Reactions: 39 users
Ping
 
  • Haha
Reactions: 3 users
Hmm. Very quiet.
 
  • Haha
  • Like
Reactions: 5 users

Tothemoon24

Top 20
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Tothemoon24

Top 20

Probably been posted, great read​

Extending the IoT to Mars​

  • May 30, 2024
  • Steve Rogerson
  • Eseye
Credit: Nasa/JPL/MSSS
How far can the IoT go? Further than you think, as Steve Rogerson discovered at this week’s Hardware Pioneers Max show in London.
We will never see a little green man using the self-checkout till at the Mars branch of Walmart to buy a bottle of Romulan ale. Nor will a woman from Venus see how she looks in that polka-dot dress using a magic mirror on the Moon.
But that does not mean the IoT will not stretch beyond the upper reaches of Earth’s atmosphere; it already has.
These seeds were first sown when the benefits of connecting armies of sensors to the internet first dawned on people, though few then considered the lurking problem of handling such vast amounts of data. Yes, the data are useful, but only if something useful can be done with them.
Given much of this involved monitoring situations remotely, an added problem became latency. Some benefits could only be harvested if the response to the data was immediate, or close to it.
Not long passed before it became obvious that for the IoT to succeed, data had to be processed and acted on at the edge. That meant giving some autonomy to these systems. At its simplest, it saw thermostats in factories, offices and homes turning the heating on if it got too cold or increasing ventilation if it got too hot.
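The thermostat example above is the simplest form of edge autonomy: act on local readings rather than wait for a cloud decision. A minimal sketch; the thresholds are illustrative assumptions.

```python
def thermostat_action(temp_c: float, low: float = 18.0, high: float = 26.0) -> str:
    """Decide locally what to do with the current temperature reading."""
    if temp_c < low:
        return "heating_on"       # too cold: turn the heating on
    if temp_c > high:
        return "ventilation_on"   # too hot: increase ventilation
    return "idle"                 # comfortable: do nothing

print(thermostat_action(15.0))  # heating_on
print(thermostat_action(30.0))  # ventilation_on
print(thermostat_action(21.0))  # idle
```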
Easy so far. But we needed more, especially if we were going to have more automated factories, robot deliveries and self-driving cars. The job of the edge processor was becoming harder. Higher intelligence was needed and, thankfully, it has arrived, and getting better all the time.
Developments in artificial intelligence (AI) have blossomed in recent years, bringing impressive smartness to edge devices.
MParmrgiani-783x620.jpg
Marco Parmegiani from Eseye.
“IoT starts and ends with the devices,” Marco Parmegiani, architect director at Eseye (www.eseye.com), told visitors to this week’s Hardware Pioneers Max (HPM) show in London. “This year, we are seeing the rise of the intelligent edge.”
He said things had to happen at the edge. For example, there are devices that will monitor for a water leak. If all they do is send or sound an alarm, it could all be too late by the time you get home to fix it. But add a bit of intelligence and it will work out if the water needs turning off and do it itself. In fleet management, devices can intelligently know whether to send data by wifi, cellular or satellite depending on which is available and which is cheapest at the time.
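The fleet-management example can be sketched as a cheapest-available-link choice. The link names and per-megabyte costs below are made up for illustration; a real device would probe its modems and radios rather than consult a static table.

```python
def choose_link(available, cost_per_mb):
    """Pick the cheapest link that is both known and currently available.

    available:   set of link names the device can currently reach
    cost_per_mb: dict mapping link name -> cost per megabyte
    Returns None when no link is up (the device holds its data).
    """
    candidates = [link for link in available if link in cost_per_mb]
    if not candidates:
        return None
    return min(candidates, key=lambda link: cost_per_mb[link])

costs = {"wifi": 0.0, "cellular": 0.05, "satellite": 1.50}
print(choose_link({"cellular", "satellite"}, costs))  # cellular
print(choose_link({"wifi", "cellular"}, costs))       # wifi
```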
“The intelligence is being pushed down to the device,” said Marco. “IoT devices are becoming cleverer. You can now put a lot more processing power into the device and make the decision about whether and when to send data.”
It did not take long before these advances caught the attention of people with more off-world problems. Space bodies such as Nasa and the European Space Agency (ESA) have long battled with latencies – space is big, as Douglas Adams pointed out in The Hitchhiker’s Guide to the Galaxy – that go far beyond what is experienced on Earth. Remotely controlling a rover on Mars is just not practical in real time; by the time the engineer on Earth has pressed the stop button, the rover will have its face full of red rock. AI is becoming the answer.
Alf Kuchenbuch from Brainchip.
This was explained by Alf Kuchenbuch, a vice president at Australian technology company Brainchip (brainchip.com), who told HPM delegates how excited he was that his company’s chips were now doing real edge processing in space.
“Nasa and the ESA are picking up on AI,” he said. “They want to see AI in space. They are nervous, but they are acting with urgency.”
Earlier this month, he attended a workshop in the Netherlands organised by the ESA where he said the general view was that everything that happened on Earth would happen in space in five years’ time.
“Some find that shocking, but it is an inevitable truth,” he said. “Nasa is picking up on this too.”
But he said even satellites in low Earth orbit sometimes hit latency problems. There are also bandwidth difficulties. Satellites sending constant images of the Earth’s surface use a lot of bandwidth, but many of those images are useless because of cloud cover. Applying AI to the images on the satellite can pick out those that show more than just the tops of clouds, and sometimes images can be stitched together, drastically reducing the amount of data that needs to be sent. And if the satellites are being used, say, to track ships, they don’t need to keep sending pictures of a ship, just its coordinates.
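The on-board filtering idea above can be sketched as a downlink policy: drop frames that are mostly cloud, and for tracked ships send only coordinates instead of whole images. The `cloud_fraction` field and the 0.8 threshold are illustrative assumptions, not details of any real satellite pipeline.

```python
def downlink_payload(frames):
    """frames: list of dicts with 'image_id', 'cloud_fraction',
    and optionally 'ship_xy' (a detected ship's coordinates)."""
    payload = []
    for frame in frames:
        if frame["cloud_fraction"] > 0.8:
            continue  # mostly cloud tops: not worth the bandwidth
        if "ship_xy" in frame:
            payload.append({"ship_xy": frame["ship_xy"]})  # coords, not pixels
        else:
            payload.append({"image_id": frame["image_id"]})
    return payload

frames = [
    {"image_id": 1, "cloud_fraction": 0.95},                          # dropped
    {"image_id": 2, "cloud_fraction": 0.10, "ship_xy": (51.5, -0.1)}, # coords only
    {"image_id": 3, "cloud_fraction": 0.30},                          # kept
]
print(downlink_payload(frames))
# [{'ship_xy': (51.5, -0.1)}, {'image_id': 3}]
```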
Taking a leaf from autonomous vehicles on Earth, similar technology can be used for performing docking manoeuvres in space and, as mentioned, controlling ground vehicles on the Moon or Mars. Another application is debris removal. There is a lot of junk circling the Earth and there are plans to remove it by slowing it down so it falls towards Earth and burns up.
“These are why AI in space is so necessary,” said Alf.
Brainchip is using neuromorphic AI on its chips, which Alf said had a big advantage in that it worked in a similar way to a brain, only processing information when an event happened, lowering the power requirements. The firm’s Akida chip is on SpaceX’s Transporter 10 mission, launched in March.
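The event-driven advantage Alf describes can be illustrated schematically: a frame-based pipeline does one compute pass per frame regardless of content, while an event-driven one only works when something changes. This is a toy illustration of the principle, not Akida's actual programming model.

```python
def frame_based_work(readings):
    """One compute pass per reading, always."""
    return len(readings)

def event_driven_work(readings, threshold=0.0):
    """Compute only when a reading changes by more than the threshold."""
    work = 0
    prev = readings[0]
    for value in readings[1:]:
        if abs(value - prev) > threshold:  # an 'event' triggers processing
            work += 1
        prev = value
    return work

readings = [0.0, 0.0, 0.0, 1.0, 1.0, 0.0]  # mostly static scene
print(frame_based_work(readings), event_driven_work(readings))  # 6 2
```

For mostly static scenes, the event-driven count stays near zero, which is where the power savings come from.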
“We are waiting for them to turn it on and for it to start doing its work,” he said. He wouldn’t say what that work was just that: “It is secret.”
Brainchip is also working with Frontgrade Gaisler (www.gaisler.com), a provider of space-grade systems-on-chip, to explore integrating Akida into fault-tolerant, radiation-hardened microprocessors to make space-grade SoCs incorporating AI.
“If this works out, our chip will be going on the Moon landing, or even to Mars,” he said. “Akida is not a dream. It is here today, and it is up there today.”
I was going to end with some joke about the IoT boldly going to the final frontier, but felt the force wouldn’t really be with me, so I didn’t make it so.
 
  • Like
  • Fire
  • Love
Reactions: 45 users