Hi 7fur7,
Maybe you're suffering from the occupational hazard on this thread - information overload.
Accenture said this about Akida last week:
https://brainchip.com/neuromorphic-computing-making-space-smart/
Neuromorphic Computing: Making Space Smart
Published Mar 19, 2024
Eric Gallo, Sr. Principal, Future Technologies R&D, Accenture Labs
...
Neuromorphic computing emulates neurons and synapses in the brain to process information, offering low power, low latency, and data sparsity benefits for edge computation. Accenture Labs demonstrated a 3-5x reduction in system power using BrainChip’s Akida neuromorphic accelerator for audio recognition tasks compared to CPU and GPU. Other companies—including BrainChip, Intel, and Synsense—have developed digital neuromorphic silicon chips, with BrainChip and Synsense offering commercial processors. These chips are still in v1 and v2, with increasing performance expected as they mature. Startups like EDGX and Neurobus are exploring neuromorphic computing hardware and software designs for space. Intel’s Loihi processor has already been validated in space, and neuromorphic cameras are operating on the ISS as part of the Falcon Neuro project. BrainChip’s Akida recently launched on SpaceX Falcon 9. These efforts are laying the groundwork for widespread implementation of neuromorphic technologies in space applications.
New satellite to show how AI advances Earth observation
02/07/2024
Artificial intelligence technologies have achieved remarkable successes and continue to show their value as backbones in scientific research and real-world applications.
ESA’s new Φsat-2 mission, launching in the coming weeks, will push the boundaries of AI for Earth observation – demonstrating the transformative potential of AI for space technology.
Earth observation has, for decades, provided a rich stream of actionable data for scientists, businesses and policymakers. Thanks to new satellites and advanced sensors, the scale and quality of available Earth observation data have risen exponentially in the past decade.
The integration of AI has significantly enhanced Earth observation. AI capabilities allow for more data to be processed quickly and accurately, helping to transform vast amounts of raw data into actionable insights.
As part of an initiative to promote the development and implementation of innovative technologies onboard Earth observation missions, ESA launched Ф-sat-1 in 2020. It was ESA’s first experiment to demonstrate how artificial intelligence can be used for Earth observation and paved the way for its successor: Φsat-2.
Φsat-2 integrated
Φsat-2 is a dedicated AI mission that will fully explore the capabilities of extended onboard processing and further demonstrate the benefits of using AI for innovative Earth observation.
Measuring just 22 x 10 x 33 cm, ESA’s Φsat-2 satellite is equipped with a multispectral camera and powerful AI computer that analyses and processes imagery onboard – promising to deliver smarter and more efficient ways of monitoring our planet.
With six AI applications running onboard, the satellite is designed to turn images into maps, detect clouds in the images, classify them and provide insight into cloud distribution, detect and classify vessels, compress images on board and reconstruct them on the ground to reduce download time, spot anomalies in marine ecosystems, and detect wildfires.
ESA’s Φsat-2 Technical Officer Nicola Melega, commented, “Φsat-2 will unlock a new era of real-time insights from space and will allow for custom AI apps to be easily developed, installed, and operated on the satellite even while in orbit. This adaptability maximises the satellite's value for scientists, businesses and governments.”
The Φsat-2 mission is a collaborative effort between ESA and Open Cosmos, which serves as the prime contractor, supported by an industrial consortium including Ubotica, CGI, CEiiA, GEO-K, KP-Labs, and SIMERA.
Φsat-2 integration into the Exolaunch deployer
Φsat-2, which shares its ride into orbit with ESA’s Arctic Weather Satellite, is scheduled to lift off in July 2024 on a SpaceX Falcon 9 from Vandenberg Air Force Base in California, US.
Φsat-2 carries a multispectral instrument that images Earth in seven different bands and, through its AI applications, is capable of many things that can provide actionable information on the ground, including:
Cloud detection
Unlike traditional satellites that downlink all captured images, including those obscured by clouds, Φsat-2 processes these images directly in orbit, ensuring that only clear, usable images are sent back to Earth.
Developed by KP Labs, this application can also classify clouds and provide insights into cloud distribution. This gives users more flexibility when it is time to decide whether an image is usable or not.
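To make the idea concrete, here is a minimal sketch (mine, not ESA's or KP Labs') of the kind of onboard gating described above: a cloud mask produced by the onboard model is reduced to a cloud-cover fraction, and only sufficiently clear scenes are queued for downlink. The 30% threshold and all function names are illustrative assumptions.
```python
import numpy as np

# Illustrative threshold: only downlink scenes with less than 30% cloud cover.
# The real Φsat-2 / KP Labs criteria are not stated in the article.
MAX_CLOUD_FRACTION = 0.30

def cloud_fraction(cloud_mask: np.ndarray) -> float:
    """Fraction of pixels flagged as cloud in a binary (0/1) cloud mask."""
    return float(cloud_mask.mean())

def should_downlink(cloud_mask: np.ndarray, limit: float = MAX_CLOUD_FRACTION) -> bool:
    """Onboard decision: keep the scene only if it is mostly cloud-free."""
    return cloud_fraction(cloud_mask) < limit

# Example with a synthetic 512 x 512 mask where roughly 20% of pixels are cloudy.
rng = np.random.default_rng(0)
mask = (rng.random((512, 512)) < 0.20).astype(np.uint8)
print(f"cloud cover: {cloud_fraction(mask):.0%}, downlink: {should_downlink(mask)}")
```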
Street map generation
The Sat2Map application, developed by CGI, converts satellite imagery into street maps. This capability is particularly beneficial for emergency response teams, enabling them to identify accessible roads during disasters such as floods or earthquakes.
When the satellite orbits over the affected area and acquires images, the images are passed to the onboard processor, which identifies streets and generates a corresponding map.
Initially, this application will be demonstrated over Southeast Asia, showcasing its potential to aid in crisis management.
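The Sat2Map pipeline itself is not described in detail, so the snippet below is only a rough, hypothetical sketch of the step outlined above: a stand-in "segmentation" (a plain brightness threshold, purely so the example runs) followed by summarising the road mask as a coarse grid of road-bearing tiles, in place of the real street map generated on board.
```python
import numpy as np

def segment_roads(image: np.ndarray) -> np.ndarray:
    """Stand-in for the onboard road-segmentation model: a simple brightness
    threshold, used here only so the example runs end to end."""
    return (image > 0.8).astype(np.uint8)

def mask_to_map(road_mask: np.ndarray, tile: int = 32) -> list:
    """Summarise the road mask as the set of coarse grid cells containing
    road pixels, a crude stand-in for the map generated on board."""
    h, w = road_mask.shape
    cells = []
    for row in range(0, h - h % tile, tile):
        for col in range(0, w - w % tile, tile):
            if road_mask[row:row + tile, col:col + tile].any():
                cells.append((row // tile, col // tile))
    return cells

image = np.random.default_rng(1).random((256, 256))
street_map = mask_to_map(segment_roads(image))
print(f"{len(street_map)} grid cells contain road pixels")
```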
Maritime vessel detection
The maritime vessel detection application, developed by CEiiA, utilises machine learning techniques to automatically detect and classify vessels in specified regions, facilitating the monitoring of activities like illegal fishing. This application underscores the satellite’s role in supporting maritime security and environmental conservation efforts.
Training Φsat-2’s autonomous vessel awareness application
On-board image compression and reconstruction
Developed by GEO-K, this application is responsible for compressing images on board. By significantly reducing file sizes, this application increases the volume and speed of data downloads. After being downlinked to the ground, the images are reconstructed using a dedicated decoder. The first demonstrations of this technology will occur over Europe, focusing on the detection of buildings.
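GEO-K's actual encoder and decoder are not described in the article; purely to illustrate the encode-on-board / decode-on-ground split, here is a toy Python version that compresses by block averaging on board and reconstructs by nearest-neighbour upsampling on the ground. With 4 x 4 blocks, the downlinked array holds 16x fewer values than the original image.
```python
import numpy as np

def onboard_compress(image: np.ndarray, block: int = 4) -> np.ndarray:
    """Toy 'encoder': average non-overlapping block x block tiles, shrinking
    the array (and the downlink volume) by a factor of block**2."""
    h, w = image.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    tiles = image[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

def ground_reconstruct(code: np.ndarray, block: int = 4) -> np.ndarray:
    """Toy 'decoder': nearest-neighbour upsampling back to the original grid."""
    return np.repeat(np.repeat(code, block, axis=0), block, axis=1)

image = np.random.default_rng(2).random((512, 512))
code = onboard_compress(image)                   # what would be downlinked
recon = ground_reconstruct(code)
print(image.shape, code.shape, recon.shape)      # (512, 512) (128, 128) (512, 512)
```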
Φsat-2’s capabilities have been further expanded with the incorporation of two additional AI applications that will be uploaded once the satellite is in orbit.
These AI applications were the winning entries in the OrbitalAI challenge organized by ESA’s Φ-lab, which was designed to give companies the chance to pioneer in-orbit Earth observation data processing. The winning applications are:
Marine anomaly detection
Developed by IRT Saint Exupery Technical Research, this application uses a machine learning algorithm to spot anomalies in marine ecosystems – identifying threats such as oil spills, harmful algae blooms and heavy sediment discharges in real time.
Wildfire detection
The wildfire detection system, developed by Thales Alenia Space, uses machine learning to provide critical real-time information to response teams. The tool provides a classification report that helps firefighters locate wildfires, track fire spread and identify potential hazards.
PhiFireAI classification
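The article does not describe the format of PhiFireAI's classification report, so the structure below is a purely hypothetical illustration of the kind of compact product a response team might receive instead of raw imagery. All field names are assumptions.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FireDetection:
    latitude: float
    longitude: float
    confidence: float                      # model confidence in the range 0..1

@dataclass
class WildfireReport:
    scene_id: str
    acquired_utc: str
    detections: List[FireDetection] = field(default_factory=list)

    def hotspots(self, min_confidence: float = 0.8) -> List[FireDetection]:
        """High-confidence detections a response team would triage first."""
        return [d for d in self.detections if d.confidence >= min_confidence]

report = WildfireReport(
    scene_id="scene-0001",
    acquired_utc="2024-07-02T10:00:00Z",
    detections=[FireDetection(38.51, -122.81, 0.93),
                FireDetection(38.60, -122.70, 0.41)],
)
print(len(report.hotspots()), "high-confidence hotspot(s)")
```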