BRN Discussion Ongoing

Diogenese

Top 20
July 16, 2024
Emily Cerf, UC Santa Cruz
[Image: A lit lightbulb lying on its side on a desktop next to an open laptop, with sparkles shimmering around the bulb. Credit: iStock/Kriangsak Koopattanakij]
By eliminating the most computationally expensive element of a large language model, engineers drastically improve energy efficiency while maintaining performance.
Large language models such as ChatGPT have proven able to produce remarkably intelligent results, but the energy and monetary costs of running these massive algorithms are sky-high. Running ChatGPT 3.5 costs an estimated $700,000 per day in energy and leaves behind a massive carbon footprint in the process.
In a new preprint paper, researchers from UC Santa Cruz show that it is possible to eliminate the most computationally expensive element of running large language models, called matrix multiplication, while maintaining performance. In getting rid of matrix multiplication and running their algorithm on custom hardware, the researchers found that they could power a billion-parameter-scale language model on just 13 watts, about equal to the energy of powering a lightbulb and more than 50 times more efficient than typical hardware.
Even with a slimmed-down algorithm and much less energy consumption, the new, open source model achieves the same performance as state-of-the-art models like Meta’s Llama.
“We got the same performance at way less cost — all we had to do was fundamentally change how neural networks work,” said Jason Eshraghian, an assistant professor of electrical and computer engineering at the Baskin School of Engineering and the paper’s lead author. “Then we took it a step further and built custom hardware.”

Understanding the cost​

Until now, all modern neural networks, the algorithms that power large language models, have used a technique called matrix multiplication. In large language models, words are represented as numbers that are then organized into matrices. Matrices are multiplied by each other to produce language, performing operations that weigh the importance of particular words or highlight relationships between words in a sentence or sentences in a paragraph. Larger-scale language models have trillions of these numbers.
“Neural networks, in a way, are glorified matrix multiplication machines,” Eshraghian said. “The larger your matrix, the more things your neural network can learn.”
For the algorithms to be able to multiply matrices together, the matrices need to be stored somewhere, and then fetched when it comes time to compute. This is solved by storing the matrices on hundreds of physically-separated graphics processing units (GPUs), which are specialized circuits designed to quickly carry out computations on very large datasets, designed by the likes of hardware giant Nvidia. To multiply numbers from matrices on different GPUs, data must be moved around, a process which creates most of the neural network’s costs in terms of time and energy.
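To make the scale concrete, here is a minimal sketch (our illustration, not the researchers' code) of the dense matrix-vector product at the heart of every transformer layer; the layer dimensions are assumptions, roughly in the range of a billion-parameter-scale model:

```python
# Minimal sketch of why matrix multiplication dominates LLM cost.
# Layer sizes are illustrative assumptions, not taken from the paper.
import numpy as np

d_model, d_hidden = 2048, 8192                              # assumed layer sizes
x = np.random.randn(d_model).astype(np.float32)             # one token's activations
W = np.random.randn(d_hidden, d_model).astype(np.float32)   # one weight matrix

y = W @ x  # every output element costs d_model multiply-adds
print(f"multiply-adds for one layer, one token: {d_hidden * d_model:,}")
# A full model repeats this across dozens of layers for every generated
# token, with the weights fetched from GPU memory each time.
```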
Eliminating matrix multiplication
The researchers came up with a strategy to avoid using matrix multiplication using two main techniques. The first is a method to force all the numbers within the matrices to be ternary, meaning they can take one of three values: negative one, zero, or positive one. This allows the computation to be reduced to summing numbers rather than multiplying.
From a computer science perspective, the two algorithms can be coded the exact same way, but the way Eshraghian's team's method works eliminates a ton of cost on the hardware side.
“From a circuit designer standpoint, you don't need the overhead of multiplication, which carries a whole heap of cost,” Eshraghian said.
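As a rough illustration of that point, here is a small sketch (our own, assuming only that the weights are constrained to negative one, zero, and positive one) of how a ternary matrix-vector product can be computed with additions and subtractions alone:

```python
# Sketch of the ternary trick: with weights in {-1, 0, +1}, a dot product
# reduces to adding some inputs and subtracting others. Illustrative only;
# the paper's fused GPU kernels are far more sophisticated.
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product using only additions and subtractions."""
    y = np.empty(W.shape[0], dtype=x.dtype)
    for i, row in enumerate(W):
        y[i] = x[row == 1].sum() - x[row == -1].sum()  # zero weights drop out
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(8).astype(np.float32)
W = rng.integers(-1, 2, size=(4, 8)).astype(np.float32)  # ternary weight matrix
assert np.allclose(ternary_matvec(W, x), W @ x)  # same answer, no multiplies
```

A circuit only needs adders and sign flips to evaluate this, which is where the hardware savings the researchers describe come from.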
This strategy was inspired by a paper from Microsoft showing that it was possible to use ternary numbers in neural networks, but that work stopped short of eliminating matrix multiplication or open-sourcing its model to the public. To go further, the researchers adjusted the strategy for how the matrices communicate with each other.
Instead of multiplying every single number in one matrix with every single number in the other matrix, as is typical, the researchers devised a strategy to produce the same mathematical results. In this approach, the matrices are overlaid and only the most important operations are performed.
“It’s quite light compared to matrix multiplication,” said Rui-Jie Zhu, the paper’s first author and a graduate student in Eshraghian’s group. “We replaced the expensive operation with cheaper operations.”
Although they reduced the number of operations, the researchers were able to maintain the performance of the neural network by introducing time-based computation in the training of the model. This enables the network to have a “memory” of the important information it processes, enhancing performance. This technique paid off — the researchers compared their model to Meta’s state-of-the-art algorithm called Llama, and were able to achieve the same performance, even at a scale of billions of model parameters.
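As a loose illustration of what "time-based computation" can look like (a hand-rolled example of ours; the gating equations are assumptions, not the paper's), an element-wise gated recurrence carries a memory state across tokens without any matrix multiplication:

```python
# Hand-rolled sketch of an element-wise gated recurrence: the hidden state
# h blends what it remembers with each new input, using only element-wise
# operations. The specific gates here are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elementwise_recurrence(inputs, f_params, c_params):
    """h_t = f_t * h_{t-1} + (1 - f_t) * c_t, all element-wise."""
    h = np.zeros_like(inputs[0])
    for x in inputs:
        f = sigmoid(f_params * x)     # forget gate, element-wise
        c = np.tanh(c_params * x)     # candidate state, element-wise
        h = f * h + (1.0 - f) * c     # blend old memory with new input
    return h

rng = np.random.default_rng(1)
seq = [rng.standard_normal(16).astype(np.float32) for _ in range(10)]
h_final = elementwise_recurrence(seq,
                                 rng.standard_normal(16).astype(np.float32),
                                 rng.standard_normal(16).astype(np.float32))
print(h_final.shape)  # (16,) -- a compressed "memory" of the whole sequence
```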

Custom chips​

The researchers designed their neural network to operate on GPUs, as they have become ubiquitous in the AI industry, allowing the team’s software to be readily accessible and useful to anyone who might want to use it.
On standard GPUs, the researchers saw that their neural network achieved about 10 times less memory consumption and operated about 25 percent faster than other models. Reducing the amount of memory needed to run a powerful large language model could provide a path forward to enabling the algorithms to run at full capacity on devices with smaller memory like smartphones.
Nvidia, the dominant producer of GPUs worldwide, designs its hardware to be highly optimized for matrix multiplication, which has enabled it to dominate the industry and become one of the most profitable companies in the world. However, this hardware is not fully optimized for ternary operations.
To push the energy savings even further, the team collaborated with Assistant Professor Dustin Richmond and Lecturer Ethan Sifferman in the Baskin Engineering Computer Science and Engineering department to create custom hardware. Over three weeks, the team created a prototype of their hardware on a highly-customizable circuit called a field-programmable gate array (FPGA). This hardware enables them to take full advantage of all the energy-saving features they programmed into the neural network.
With this custom hardware, the model surpasses human-readable throughput, meaning it produces words faster than the rate a human reads, on just 13 watts of power. Using GPUs would require about 700 watts of power, meaning that the custom hardware achieved more than 50 times the efficiency of GPUs.
With further development, the researchers believe they can further optimize the technology for even more energy efficiency.
“These numbers are already really solid, but it is very easy to make them much better,” Eshraghian said. “If we’re able to do this within 13 watts, just imagine what we could do with a whole data center worth of compute power. We’ve got all these resources, but let’s use them effectively.”


Close, but
[attached image]
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Maybe it deserves half a cigar 🚬 because at the 17:20 mark of the podcast with Sean Hehir, Dr Jason Eshraghian states:

"One thing that will probably come out at the same time as this podcast getting released is a Matrix Multiply Free Language Model so, yeah, I'm very excited to see how these things can intercept with what Brainchip has got going on."

[attached GIF]
Reactions: 23 users

Diogenese

Top 20
Maybe it deserves half a cigar 🚬 because at the 17:20 mark of the podcast with Sean Hehir, Dr Jason Eshraghian states:

"One thing that will probably come out at the same time as this podcast getting released is a Matrix Multiply Free Language Model so, yeah, I'm very excited to see how these things can intercept with what Brainchip has got going on."

As long as you smoke outside with one of Roger Miller's old Stogies.

The Eshraghian paper uses GPUs, but does some mathematical fiddling with sparsity.

https://arxiv.org/pdf/2406.02528
ABSTRACT
...
We also provide a GPU-efficient implementation of this model which reduces memory usage by up to 61% over an unoptimized baseline during training. By utilizing an optimized kernel during inference, our model’s memory consumption can be reduced by more than 10× compared to unoptimized models. To properly quantify the efficiency of our architecture, we build a custom hardware solution on an FPGA which exploits lightweight operations beyond what GPUs are capable of.

As you say, he has no doubt had his eyes opened by learning about Akida. In the paper, they talk about "binarized activations", so the equivalent of the first engineering samples before Akida 1 moved to 4 bits.

Section 2 Related Works
...
MatMul-free Transformers: The use of MatMul-free Transformers has been largely concentrated in the domain of SNNs. Spikformer led the first integration of the Transformer architecture with SNNs [18, 19], with later work developing alternative Spike-driven Transformers [20, 21]. These techniques demonstrated success in vision tasks. In the language understanding domain, SpikingBERT [22] and SpikeBERT [23] applied SNNs to BERT utilizing knowledge distillation techniques to perform sentiment analysis. In language generation, SpikeGPT trained a 216M-parameter generative model using a spiking RWKV architecture. However, these models remain constrained in size, with SpikeGPT being the largest, reflecting the challenges of scaling with binarized activations. In addition to SNNs, BNNs have also made significant progress in this area. BinaryViT [24] and BiViT [25] successfully applied Binary Vision Transformers to visual tasks. Beyond these approaches, Kosson et al. [26] achieve multiplication-free training by replacing multiplications, divisions, and non-linearities with piecewise affine approximations while maintaining performance.

I wonder whether they can adapt their system to installed cloud-based GPU LLM processors, as their system does provide significant power savings. However, they did build a special-purpose FPGA.
 
Reactions: 9 users

IloveLamp

Top 20
Reactions: 9 users

TECH

Regular
Good morning,

In about 6 business days our 4C will be released covering the 2nd quarter activity. I'm expecting some revenue/cash receipts this time, like most of us, from the Edge-Box pre-paid orders and whatever may have generated some income during the quarter.

Our "official" release of the Edge-Box must be awfully close now, maybe within the next 3 weeks? Has anybody contacted Tony for an update, by any chance?

Does anyone know if LDA Capital have finally offloaded their recent pile of shares?

Because..................
[bots GIF]
Tech x
 
Reactions: 13 users

7für7

Regular
Good morning, In about 6 business days our 4C will be released covering the 2nd quarter activity... Does anyone know if LDA Capital have finally offloaded their recent pile of shares? Tech x
Yes, for sure we will see some revenue/cash, but I doubt it will cover our costs. Nonetheless, it's better than nothing. However, it's not appropriate to jump for joy over it. I expect a slight increase in the stock price, as always before the release, and depending on the results, either a further increase or the usual game... not meant negatively, just objectively.
 
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Airbus looking at opportunities to create scale in space, satellites​

[Image: CEO of Airbus SE Guillaume Faury addresses the audience during a signing ceremony in Istanbul. Credit: Reuters]
Reuters
Mon, Jul 22, 2024, 3:46 AM GMT+10 · 1 min read


LONDON (Reuters) - Airbus is looking at opportunities to create scale in defence, space and particularly satellites markets, CEO Guillaume Faury said on Sunday.
Airbus and France's Thales are exploring a tie-up of some space activities as new competition disrupts the sector, two industry sources said last week.
The sources said preliminary talks, first reported by La Tribune, were focusing on the companies' overlapping satellite activities.
"We are looking at opportunities to create scale, and that's true in defence, that's true in space, and in particular on satellites," Faury said, ahead of this week's Farnborough Airshow.

"We would be happy to find ways to create scale in the space environment in Europe in general."
Airbus and Thales Alenia Space, in which Italy's Leonardo holds a 33% stake, are Europe's largest makers of satellites for telecommunications, navigation and surveillance.
Demand for their geostationary satellites is increasingly under pressure as traditional manufacturers face competition from massive constellations of expendable satellites in low Earth orbit, like the Starlink network of Elon Musk's SpaceX.
Airbus last month took a 900 million euro ($980 million) charge on its struggling space services business, on top of 500 million euros last year.
Faury told analysts at the time the company was "evaluating all strategic options" for its space business including restructuring, co-operation, a portfolio review and potential merger and acquisition options.
(Reporting by Tim Hepher and Joanna Plucinska; Editing by Mark Potter)

 
Reactions: 21 users

Diogenese

Top 20

Airbus looking at opportunities to create scale in space, satellites ...

Someone should tell him scales don't work in zero gravity.
 
Reactions: 22 users
Given our relationship and the work done on CyberNeuro-RT with Quantum Ventura and Penn State (now on our Uni list), which also included MFC (a Lockheed Martin division), I wonder if we have been in the background mix with LM? They would be using Loihi no doubt, but Akida?

From a presentation late last year HERE

We know the CyberNeuro-RT SBIR apparently completed earlier this year, and also that QV are offering it on their website with either Akida or Loihi options.

[attached screenshots]
 
Reactions: 21 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Reactions: 4 users

Esq.111

Fascinatingly Intuitive.
Reactions: 12 users

mrgds

Regular
Good morning, In about 6 business days our 4C will be released covering the 2nd quarter activity... Does anyone know if LDA Capital have finally offloaded their recent pile of shares? Tech x
@TECH ............... why are you expecting an "official release" of the Edge-Box, as it's been available for purchase for what seems like forever?

Because it is of "monetary value", is this the reason why you think BRN will/should do an "official release announcement"?
 
Reactions: 4 users

TECH

Regular
@TECH ............... why are you expecting an "official release" of the Edge-Box ...

Actually I was thinking more along the lines of "it's actually in the hands of the early, patient customers"!

As far as I know it hasn't been "officially" released to anybody, so any article/s on customer feedback would be well received,
not only by the company but by us the shareholders.

I also fully realize that the Edge-Box isn't a big money earner, but a tool to open up a gateway to further company traction,
through education and individuals' imaginations creating for us for free, so to speak.

Regards....Tech
 
Reactions: 7 users

Airbus looking at opportunities to create scale in space, satellites ...

"We would be happy to find ways to create scale in the space environment in Europe in general."

How about "personal satellites"?
Might be the next big thing 🤔..


[attached image]
 
Reactions: 5 users

mrgds

Regular
Actually I was thinking more along the lines of "it's actually in the hands of the early, patient customers"! ...
Well, I guess it's been released to anyone who wanted to stump up the $799 (whether they're early, patient customers or not, who knows).
I only have access to information via BrainChip IR, and I did ask Tony about any "official release of the Edge-Box" being conveyed, and his short reply was that it (the Edge-Box) is available through the BRN website.

Maybe you could ask Peter or Anil if it is indeed in the hands of the "early, patient customers"? And also how many EBs were produced initially, and how many are in production now, as the first lot were "sold out".

Cheers @TECH
 
Reactions: 3 users
@TECH ............... why are you expecting an "official release" of the Edge-Box ...
How much would BrainChip receive from the sale of each box? It would be a lot less than the selling price, so even if they were to sell 1,000 units maximum, I would expect BrainChip to receive maybe $100 a unit, so $100k.
 
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
High Performance Spaceflight Computing​




[Animation: Artemis astronauts working on the Moon with rovers, tools and habitats.]


The objective of NASA’s High Performance Spaceflight Computing (HPSC) project is to develop a next-generation flight computing system that addresses computational performance, power management, fault tolerance, and connectivity needs of NASA missions through 2040 and beyond.

LEAD CENTER: Jet Propulsion Laboratory
CONTRACT AWARDED: August 2022
PROJECT MANAGER: Jim Butler
PRINCIPAL TECHNOLOGIST: Wesley Powell
NASA is actively seeking to answer fundamental questions about life beyond Earth through groundbreaking science and exploration missions:

  • Are we alone?
  • What does tomorrow bring?
  • How is our universe changing?
For future space missions seeking to answer these questions, there is a need for significant advances in onboard computing. Required advances in computing include navigation and control systems, complex science instruments, robotic science sample acquisition and return, communications, autonomous robotic operations, crewed instrument health and safety monitoring, power generation and management, and autonomous fault handling and recovery.

However, space presents challenges to computing. Radiation in the space environment can cause long-term damage to electronic components and cause errors that disrupt computing. Missions beyond Earth orbit can also present a high demand for onboard computing resources because of the time delay to communicate to and from Earth. This communication latency drives the need for many space activities to be performed autonomously and in real-time onboard, without any assistance from ground controllers on Earth. Performing these onboard functions involves running many types of computational workloads in the space environment, including advanced autonomy, AI (artificial intelligence) and machine learning, image and signal processing, data flow management, and object detection and classification. All of these workloads require advances in onboard computing technology to meet the demands of increasingly complex missions.

NASA’s Planetary Decadal Study provides the U.S. with a 10-year roadmap for space exploration and concludes that future missions demand extensive computational and autonomous capabilities that cannot be met by legacy space processors.

Managed by NASA’s Jet Propulsion Laboratory in Southern California and funded by the agency’s Space Technology Mission Directorate (STMD) at NASA Headquarters in Washington, the High Performance Spaceflight Computing (HPSC) project closes that requirements gap.

Quote about High Performance Spaceflight Computing by Dr. Prasun Desai, Deputy Administrator for NASA STMD: “This project will spur innovative solutions for next generation space computing tailored to different mission applications that the whole world can harness. Advancing this capability will transform future space missions at a rapid pace. It is a truly game changing technology advancing the current state-of-the-art capability by up to 100x, and it is very exciting as we get closer to manufacturing the processor.”

High Performance

The HPSC project will provide high performance AI dataflow processing with scalable vector computing capabilities that are critical for the science and autonomy needs of future advanced space systems.


Fault-Tolerant

This project is specially designed to survive in hazardous space environments and contains features that ensure it can provide reliable results in the harshest of them. This capability ensures the execution of critical operations, such as robotically landing or flying on another planet, supporting astronauts in deep space, or operating near small bodies in the outer solar system.


Power Optimized

Because electrical power is a vital resource in space, HPSC is designed to be adaptable in terms of power usage and computing performance. This flexibility allows dynamic, granular control of functions that can be turned off when they are not in use or put into lower power modes. Due to this flexibility to tailor power and performance, the HPSC processor can be used across missions with widely ranging power requirements and can be uniquely suited for missions where the power budget and performance needs vary significantly between mission phases.


Connectivity​

Using advanced Ethernet networking technologies, HPSC can connect to a wide array of sensors and other devices. Using these same connectivity technologies, multiple HPSCs can be connected together to build bigger systems with enhanced capabilities.


Built on Industry Standards

The core of the HPSC design is an industry standard, open-source instruction set architecture, bundled with significant fault tolerance, radiation tolerance, and a full security suite as well as all the software required to run it. The HPSC also includes a suite of features and industry-standard interfaces and protocols.

[Image gallery: NASA and NASA/JPL-Caltech illustrations of HPSC-supported missions; the captions repeat the capability descriptions below.]

High Performance Spaceflight Computing (HPSC) is the brain of the spacecraft, coordinating and executing the necessary functions to ensure mission success.


Processing Data

Handles vast amounts of data generated by spacecraft instruments and sensors, performing complex calculations and data analysis in real-time.


Control Systems​

Runs the software that controls the spacecraft’s various subsystems, such as navigation, communication, power management, and scientific instruments.


Communications​

Manages data communication between the spacecraft and ground control, ensuring that mission data is transmitted efficiently and commands from Earth are received and executed correctly.


Autonomous Operations​

Supports autonomous decision-making capabilities, allowing the spacecraft to perform tasks without real-time human intervention, which is crucial for missions far from Earth.


Error Handling

Includes fault tolerance and error correction features to ensure reliable operation in the harsh environment of space, where radiation and extreme temperatures can affect electronic components.

Quote about High Performance Spaceflight Computing by Eugene Schwanbeck, Program Element Manager for HPSC: “In the gold rush of space exploration, the cooperation between NASA and Microchip Technologies has provided a better shovel. What we will find with this leap in performance, fault tolerance, and flexibility, only time and exploration will unveil. The journey to uncovering the mysteries of our universe and how we can explore it more productively becomes more promising with advancements like HPSC.”
The NASA HPSC project is being developed in collaboration with Microchip Technology Inc. and led by a team of engineers at NASA JPL with significant research and development contributions from Microchip.

The HPSC project has been active since 2021 and will produce its first processors in early 2025, followed by demonstrations on future NASA or commercial space platforms. This project will advance spaceflight computing needs, offering an unprecedented opportunity for scientific return, future advancements, and upgrades in space computing systems. It will benefit not only NASA and commercial space companies, but also has applications in automotive, consumer, industrial, and aerospace and defense industries. The HPSC processor will be commercially available from Microchip with broad ecosystem and industry support.

NASA is leading the SOSA™ (Sensor Open Systems Architecture™) Space Subcommittee to foster an interoperable spaceflight avionics standard, which is key to the establishment of an ecosystem that will support the HPSC processor. The SOSA Space Subcommittee has developed significant ecosystem involvement from commercial space providers and is delivering interoperable, industry-standard specifications that industry and government can use to build efficient and performant systems.











HPSC White Paper​





Here are some parts I've highlighted from the HPSC White Paper.

Notably, the paper discusses artificial "intelligence at-the-edge" (AIAE) systems. 🧠🍟🚀

[Two screenshots of highlighted excerpts from the HPSC White Paper]
 
Reactions: 18 users

gilti

Regular
What the hell happened to "supply & demand"?
It's 10:1 at .195c - .200c and still they are forcing it down.
Pricks
 
Reactions: 6 users