THOR was spotted in a vid earlier this year by
@Pom down under, with a couple of responses from posters who watched it, as below.
I share very similar concerns. I simply don't trust Sean Hehir, and I reckon Tony Dawe is sincere but probably as in the dark as we shareholders are. I've been waiting too long for my investment to bear fruit (originally bought about 2019 at 6 cents, and now average 65 cents); it's making me very...
thestockexchange.com.au
OK, watching it now, and already at the 2.33min mark the presenter announces BrainChip out loud! So now 28mins to go, but it's a great start to the video, thanks Pom. OK, watched it all. Basically the talk was about progressing neuromorphic awareness and the Edge AI Foundation building THOR, which will bring...
thestockexchange.com.au
Appears to be moving ahead pretty quickly, with the website up and running etc. as at November.
Akida & MetaTF have been included with a number of other neuromorphic HW/SW players like Loihi, SpiNNaker, BrainScaleS, DYNAP, etc.
Not a bad thing imo, having major exposure to the academic groups and the new breed of engineers and developers etc. that will be part of the driving force behind further neuromorphic uptake.
What is THOR (as opposed to who is THOR):
www.neuromorphiccommons.com
The Neuromorphic Commons
THOR is the US hub for open-access large-scale neuromorphic research.
Building Biological Intelligence at Scale
We envision a future where interdisciplinary collaboration on biological intelligence is seamless. Through close partnerships with industry, THOR will empower researchers to request, co-develop, deploy, and evaluate neuromorphic experiments on heterogeneous computing hardware systems. This platform will enable a richer understanding of computational models, algorithms, neuromorphic hardware, and real-world applications that benefit from bio-inspired processing.
Deployed at the University of Texas at San Antonio, THOR's infrastructure is hosted and maintained on-site. This provides a stable and reliable resource, ensuring community researchers have the ability to innovate and push the boundaries of what's possible in neuromorphic computing.
Neuromorphic Hardware
Explore a diverse range of cutting-edge neuromorphic hardware platforms.
Neuromorphic hardware systems and chips are a groundbreaking class of computer processors engineered to mimic the efficiency and structure of the human brain. Unlike conventional computers, which separate processing and memory (the von Neumann bottleneck), neuromorphic chips integrate these functions by using networks of artificial neurons and synapses, eliminating the energy-intensive transfer of data. These chips rely on Spiking Neural Networks (SNNs), where processing is event-driven—meaning the artificial neurons only consume power when an information "spike" is received—leading to drastically lower energy consumption and latency. This brain-inspired architecture, demonstrated in chips like Intel's Loihi and IBM's TrueNorth, makes them exceptionally well-suited for demanding, real-time AI tasks requiring on-chip learning and adaptation, particularly in resource-constrained environments such as autonomous vehicles, robotics, and edge computing devices.
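To illustrate the event-driven idea in plain Python (my own rough sketch, not from the THOR site or any particular chip — parameter values are made up for illustration), here's a minimal leaky integrate-and-fire (LIF) neuron, the basic unit behind the Spiking Neural Networks described above:

```python
# Toy leaky integrate-and-fire (LIF) neuron. Illustrative only:
# threshold and leak values are arbitrary, not from any real hardware.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the time steps at which the neuron spiked. Downstream
    work happens only when the membrane crosses threshold -- the
    "event-driven" property that keeps neuromorphic chips low-power.
    """
    membrane = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        membrane = membrane * leak + current  # leaky integration
        if membrane >= threshold:             # threshold crossed => spike
            spikes.append(t)
            membrane = 0.0                    # reset after firing
    return spikes

# Strong input fires the neuron; the weak stretch in the middle doesn't.
print(lif_neuron([0.6, 0.6, 0.1, 0.1, 0.9, 0.9]))  # → [1, 4]
```

Note how the output is just a sparse list of spike times rather than a dense activation at every step — that sparsity is where the energy savings come from.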
Neuromorphic Software
Explore a diverse range of cutting-edge neuromorphic software platforms.
Neuromorphic software frameworks are the essential programming bridge that allows developers to design and deploy efficient, brain-inspired algorithms—namely Spiking Neural Networks (SNNs)—onto specialized neuromorphic hardware. Unlike traditional software that uses continuous data and sequential instructions, these frameworks provide tools and APIs to work with the event-driven, asynchronous communication characteristic of spiking neurons and to leverage synaptic plasticity for on-chip learning. Key examples include Intel's Lava (an open-source framework designed to be hardware-agnostic), PyNN (a simulator-independent language for SNN model specification), and deep learning extensions like snnTorch (which integrates SNNs with the PyTorch ecosystem), all of which are crucial for overcoming the complexities of programming massively parallel, bio-inspired architectures to fulfill the promise of ultra-low-power AI.
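One of the basic jobs these frameworks handle is encoding ordinary continuous data into spike trains. Here's my own rough plain-Python sketch of rate coding, the simplest such encoding — this is an illustration of the concept, not the actual API of snnTorch, PyNN, or Lava:

```python
# Toy rate encoder: converts a continuous value in [0, 1] into a binary
# spike train whose average firing rate tracks the value. SNN frameworks
# provide tuned versions of this kind of utility; this is just a sketch.
import random

def rate_encode(value, num_steps, seed=0):
    """Each time step independently fires with probability `value`
    (a Bernoulli-style rate code)."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(num_steps)]

train = rate_encode(0.8, num_steps=10)
print(train, "observed rate =", sum(train) / len(train))
```

A larger input value yields a denser spike train, so downstream spiking neurons (like the LIF above) fire more often for stronger inputs — that's the whole contract between the encoder and the network.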