I hadn’t considered that this could be paid marketing. Could it be? I’d be really disappointed if it was…
I don't care as long as it helps.
Don't let the FUDsters, either over on the crapper or here, into your head.
It's all good brother. It's been a long hard ride for all of us.
Sorry everyone. I've been in 8 years… and that was a small moment of weakness. Forgive me!
Actually - I think you're right about that. I recall it too. As you were.

I seem to remember this question being asked before to Kevin and he said no, he wasn't being paid by BRN.
Also, over 10 million in prop bids have disappeared from the buy side over the last few days, mainly from today.
The Kevin effect?
Hopefully the share price tide is turning our way at last.
Early signs perhaps?
Not fluent English here… but isn't it negative in the first place if the buy side disappears?
Not when they're just prop bids placed to distort and manipulate the market.
So it means paid by IBM and approved by IBM
Hi mia,
It is Kevin's job to investigate "neighbouring" technologies. Here's his IBM blog:
https://community.ibm.com/community/user/blogs/kevin-d-johnson
This one from 2026-01-25 is a good example, pre-Akida:
https://community.ibm.com/community...unityKey=74d589b7-7276-4d70-acf5-0fc26430c6c0
Building Event-Driven HPC/AI Infrastructure with IBM Spectrum Symphony
By Kevin D. Johnson posted Wed January 21, 2026 10:16 AM
I've built out the neuromorphic demo with IBM Spectrum Symphony and GPFS. Like the KNN semantic routing I demonstrated a couple of weeks ago, Symphony routes meaning, not merely bits and bytes.
Now, I've built a Spiking Neural Network (SNN) service on Symphony that runs on six A100 GPUs on IBM Cloud, extending the intelligent routing project I originally did to identify patterns for small, medium, and large queries. But, what you choose to route is really up to you. Let me give you another way we could route this. How about options portfolio optimization?
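For anyone wondering what "routing" means here in plain terms: classify a workload, then dispatch it to the service tier registered for that class. This is my own toy sketch, not Kevin's Symphony code; the token cutoffs and tier names are invented for illustration, and the real project trains the classifier rather than hard-coding it.

```python
# Toy "route by class" dispatcher. Cutoffs and tier names are invented.

def classify(query_tokens: int) -> str:
    """Bucket a query by size (stand-in for a learned classifier)."""
    if query_tokens < 100:
        return "small"
    if query_tokens < 1000:
        return "medium"
    return "large"

def route(query_tokens: int, services: dict) -> str:
    """Send the query to the service registered for its class."""
    return services[classify(query_tokens)]

# Hypothetical service registry, one backend per class.
services = {"small": "cpu-pool", "medium": "gpu-shared", "large": "gpu-dedicated"}
print(route(42, services), route(5000, services))  # prints: cpu-pool gpu-dedicated
```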
I presented Symphony's neuromorphic engine with live options pricing, routed after training on five years of options data. When constraints are violated or opportunities emerge, neurons fire. Each spike is a decision: buy, sell, rebalance. The Norse LIF spiking implementation achieved 92% of traditional convex optimizer performance; the 8% gap would likely close with a little tuning that I didn't do.
But, the communication pattern matters more than the benchmark. Traditional optimizers move continuous gradients. The SNN averaged 43 spikes per optimization. Meaning moves. Silent neurons stay silent.
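To make "neurons fire on events, silent neurons stay silent" concrete, here's a minimal leaky integrate-and-fire (LIF) sketch in plain Python. This is my own illustration of the general LIF idea, not Kevin's Norse/Symphony implementation; the decay, threshold, and input values are made up, and in the blog's framing each emitted spike would map to an action like buy, sell, or rebalance.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks each step, integrates input, and emits a spike (a discrete
# decision event) only when it crosses a threshold.

def lif_run(inputs, decay=0.9, threshold=1.0):
    """Return a 0/1 spike train for a sequence of input currents."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = decay * v + x      # leak toward zero, then integrate input
        if v >= threshold:
            spikes.append(1)   # neuron fires: something worth acting on
            v = 0.0            # reset after the spike
        else:
            spikes.append(0)   # silent neuron communicates nothing
    return spikes

# Steady quiet input never crosses threshold; a burst (think: a
# constraint violation) produces a couple of sparse spikes.
quiet = lif_run([0.1] * 10)
burst = lif_run([0.1] * 5 + [0.8] * 3 + [0.1] * 2)
print(sum(quiet), sum(burst))  # prints: 0 2
```

The point of the sparsity claim is visible here: instead of shipping a dense vector of gradients every step, the network only communicates on the few steps where a spike occurs.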
GPFS/GPU HBM as Computational Storage
The tensor state originates in GPU HBM where the spiking neural network runs. GPFS provides the framing that makes this computational storage possible: 480GB of A100 HBM across six GPUs becomes the hot tier, with GPFS as a warm tier for observability and audit. In other words, we flip a traditional HSM capability on its head. When tensor state needs to be tracked or checkpointed, DMAPI manages the migration to GPFS and creates a durable record without any code changes. The spike logs, neuron states, and portfolio weights flow to /gpfs/fs1/neuromorphic where they become observable, auditable, and recoverable.
DMAPI automatically intercepts file events and maintains extended attributes linking GPFS files back to their GPU HBM origins. ILM policies control when hot data cools to GPFS and when warm data recalls back to the GPU's HBM. The storage system participates in a computation lifecycle and not just persistence.
This is computational storage: GPU memory as the execution tier, GPFS as the observability and checkpoint tier, DMAPI as the event fabric connecting them, all guided by Symphony and the neuromorphic design it makes possible.
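A rough sketch of the lifecycle described above, for readers who don't live in GPFS land: state lives in a fast "hot" tier while the computation runs, and is periodically checkpointed to a "warm" file-system tier with metadata linking it back to its origin, from which it can be recalled. This is a generic Python illustration only; it uses no DMAPI, GPFS, or Symphony APIs, and the origin tag and temp directory are placeholders for the blog's GPU HBM and /gpfs paths.

```python
import json
import os
import tempfile
import time

# Hot tier: in-memory state (stand-in for tensors in GPU HBM).
# Warm tier: a directory on disk (stand-in for GPFS), where each
# checkpoint records metadata linking it back to its hot-tier origin.

def checkpoint(state: dict, warm_dir: str, origin: str) -> str:
    """Persist hot-tier state to the warm tier with origin metadata."""
    os.makedirs(warm_dir, exist_ok=True)
    path = os.path.join(warm_dir, f"ckpt-{int(time.time() * 1e6)}.json")
    record = {"origin": origin, "saved_at": time.time(), "state": state}
    with open(path, "w") as f:
        json.dump(record, f)
    return path

def recall(path: str) -> dict:
    """Pull a warm-tier checkpoint back into hot-tier memory."""
    with open(path) as f:
        record = json.load(f)
    return record["state"]

hot_state = {"neuron_v": [0.2, 0.7], "weights": [0.5, 0.5]}
warm = tempfile.mkdtemp()          # placeholder for a /gpfs/... directory
p = checkpoint(hot_state, warm, origin="gpu0:hbm")
assert recall(p) == hot_state      # the round trip survives intact
```

In the real setup the interception and migration are automatic (DMAPI events and ILM policies) rather than explicit function calls, which is the "without any code changes" part of the claim.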
One final note: Spiking neural nets are popular in edge deployments because they use less power than a traditional GPU. One of the easily missed points here is that Symphony can be used in edge computing as well. I'll also demo that in the coming weeks. Stay tuned!