I'm so bloody slow sometimes; only now has it hit me that neuromorphic computing, in a way, sidesteps the problem of scaling down the process node by enabling more performance in different ways:
1) Now that we have such low power consumption and dissipated heat, it should be possible to run these chips more aggressively.
2) As I see it, neuromorphic computing lends itself perfectly to expanding the amount of silicon used, like connecting multiple chiplets to support larger models and/or running multiple models that utilize each other. They can even be stacked so they don't take up any significant space.
3) It's a young technology that is already beating the old one, with a long runway of innovation ahead of it, like the jump from Akida 1 to Akida 2. I bet there's a vast space of possibilities left to explore, like hardware support for n-dimensional models.
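To make point 1 concrete, here's a minimal back-of-envelope sketch of the sparsity argument behind neuromorphic power savings. All the numbers (layer sizes, the 10% spike rate) are illustrative assumptions I picked myself, not figures from any specific chip:

```python
# Back-of-envelope comparison: dense inference touches every weight
# every timestep, while event-driven (spiking) hardware only does work
# for neurons that actually fire.

def dense_ops(neurons_in, neurons_out):
    # Dense layer: every input-output pair costs one multiply-accumulate.
    return neurons_in * neurons_out

def event_driven_ops(neurons_in, neurons_out, spike_rate):
    # Event-driven layer: only spiking inputs trigger synaptic updates.
    return int(neurons_in * spike_rate) * neurons_out

dense = dense_ops(1024, 1024)                          # 1,048,576 ops
sparse = event_driven_ops(1024, 1024, spike_rate=0.1)  # 104,448 ops
print(dense // sparse)  # → 10, i.e. ~10x fewer synaptic ops at 10% activity
```

Since dissipated energy scales roughly with the number of synaptic operations, that kind of activity sparsity is one plausible reason the same workload leaves thermal headroom to clock the silicon harder.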
While Nvidia has hit the brick wall and others are struggling with Moore's law, we've just gotten started and are seemingly already way ahead of their Jetson.
So now I think neuromorphic computing is going to be indispensable for future performance gains, and it might branch out along the three directions above; combinations and further branches may appear.