BRN Discussion Ongoing

Wags

Regular
Artificial Intelligence Strategy for the Department of War
Released very recently.
Fits BRN like a glove. Seems huge!
No further comment from me right now - still absorbing it.
What do posters think?
I agree Manny, and it certainly would be huge. Can you imagine what else it leads to? One can only hope at this point.

My biggest fear, however (no, I am not a downramper, not even a little bit), but nonetheless a fear I have had for some time, is that INTEL will shine out of the shadows, just in time, and scoop the govt money and contracts. Didn't the US Govt take a 10% stake in Intel via a bailout fairly recently?
What if Intel tape out Loihi 3 for commercial use? Did I read that somewhere recently? I'm assuming they can do it in their own foundry, maybe even jump the queue? This would be the worst outcome, but it is a scenario that bothers me.
I know we have some small orders and collaborations in the defence sector, but almost every mention of Akida also mentions Loihi.
Is there anyone who can give me some reasoning to banish this fear from my birdsnest-infested brain?
Is it possible Loihi 3 could have any Akida special sauce? I don't know.
Hopefully our CEO and team are all over it.
 
  • Like
  • Fire
Reactions: 7 users

Guzzi62

Regular
I’m thinking we wouldn’t need to do a capital raise if it were Brainchip!
LOL, that paper is from the 6th Jan 2026, so how could they get any contract when the capital raise was done?

Not making any sense.

Thanks manny100, excellent find, and as you pointed out, BRN already have a foot inside military circles, and they know the company.

James Shields, BrainChip's VP of Sales & Business Development, is ex-Air Force, which also helps.

 
  • Like
  • Fire
Reactions: 4 users

HopalongPetrovski

I'm Spartacus!
I agree Manny, and it certainly would be huge. Can you imagine what else it leads to? One can only hope at this point.

My biggest fear, however (no, I am not a downramper, not even a little bit), but nonetheless a fear I have had for some time, is that INTEL will shine out of the shadows, just in time, and scoop the govt money and contracts. Didn't the US Govt take a 10% stake in Intel via a bailout fairly recently?
What if Intel tape out Loihi 3 for commercial use? Did I read that somewhere recently? I'm assuming they can do it in their own foundry, maybe even jump the queue? This would be the worst outcome, but it is a scenario that bothers me.
I know we have some small orders and collaborations in the defence sector, but almost every mention of Akida also mentions Loihi.
Is there anyone who can give me some reasoning to banish this fear from my birdsnest-infested brain?
Is it possible Loihi 3 could have any Akida special sauce? I don't know.
Hopefully our CEO and team are all over it.
Hi Wags.
Impossible to know the future or what developments or serendipitous discoveries may lead to, but even if Loihi does manage to grab the lion's share of neuromorphic applications, the slice of the pie picked up by their competition would still be substantial.
Think of Apple re the Intel/Microsoft behemoth.
The native advantages of neuromorphic tech are only now beginning to be understood and incorporated by big tech, and our patents make us, at the least, a handsome and tasty entrée for whoever wants to go up against a much-diminished Intel.
 
  • Like
  • Fire
  • Love
Reactions: 5 users

Diogenese

Top 20
I agree Manny, and it certainly would be huge. Can you imagine what else it leads to? One can only hope at this point.

My biggest fear, however (no, I am not a downramper, not even a little bit), but nonetheless a fear I have had for some time, is that INTEL will shine out of the shadows, just in time, and scoop the govt money and contracts. Didn't the US Govt take a 10% stake in Intel via a bailout fairly recently?
What if Intel tape out Loihi 3 for commercial use? Did I read that somewhere recently? I'm assuming they can do it in their own foundry, maybe even jump the queue? This would be the worst outcome, but it is a scenario that bothers me.
I know we have some small orders and collaborations in the defence sector, but almost every mention of Akida also mentions Loihi.
Is there anyone who can give me some reasoning to banish this fear from my birdsnest-infested brain?
Is it possible Loihi 3 could have any Akida special sauce? I don't know.
Hopefully our CEO and team are all over it.
Hi Wags,

Yes. We cannot ignore Intel, or the unknown unknowns.

I hope that TENNs will give us the advantage.
 
  • Like
  • Fire
Reactions: 8 users

manny100

Top 20
I agree Manny, and it certainly would be huge. Can you imagine what else it leads to? One can only hope at this point.

My biggest fear, however (no, I am not a downramper, not even a little bit), but nonetheless a fear I have had for some time, is that INTEL will shine out of the shadows, just in time, and scoop the govt money and contracts. Didn't the US Govt take a 10% stake in Intel via a bailout fairly recently?
What if Intel tape out Loihi 3 for commercial use? Did I read that somewhere recently? I'm assuming they can do it in their own foundry, maybe even jump the queue? This would be the worst outcome, but it is a scenario that bothers me.
I know we have some small orders and collaborations in the defence sector, but almost every mention of Akida also mentions Loihi.
Is there anyone who can give me some reasoning to banish this fear from my birdsnest-infested brain?
Is it possible Loihi 3 could have any Akida special sauce? I don't know.
Hopefully our CEO and team are all over it.
Even if INTEL declared Loihi 2 commercial overnight, they would face lengthy engagement-to-adoption cycles. Not as long as BrainChip's, but we would still retain a lead.
INTEL does not have the equivalent of AKIDA Cloud, which speeds up time to prototype.
AKIDA is deployable today; LOIHI is not.
I imagine, however, that INTEL would be pushing hard towards commercialisation.
Fortunately, we do have some key defense clients on the hook right now.
 
  • Like
  • Love
  • Fire
Reactions: 7 users

TECH

Regular
Hi Wags,

Yes. We cannot ignore Intel, or the unknown unknowns.

I hope that TENNs will give us the advantage.

Dare I say, "God save the Queen (King)" Cos "Nothing is going to save the Governor General" (Intel)

Best regards.... The (x) Prime Minister :ROFLMAO::ROFLMAO::ROFLMAO:

Intel just seems to have too many issues; as soon as one spot fire is extinguished, another flares up. But as we all know, the US Government will always come to their rescue...

IP.
 
  • Like
  • Love
Reactions: 3 users

TopCat

Regular
LOL, that paper is from the 6th Jan 2026, so how could they get any contract when the capital raise was done?

Not making any sense.

Thanks manny100, excellent find, and as you pointed out, BRN already have a foot inside military circles, and they know the company.

James Shields, BrainChip's VP of Sales & Business Development, is ex-Air Force, which also helps.

Ok, maybe we'll see an explosion of sales from it (I won't hold my breath).
 

Wags

Regular
Even if INTEL declared Loihi 2 commercial overnight, they would face lengthy engagement-to-adoption cycles. Not as long as BrainChip's, but we would still retain a lead.
INTEL does not have the equivalent of AKIDA Cloud, which speeds up time to prototype.
AKIDA is deployable today; LOIHI is not.
I imagine, however, that INTEL would be pushing hard towards commercialisation.
Fortunately, we do have some key defense clients on the hook right now.
Thank you ALL for the positive feedback.
I need to give myself a clip over the head every now and then. I think it's driven as much as anything by Trumpy MAGA and Intel's American flag. Maybe that was what was driving the magical redomicile talk?
I remind myself about TENNs, as Dio points out, our bagful of patents, the new Provinence tech, PICO as well as AKIDA 1, 1.5, 2 and (assuming soonish) 3, the MetaTF software, the tape-out in play, customers on the hook, all the backroom hush-hush, etc. etc.
Also Dio, serious question: is it possible for Loihi to incorporate Akida special sauce, or is this just a ridiculous thought?
WTF was I thinking.
I shall redirect my thinking for a moment to the nice steak soon to be on the BBQ, and the accompanying glass of red.
 
  • Fire
  • Like
Reactions: 5 users
Have another Drink
 
  • Haha
Reactions: 2 users

manny100

Top 20
I asked IR to request that Sean address the implications of the Department of War release for Brainchip in the podcast due later this month.
 
  • Like
  • Fire
Reactions: 8 users

manny100

Top 20
We have Lockheed Martin (LM) on our CyberNeuro-RT hook.
LM's Skunk Works division has been testing drones with neuromorphic edge AI with our water-safety drone partner Arquimea.
 
  • Like
  • Fire
Reactions: 9 users

TopCat

Regular

Diogenese

Top 20
Thank you ALL for the positive feedback.
I need to give myself a clip over the head every now and then. I think it's driven as much as anything by Trumpy MAGA and Intel's American flag. Maybe that was what was driving the magical redomicile talk?
I remind myself about TENNs, as Dio points out, our bagful of patents, the new Provinence tech, PICO as well as AKIDA 1, 1.5, 2 and (assuming soonish) 3, the MetaTF software, the tape-out in play, customers on the hook, all the backroom hush-hush, etc. etc.
Also Dio, serious question: is it possible for Loihi to incorporate Akida special sauce, or is this just a ridiculous thought?
WTF was I thinking.
I shall redirect my thinking for a moment to the nice steak soon to be on the BBQ, and the accompanying glass of red.
Hi Wags,

I had thought that the special sauce was the N-of-M coding which greatly enhances sparsity without affecting accuracy. This was developed by Simon Thorpe's group, but was not patented. It was licensed and then sold to Brainchip. It was also developed independently by Steve Furber (SpiNNaker, Man Uni). If I recall correctly, Applied Brain Research (ABR - Chris Eliasmith) appears to use this (as well as state space models). https://www.appliedbrainresearch.com/technology
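For anyone who hasn't met it, the flavour of N-of-M (rank-order) coding can be sketched in a few lines. This is a minimal illustration only: the top-N selection by magnitude and the geometric rank weights are my assumptions for demonstration, not BrainChip's or Thorpe's actual scheme.

```python
import numpy as np

def n_of_m_encode(x, n):
    """Illustrative N-of-M rank-order code: of the M input channels,
    only the n strongest 'fire', and each surviving channel is weighted
    by its rank (earliest/strongest rank gets the largest weight)."""
    order = np.argsort(-x)[:n]           # indices of the n largest inputs
    code = np.zeros_like(x, dtype=float)
    for rank, idx in enumerate(order):
        code[idx] = 0.9 ** rank          # geometric rank weighting (assumed)
    return code

x = np.array([0.1, 0.9, 0.5, 0.3])
sparse = n_of_m_encode(x, n=2)           # only 2 of the 4 channels survive
```

The point is the sparsity: for n much smaller than M, most of the code is zero, so event-based hardware downstream has far fewer spikes to process, which is why it can enhance sparsity without much loss of accuracy.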

I haven't seen a one-to-one comparison with ABR, but a quick scan of the most recently published ABR NN processor patent application suggests to me that it is very complex and would probably be slower than TENNs.

WO2024197396A1 EVENT-BASED NEURAL NETWORK PROCESSING SYSTEM 20230326




[0108] Returning to the address generator 335 in the processing element 120, the generator 335 comprises two major components: the loop parameter generator and the loop iterator. The loop iterator is relatively simple to implement: in operation, it receives loop parameters through a stream interface, validates them, and implements two nested for loops using a simple state machine. The loop parameter generator, on the other hand, is more complex and temporally multiplexed. An arithmetic logic unit (ALU) is used to perform the individual operations described above and is depicted in FIG. 15. The flow of data is controlled using a micro-program, as mentioned above. A number of possible input sources are provided at the top of the diagram, including the event source x- and y-location, the address generator configuration registers (abbreviated as "config"), and the fed-back loop-parameter output. The computation performed by the ALU is controlled through seven multiplexers/demultiplexers, as well as a write-enable control signal and a logic-unit operation signal (write-enable and logic-unit signals are not shown in FIG. 15). The input multiplexer "mux_in" selects either a configuration word or the source x- and y-location. The output read multiplexer "mux_out_rd" selects one of the loop-parameter values as an input; the output write demultiplexer "mux_out_wr" determines which loop-parameter should be updated. Lastly, the multiplexers mux_a, mux_b, mux_c, and mux_d determine what input should be routed to the arithmetic and logic units, as well as the output. The logic unit (LU) performs operations such as logical "AND" as well as right-shift operations with a latency of one cycle. The multiply-accumulate unit (MAC) performs the fixed computation a × b + c with a latency of three cycles.
The short horizontal bars following the multiplexers, LU, and MAC represent register boundaries (i.e., one clock cycle passes between the top and bottom of the black bar).
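On my reading of that excerpt, the loop iterator's job reduces to two nested for loops streaming out addresses once the loop parameter generator has supplied the bounds. The parameter names, base address, and strides below are assumptions for illustration, not the patent's actual register layout:

```python
def loop_iterator(outer_n, inner_n, base=0, outer_stride=1, inner_stride=1):
    """Rough sketch of the two nested for loops the patent's loop
    iterator implements with a simple state machine: one address is
    emitted per (i, j) pair, stepped by the assumed strides."""
    for i in range(outer_n):
        for j in range(inner_n):
            yield base + i * outer_stride + j * inner_stride

# e.g. walk a 2x3 block starting at (hypothetical) address 100
addrs = list(loop_iterator(2, 3, base=100, outer_stride=10, inner_stride=1))
```

The hardware complexity the application describes sits in the temporally multiplexed loop parameter generator that computes those bounds and strides per event, which is why I suspect it would be slower than TENNs.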

So, while TENNs may trump (pardon the expression) N-of-M, we need to maintain constant vigilance.
 
  • Fire
  • Like
Reactions: 3 users
Hi Wags,

I had thought that the special sauce was the N-of-M coding which greatly enhances sparsity without affecting accuracy. This was developed by Simon Thorpe's group, but was not patented. It was licensed and then sold to Brainchip. It was also developed independently by Steve Furber (SpiNNaker, Man Uni). If I recall correctly, Applied Brain Research (ABR - Chris Eliasmith) appears to use this (as well as state space models). https://www.appliedbrainresearch.com/technology

I haven't seen a one-to-one comparison with ABR, but a quick scan of the most recently published ABR NN processor patent application suggests to me that it is very complex and would probably be slower than TENNs.

WO2024197396A1 EVENT-BASED NEURAL NETWORK PROCESSING SYSTEM 20230326



[0108] Returning to the address generator 335 in the processing element 120, the generator 335 comprises two major components: the loop parameter generator and the loop iterator. The loop iterator is relatively simple to implement: in operation, it receives loop parameters through a stream interface, validates them, and implements two nested for loops using a simple state machine. The loop parameter generator, on the other hand, is more complex and temporally multiplexed. An arithmetic logic unit (ALU) is used to perform the individual operations described above and is depicted in FIG. 15. The flow of data is controlled using a micro-program, as mentioned above. A number of possible input sources are provided at the top of the diagram, including the event source x- and y-location, the address generator configuration registers (abbreviated as "config"), and the fed-back loop-parameter output. The computation performed by the ALU is controlled through seven multiplexers/demultiplexers, as well as a write-enable control signal and a logic-unit operation signal (write-enable and logic-unit signals are not shown in FIG. 15). The input multiplexer "mux_in" selects either a configuration word or the source x- and y-location. The output read multiplexer "mux_out_rd" selects one of the loop-parameter values as an input; the output write demultiplexer "mux_out_wr" determines which loop-parameter should be updated. Lastly, the multiplexers mux_a, mux_b, mux_c, and mux_d determine what input should be routed to the arithmetic and logic units, as well as the output. The logic unit (LU) performs operations such as logical "AND" as well as right-shift operations with a latency of one cycle. The multiply-accumulate unit (MAC) performs the fixed computation a × b + c with a latency of three cycles.
The short horizontal bars following the multiplexers, LU, and MAC represent register boundaries (i.e., one clock cycle passes between the top and bottom of the black bar).

So, while TENNs may trump (pardon the expression) N-of-M, we need to maintain constant vigilance.
I thought it was the JAST rules we bought, Dio?

Renesas have already borrowed the N & M from us for $1M when they bought the licence and then went solo :(

Could be mistaken too; going on memory, and that was a few years back.
 
  • Fire
  • Like
Reactions: 2 users

Diogenese

Top 20
I thought it was the JAST rules we bought Dio?

Renesas have already borrowed the N & M from us for $1M when they bought the licence and then went solo :(

Could be mistaken too, going on memory and that was a few years back.
Forgive a befuddled old man. There was something rattling around in the back of my brain ... (plenty of room to rattle around there).

In fact, @uiux's All Roads Lead to JAST thread all those years ago explains JAST.

However, I believe we did also get N-of-M from ST. I assumed that N-of-M was the secret sauce because JAST was not a secret.
 
  • Like
Reactions: 2 users

Wags

Regular
Hi Wags,

I had thought that the special sauce was the N-of-M coding which greatly enhances sparsity without affecting accuracy. This was developed by Simon Thorpe's group, but was not patented. It was licensed and then sold to Brainchip. It was also developed independently by Steve Furber (SpiNNaker, Man Uni). If I recall correctly, Applied Brain Research (ABR - Chris Eliasmith) appears to use this (as well as state space models). https://www.appliedbrainresearch.com/technology

I haven't seen a one-to-one comparison with ABR, but a quick scan of the most recently published ABR NN processor patent application suggests to me that it is very complex and would probably be slower than TENNs.

WO2024197396A1 EVENT-BASED NEURAL NETWORK PROCESSING SYSTEM 20230326



[0108] Returning to the address generator 335 in the processing element 120, the generator 335 comprises two major components: the loop parameter generator and the loop iterator. The loop iterator is relatively simple to implement: in operation, it receives loop parameters through a stream interface, validates them, and implements two nested for loops using a simple state machine. The loop parameter generator, on the other hand, is more complex and temporally multiplexed. An arithmetic logic unit (ALU) is used to perform the individual operations described above and is depicted in FIG. 15. The flow of data is controlled using a micro-program, as mentioned above. A number of possible input sources are provided at the top of the diagram, including the event source x- and y-location, the address generator configuration registers (abbreviated as "config"), and the fed-back loop-parameter output. The computation performed by the ALU is controlled through seven multiplexers/demultiplexers, as well as a write-enable control signal and a logic-unit operation signal (write-enable and logic-unit signals are not shown in FIG. 15). The input multiplexer "mux_in" selects either a configuration word or the source x- and y-location. The output read multiplexer "mux_out_rd" selects one of the loop-parameter values as an input; the output write demultiplexer "mux_out_wr" determines which loop-parameter should be updated. Lastly, the multiplexers mux_a, mux_b, mux_c, and mux_d determine what input should be routed to the arithmetic and logic units, as well as the output. The logic unit (LU) performs operations such as logical "AND" as well as right-shift operations with a latency of one cycle. The multiply-accumulate unit (MAC) performs the fixed computation a × b + c with a latency of three cycles.
The short horizontal bars following the multiplexers, LU, and MAC represent register boundaries (i.e., one clock cycle passes between the top and bottom of the black bar).

So, while TENNs may trump (pardon the expression) N-of-M, we need to maintain constant vigilance.
Thanks Dio,
This N-of-M coding corner seems a little crowded.
You're a clever man with a good sense of humor, cheers to you.
 

manny100

Top 20
Interesting. Under the Heading "AI-Native Warfighting." Page 5.
"We must put aside legacy approaches to combat and ensure we use this disruptive technology to compound the lethality of our military. Exercises and experiments that do not meaningfully incorporate AI and autonomous capabilities will be reviewed by the Director of Cost Assessment and Program Evaluation for resourcing adjustment"
 
  • Like
Reactions: 2 users

Diogenese

Top 20
Interesting. Under the Heading "AI-Native Warfighting." Page 5.
"We must put aside legacy approaches to combat and ensure we use this disruptive technology to compound the lethality of our military. Exercises and experiments that do not meaningfully incorporate AI and autonomous capabilities will be reviewed by the Director of Cost Assessment and Program Evaluation for resourcing adjustment"
Hi Manny,

I'm not sure compounding lethality was at the front of PvdM's mind when he invented Akida, but it is inevitable.
 
  • Like
Reactions: 2 users

Guzzi62

Regular
Interesting. Under the Heading "AI-Native Warfighting." Page 5.
"We must put aside legacy approaches to combat and ensure we use this disruptive technology to compound the lethality of our military. Exercises and experiments that do not meaningfully incorporate AI and autonomous capabilities will be reviewed by the Director of Cost Assessment and Program Evaluation for resourcing adjustment"
We should not forget that we already have a contract running with the US Air Force Research Laboratory.




 

manny100

Top 20
Hi Manny,

I'm not sure compounding lethality was at the front of PvdM's mind when he invented Akida, but it is inevitable.
Hi Dio, agree it is inevitable.
Historically, it seems disruptive tech starts in defense and space and spreads to consumers in one form or another.
AKIDA could function equally well in a toy or a drone, missile, satellite, or sophisticated cyber-security system.
The key is how the AKIDA buyer uses it. That is likely why it will not take long for the 'prime' defense suppliers to take out an IP licence and apply/add their own 'secret sauce' to it; economies of scale apply as well, hence we won't see millions of AKD1500 chips being produced.
In no time most people will have an AKIDA chip in something they own.
It all comes down to how smart those working with it are.
We should do quite well from the DoD.
 
Last edited: