BRN Discussion Ongoing

Diogenese

Top 20
If correct, maybe explains why no subcontractor announced yet :unsure:


The project focuses on a specific type of radar processing known as micro-Doppler signature analysis, which offers unprecedented activity discrimination capabilities. BrainChip is currently in negotiations to enter into a subcontractor agreement with the previously mentioned aerospace and defence company for the completion of the contract award.

BrainChip will partner with the subcontractor to provide research and development services developing and optimising algorithms for a fixed fee totalling $800,000 over the same period.

No other material conditions exist that must be satisfied for the agreement to become legally binding and to proceed. Air Force Research Laboratory will begin making milestone payments in January 2025.
Doppler shift is generated by the motion of an object relative to a receiver.

Micro-Doppler radar can detect vibrations, rotation and other self-contained relative movement in a target object.

One possible application would be in the detection and identification of propeller-driven drones. It can detect the rotation speed of the propeller, and possibly the length of the propeller, because the Doppler shift increases from the centre to the tip of the propeller.

Other targets may have characteristic vibrations providing a micro-Doppler signature.

So part of the development will include building models of characteristic micro-Doppler signatures of friend-or-foe objects, just like Sheffield identified the Exocet as friendly.
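For anyone who wants to play with the idea, here's a back-of-the-envelope sketch of why blade length shows up in the signature: the tip moves fastest, so its Doppler shift sweeps the widest band. All numbers are made up for illustration and have nothing to do with the actual project:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def micro_doppler_hz(radius_m, rpm, carrier_hz, t):
    """Instantaneous two-way Doppler shift (Hz) of a scatterer at
    `radius_m` on a blade spinning at `rpm`, radar in the rotation plane."""
    omega = 2 * np.pi * rpm / 60.0                   # angular rate, rad/s
    v_radial = radius_m * omega * np.cos(omega * t)  # line-of-sight velocity
    wavelength = C / carrier_hz
    return 2 * v_radial / wavelength

t = np.linspace(0, 0.01, 1000)                  # 10 ms observation window
f_tip = micro_doppler_hz(0.5, 6000, 10e9, t)    # blade tip, 0.5 m, X-band
f_mid = micro_doppler_hz(0.25, 6000, 10e9, t)   # point halfway along blade

print(f"tip peak Doppler: {f_tip.max():.0f} Hz")
print(f"mid-blade peak:   {f_mid.max():.0f} Hz")
```

The peak Doppler scales linearly with radius, so the bandwidth of the micro-Doppler flash is a direct hint at propeller length, while the flash repetition rate gives the rotation speed.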
 
  • Like
  • Fire
  • Love
Reactions: 20 users
Hi Stable Genius,

I just realised I never followed up on your above reply to my Nov 20 post regarding Frontgrade Gaisler.

What I had meant to express with my post was that I don’t believe we are involved in FG’s GR716B, which is a microcontroller, not a microprocessor.

Frontgrade Gaisler and BrainChip announced their collaboration “to explore the integration of BrainChip’s Akida neuromorphic processor into Frontgrade Gaisler’s next generation fault-tolerant, radiation-hardened microprocessors” on May 6.


(By the way, Alf Kuchenbuch gave his Hardware Pioneers Max 2024 presentation three weeks after the FG announcement, not prior to it.)

May 6th was also the first day of the 17th Annual Workshop on Spacecraft Flight Software referred to in the following LinkedIn post, which in my opinion makes it clear that what we should be looking at instead is GR765, FG’s next-generation radiation-hardened multi-processor SoC (and hence prepare ourselves for a much longer time frame):

View attachment 74118



View attachment 74135

Just my two cents. I’d be happy to have some people with a technical background (which I don’t have) comment on this.


Thanks for the slide from that Frontgrade October 2024 presentation, by the way, which I hadn’t been aware of (green arrow and highlighting are mine).



View attachment 74140

View attachment 74136


From a February 2022 Position Paper:


View attachment 74137
View attachment 74138
View attachment 74139



Thanks for the feedback @Frangipani

I went off memory re the dates of the presentation and release; boy, time flies. I must say they missed an opportunity for some Star Wars gags by having it on May the 6th instead of May the 4th :)


I’m not in the industry, so I took a look at what FG were making at the time and the GR716B seemed to fit. They are upgrading it from the 716A, and the main change I could see appeared to be the Real Time Accelerators (RTAs), which I took to be 2 x Akida 1000s. I could be wrong. I’m not married to that opinion and won’t die on a hill defending it :)

Not sure why it isn’t just part of the announcement, rather than having us all guess, to be honest. It would help with timeline projections for investment purposes.




I’m a technophobe and couldn’t tell you the difference between a microprocessor and a microcontroller, so I’ve looked it up:



If it is the one you’re suggesting, the only thing that changes is the timeframe. The point I was making was that FG are trusting us to go into their “must not fail” system. That sends a message to the rest of the industry: they have done their due diligence, testing and validation, and have chosen us. I find that very comforting.

A large portion of my holding is in my Super, so I have no choice but to wait it out regardless of which product it is going into. I also have some personal shares; may they bounce tomorrow, please :)

I must admit I’d rather have some results sooner rather than later, to push the SP along quicker and be sitting at a couple of dollars. But I have no control over that. We are much better placed now than this time last year. In my mind, the FG and AFRL announcements have secured my retirement in the long term.

Whilst having a quick look I saw the Airbus release also: more great news to celebrate! Working with Airbus sends another strong message to the space and defence industries.

Whilst here, I am curious to know why the Bascom Hunter product hasn’t been announced via the ASX. Surely having a working product being sold, at this stage of our company’s growth, is ASX-worthy?

Thanks for picking up my mistake. Hopefully as a group we can work it out together.
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Frangipani

Regular
One thing is indisputable, if it were not for Joe Guerci, we would not have won this contract.
Allow me to dispute the “indisputable” 😉:

No doubt Joe Guerci gushing about Akida has been great publicity for our company in the cognitive radar/EW community, but the way I see it, ISL and RTX (or whoever else the undisclosed aerospace and defense customer is) would have worked with AKD1000 independently of each other, possibly in parallel for some time.

So why would BrainChip not have been able to win the AFRL contract based on the merit of the so far undisclosed entity’s R&D alone? 🤔

Sure, ISL’s radar solution based on Akida and developed for the Air Force Research Laboratory from 2022 onwards may have helped with the AFRL’s willingness to part with US$1.8 million over the next 12 months, but I really don’t see why Joe Guerci should be attributed an indispensable role with regards to BrainChip winning the SBIR contract award announced on 10 December.

On the contrary, I wouldn’t be surprised if he were actually disappointed and somewhat embittered that BrainChip is about to pick another (and much bigger) company as subcontractor for this brand new AFRL contract. A company that is not only a competitor to ISL in R&D and has, as it turns out, also worked with AKD1000 over the past months/years (likely even staying under Joe Guerci’s radar - pun intended - judging from how I read his LinkedIn comment), but could in addition one day commercialise similar solutions itself, given it is “a multinational aerospace and defense customer”…

Joe Guerci with some help from his friends at Raytheon made this happen.

Is there any indication that Joe Guerci and Raytheon have been collaborating?
Who are “his friends at Raytheon”, then?


The Raytheon-sponsored medal he was awarded in 2020 (almost two years before ISL joined the BrainChip Early Access Program) does not suggest any link between him and Raytheon (now RTX) to me - I interpret the IEEE accolade as a peer-nominated lifetime achievement award in the field of radar applications, with the medal having been named in honour of a defense industry legend, who as Raytheon’s CEO helped the company to become a global leader.

“The 2020 IEEE Dennis J. Picard Medal for Radar Technologies and Applications, given for outstanding accomplishments in advancing the fields of radar technologies and their applications and sponsored by Raytheon Company, is presented to JOSEPH R. GUERCI ‘For contributions to advanced, fully adaptive radar systems and real-time knowledge-aided, and cognitive radar processing architectures.’”

Raytheon/RTX is merely the award sponsor, but does not decide who gets awarded - the annual nominations are discussed and a winner is selected by an IEEE medal committee made up of nine members.

The committee’s chairman, for example, holds the THALES/Royal Academy Chair of RF Sensors in the Department of Electronic and Electrical Engineering at University College London (sponsored by Raytheon’s competition, that is).

 
  • Like
  • Wow
  • Fire
Reactions: 8 users

Frangipani

Regular
Our Chairman of the Board of Directors, Antonio J. Viana, is joining yet another company as Non-Executive Director: PQShield (https://pqshield.com/).
According to their self-description, PQShield are “leading experts in post-quantum cryptography” and “were the first cybersecurity company to develop quantum-safe cryptography on chips, in applications, and in the cloud.”

Since I don’t have a clue what that actually means, I have no idea whether there is any cross-pollination possible, but it’s highly likely someone else here on TSE will be able to tell us…
(Unsurprisingly in the semiconductor world, they do share some ecosystem partners with BrainChip.)



 
  • Like
  • Thinking
  • Wow
Reactions: 20 users
Was just looking through a podcast transcript from late March this year with Dr Guerci. Seeing there is some chatter on him and ISL.

I've pasted part of it after the intros and some other discussion on the Fed budget, defence spending etc but the full thing is:


What I do find is that Dr Guerci is definitely a fan of neuromorphic, definitely likes BRN and Akida, and I suspect he was probably one of our partners/customers asking for the additional functionalities of Akida 2.0 and TENNs, judging from some of his comments around tailoring it for their needs.

Personally, I do hope that ISL is the partner, as they have laid so much of the groundwork it appears, unless there is a parallel NDA giant (possibly) that has been doing the same behind the scenes.

You also get an insight into how time-consuming it is to develop and to get through the Govt red tape.

I've bolded some parts discussing us and neuromorphic in general.

Enjoy.



So we're talking about how many things we can do with AI.

I wanna talk a little bit more, kind of take a step back, and continue talking a little bit about how AI works.

And you had a slide in your webinar presentation that we were talking about
the relationship with AI, and there's an aspect to AI that's using neuromorphic
computing and neuromorphic chips,
and we were talking about this.

This concept just blew my mind, because I really never heard the term before.

So I wanted to kind of, I wanna ask you to talk a little bit about this.

What is this piece of the puzzle, and what does it hold in terms of the future for artificial intelligence, and then feeding into cognitive radar and EW?

- So cognitive radar, EW, live and die by embedded systems, right?

They don't have the luxury of living in a laboratory with thousands of acres
of computers, right?

They have to take all their resources on a plane or a UAV or whatever platform and go into battle.

And so to really leverage the power of AI,
you need to implement them on efficient embedded computing systems.

Right now, that means FPGAs, GPUs,
and those things are, when all is said and done, you know, all the peripherals required, the ruggedization, the MIL-SPEC, you're talking kilograms and kilowatts.

And as I pointed out, there is a rather quiet revolutionary part to AI that's perhaps even bigger than all the hullabaloo about ChatGPT, and that's neuromorphic chips.

So neuromorphic chips don't implement
traditional digital flip-flop circuits, things like that.

Essentially they actually, in silicon, create neurons with interconnects.

And the whole point of a neural network
is the weighting that goes onto those interconnects from layer to layer.

And the interesting thing about that is you've got companies like BrainChip in Australia, right, that is not by any stretch using the most sophisticated foundry to achieve ridiculous line widths like conventional FPGAs and GPUs do.

Instead it's just a different architecture.

But why is that such a big deal?

Well, in the case of BrainChip as well as Intel and IBM, these chips can be the
size of a postage stamp.


And because they're implementing what are called spiking neural networks, or SNNs, they only draw power when
there's a change of state, and that's a very short amount of time, and it's relatively low-power.

So at the end of the day, you have something the size of a postage stamp
that's implementing a very, very sophisticated convolutional neural network solution with grams and milliwatts as opposed to kilograms and kilowatts.

And so to me, this is the revolution.


This is dawning. This is the thing that changes everything.

So now you see this little UAV coming in,
and you don't think for a second that it could do, you know, the most sophisticated electronic warfare functions, for example.

Pulse sorting, feature identification, geolocation, all these things that require,
you know, thousands of lines of code
and lots of high-speed embedded computing, all of a sudden it's done on a postage stamp.

That's the crazy thing.

And by the way, in my research we've done it. We've implemented pulse deinterleaving, we've implemented, you know, ATR, specifically on the BrainChip from Australia, by the way.


So really quite amazing.
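The "only draw power when there's a change of state" behaviour he describes can be sketched with a toy leaky integrate-and-fire neuron, where the state only needs touching when an input event arrives. All parameters here are illustrative picks of mine, nothing Akida-specific:

```python
import numpy as np

def lif_run(spike_times, weight=0.6, tau=20.0, threshold=1.0):
    """Event-driven leaky integrate-and-fire neuron: the membrane potential
    decays exponentially between input spikes, so we only compute at
    event times (the 'change of state' mentioned above)."""
    v, last_t, out = 0.0, 0.0, []
    for t in spike_times:
        v *= np.exp(-(t - last_t) / tau)  # analytic leak since last event
        v += weight                       # integrate the incoming spike
        last_t = t
        if v >= threshold:                # fire and reset
            out.append(t)
            v = 0.0
    return out

# Two closely spaced inputs push the neuron over threshold...
print(lif_run([0.0, 1.0]))    # -> [1.0]
# ...while widely spaced ones leak away before the second arrives.
print(lif_run([0.0, 100.0]))  # -> []
```

Between events nothing is computed at all, which is where the "grams and milliwatts" claim comes from when this is done in silicon rather than in software.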


- So where is this technology?

You said we've already done it.

We have a pretty good understanding of what it can do.

And like you mentioned, you know, a scenario where whether it's a UAV or whatever system, I mean, something the
size of a postage stamp, it completely changes size, weight, power,
all those considerations, and makes almost anything a potential host for that capability.

- Yeah.


- What are some of the next steps in this,
call it a revolution or rapid evolution of technology?

I mean, because we obviously, you know,
a couple years ago there was a CHIPS Act, you know, trying to make sure that we, in the development of a domestic
chip production capability, Congress passed a CHIPS Act to kind of help spur
on domestic foundries, domestic capability to produce chips.

And does this kind of fall into kind of the...

Is this benefiting from that type of activity?

Is this part of the development that's happened through the CHIPS Act?

Is there something more that we need to be doing to spur on this innovation?

- Well, the CHIPS Act is a good
thing domestically speaking.

And by the way, part of the CHIPS Act,
it is focused on neuromorphic chips, by the way, so that's good to know.


However, the real culprit is the age-old valley of death, bridging the valley of death.

And by the way, I spent seven years at DARPA, and even at DARPA with the
funds I had available to me, bridging the gap between S&T and Programs of Record is still a herculean maze of biblical proportions.

And so while you'll hear lots of nice-sounding words coming out of OSD and other places, saying, you know, "We
gotta move things along.

We gotta spur small business. We gotta..."
it's all S&T funding.

There still is an extraordinary impediment
to getting new technologies into Programs of Record.

And I, you know, I'm not the only one saying that, so don't take my word for it.

I can tell you lots of horror stories, and I've done it.

I was successful while at DARPA.

So my stuff is on the F-35 and F-22, for example, and other classified systems.

I mean, I know what it takes to get it done.

Unfortunately, though there's a lot of lip service about overcoming that barrier,

it still has changed very little in the 20 years since I've been successful at DARPA in transitioning.

So I'm sorry, but that's the biggest impediment.

And I know it's not a technical thing, and I know there's lots of-

- But here's what concerns me about that,
is, you know, the valley of death, I mean, that's been in our terminology, in our lexicon for decades, like you say, going way back, you know, even before we even under, you know, back in the nineties and eighties when the technology, while
advanced at the time, pales in comparison to what we can do today,
the process hasn't changed.

And so like if we had a valley of death back then, how are we ever going to bridge it today with as fast as technology is moving, as fast as the solutions we
need to prototype and field.

I mean, you mentioned it's herculean.

I mean, it's almost beyond that it seems,
because our system hasn't really changed that much over the past 20, 30 years.

- Yeah, so maybe it's ironic, I don't know the right word, but on the S&T side, OSD, the service labs, you know, I would say that they're pretty forward-leaning and they're making good investments.

The problem is getting into a Program of Record is where the rubber hits the road,
and where things get fielded.

And so you look at the S&T budgets, you look at the number of small businesses
getting DOD S&T funds, and you could almost say, "Well, they're a success," right?

I mean, we're giving small businesses,
they're coming up with great things.

But then look at how much of that actually ends up in a Program of Record.

And let me just be clear.

I don't blame the Programs of Record,
because the game is stacked against them.

They, very often, especially if it's newer technology, they are having lots of problems with getting the baseline system fielded.

There's cost overruns, there's scheduling issues, and so they're already with 2.95 strikes against them, and now all of a sudden you want to on-ramp an entirely new capability when they're already behind the eight ball.

That's just impossible, unless the whole culture of Programs of Record changes
where, for example, you structure it so that every year you have to answer
how are you dealing with obsolescence?

How are you keeping up?

Where are the on-ramps?

How successful were you with these on-ramps, these upgrades, all of these things?

Because until you fix that, I don't care how much money you spend on S&T, you're not gonna get fielded.

- From a technology standpoint, let's just, you know, assume for a second that we make some progress in the policy side of the equation as it pertains to acquisition
and the valley of death.

From a technology perspective, you've been following this for 20 years.

You know, where are some of the opportunities that are before you that you're like, this is the direction we need to go in, this is something that excites you
or keeps you awake at night in a positive way, of like this is promising and it's gonna be your next pursuit?

- Well, we definitely have to embrace cognitive systems for sure.

I mean, I don't think there's anyone out there that would say we don't need that kind of flexibility and adaptability on the fly.

Now, we can argue over just how much cognition we need and the flavors.

That's fine. So there's that, right?

Let's all just accept that.

And then I think you touched on this earlier, you know, there's a big push across all the services on what's called the JSE, which is the Joint Simulation Environment, which is this grandiose vision for having multi-user, multiplayer,
high fidelity training environments,
synthetic environments, which, by the way, can include live over sim, so that our systems become much more realistic and reflective of what they're really gonna see when they get out into the real world.

Again, I come back to lots of good things going on on the S&T side.

You almost can't, you know, you really can't argue with it, but that transition to field its systems and Programs of Record is still very much broken, and that's just a fact.

And it's not just me saying that.

You can ask anyone who's in the business
of trying to transition technology to the Department of Defense, and they'll tell you the same thing.

So, you know, again, S&T community,
doing a great job, I think, generally speaking, your DARPAs, your AFRLs, all of these, but that transition piece is just continuing.

And by the way, do our adversaries have the same issues?

Some do, some don't, you know?

And this technology I'm talking about, neuromorphic chips, that's available to the world.

I mean, BrainChip is an Australian company.

There's no ITAR restrictions, so.

- Well, and also I think it speaks to the multidisciplinary approach to technology today.

I mean, the neuromorphic chip, I mean, it has military applications you can obviously use it for, but, I mean, you're gonna find this
in all various sectors
of an economy and society and what we use in everyday
life, and so, you know-

- So Ken, let me just say that the neuromorphic chip that BrainChip makes from Australia had nothing to do with electronic warfare.

It's designed to do image processing.

So one of the things we had to overcome
was take our electronic warfare I/Q data,
in-phase and quadrature RF measurement data, and put it into a format to make it look like an image so that the BrainChip could actually digest it and do something with it.

So you're absolutely right.

I mean, these chips are not being designed for us in the electronic warfare community, but they're so powerful that we were still able to get it to work.

Imagine if they put a little effort
into tailoring it to our needs.

Then you have a revolution.

So, sorry to interrupt you there, but I just want...

You made a point and it's very valid, you know.
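The I/Q-to-image step he describes (reshaping in-phase/quadrature RF samples into something an image-oriented network can digest) is essentially a spectrogram. A minimal sketch, with window and FFT sizes that are arbitrary picks of mine rather than anything from the actual pipeline:

```python
import numpy as np

def iq_to_image(iq, n_fft=64, hop=32):
    """Short-time FFT magnitude of complex I/Q samples -> 2-D array
    shaped like a (frames x frequency-bins) grayscale image."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(iq) - n_fft + 1, hop):
        seg = iq[start:start + n_fft] * window
        frames.append(np.abs(np.fft.fftshift(np.fft.fft(seg))))
    img = np.array(frames)
    return img / img.max()  # normalise to [0, 1] like pixel intensities

# A chirped pulse as stand-in "EW" data
t = np.arange(2048) / 2048.0
iq = np.exp(1j * 2 * np.pi * (50 * t + 200 * t ** 2))
image = iq_to_image(iq)
print(image.shape)  # -> (63, 64): ready for an image-style classifier
```

Once the RF data looks like an image, an accelerator designed for vision workloads can be pointed at it unchanged, which is the "overcoming" he's talking about.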

- It's valid. It's valid, it's important.

I mean, it goes to just the possibilities that are out there.

- Well, and to amplify that point, all the advanced capabilities that we have in our RF systems, radar and EW, most of that is driven by the wireless community, the trillion-dollar wireless community compared to a paltry radar and EW ecosystem.

So, you know, what's happening in the commercial world is where, and leveraging, you know, commercial off-the-shelf technology is a gargantuan piece of
staying up and keeping up, and by the way, addressing obsolescence as well, right?

If you have a piece of proprietary
hardware from the 1980s, good luck, you know, with obsolescence, right?

- Well, that, and also hopefully, you know,
as we move down this path on standards
and open systems and so forth, some of that will work its way in.

We can adapt some of that so that we struggle less with obsolescence in the future than we do now.

- We hope.
- Hopefully, yes. I mean-

- Again-
- We'll see.

But, I mean, I would think that's the idea.

- I mean, look at the day-to-day pressures
that Programs of Record are under.

So I'm not gonna get into all kinds of details here, but we had a capability that was vetted by the program offices and was developed under SBIRs, and went all the way through to a Phase III SBIR.

We have flight-qualified software to bring this much-needed capability to the war fighter.

This is all a true story.

And all of a sudden the program ran into scheduling and budgetary constraints, so they had to jettison the on-ramps, and so a capability that was vetted, a really important capability, just got thrown to the curb because of the everyday problems that Programs of Record run into, and that's not how they get judged, right?

They're judged on getting that baseline system over...

Look, the F-35 was just recently declared operational, what, a month ago?

You gotta be kiddin' me.

- Well, Joe, I think this is a good spot to... I mean, I feel like if we keep talking we can keep going in layer and layer and layer, and I don't wanna put our listeners through that, but I think a good consolation prize is to have you back on
the show in the future, and we can go a little bit deeper into this, but I do really appreciate you taking some time to talk about this, 'cause this is a topic as of,
you know, really 24 hours ago, I realized how often I just use the word, and I never really understood the depth of the definition of what the words I was using,
so I really appreciate you coming on the show, kind of helping me understand this better, and hopefully our listeners as well.

- Thank you, Ken.

You had great questions, great interview.

And let me give a shout out to AOC. Great organization.

I'm personally, and my company's a big supporter of AOC and what you guys are doing, so you're part of the solution, not part of the problem.

- We appreciate that, and, you know, appreciate all that you've done for us
in terms of helping us understand this really complex topic.

And really I do say this honestly,

I do hope to have you back on the show here, and there's no shortage of topics of conversation for us, so I appreciate you joining me.

- Thanks again, Ken.

- That will conclude this episode of "From the Crow's Nest."

I'd like to thank my guest, Dr. Joe Guerci,
for joining me for this discussion.

Also, don't forget to review, share, and follow this podcast.
 
  • Like
  • Fire
  • Love
Reactions: 49 users
Our Chairman of the Board of Directors, Antonio J. Viana, is joining yet another company as Non-Executive Director: PQShield (https://pqshield.com/).
According to their self-description, PQShield are “leading experts in post-quantum cryptography” and “were the first cybersecurity company to develop quantum-safe cryptography on chips, in applications, and in the cloud.”




View attachment 74148

View attachment 74150

View attachment 74149
View attachment 74151
I see TCS are also a partner, though I'm starting to worry that Antonio has a problem... hoarding NED roles :LOL:
 
  • Haha
  • Like
  • Thinking
Reactions: 7 users

7für7

Top 20
Me every morning scrolling through the comments

 
  • Haha
  • Like
Reactions: 6 users

Diogenese

Top 20
Was just looking through a podcast transcript from late March this year with Dr Guerci. Seeing there is some chatter on him and ISL.

I've pasted part of it after the intros and some other discussion on the Fed budget, defence spending etc but the full thing is:


What I do find is that Dr Guerci is def a fan of neuromorphic, definitely likes BRN and Akida and I suspect he was probably one of our partners / customers asking for the additional functionalities of Akida 2.0 and TENNS from some of his comments around tailoring it for their needs.

Personally do hope that ISL is the partner as they have laid so much of the ground work it appears, unless there is a parallel NDA giant (possibly) that has been doing the same behind the scenes.

Also get an insight in how time consuming it is to develop and also get through the Govt redtape.

I've bold some parts discussing us and neuromorphic in general.

Enjoy.



So we're talking about how many things we can do with AI.

I wanna talk a little bit more, kind of take a step back, and continue talking a little bit about how AI works.

And you had a slide in your webinar presentation that we were talking about
the relationship with AI, and there's an aspect to AI that's using neuromorphic
computing and neuromorphic chips,
and we were talking about this.

This concept just blew my mind, because I really never heard the term before.

So I wanted to kind of, I wanna ask you to talk a little bit about this.

What is this piece of the puzzle, and what does it it hold in terms of the future for artificial intelligence, and then feeding into cognitive radar and EW?

- So cognitive radar, EW, live and die by embedded systems, right?

They don't have the luxury of living in a laboratory with thousands of acres
of computers, right?

They have to take all their resources on a plane or at UAB or whatever platform and go into battle.

And so to really leverage the power of AI,
you need to implement them on efficient embedded computing systems.

Right now, that means FPGAs, GPUs,
and those things are, when all is said and done, you know, all the peripherals required, the ruggedization, the MIL-SPEC, you're talking kilograms and kilowatts.

And as I pointed out, there is a rather quiet
revolutionary part to AI that's perhaps even biggervthan all the hullabaloo about ChatGPT, and that's neuromorphic chips.

So neuromorphic chips don't implement
traditional digital flip-flop circuits, things like that.

Essentially they actually, in silicon, create neurons with interconnects.

And the whole point of a neural network
is the weighting that goes onto those interconnects from layer to layer.

And the interesting thing about that
is you've got companies like BrainChip in Australia, right, that is not by any stretch
using the most sophisticated foundry
to achieve ridiculous line lists like conventional FPGAs and GPUs do.

Instead it's just a different architecture.

But why is that such a big deal?

Well, in the case of BrainChip as well as Intel and IBM, these chips can be the
size of a postage stamp.


And because they're implementing what are called spiking neural networks, or SNNs, they only draw power when
there's a change of state, and that's a very short amount of time, and it's relatively low-power.

So at the end of the day, you have something the size of a postage stamp
that's implementing a very, very sophisticated convolutional neural network solution with grams and milliwatts as opposed to kilograms and kilowatts.

And so to me, this is the revolution.

This is dawning. This is the thing that changes everything.


So now you see this little UAV coming in,
and you don't think for a second that it could do, you know, the most sophisticated electronic warfare functions, for example.

Pulse sorting, feature identification, geolocation, all these things that require,
you know, thousands of lines of code
and lots of high-speed embedded computing, all of a sudden it's done on a postage stamp.

That's the crazy thing.

And by the way, in my research we've done it. we've implemented pulse, the interleaving, we've implemented, you know, ATR,
specifically on the BrainChip
from Australia, by the way.


So really quite amazing.

- So where is this technology?

You said we've already done it.

We have a pretty good understanding of what it can do.

And like you mentioned, you know, a scenario where whether it's a UAV or whatever system, I mean, something the
size of a postage stamp, it completely changes size, weight, power,
all those considerations, and makes almost anything a potential host for that capability.

- Yeah.


- What are some of the next steps in this,
call it a revolution or rapid evolution of technology?

I mean, because we obviously, you know,
a couple years ago there was a CHIPS Act, you know, trying to make sure that we, in the development of a domestic
chip production capability, Congress passed a CHIPS Act to kind of help spur
on domestic foundries, domestic capability to produce chips.

And does this kind of fall into kind of the...

Is this benefiting from that type of activity?

Is this part of the development that's happened through the CHIPS Act?

Is there something more that we need to be doing to spur on this innovation?

- Well, the CHIPS Act is a good
thing domestically speaking.

And by the way, part of the CHIPS Act,
it is focused on neuromorphic chips, by the way, so that's good to know.


However, the real culprit is the age-old valley of death, bridging the valley of death.

And by the way, I spent seven years at DARPA, and even at DARPA with the
funds I had available to me, bridging the gap between S&T and Programs of Record is still a herculean maze of biblical proportions.

And so while you'll hear lots of nice-sounding words coming out of OSD and other places, saying, you know, "We
gotta move things along.

We gotta spur small business. We gotta..."
it's all S&T funding.

There still is an extraordinary impediment
to getting new technologies into Programs of Record.

And I, you know, I'm not the only one saying that, so don't take my word for it.

I can tell you lots of horror stories, and I've done it.

I was successful while at DARPA.

So my stuff is on the F-35 and F-22, for example, and other classified systems.

I mean, I know what it takes to get it done.

Unfortunately, though there's a lot of lip service about overcoming that barrier,

it still has changed very little in the 20 years since I've been successful at DARPA in transitioning.

So I'm sorry, but that's the biggest impediment.

And I know it's not a technical thing, and I know there's lots of-

- But here's what concerns me about that,
is, you know, the valley of death, I mean, that's been in our terminology, in our lexicon for decades, like you say, going way back, you know, to the nineties and eighties, when the technology, while advanced at the time, pales in comparison to what we can do today,
the process hasn't changed.

And so like if we had a valley of death back then, how are we ever going to bridge it today, with as fast as technology is moving, as fast as the solutions we need to prototype and field?

I mean, you mentioned it's herculean.

I mean, it's almost beyond that it seems,
because our system hasn't really changed that much over the past 20, 30 years.

- Yeah, so maybe it's ironic, I don't know the right word, but on the S&T side, OSD, the service labs, you know, I would say that they're pretty forward-leaning and they're making good investments.

The problem is getting into a Program of Record is where the rubber hits the road,
and where things get fielded.

And so you look at the S&T budgets, you look at the number of small businesses
getting DOD S&T funds, and you could almost say, "Well, they're a success," right?

I mean, we're giving small businesses,
they're coming up with great things.

But then look at how much of that actually ends up in a Program of Record.

And let me just be clear.

I don't blame the Programs of Record,
because the game is stacked against them.

They, very often, especially if it's newer technology, they are having lots of problems with getting the baseline system fielded.

There's cost overruns, there's scheduling issues, and so they've already got 2.95 strikes against them, and now all of a sudden you want to on-ramp an entirely new capability when they're already behind the eight ball.

That's just impossible, unless the whole culture of Programs of Record changes
where, for example, you structure it so that every year you have to answer
how are you dealing with obsolescence?

How are you keeping up?

Where are the on-ramps?

How successful were you with these on-ramps, these upgrades, all of these things?

Because until you fix that, I don't care how much money you spend on S&T, you're not gonna get fielded.

- From a technology standpoint, let's just, you know, assume for a second that we make some progress in the policy side of the equation as it pertains to acquisition
and the valley of death.

From a technology perspective, you've been following this for 20 years.

You know, where are some of the opportunities that are before you that you're like, this is the direction we need to go in, this is something that excites you
or keeps you awake at night in a positive way, of like this is promising and it's gonna be your next pursuit?

- Well, we definitely have to embrace cognitive systems for sure.

I mean, I don't think there's anyone out there that would say we don't need that kind of flexibility and adaptability on the fly.

Now, we can argue over just how much cognition we need and the flavors.

That's fine. So there's that, right?

Let's all just accept that.

And then I think you touched on this earlier, you know, there's a big push across all the services on what's called the JSE, which is the Joint Simulation Environment, which is this grandiose vision for having multi-user, multiplayer,
high fidelity training environments,
synthetic environments, which, by the way, can include live over sim, so that our systems become much more realistic
and reflective of what they're really gonna see when they get out into the real world.

Again, I come back to lots of good things going on on the S&T side.

You almost can't, you know, you really can't argue with it, but that transition to fielded systems and Programs of Record is still very much broken, and that's just a fact.

And it's not just me saying that.

You can ask anyone who's in the business
of trying to transition technology to the Department of Defense, and they'll tell you the same thing.

So, you know, again, the S&T community is doing a great job, I think, generally speaking, your DARPAs, your AFRLs, all of these, but that transition piece is just a continuing problem.

And by the way, do our adversaries have the same issues?

Some do, some don't, you know?

And this technology I'm talking about, neuromorphic chips, that's available to the world.

I mean, BrainChip is an Australian company.

There's no ITAR restrictions, so.

- Well, and also I think it speaks to the multidisciplinary approach to technology today.

I mean, the neuromorphic chip, I mean, it has military applications you can obviously use it for, but, I mean, you're gonna find this
in all various sectors
of an economy and society and what we use in everyday
life, and so, you know-

- So Ken, let me just say that the neuromorphic chip that BrainChip makes from Australia had nothing to do with electronic warfare.

It's designed to do image processing.

So one of the things we had to overcome
was take our electronic warfare I/Q data,
in-phase and quadrature RF measurement data, and put it into a format to make it look like an image so that the BrainChip could actually digest it and do something with it.

So you're absolutely right.

I mean, these chips are not being designed for us in the electronic warfare community, but they're so powerful that we were still able to get it to work.

Imagine if they put a little effort
into tailoring it to our needs.

Then you have a revolution.

So, sorry to interrupt you there, but I just want...

You made a point and it's very valid, you know.

- It's valid. It's valid, it's important.

I mean, it goes to just the possibilities that are out there.

- Well, and to amplify that point, all the advanced capabilities that we have in our RF systems, radar and EW, most of that is driven by the wireless community, the trillion-dollar wireless community, compared to a paltry radar and EW ecosystem.

So, you know, what's happening in the commercial world is where, and leveraging, you know, commercial off-the-shelf technology is a gargantuan piece of
staying up and keeping up, and by the way, addressing obsolescence as well, right?

If you have a piece of proprietary
hardware from the 1980s, good luck, you know, with obsolescence, right?

- Well, that, and also hopefully, you know,
as we move down this path on standards
and open systems and so forth, some of that will work its way in.

We can adapt some of that so that we struggle less with obsolescence in the future than we do now.

- We hope.
- Hopefully, yes. I mean-

- Again-
- We'll see.

But, I mean, I would think that's the idea.

- I mean, look at the day-to-day pressures
that Programs of Record are under.

So I'm not gonna get into all kinds of details here, but we had a capability
that was vetted by the program offices
and was developed under SBIRs, and went all the way through to a Phase III SBIR.

We have flight-qualified software to bring this much-needed capability to the war fighter.

This is all a true story.

And all of a sudden the program ran into scheduling and budgetary constraints, so they had to jettison the on-ramps, and a capability that was vetted, a really important capability, just got thrown to the curb because of the everyday problems that Programs of Record run into, and that's not how they get judged, right?

They're judged on getting that baseline system over...

Look, the F-35 was just recently declared operational, what, a month ago?

You gotta be kiddin' me.

- Well, Joe, I think this is a good spot to, I mean, I feel like if we keep talking we can keep going in layer and layer and layer,
and I don't wanna put our listeners through that, but I think a good consolation prize is to have you back on
the show in the future, and we can go a little bit deeper into this, but I do really appreciate you taking some time to talk about this, 'cause this is a topic as of,
you know, really 24 hours ago, I realized how often I just use the word, and I never really understood the depth of the definition of what the words I was using,
so I really appreciate you coming on the show, kind of helping me understand this better, and hopefully our listeners as well.

- Thank you, Ken.

You had great questions, great interview.

And let me give a shout out to AOC. Great organization.

I'm personally, and my company's a big supporter of AOC and what you guys are doing, so you're part of the solution, not part of the problem.

- We appreciate that, and, you know, appreciate all that you've done for us
in terms of helping us understand this really complex topic.

And really I do say this honestly,

I do hope to have you back on the show here, and there's no shortage of topics of conversation for us, so I appreciate you joining me.

- Thanks again, Ken.

- That will conclude this episode of "From the Crow's Nest."

I'd like to thank my guest, Dr. Joe Guerci,
for joining me for this discussion.

Also, don't forget to review, share, and follow this podcast.

So one of the things we had to overcome
was take our electronic warfare I/Q data,
in-phase and quadrature RF measurement data, and put it into a format to make it look like an image so that the BrainChip could actually digest it and do something with it.

We are told TENNs uses "converging orthogonal polynomials", and "in-phase and quadrature RF measurement data" describes two orthogonal RF waveforms/polynomials.

Coincidence? (Excuse the "converging" pun)
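For readers wondering what "make I/Q data look like an image" involves in practice: one standard route is a spectrogram, i.e. slice the complex samples into overlapping windows, FFT each slice, and stack the log-magnitudes into a 2-D time-frequency picture that a vision network can ingest. A minimal NumPy sketch (the window length, hop size and 8-bit scaling here are my own illustrative choices, not anything Guerci or BrainChip have published):

```python
import numpy as np

def iq_to_image(iq, win=128, hop=64):
    """Render complex I/Q samples as an 8-bit time-frequency 'image'.

    Each row is the log-magnitude spectrum of one Hann-windowed slice,
    so RF structure shows up as 2-D texture a vision-oriented network
    can be trained on.
    """
    window = np.hanning(win)
    rows = []
    for start in range(0, len(iq) - win + 1, hop):
        seg = iq[start:start + win] * window
        spec = np.fft.fftshift(np.fft.fft(seg))            # centre DC
        rows.append(20 * np.log10(np.abs(spec) + 1e-12))   # dB magnitude
    img = np.array(rows)
    img -= img.min()
    img = (255 * img / img.max()).astype(np.uint8)         # scale to 0..255
    return img

# Toy example: a complex baseband chirp (frequency sweeping over time).
t = np.arange(8192) / 8192.0
iq = np.exp(1j * 2 * np.pi * (200 * t + 1500 * t ** 2))
image = iq_to_image(iq)                                    # shape (127, 128)

# And the "orthogonal" in I/Q: the in-phase and quadrature carriers
# have (near-)zero inner product over whole cycles.
n = np.arange(1024)
i_carrier = np.cos(2 * np.pi * n / 64)
q_carrier = np.sin(2 * np.pi * n / 64)
assert abs(np.dot(i_carrier, q_carrier)) < 1e-9
```

In the spectrogram, a chirp shows up as a diagonal stripe, which is exactly the kind of texture an image classifier is good at picking out.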
 


Bravo

If ARM was an arm, BRN would be its biceps💪!
Has this been posted previously? Time-frame for completion seems a good fit.





DEFENSE ADVANCED RESEARCH PROJECTS AGENCY

Raytheon
Co., El Segundo, California, was awarded an $8,842,171 cost-plus-fixed-fee completion contract for a Defense Advanced Research Projects Agency research project for the Fast Event-based Neuromorphic Camera and Electronics (FENCE) program. The FENCE program seeks to develop and demonstrate a low-latency, low-power, event-based camera and a new class of signal processing and learning algorithms that uses combined spatial and temporal (spatio-temporal) information to enable intelligent sensors for tactical Department of Defense applications. Work will be performed in Goleta, California (53%); Cambridge, Massachusetts (17%); El Segundo, California (15%); McKinney, Texas (10%); Tempe, Arizona (3%); New York, New York (1%); and Tewksbury, Massachusetts (1%), with an expected completion date of May 2025. Fiscal 2021 research, development, test and evaluation funds in the amount of $4,864,730 are being obligated at time of award. This contract is a competitive acquisition in which nine proposals were received in response to broad agency announcement HR0011-21-S-0001. The Defense Advanced Research Projects Agency, Arlington, Virginia, is the contracting activity (HR0011-21-C-0134).


mcm

Regular

Prophesee collaborates with AMD to deliver industry-first Event-based Vision solution running on leading, FPGA-based AMD Kria™ KV260 Vision AI Starter Kit​


Developers can now take full advantage of Prophesee Event-based Metavision® sensor and AI performance, power, and speed to create the next generation of Edge AI machine vision applications running on AMD platforms.​

PARIS, May 6, 2024 – Prophesee SA, inventor of the world’s most advanced neuromorphic vision systems, today announced that its Event-based Metavision HD sensor and AI are now available for use with the AMD Kria ™ KV260 Vision AI Starter Kit, creating a powerful and efficient combination to accelerate the development of advanced Edge machine vision applications. It marks the industry’s first Event-based Vision development kit compatible with an AMD platform, providing customers a platform to both evaluate and go to production with an industrial-grade solution for target applications such as smart city and machine vision, security cameras, retail analytics, and many others.
The development platform for the AMD Kria™ K26 System-on-Module (SOM), the KV260 Vision AI starter kit is built for advanced vision application development without requiring complex hardware design knowledge or FPGA programming skills. AMD Kria SOMs for edge AI applications provide a production-ready, energy-efficient FPGA-based device with enough I/O to speed up vision and robotics tasks at an affordable price point. Combined with the Prophesee breakthrough Event-based vision technology, machine vision system developers can leverage the lower latency and lower power capabilities of the Metavision platform to experiment and create more efficient, and in many cases not previously possible, applications compared to traditional frame-based vision sensing approaches.
Metavision AMD Starter Kit IMX636 Active Markers Board
Fig. 1: Prophesee Metavision Starter Kit – AMD Kria KV260 and Active Marker LED board

A breakthrough plug-and-play Active Markers Tracking application is included in this kit. It allows for >1,000Hz 3D pose estimation, with complete background rejection at pixel level while providing extreme robustness to challenging lighting conditions.
This application highlights unique features of Prophesee’s Event-based Metavision technologies, enabling a new range of ultra high-speed tracking use cases such as game controller tracking, construction site safety, heavy load anti-sway systems and many more.
Multiple additional ready-to-use application algorithms will be made available over the coming months.
The Prophesee Starter Kit provides an ‘out of the box’ development solution to quickly get up and running with the Prophesee Metavision SDK and IMX636 HD Event-based sensor realized in collaboration between Prophesee and Sony, allowing easy porting of algorithms to the AMD commercial and industrial-grade system-on-module (SOMs) powered by the custom-built Zynq™ UltraScale+™ multiprocessing SoC.
The new, Prophesee-enabled Kria KV260 AI Starter Kit will be on display at Automate 2024 in Prophesee’s booth 3452

“The ever-expanding Kria ecosystem helps make motion capture, connectivity, and edge AI applications more accessible to roboticists and developers,” said Chetan Khona, senior director of Industrial, Vision, Healthcare and Sciences Markets, AMD. “Prophesee Event-based Vision offers unique advantages for machine vision applications. Its low data consumption translates into efficient energy consumption, less compute and memory needed, and fast response times.”
“It’s never been easier to develop Event-based Edge applications with this combination of development aids from AMD and Prophesee,” said Luca Verre, co-founder and CEO of Prophesee. “We are providing everything needed to take complete advantage of the lower power processing and low latency performance inherent in Event-based Vision, as well as provide an environment to optimize machine vision system based on specific KPIs for customer-defined applications and use cases. This will further accelerate the adoption of Event-based Vision in key market segments that can benefit from Metavision’s unique advantages.”

Pricing & Availability

The Prophesee-enabled AMD Kria KV260 Starter Kit is available now.
I wonder if Akida is incorporated in the starter kit?
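For anyone new to the "Event-based" idea the article keeps contrasting with frame-based sensing: each pixel fires asynchronously only when its brightness changes past a threshold, emitting (t, y, x, polarity) tuples instead of full frames, which is where the low-latency, low-data claims come from. A toy NumPy emulation (an illustration only, not the Metavision SDK or Prophesee's actual pixel circuit; the fixed threshold and per-pixel reset are simplifications):

```python
import numpy as np

def frames_to_events(frames, threshold=10):
    """Emulate an event camera: emit (t, y, x, polarity) whenever a
    pixel's intensity moves past `threshold` since its last event."""
    ref = frames[0].astype(np.int16)           # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame.astype(np.int16) - ref
        changed = np.abs(diff) >= threshold
        ys, xs = np.nonzero(changed)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        ref[changed] = frame[changed]          # reset reference where fired
    return events

# A mostly static 32x32 scene with one bright dot moving along a row.
frames = np.full((10, 32, 32), 50, dtype=np.uint8)
for t in range(10):
    frames[t, 16, 3 * t] = 200                 # the only thing that moves
events = frames_to_events(frames)
print(len(events))
```

Frame-based, those ten 32x32 frames are 10,240 pixel values; event-based, the moving dot produces only a couple of dozen events, with the static background contributing nothing.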
 

Bravo

If ARM was an arm, BRN would be its biceps💪!

mcm said: [quoting the Prophesee/AMD Kria KV260 Vision AI Starter Kit announcement in full, as above] "I wonder if Akida is incorporated in the starter kit?"
Hi @mcm,

i was reminded of a previous article I stumbled upon where Lisa Su, CEO of AMD seemed to hint at AMD developing neuromorphic chips or “new acceleration technologies” .

“Su didn’t describe how AMD will differentiate various Ryzens with NPU capabilities. But there’s a history here: In 2021, AMD mixed and matched parts from various Zen generations under the Ryzen 5000 name. AMD could conceivably do the same with future Ryzens, taking older NPUs and combining them with various CPUs and GPUs.

But that’s not to say we could see just an NPU, either. In response to another question about whether AMD could develop a neuromorphic chip like Intel’s Loihi, Su seemed open to the possibility. “I think as we go forward, we always look at some specific types of what’s called new acceleration technologies,” she said. “I think we could see some of these going forward.”
 

JB49

Regular
Everything described about the Prophesee GenX320 sounded like it could have been Akida.

But it's already been on sale for a while, so I'd assume we aren't involved in the X320 (yet).
 

Thx Thomas, can I call you Tom? 👊


JB49

Regular
Some more stuff from Fraunhofer who we are already keeping an eye on


There are 4 projects they are currently working on that can be found here.
 

7für7

Top 20
I’ve been reading here for quite some time before I started contributing myself. I’m aware that some of my posts aren’t well-received by everyone and are often considered nonsense. Nevertheless, I’d like to add something today and explain why I sometimes deliberately bring humor into this space and approach many things – with the exception of a few individuals trying to stir up negativity – with calmness and a sense of humor.
First of all, a big thank you to everyone who takes the time to research and share relevant contributions here. Personally, I – and I’m sure many others – deeply appreciate it!

Now to the main point: some people still don’t fully grasp what’s currently happening. The question is not whether AI and its applications have a future – that’s already clear. Much of what is already possible behind the scenes remains hidden from us as outsiders. AI has now developed processing methods that humans can no longer fully comprehend. It’s similar to the human brain: we know something is happening, and in the end, there’s a solution. We can measure which brain regions are active during specific tasks, but the exact mechanisms remain a mystery. The same applies to AI.
The focus now is on developing processes to control AI, as it is already on the verge of becoming the most intelligent entity on Earth. What is being tested in high-security facilities, completely isolated from our networks, is beyond anyone’s imagination.
We are invested in a technology that, for many outside our “BrainChip world,” still feels like science fiction. They don’t yet understand the full scope of this innovation – and this is reflected in the current stock price. The majority of potential investors prefer to put their money into established companies because they haven’t grasped the advancements and possibilities of this technology yet.
However, as soon as we announce further successes in the near future and another major licensing partner is revealed, their perspective will change. These people don’t trust the technology itself but rather the tangible progress and results.
So, see yourselves as pioneers who had the luck and foresight to recognize the importance of this technology early on. Stay healthy – and look forward to more of my memes in the future!
 

JB49

Regular
Tedtalk on Neuromorphic computing:


Jorg Conradt has supervised a few thesis projects utilising AKD1000 at KTH University in Stockholm. They also use spinnaker and TrueNorth at their lab.
 

7für7

Top 20
By the way… I was just talking with a friend of mine about Mercedes and AI and what we can expect generally from car manufacturers in the future… then he surprisingly mentioned the Afeela (Sony and Honda), which is coming to market next year… I don’t know… but it sounds somehow interesting… (edit: he was very excited talking about it)

 


rgupta

Regular
So one of the things we had to overcome
was take our electronic warfare I/Q data,
in-phase and quadrature RF measurement data, and put it into a format to make it look like an image so that the BrainChip could actually digest it and do something with it.

We are told TENNs uses "converging orthogonal polynomials". "in-phase and quadrature RF measurement data" defines two orthogonal RF waveforms.

Coincidence? (Excuse the "converging" pun)
Dio, sorry in advance, but does that mean we can process and analyse blockchain data much faster and cheaper than on a GPU?
 