BRN Discussion Ongoing

jtardif999

Regular
Was just looking through a podcast transcript from late March this year with Dr Guerci. Seeing there is some chatter on him and ISL.

I've pasted part of it after the intros and some other discussion on the Fed budget, defence spending etc but the full thing is:


What I do find is that Dr Guerci is def a fan of neuromorphic, definitely likes BRN and Akida and I suspect he was probably one of our partners / customers asking for the additional functionalities of Akida 2.0 and TENNS from some of his comments around tailoring it for their needs.

Personally do hope that ISL is the partner as they have laid so much of the ground work it appears, unless there is a parallel NDA giant (possibly) that has been doing the same behind the scenes.

Also get an insight into how time-consuming it is to develop and also to get through the Govt red tape.

I've bolded some parts discussing us and neuromorphic in general.

Enjoy.



So we're talking about how many things we can do with AI.

I wanna talk a little bit more, kind of take a step back, and continue talking a little bit about how AI works.

And you had a slide in your webinar presentation that we were talking about
the relationship with AI, and there's an aspect to AI that's using neuromorphic
computing and neuromorphic chips,
and we were talking about this.

This concept just blew my mind, because I really never heard the term before.

So I wanted to kind of, I wanna ask you to talk a little bit about this.

What is this piece of the puzzle, and what does it hold in terms of the future for artificial intelligence, and then feeding into cognitive radar and EW?

- So cognitive radar, EW, live and die by embedded systems, right?

They don't have the luxury of living in a laboratory with thousands of acres
of computers, right?

They have to take all their resources on a plane or a UAV or whatever platform and go into battle.

And so to really leverage the power of AI,
you need to implement them on efficient embedded computing systems.

Right now, that means FPGAs, GPUs,
and those things are, when all is said and done, you know, all the peripherals required, the ruggedization, the MIL-SPEC, you're talking kilograms and kilowatts.

And as I pointed out, there is a rather quiet
revolutionary part to AI that's perhaps even bigger than all the hullabaloo about ChatGPT, and that's neuromorphic chips.

So neuromorphic chips don't implement
traditional digital flip-flop circuits, things like that.

Essentially they actually, in silicon, create neurons with interconnects.

And the whole point of a neural network
is the weighting that goes onto those interconnects from layer to layer.

And the interesting thing about that
is you've got companies like BrainChip in Australia, right, that is not by any stretch
using the most sophisticated foundry
to achieve ridiculous line widths like conventional FPGAs and GPUs do.

Instead it's just a different architecture.

But why is that such a big deal?

Well, in the case of BrainChip as well as Intel and IBM, these chips can be the
size of a postage stamp.


And because they're implementing what are called spiking neural networks, or SNNs, they only draw power when
there's a change of state, and that's a very short amount of time, and it's relatively low-power.

So at the end of the day, you have something the size of a postage stamp
that's implementing a very, very sophisticated convolutional neural network solution with grams and milliwatts as opposed to kilograms and kilowatts.

And so to me, this is the revolution.

This is dawning. This is the thing that changes everything.
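(For the technically inclined: here is a minimal, illustrative sketch of the event-driven behaviour Dr. Guerci is describing. It is a toy leaky integrate-and-fire neuron in Python with made-up values, not BrainChip's actual implementation.)

```python
# Toy leaky integrate-and-fire (LIF) neuron, event-driven style: work
# happens only when an input spike arrives; silence costs (almost) nothing.
# All values are made up -- this is NOT BrainChip's implementation.
def lif_step(v, in_spikes, weights, leak=0.9, threshold=1.0):
    """One event step: leak the membrane potential, integrate weighted
    input spikes, fire and reset if the threshold is crossed."""
    v = v * leak + sum(w for w, s in zip(weights, in_spikes) if s)
    if v >= threshold:   # a "change of state" -> emit an output spike
        return 0.0, 1
    return v, 0          # otherwise stay silent

# One spike arrives on the second synapse: 0.5*0.9 + 0.7 = 1.15 -> fires.
print(lif_step(0.5, [0, 1, 0], [0.2, 0.7, 0.4]))  # (0.0, 1)
```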


So now you see this little UAV coming in,
and you don't think for a second that it could do, you know, the most sophisticated electronic warfare functions, for example.

Pulse sorting, feature identification, geolocation, all these things that require,
you know, thousands of lines of code
and lots of high-speed embedded computing, all of a sudden it's done on a postage stamp.

That's the crazy thing.

And by the way, in my research we've done it. We've implemented pulse de-interleaving, we've implemented, you know, ATR,
specifically on the BrainChip
from Australia, by the way.


So really quite amazing.

- So where is this technology?

You said we've already done it.

We have a pretty good understanding of what it can do.

And like you mentioned, you know, a scenario where whether it's a UAV or whatever system, I mean, something the
size of a postage stamp, it completely changes size, weight, power,
all those considerations, and makes almost anything a potential host for that capability.

- Yeah.


- What are some of the next steps in this,
call it a revolution or rapid evolution of technology?

I mean, because we obviously, you know,
a couple years ago there was a CHIPS Act, you know, trying to make sure that we, in the development of a domestic
chip production capability, Congress passed a CHIPS Act to kind of help spur
on domestic foundries, domestic capability to produce chips.

And does this kind of fall into kind of the...

Is this benefiting from that type of activity?

Is this part of the development that's happened through the CHIPS Act?

Is there something more that we need to be doing to spur on this innovation?

- Well, the CHIPS Act is a good
thing domestically speaking.

And by the way, part of the CHIPS Act,
it is focused on neuromorphic chips, by the way, so that's good to know.


However, the real culprit is the age-old valley of death, bridging the valley of death.

And by the way, I spent seven years at DARPA, and even at DARPA with the
funds I had available to me, bridging the gap between S&T and Programs of Record is still a herculean maze of biblical proportions.

And so while you'll hear lots of nice-sounding words coming out of OSD and other places, saying, you know, "We
gotta move things along.

We gotta spur small business. We gotta..."
it's all S&T funding.

There still is an extraordinary impediment
to getting new technologies into Programs of Record.

And I, you know, I'm not the only one saying that, so don't take my word for it.

I can tell you lots of horror stories, and I've done it.

I was successful while at DARPA.

So my stuff is on the F-35 and F-22, for example, and other classified systems.

I mean, I know what it takes to get it done.

Unfortunately, though there's a lot of lip service about overcoming that barrier,

it still has changed very little in the 20 years since I've been successful at DARPA in transitioning.

So I'm sorry, but that's the biggest impediment.

And I know it's not a technical thing, and I know there's lots of-

- But here's what concerns me about that,
is, you know, the valley of death, I mean, that's been in our terminology, in our lexicon for decades, like you say, going way back, you know, even before we even under, you know, back in the nineties and eighties when the technology, while
advanced at the time, pales in comparison to what we can do today,
the process hasn't changed.

And so like if we had a valley of death back then, how are we ever going to bridge it today with as fast as technology is moving, as fast as the solutions we
need to prototype and field.

I mean, you mentioned it's herculean.

I mean, it's almost beyond that it seems,
because our system hasn't really changed that much over the past 20, 30 years.

- Yeah, so maybe it's ironic, I don't know the right word, but on the S&T side, OSD, the service labs, you know, I would say that they're pretty forward-leaning and they're making good investments.

The problem is getting into a Program of Record is where the rubber hits the road,
and where things get fielded.

And so you look at the S&T budgets, you look at the number of small businesses
getting DOD S&T funds, and you could almost say, "Well, they're a success," right?

I mean, we're giving small businesses,
they're coming up with great things.

But then look at how much of that actually ends up in a Program of Record.

And let me just be clear.

I don't blame the Programs of Record,
because the game is stacked against them.

They, very often, especially if it's newer technology, they are having lots of problems with getting the baseline system fielded.

There's cost overruns, there's scheduling issues, and so they're already with
2.95 strikes against them, and now all of a sudden you want to on-ramp an
entirely new capability when they're already behind the eight ball.

That's just impossible, unless the whole culture of Programs of Record changes
where, for example, you structure it so that every year you have to answer
how are you dealing with obsolescence?

How are you keeping up?

Where are the on-ramps?

How successful were you with these on-ramps, these upgrades, all of these things?

Because until you fix that, I don't care how much money you spend on S&T, you're not gonna get fielded.

- From a technology standpoint, let's just, you know, assume for a second that we make some progress in the policy side of the equation as it pertains to acquisition
and the valley of death.

From a technology perspective, you've been following this for 20 years.

You know, where are some of the opportunities that are before you that you're like, this is the direction we need to go in, this is something that excites you
or keeps you awake at night in a positive way, of like this is promising and it's gonna be your next pursuit?

- Well, we definitely have to embrace cognitive systems for sure.

I mean, I don't think there's anyone out there that would say we don't need that kind of flexibility and adaptability on the fly.

Now, we can argue over just how much cognition we need and the flavors.

That's fine. So there's that, right?

Let's all just accept that.

And then I think you touched on this earlier, you know, there's a big push across all the services on what's called the JSE, which is the Joint Simulation Environment, which is this grandiose vision for having multi-user, multiplayer,
high fidelity training environments,
synthetic environments, which, by the way, can include live over sim, so that our systems become much more realistic
and reflective of what they're really gonna see when they get out into the real world.

Again, I come back to lots of good things going on on the S&T side.

You almost can't, you know, you really can't argue with it, but that transition to fielded systems and Programs of Record is still very much broken, and that's just a fact.

And it's not just me saying that.

You can ask anyone who's in the business
of trying to transition technology to the Department of Defense, and they'll tell you the same thing.

So, you know, again, S&T community,
doing a great job, I think, generally speaking, your DARPAs, your AFRLs, all of these, but that transition piece is just continuing.

And by the way, do our adversaries have the same issues?

Some do, some don't, you know?

And this technology I'm talking about, neuromorphic chips, that's available to the world.

I mean, BrainChip is an Australian company.

There's no ITAR restrictions, so.

- Well, and also I think it speaks to the multidisciplinary approach to technology today.

I mean, the neuromorphic chip, I mean, it has military applications you can obviously use it for, but, I mean, you're gonna find this
in all various sectors
of an economy and society and what we use in everyday
life, and so, you know-

- So Ken, let me just say that the neuromorphic chip that BrainChip makes from Australia had nothing to do with electronic warfare.

It's designed to do image processing.

So one of the things we had to overcome
was take our electronic warfare I/Q data,
in-phase and quadrature RF measurement data, and put it into a format to make it look like an image so that the BrainChip could actually digest it and do something with it.
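(A technical aside: the transcript doesn't say which transform was used. One common way to make I/Q data "look like an image" is a log-magnitude spectrogram, as in this hypothetical sketch; the sample rate, tone, and noise level are invented.)

```python
# Hypothetical sketch of turning EW I/Q samples into an image-like array
# a vision network can digest. Not necessarily what was actually done.
import numpy as np
from scipy.signal import spectrogram

fs = 1e6                                    # assumed sample rate, Hz
t = np.arange(100_000) / fs
iq = np.exp(2j * np.pi * 50e3 * t)          # stand-in emitter: 50 kHz tone
iq += 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

f, ts, sxx = spectrogram(iq, fs=fs, nperseg=256, return_onesided=False)
img = 10 * np.log10(np.abs(sxx) + 1e-12)    # dB scale, 2-D "image"

# Normalise to 8-bit grayscale: rows = frequency bins, cols = time slices.
img = ((img - img.min()) / (img.max() - img.min()) * 255).astype(np.uint8)
print(img.shape)                            # (frequency bins, time slices)
```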

So you're absolutely right.

I mean, these chips are not being designed for us in the electronic warfare community, but they're so powerful that we were still able to get it to work.

Imagine if they put a little effort
into tailoring it to our needs.

Then you have a revolution.

So, sorry to interrupt you there, but I just want...

You made a point and it's very valid, you know.

- It's valid. It's valid, it's important.

I mean, it goes to just the possibilities that are out there.

- Well, and to amplify that point, all the advanced capabilities that we have in our RF systems, radar and EW, most of that is driven by the wireless community, the trillion-dollar wireless community compared to a paltry radar and EW ecosystem.

So, you know, what's happening in the commercial world is where, and leveraging, you know, commercial off-the-shelf technology is a gargantuan piece of
staying up and keeping up, and by the way, addressing obsolescence as well, right?

If you have a piece of proprietary
hardware from the 1980s, good luck, you know, with obsolescence, right?

- Well, that, and also hopefully, you know,
as we move down this path on standards
and open systems and so forth, some of that will work its way in.

We can adapt some of that so that we struggle less with obsolescence in the future than we do now.

- We hope.
- Hopefully, yes. I mean-

- Again-
- We'll see.

But, I mean, I would think that's the idea.

- I mean, look at the day-to-day pressures
that Programs of Record are under.

So I'm not gonna get into all kinds of details here, but we had a capability
that was vetted by the program offices
and was developed under SBIRs, and went all the way through to a Phase III SBIR.

We have flight-qualified software to bring this much-needed capability to the war fighter.

This is all a true story.

And all of a sudden the program ran into scheduling and budgetary constraints,
so they had to jettison the on-ramps,
and so a capability that was vetted, a really important capability, just got thrown to the curb because of the everyday problems that Programs of Record run into, and that's not how they get judged, right?

They're judged on getting that baseline system over...

Look, the F-35 was just recently declared operational, what, a month ago?

You gotta be kiddin' me.

- Well, Joe, I think this is a good spot to, I mean, I feel like if we keep talking we can keep going in layer and layer and layer,
and I don't wanna put our listeners through that, but I think a good consolation prize is to have you back on
the show in the future, and we can go a little bit deeper into this, but I do really appreciate you taking some time to talk about this, 'cause this is a topic as of,
you know, really 24 hours ago, I realized how often I just use the word, and I never really understood the depth of the definition of the words I was using,
so I really appreciate you coming on the show, kind of helping me understand this better, and hopefully our listeners as well.

- Thank you, Ken.

You had great questions, great interview.

And let me give a shout out to AOC. Great organization.

I'm personally, and my company's a big supporter of AOC and what you guys are doing, so you're part of the solution, not part of the problem.

- We appreciate that, and, you know, appreciate all that you've done for us
in terms of helping us understand this really complex topic.

And really I do say this honestly,

I do hope to have you back on the show here, and there's no shortage of topics of conversation for us, so I appreciate you joining me.

- Thanks again, Ken.

- That will conclude this episode of "From the Crow's Nest."

I'd like to thank my guest, Dr. Joe Guerci,
for joining me for this discussion.

Also, don't forget to review, share, and follow this podcast.
I think this transcript just reminds us that we have a long, long way to go from SBIR to field implementation via the DoD program of record 🫤
 
  • Like
  • Thinking
Reactions: 7 users

7für7

Top 20
I think this transcript just reminds us that we have a long, long way to go from SBIR to field implementation via the DoD program of record 🫤
What is considered a barrier today could be resolved tomorrow with simple methods. Teams are working tirelessly every day to tackle a wide range of challenges. While the current timeline may seem lengthy, it’s only a matter of time before the breakthrough happens—or let’s put it this way: the existing solutions could soon be ready for commercialization. In public, only the difficulties are highlighted, accompanied by the note that work is ongoing. This actually means they already have a solution in mind that is not yet fully developed.
 
  • Like
  • Love
Reactions: 7 users

Diogenese

Top 20
I think this transcript just reminds us that we have a long, long way to go from SBIR to field implementation via the DoD program of record 🫤
Didn't someone say it is direct to Phase II?
 
  • Like
Reactions: 5 users

Diogenese

Top 20
Hi @mcm,

I was reminded of a previous article I stumbled upon where Lisa Su, CEO of AMD, seemed to hint at AMD developing neuromorphic chips or “new acceleration technologies”.

“Su didn’t describe how AMD will differentiate various Ryzens with NPU capabilities. But there’s a history here: In 2021, AMD mixed and matched parts from various Zen generations under the Ryzen 5000 name. AMD could conceivably do the same with future Ryzens, taking older NPUs and combining them with various CPUs and GPUs.

But that’s not to say we could see just an NPU, either. In response to another question about whether AMD could develop a neuromorphic chip like Intel’s Loihi, Su seemed open to the possibility. “I think as we go forward, we always look at some specific types of what’s called new acceleration technologies,” she said. “I think we could see some of these going forward.”
I s'pose there's always hope AMD will see the light but their patents relate to floating-point, MACs, or analog NNs.
 
Last edited:
  • Like
  • Fire
Reactions: 6 users

7für7

Top 20
No announcement????
Aiya, no surprise today!


 
  • Haha
  • Like
Reactions: 2 users

Boab

I wish I could paint like Vincent
The needle hasn't moved for hours. Can't think why.
 
  • Like
  • Sad
Reactions: 7 users

TECH

Regular

Thx Thomas, can I call you Tom? 👊



Gidday Super....first off, thank you for your steady stream of excellent articles week in and week out. I, like many, do appreciate your
efforts to highlight any interesting articles or comments from within the semiconductor industry that generally acknowledge
our company is creating a lot of interest at the Edge. I still see a big future at a later date at the opposite end of the spectrum, that is,
data centres. That environment has to change: data centres may still be useful today, but the way technology just continues to accelerate,
there seems to be a big disconnect on the horizon, in my opinion at least.

The other thing that is a little annoying with our company's written dialogue is, for example, the press release regarding the AFRL
1.8 Mil contract the other day: I was one of many who initially thought that we were receiving a fee of $800,000 from an unknown
subcontractor to provide R&D to develop and optimize what we know best, AKIDA II and TENN's.....but I was wrong.

The other thing that definitely needs sorting out is this:

TENN's - Temporal Event (Based) Neural Networks

or is it

TENN'S - Temporal Embedded Neural Networks

or is it

TENN's - Temporal Enabled Neural Networks

It has been quoted in a variety of ways, even our own staff have referred to TENN's in different ways...clear definition is a must.

Tech...(NZ)
 
  • Like
  • Love
  • Fire
Reactions: 18 users

CHIPS

Regular
  • Like
  • Haha
Reactions: 4 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 8 users

Tothemoon24

Top 20
More validation of the mighty chip!



Summary​

Autonomously detecting tiny, fast-moving objects emitting thermal radiation in the infrared is a challenging technical problem. In addition to being fast, these targets are often dim, small, and in the presence of clutter and occlusions. Conventional detection approaches require large size, weight, and power (SWaP) systems which may introduce substantial latencies. As such, the following will be explored in this article:

  • An end-to-end system composed of scene simulation,
  • Sensor capture from a novel, highly sensitive micro electromechanical systems (MEMS) microbolometer,
  • A readout integrated circuit (ROIC) that uses a unique saliency computation to remove uninteresting image regions, and
  • Deep-learning (DL) detection and tracking algorithms.
Simulations across these modules verify the advantages of this approach compared to conventional approaches. The system is estimated to detect targets less than 5 s after a fast-moving object enters a sensor’s field of view. To explore low-energy implementations of these computer vision models, DL on commercial off-the-shelf (COTS) neuromorphic hardware is also discussed.

Introduction​

The autonomous detection of small and rapidly moving aerial targets is a technically demanding task. Such targets include missiles and airplanes. Missiles are often smaller than airplanes and fly faster; they are thus more challenging to detect. Newer missiles are also more maneuverable than previous missiles, posing a new threat to existing defense systems. Although these objects emit detectable amounts of infrared energy visible far away, their speed makes it difficult to image and track them. At large distances, these objects appear tiny and faint, adding to the task’s complexity. Additionally, these targets are often located in cluttered and occluded environments with other objects, such as slower moving airplanes, the sun, clouds, or buildings. Furthermore, robust detection in different environmental conditions, such as day, night, cloudy days, clear days, etc., poses more challenges.

Biological vision systems have become well adapted over millennia of evolution to ignore clutter and noise, detect motion, and compress visual information in a scene. On the other hand, conventional detection approaches may have difficulty with such a scene and would require increased complexity in hardware and/or software to filter out noise and alleviate nuisance factors while increasing target sensitivity with resulting inefficiencies in size, weight, power, and cost (SWaP-C).

There have been several attempts to create bioinspired vision systems. For example, Scribner et al. [1] created a neuromorphic ROIC with spike-based image processing. Chelian and Srinivasa simulated image processing from retinal [2] and thalamic [3] circuits in the spiking domain under the Defense Advanced Research Projects Agency (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program for noise suppression, ratios of spectral bands, and early motion processing. (Their work was informed by studies in the rate-coded domain [4, 5].) However, these works do not consider detection or tracking.

Artificially mimicking biological vision systems wherever feasible was explored in the current work to overcome the challenges to conventional systems described above. The imaging system includes everything from the optics taking in the scene to the final processor outputting target reports. The system components that would be implemented in hardware are a MEMS microbolometer, a ROIC which uses a unique saliency computation to remove uninteresting image regions and increase overall system speed, and DL detection and tracking models.

The performance of each component and across the whole system is estimated via tools that include simulated images and videos. The feasibility of COTS neuromorphic hardware to implement DL with less energy than graphical processing units (GPUs) is also described. The bioinspired system’s components are shown in Figure 1.

Figure 1. Bioinspired System for Autonomous Detection of Tiny, Fast Moving Objects in Infrared Imagery. Such a System Would Be Dramatically Smaller, Lighter, Less Power-Hungry, and More Cost-Effective Than Traditional Systems (Source: C. Bobda, Y.-K. Yoon, S. Chakraborty, S. Chelian, and S. Vasan).

Methods​

There are five main thrusts to the design effort:

  1. A scene simulation,
  2. A MEMS microbolometer,
  3. A ROIC that uses a unique saliency computation to remove uninteresting image regions and increase detection speed (referred to as Hierarchical Attention-oriented, Region-based Processing or HARP [6]),
  4. DL detection and tracking models, and
  5. Neuromorphic computing.
Additionally, end-to-end system evaluation and evaluation of each component are performed.

Scene Simulation​

To detect and track tiny, fast objects in cluttered and noisy scenes, training data is needed. However, there is a scarcity of such datasets available to the public. For this reason, harnessing the power of synthetic datasets was started. The work of Park et al. [7], for example, illustrates this approach. Due to the scarcity of real hyperspectral images of contraband on substrates, synthetic hyperspectral images of contraband substances were created on substrates using infrared spectral data and radiative transfer models. For small and rapidly moving objects, a small publicly available dataset was used first because it had single-frame infrared targets with high-quality annotations, which can be used for detection modules. Animating targets for tracking modules was also explored. There are other infrared datasets of aerial targets such as unmanned aerial vehicles, but these tend to occupy more pixels per frame than the dataset used in the present work.
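To make the synthetic-data idea concrete, below is a minimal sketch of composing an infrared-like frame with a faint, tiny target over smooth clutter. The dimensions, amplitudes, and noise levels are illustrative assumptions, not parameters of the dataset used in this work.

```python
# Illustrative only: inject a tiny, faint "hot" target into a cluttered,
# noisy background to create a synthetic IR-like training frame.
import numpy as np

rng = np.random.default_rng(0)

def synth_frame(h=256, w=256, target_yx=(120, 80), target_amp=0.3):
    yy, xx = np.mgrid[0:h, 0:w]
    # Clutter: smooth low-frequency structure, plus per-pixel sensor noise.
    clutter = 0.5 + 0.2 * np.sin(xx / 40.0) * np.cos(yy / 55.0)
    frame = clutter + 0.05 * rng.standard_normal((h, w))
    # Target: a dim 2x2 hot spot, i.e. a few pixels at most.
    y, x = target_yx
    frame[y:y + 2, x:x + 2] += target_amp
    return frame.astype(np.float32)

frame = synth_frame()
print(frame.shape, float(frame[120, 80]) > float(frame.mean()))  # (256, 256) True
```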

MEMS Microbolometer​

According to Dhar and Khan [8], detection ranges are sensitive to temperature and relative humidity variations, and long-wave infrared (LWIR) ranges depend more upon these variations than mid-wave infrared (MWIR) ranges. Because of this, on average, MWIR tends to have better overall atmospheric transmission compared to LWIR in most scenarios. Therefore, in this work, an MWIR MEMS microbolometer sensor that is highly selective in its spectral range was designed. Prior work in this area includes that of Dao et al. [9].

In a microbolometer, infrared energy strikes a detector material, heating it and changing its electrical resistance. This resistance change is measured and processed into temperatures which can be used to create an image.
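That readout step reduces to simple arithmetic under a small-signal linear model. A minimal sketch, assuming a VOx-like temperature coefficient of resistance (TCR) of about -2%/K; the numbers are assumptions, not values from this work.

```python
# Small-signal readout model: dR/R0 ~= tcr * dT, so dT ~= (dR/R0) / tcr.
def delta_T(R_measured, R0, tcr=-0.02):
    """Scene-induced temperature change (K) from a resistance change.
    tcr: fractional resistance change per kelvin (VOx is roughly -2 %/K)."""
    return (R_measured - R0) / R0 / tcr

# A 0.4 % resistance drop with a -2 %/K TCR implies ~0.2 K of heating.
print(round(delta_T(R_measured=99_600.0, R0=100_000.0), 3))  # 0.2
```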

There are commercially available non-MEMS microbolometers, but they have poorer sensitivity because thermal isolation is not as good as MEMS-based implementations. This is because in a MEMS device, there is a physical (e.g., air) gap between the detector and the substrate. The MEMS-based approach can increase the effective absorbing area of the sensor with complex structures and increase its responsivity. Yoon et al. [10] demonstrated multidirectional ultraviolet lithography for several complex three-dimensional (3-D) MEMS structures which can be used to create a MEMS microbolometer (Figure 2).

Figure 2. Previously Demonstrated Complex 3-D MEMS Structures Which Can Be Used for Microbolometers (Source: Yoon et al. [10]).

Hierarchical Attention-Oriented, Region-Based Processing (HARP)​

Event-based HARP is a ROIC design that suppresses uninteresting image regions and increases processing speed. It was developed by Bhowmik et al. [6, 11]. The work draws inspiration from and is a simplified abstraction of the hierarchical processing in the visual cortex of the brain where many cells respond to low-level features and transmit the information to fewer cells up the hierarchy where higher-level features are extracted [12].

The main idea is illustrated in Figure 3a. Figure 3b shows an architecture diagram. In the first layer, a pixel-level processing plane (PLPP) provides early feature extraction such as edge detection or image sharpening. Several pixels are then grouped into a region. In the next stage, the structure-level processing plane (SLPP) produces intermediate features such as line or corner detection using a region processing unit (RPU). For a region, processing is only activated if its image region is relevant. Image relevance is computed based on several metrics, such as predictive coding in space or time, edge detection, and measures of signal-to-noise ratio (SNR). The RPU also sends feedback signals to the PLPP using an attention module. If image relevance is too low, pixels in the PLPP halt their processing using a clock gating method. Thus, like the Dynamic Vision System (DVS) [13], uninteresting image regions like static fields would not be processed and would save energy and time. On the other hand, unlike the DVS, HARP directly provides intensity information and could differentiate between extremely hot targets and moderately hot targets.

Figure 3. HARP Illustrated as (a) a Conceptual Diagram and (b) an Architecture Diagram. HARP Is Used to Remove Uninteresting Image Regions to Find Targets Faster (Source: Bhowmik et al. [11, 12]).
Finally, at the knowledge inference processing plane (NIPP), global feature processing such as with a convolutional neural network (CNN) is performed. NIPP implementations are described in the next subsection. Because only interesting image regions are processed—not all pixels—speed and power savings advantages are realized. Savings can be up to 40% for images with few salient regions. Because the hardware is parallel, latency is more or less independent of image size.
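As a software analogy of this behavior (HARP itself is a ROIC, so this is conceptual only), the sketch below scores fixed-size regions with a crude contrast metric and suppresses all but the most salient fraction. The region size, relevance metric, and keep fraction are assumptions, not HARP's actual parameters.

```python
# Conceptual, software-only analogy of HARP-style region gating.
import numpy as np

def gate_regions(frame, region=16, keep_fraction=0.25):
    h, w = frame.shape
    gated = np.zeros_like(frame)
    scores = {}
    for y in range(0, h, region):
        for x in range(0, w, region):
            block = frame[y:y + region, x:x + region]
            # Crude relevance: local contrast (peak above the block mean).
            scores[(y, x)] = float(block.max() - block.mean())
    # Keep the most salient fraction; the rest stay dark ("clock-gated").
    cutoff = np.quantile(list(scores.values()), 1.0 - keep_fraction)
    for (y, x), s in scores.items():
        if s >= cutoff:
            gated[y:y + region, x:x + region] = frame[y:y + region, x:x + region]
    return gated

gated = gate_regions(np.random.default_rng(0).random((256, 256)))
print(float((gated == 0).mean()))  # ~0.75 of pixels suppressed
```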

Detection and Tracking​

Detection DL models such as You Only Look Once (YOLO) [14] and U-Net [15] were tried. Parameter searches over the number of layers, loss function (e.g., Tversky and Focal Tversky), and data augmentation (flipping and rotating images) to increase performance were also conducted.
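For reference, a standard formulation of the Focal Tversky loss named above is sketched below. The article doesn't give its hyperparameters, so the alpha, beta, and gamma here are common defaults, not the authors' values; Tversky weights false negatives and false positives separately, which helps with tiny-target class imbalance.

```python
# Standard Focal Tversky loss (hyperparameters assumed, not the authors').
import numpy as np

def focal_tversky_loss(pred, truth, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """pred: per-pixel probabilities; truth: binary mask.
    alpha weights false negatives, beta false positives."""
    tp = np.sum(pred * truth)
    fn = np.sum((1.0 - pred) * truth)
    fp = np.sum(pred * (1.0 - truth))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

truth = np.zeros((64, 64)); truth[30:32, 30:32] = 1.0  # tiny 2x2 target
pred = truth * 0.8                                      # partial detection
print(round(float(focal_tversky_loss(pred, truth)), 3))  # ~0.24
```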

For tracking, a fully convolutional Siamese network was chosen [16] on the videos derived from the previously mentioned, publicly available dataset. Siamese networks are relatively simple to train and computationally efficient. One input of the twin network starts with the first detection, and the other input is the larger search image. The twin network’s job is to locate the exemplar image inside the search image. In subsequent frames, the first input is updated with subsequent detected targets.
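The core operation of such a tracker is a sliding cross-correlation of the exemplar against the (larger) search region. The sketch below correlates raw pixels to stay self-contained; a real fully convolutional Siamese tracker correlates learned CNN feature maps instead.

```python
# Toy version of Siamese-style localization: matched-filter the exemplar
# over the search region; the response peak is the predicted location.
import numpy as np
from scipy.signal import correlate2d

def locate(exemplar, search):
    # Zero-mean both so the peak reflects shape rather than brightness.
    response = correlate2d(search - search.mean(),
                           exemplar - exemplar.mean(), mode='valid')
    return np.unravel_index(np.argmax(response), response.shape)

search = np.random.default_rng(1).random((64, 64))
exemplar = search[20:28, 40:48].copy()   # pretend this was the detection
print(locate(exemplar, search))          # expected: (20, 40)
```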

Neuromorphic Computing​

Neuromorphic computing offers low SWaP-C alternatives to larger central processing unit (CPU)/GPU systems. DL detection and tracking modules can be run on neuromorphic computers to exploit these advantages. The alternatives are compared in Table 1.

Table 1. Comparison of CPU/GPU and Neuromorphic Compute Platforms. Neuromorphic Platforms Offer Lower SWaP-C (Source for Left Image, Wikimedia; Right, Intel).

Neuromorphic computing has been used in several domains, including computer vision and cybersecurity. In a computer vision project, classification of contraband substances using LWIR hyperspectral images has been demonstrated in a variety of situations, including varied backgrounds, temperatures, and purities. Full-precision GPU and the BrainChip Akida compatible models gave promising results [7]. In a cybersecurity project, accurate detection of eight attack classes and one normal class was demonstrated in a highly imbalanced dataset. First-of-its-kind testing was chosen with the same network on full-precision GPUs and two neuromorphic offerings—the Intel Loihi 1 hardware and the BrainChip Akida 1000 [17]. Updates have since been made to this work, such as a smaller, more accurate neural network and the use of the BrainChip chip (not just a software simulator) and Intel’s new DL framework Lava for the Loihi 2 chip [18, 19].

End-to-End System Evaluation​

For end-to-end system evaluation, speed was the primary focus; however, power consumption was also estimated based on novel simulations or previous work. Each module had its own metrics, e.g., noise equivalent temperature difference (NETD) for the MEMS microbolometer or intersection over union (IoU) for detection and tracking.
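For reference, IoU for axis-aligned boxes given as (x1, y1, x2, y2) is a few lines of arithmetic:

```python
# Intersection over union (IoU) for two axis-aligned boxes.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 25/175 ~= 0.143
```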

Results​

Results from the five primary thrusts—scene simulation, MEMS microbolometer, HARP, detection and tracking, and neuromorphic computing—and end-to-end system evaluation are given. The system is estimated to detect targets less than 5 s after a fast-moving object enters a sensor’s field of view. The system design would have a dramatically smaller SWaP-C envelope than conventional systems.

Scene Simulation​

Images from a publicly available dataset were used for detection and tracking. Separate images were used for training vs. testing. Example images are shown in Figures 4 a and c. These infrared images have tiny targets in cluttered scenes; thus, they are a good starting point for the current application domain.

Figure 4. Infrared Input Image Regions (a and c) and Salient Regions (b and d) (Black Regions Are Removed by HARP). Image (a) Has a True Positive, and Image (d) Has a True Positive and False Negative (Blue Bounding Boxes Are Accurate Detections, and Orange Is a Missed Region) (Source: github.com/YimianDai/sirst).

MEMS Microbolometer​

COMSOL was used to simulate the microbolometer. A 10× improvement in NETD over existing vanadium oxide (VOx) microbolometers was achieved, and response times were approximately 3× less! This is because of the better thermal isolation of the MEMS microbolometer from its support structures and the unique choices of materials.

HARP​

HARP was able to remove uninteresting image regions to decrease detection time. In Figure 4, input images from a publicly available dataset and salient regions are shown in two pairs. The top pair retains the target—the bright region in the upper left. In the bottom pair, HARP retains one target in the upper left but omits another one to the right of the center of the image. This initial testing was carried out at a preliminary level. Further parameter tuning is necessary for more refined target extraction.

Detection and Tracking​

For detection, U-Net worked better than YOLO networks, probably because targets were tiny. U-Net detected targets of different sizes against varied backgrounds. Across dozens of images, the average probability of detection was 90% and the probability of false alarm was 12%. Further hyperparameter tuning will improve these results.

For tracking, a fully convolutional Siamese network method tracks targets in different sizes and varied backgrounds. Across about a dozen videos, the average IoU was 73%. Further hyperparameter tuning will lead to improved performance.

Neuromorphic Computing​

In addition to prior work in neuromorphic computer vision to detect contraband materials [7], recent work from Patel et al. [20] implemented U-Net on the Loihi 1 chip. They achieved 2.1× better energy efficiency than GPU versions, with only a 4% drop in mean IoU. However, processing times were ~87× slower. Their code was not released, so it was not possible to use their implementation with the current dataset. The release of the more powerful BrainChip and Intel Loihi chips combined with other optimizations will decrease processing times. For example, preliminary pedestrian and vehicle detection experiments were conducted with a YOLO v2 model on the BrainChip AKD1000 chip using red-green-blue color images. With a test set of 100 images from the PASCAL Visual Object Classes dataset, there was a small decrease in mean average precision of 4% compared to full-precision GPU models, but processing times were ~3× slower.

End-to-End System Evaluation​

The detection time was estimated to be 4 to 5 s once a faraway object entered the sensor’s field of view. The following times were included: (1) microbolometer response and readout is estimated at 50 ms via COMSOL; (2) HARP at less than 1 s [12] (estimated at 8 µs to 12.5 ns, with an application-specific integrated circuit [ASIC] or field-programmable gate array [FPGA], respectively); (3) detection and tracking at 2 s (DL algorithms operate at 30 fps or more); and (4) other latencies at 1 s.

For power, the system’s estimated power draw is less than 10 mW for the microbolometer via COMSOL and 2 mW to 7 W for HARP with an ASIC or FPGA, respectively, based on Yoon et al. [10]. For detection and tracking, a tradeoff between power and speed is apparent. For targeting applications, GPUs are a better choice and would draw ~250 W, per Table 1. With a 50% margin of safety, the system would draw ~375 W. For less time-critical applications, neuromorphic processors are adequate and would draw ~1 W. With a 50% margin of safety, the system would draw ~1.5 W.
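Both budgets can be reproduced from the figures quoted above; a quick sanity check (with HARP rounded up to a 1-s worst case):

```python
# Latency budget: bolometer + HARP + detection/tracking + other latencies.
latency_s = 0.050 + 1.0 + 2.0 + 1.0
print(latency_s)        # 4.05 s -> consistent with the "4 to 5 s" estimate

# Power budget with the article's 50% margin of safety.
margin = 1.5
print(250 * margin)     # GPU route: ~375 W
print(1 * margin)       # neuromorphic route: ~1.5 W
```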

Discussion​

Overall, a bioinspired system to autonomously detect tiny, fast-moving objects in infrared imagery such as missiles and aircraft was presented. Detecting and tracking tiny fast-moving objects is vitally important to several U.S. Department of Defense customers. As threats continue to evolve, existing systems need updating and/or replacing. Unlike existing systems, the approach here offers considerable SWaP-C advantages of bioinspired computing in several stages. Civilian applications include tracking launch or reentry vehicles, which is of interest to the National Aeronautics Space Administration (NASA) as well as private sector companies. The current design encompasses every aspect from capturing the target through optics to the final processor outputting target reports. The system design effort has five main thrusts: (1) scene simulation, (2) a novel highly sensitive MEMS microbolometer, (3) HARP—a ROIC which uses a unique saliency computation to remove uninteresting image regions, (4) DL detection and tracking algorithms, and (5) neuromorphic computing. This system can detect targets within 5 s of a fast-moving object entering the sensor’s field of view. Further enhancements are possible, and some improvements that can be realized in future work are described next.

Scene Simulation​

More realistic trajectories like those approximated by piecewise polynomial curves or physics-based models could be used here. Furthermore, the movement of several objects in the same scene could be simulated. This would present a significant challenge to tracking algorithms.

MEMS Microbolometer​

Simulating and fabricating 10×10 modular pixel array samples with a 20-µm pitch would provide valuable performance characterization information. The geometry of the microbolometer and the fabrication processes can be optimized for performance. For example, thinner, longer legs would yield better sensitivity but must be compliant with manufacturing and operational constraints. Some studies in this area are referenced—Erturk and Akin [21] illustrate absorption as a function of microbolometer thickness, and Dao et al. [9] model resistivity as a function of temperature and fabrication process. Additional fabrication parameters include temperature, curing steps, and deposition speed. Simulation tools like COMSOL can be used to theoretically optimize design characteristics; however, an initial fabrication run is necessary to validate these tools.

Lastly, even though microbolometer technology provides uncooled infrared thermal detection, microbolometer performance is generally limited by low sensitivity, high noise, slow video speed, and lack of spectral content. Ackerman et al. [22] and Xue et al. [23] have demonstrated fast and sensitive MWIR photodiodes based on mercury telluride (MgTe) colloidal quantum dots that can operate at higher temperatures, including room temperatures. With sufficient maturing, this work may be used as the sensing device in lieu of microbolometers. Cooling the microbolometer is another option to increase sensitivity.

HARP​

Work with FPGAs and simulated ASICs is expected to continue. Furthermore, to better deal with dim moving objects, SLPPs can be tuned to be more sensitive to motion than contrast differences. Because saliency is the weighted sum of several feature vectors, this would make the weight of the former larger than the latter. This will also help disambiguate targets from clutter, such as slower moving airplanes or the sun.

Detection and Tracking​

Single-frame detection techniques are used here. Alternatively, multiple frames can be leveraged to produce more accurate results, although it will also add latency. Another strategy could be to increase the integration time of the cameras so that targets would appear as streaks. These streaks would be larger features to detect and track.

Neuromorphic Computing​

Work in the domain continues using BrainChip and Intel products in cybersecurity and other domains. BrainChip and Intel have released the second generation of their chips—seeing what improvements in speed can be gained from these products will be interesting.

End-to-End System Evaluation​

More work would be helpful. Examples include calculating the SNR of the microbolometer; tabulating the results from HARP across several different images; and characterizing detection performance by scene type (e.g., day vs. night and cloudy vs. clear). For tracking, track length and uncertainty quantification can be explored. Furthermore, power, speed, and interface elements of hardware components could be further detailed.

Conclusions​

By leveraging prior experience and the current work in synthetic data generation and developing MEMS devices, specialized ROICs, DL, and neuromorphic systems, this work can continue further and achieve new heights in MWIR imagery and autonomous detection and tracking in infrared imagery. This includes fast-moving pixel and subpixel object detection and tracking. Bioinspired computing can produce tremendous savings in SWaP-C, as was illustrated in Table 1 and described throughout this article.

In the short term, the microbolometer can be tested with pixel array samples, the HARP ROIC can be implemented on an FPGA, and the detection and tracking systems can be implemented on GPUs. In the long term, the microbolometer and ROIC can be bonded together as a flip chip and the GPUs replaced with neuromorphic ASICs. This will yield improvements in power and latency, allowing this system to be deployed on large or small platforms.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 39 users
I’ve been reading here for quite some time before I started contributing myself. I’m aware that some of my posts aren’t well-received by everyone and are often considered nonsense. Nevertheless, I’d like to add something today and explain why I sometimes deliberately bring humor into this space and approach many things – with the exception of a few individuals trying to stir up negativity – with calmness and a sense of humor.
First of all, a big thank you to everyone who takes the time to research and share relevant contributions here. Personally, I – and I’m sure many others – deeply appreciate it!

Now to the main point: some people still don’t fully grasp what’s currently happening. The question is not whether AI and its applications have a future – that’s already clear. Much of what is already possible behind the scenes remains hidden from us as outsiders. AI has now developed processing methods that humans can no longer fully comprehend. It’s similar to the human brain: we know something is happening, and in the end, there’s a solution. We can measure which brain regions are active during specific tasks, but the exact mechanisms remain a mystery. The same applies to AI.
The focus now is on developing processes to control AI, as it is already on the verge of becoming the most intelligent entity on Earth. What is being tested in high-security facilities, completely isolated from our networks, is beyond anyone’s imagination.
We are invested in a technology that, for many outside our “BrainChip world,” still feels like science fiction. They don’t yet understand the full scope of this innovation – and this is reflected in the current stock price. The majority of potential investors prefer to put their money into established companies because they haven’t grasped the advancements and possibilities of this technology yet.
However, as soon as we announce further successes in the near future and another major licensing partner is revealed, their perspective will change. These people don’t trust the technology itself but rather the tangible progress and results.
So, see yourselves as pioneers who had the luck and foresight to recognize the importance of this technology early on. Stay healthy – and look forward to more of my memes in the future!
It's a long way to the Top, if you want to change the World.



 
Last edited:
  • Like
  • Fire
Reactions: 7 users

goodvibes

Regular
  • Like
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
More validation of the mighty chip !
View attachment 74168

View attachment 74169


Summary​

Autonomously detecting tiny, fast-moving objects emitting thermal radiation in the infrared is a challenging technical problem. In addition to being fast, these targets are often dim, small, and in the presence of clutter and occlusions. Conventional detection approaches require large size, weight, and power (SWaP) systems which may introduce substantial latencies. As such, the following will be explored in this article:

  • An end-to-end system composed of scene simulation,
  • Sensor capture from a novel, highly sensitive micro electromechanical systems (MEMS) microbolometer,
  • A readout integrated circuit (ROIC) that uses a unique saliency computation to remove uninteresting image regions, and
  • Deep-learning (DL) detection and tracking algorithms.
Simulations across these modules verify the advantages of this approach compared to conventional approaches. The system’s ability is estimated to detect targets at less than 5 s after a fast-moving object enters a sensor’s field of view. To explore low-energy implementations of these computer vision models, DL on commercial off-the-shelf (COTS) neuromorphic hardware is also discussed.

Introduction​

The autonomous detection of small and rapidly moving aerial targets is a technically demanding task. Such targets include missiles and airplanes. Missiles are often smaller than airplanes and fly faster; they are thus more challenging to detect. Newer missiles are also more maneuverable than previous missiles, posing a new threat to existing defense systems. Although these objects emit detectable amounts of infrared energy visible far away, their speed makes it difficult to image and track them. At large distances, these objects appear tiny and faint, adding to the task’s complexity. Additionally, these targets are often located in cluttered and occluded environments with other objects, such as slower moving airplanes, the sun, clouds, or buildings. Furthermore, robust detection in different environmental conditions, such as day, night, cloudy days, clear days, etc., poses more challenges.

Biological vision systems have become well adapted over millennia of evolution to ignore clutter and noise, detect motion, and compress visual information in a scene. On the other hand, conventional detection approaches may have difficulty with such a scene and would require increased complexity in hardware and/or software to filter out noise and alleviate nuisance factors while increasing target sensitivity with resulting inefficiencies in size, weight, power, and cost (SWaP-C).

There have been several attempts to create bioinspired vision systems. For example, Scribner et al. [1] created a neuromorphic ROIC with spike-based image processing. Chelian and Srinivasa simulated image processing from retinal [2] and thalamic [3] circuits in the spiking domain under the Defense Advanced Research Projects Agency (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program for noise suppression, ratios of spectral bands, and early motion processing. (Their work was informed by studies in the rate-coded domain [4, 5].) However, these works do not consider detection or tracking.

Artificially mimicking biological vision systems wherever feasible was explored in the current work to overcome the challenges to conventional systems described above. The imaging system includes everything from the optics taking in the scene to the final processor outputting target reports. The system components that would be implemented in hardware are a MEMS microbolometer, a ROIC which uses a unique saliency computation to remove uninteresting image regions and increase overall system speed, and DL detection and tracking models.

The performance of each component and across the whole system is estimated via tools that include simulated images and videos. The feasibility of COTS neuromorphic hardware to implement DL with less energy than graphical processing units (GPUs) is also described. The bioinspired system’s components are shown in Figure 1.

Figure 1. Bioinspired System for Autonomous Detection of Tiny, Fast Moving Objects in Infrared Imagery. Such a System Would Be Dramatically Smaller, Lighter, Less Power-Hungry, and More Cost-Effective Than Traditional Systems (Source: C. Bobda, Y.-K. Yoon, S. Chakraborty, S. Chelian, and S. Vasan).

Figure 1. Bioinspired System for Autonomous Detection of Tiny, Fast Moving Objects in Infrared Imagery. Such a System Would Be Dramatically Smaller, Lighter, Less Power-Hungry, and More Cost-Effective Than Traditional Systems (Source: C. Bobda, Y.-K. Yoon, S. Chakraborty, S. Chelian, and S. Vasan).

Methods​

There are five main thrusts to the design effort:

  1. A scene simulation,
  2. A MEMS microbolometer,
  3. A ROIC that uses a unique saliency computation to remove uninteresting image regions and increase detection speed (referred to as Hierarchical Attention-oriented, Region-based Processing or HARP [6]),
  4. DL detection and tracking models, and
  5. Neuromorphic computing.
Additionally, end-to-end system evaluation and evaluation of each component are performed.

Scene Simulation​

To detect and track tiny, fast objects in cluttered and noisy scenes, training data is needed. However, there is a scarcity of such datasets available to the public. For this reason, harnessing the power of synthetic datasets was started. The work of Park et al. [7], for example, illustrates this approach. Due to the scarcity of real hyperspectral images of contraband on substrates, synthetic hyperspectral images of contraband substances were created on substrates using infrared spectral data and radiative transfer models. For small and rapidly moving objects, a small publicly available dataset was used first because it had single-frame infrared targets with high-quality annotations, which can be used for detection modules. Animating targets for tracking modules was also explored. There are other infrared datasets of aerial targets such as unmanned aerial vehicles, but these tend to occupy more pixels per frame than the dataset used in the present work.

MEMS Microbolometer​

According to Dhar and Khan [8], detection ranges are sensitive to temperature and relative humidity variations, and long-wave infrared (LWIR) ranges depend more upon these variations than mid-wave infrared (MWIR) ranges. Because of this, on average, MWIR tends to have better overall atmospheric transmission compared to LWIR in most scenarios. Therefore, in this work, an MWIR MEMS microbolometer sensor that is highly selective in its spectral range was designed. Prior work in this area includes that of Dao et al. [9].

In a microbolometer, infrared energy strikes a detector material, heating it and changing its electrical resistance. This resistance change is measured and processed into temperatures which can be used to create an image.

There are commercially available non-MEMS microbolometers, but they have poorer sensitivity because thermal isolation is not as good as MEMS-based implementations. This is because in a MEMS device, there is a physical (e.g., air) gap between the detector and the substrate. The MEMS-based approach can increase the effective absorbing area of the sensor with complex structures and increase its responsivity. Yoon et al. [10] demonstrated multidirectional ultraviolet lithography for several complex three-dimensional (3-D) MEMS structures which can be used to create a MEMS microbolometer (Figure 2).

Figure 2. Previously Demonstrated Complex 3-D MEMS Structures Which Can Be Used for Microbolometers (Source: Yoon et al. [10]).

Figure 2. Previously Demonstrated Complex 3-D MEMS Structures Which Can Be Used for Microbolometers (Source: Yoon et al. [10]).

Hierarchical Attention-Oriented, Region-Based Processing (HARP)​

Event-based HARP is a ROIC design that suppresses uninteresting image regions and increases processing speed. It was developed by Bhowmik et al. [6, 11]. The work draws inspiration from and is a simplified abstraction of the hierarchical processing in the visual cortex of the brain where many cells respond to low-level features and transmit the information to fewer cells up the hierarchy where higher-level features are extracted [12].

The main idea is illustrated in Figure 3a. Figure 3b shows an architecture diagram. In the first layer, a pixel-level processing plane (PLPP) provides early feature extraction such as edge detection or image sharpening. Several pixels are then grouped into a region. In the next stage, the structure-level processing plane (SLPP) produces intermediate features such as line or corner detection using a region processing unit (RPU). For a region, processing is only activated if its image region is relevant. Image relevance is computed based on several metrics, such as predictive coding in space or time, edge detection, and measures of signal-to-noise ratio (SNR). The RPU also sends feedback signals to the PLPP using an attention module. If image relevance is too low, pixels in the PLPP halt their processing using a clock gating method. Thus, like the Dynamic Vision System (DVS) [13], uninteresting image regions like static fields would not be processed and would save energy and time. On the other hand, unlike the DVS, HARP directly provides intensity information and could differentiate between extremely hot targets and moderately hot targets.

Figure 3. Previously Demonstrated Complex 3-D MEMS Structures Which Can Be Used for Microbolometers (Source: Yoon et al. [10]).
Figure 3. Previously Demonstrated Complex 3-D MEMS Structures Which Can Be Used for Microbolometers (Source: Yoon et al. [10]).

Figure 3. HARP Illustrated as (a) a Conceptual Diagram and (b) an Architecture Diagram. HARP Is Used to Remove Uninteresting Image Regions to Find Targets Faster (Source: Bhowmik et al. [11, 12]).
Finally, at the knowledge inference processing plane (NIPP), global feature processing such as with a convolutional neural network (CNN) is performed. NIPP implementations are described in the next subsection. Because only interesting image regions are processed—not all pixels—speed and power savings advantages are realized. Savings can be up to 40% for images with few salient regions. Because the hardware is parallel, latency is more or less independent of image size.

Detection and Tracking​

Detection DL models such as You Only Look Once (YOLO) [14] and U-Net [15] were tried. Parameter searches over the number of layers, loss function (e.g., Tversky and Focal Tversky), and data augmentation (flipping and rotating images) to increase performance were also conducted.

For tracking, a fully convolutional Siamese network was chosen [16] on the videos derived from the previously mentioned, publicly available dataset. Siamese networks are relatively simple to train and computationally efficient. One input of the twin network starts with the first detection, and the other input is the larger search image. The twin network’s job is to locate the exemplar image inside the search image. In subsequent frames, the first input is updated with subsequent detected targets.

Neuromorphic Computing

Neuromorphic computing offers low SWaP-C alternatives to larger central processing unit (CPU)/GPU systems. DL detection and tracking modules can be run on neuromorphic computers to exploit these advantages. The alternatives are compared in Table 1.

Table 1. Comparison of CPU/GPU and Neuromorphic Compute Platforms. Neuromorphic Platforms Offer Lower SWaP-C (Source for Left Image, Wikimedia; Right, Intel).

Neuromorphic computing has been used in several domains, including computer vision and cybersecurity. In a computer vision project, classification of contraband substances using LWIR hyperspectral images was demonstrated in a variety of situations, including varied backgrounds, temperatures, and purities; both the full-precision GPU model and the BrainChip Akida-compatible model gave promising results [7]. In a cybersecurity project, accurate detection of eight attack classes and one normal class was demonstrated on a highly imbalanced dataset. In a first-of-its-kind test, the same network was run on full-precision GPUs and two neuromorphic offerings: the Intel Loihi 1 hardware and the BrainChip Akida 1000 [17]. Updates have since been made to this work, such as a smaller, more accurate neural network, the use of the physical BrainChip chip (not just a software simulator), and Intel's new DL framework Lava for the Loihi 2 chip [18, 19].

End-to-End System Evaluation

For end-to-end system evaluation, speed was the primary focus; however, power consumption was also estimated based on novel simulations or previous work. Each module had its own metrics, e.g., noise equivalent temperature difference (NETD) for the MEMS microbolometer or intersection over union (IoU) for detection and tracking.

Results

Results are given for the five primary thrusts (scene simulation, MEMS microbolometer, HARP, detection and tracking, and neuromorphic computing) and for the end-to-end system evaluation. The system is estimated to detect targets less than 5 s after a fast-moving object enters the sensor's field of view, and the design would have a dramatically smaller SWaP-C envelope than conventional systems.

Scene Simulation

Images from a publicly available dataset were used for detection and tracking, with separate images for training and testing. Example images are shown in Figures 4a and 4c. These infrared images contain tiny targets in cluttered scenes, making them a good starting point for the current application domain.

Figure 4. Infrared Input Image Regions (a and c) and Salient Regions (b and d) (Black Regions Are Removed by HARP). Image (a) Has a True Positive, and Image (d) Has a True Positive and False Negative (Blue Bounding Boxes Are Accurate Detections, and Orange Is a Missed Region) (Source: github.com/YimianDai/sirst).

MEMS Microbolometer

COMSOL was used to simulate the microbolometer. A 10× improvement in NETD over existing vanadium oxide (VOx) microbolometers was achieved, and response times were approximately 3× shorter. This results from the better thermal isolation of the MEMS microbolometer from its support structures and the unique choice of materials.

HARP

HARP was able to remove uninteresting image regions to decrease detection time. In Figure 4, input images from a publicly available dataset and their salient regions are shown in two pairs. The top pair retains the target (the bright region in the upper left). In the bottom pair, HARP retains one target in the upper left but omits another to the right of the image center. This testing was preliminary; further parameter tuning is necessary for more refined target extraction.

Detection and Tracking

For detection, U-Net worked better than the YOLO networks, probably because the targets were tiny. U-Net detected targets of different sizes against varied backgrounds. Across dozens of images, the average probability of detection was 90% and the probability of false alarm was 12%; further hyperparameter tuning should improve these results.

For tracking, the fully convolutional Siamese network tracked targets of different sizes against varied backgrounds. Across about a dozen videos, the average IoU was 73%; further hyperparameter tuning should improve performance.
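
For reference, the IoU figure quoted above is the overlap area of a predicted and a ground-truth bounding box divided by the area of their union. A minimal sketch, assuming boxes in (x1, y1, x2, y2) format:

def iou(box_a, box_b):
    # Intersection rectangle, clipped to zero when the boxes do not overlap.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# e.g., iou((10, 10, 20, 20), (12, 12, 22, 22)) is about 0.47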

Neuromorphic Computing

In addition to prior work in neuromorphic computer vision to detect contraband materials [7], recent work by Patel et al. [20] implemented U-Net on the Loihi 1 chip. They achieved 2.1× better energy efficiency than GPU versions, with only a 4% drop in mean IoU; however, processing times were ~87× slower. Their code was not released, so their implementation could not be used with the current dataset. The release of the more powerful BrainChip and Intel Loihi chips, combined with other optimizations, should decrease processing times. For example, preliminary pedestrian and vehicle detection experiments were conducted with a YOLO v2 model on the BrainChip AKD1000 chip using red-green-blue color images. On a test set of 100 images from the PASCAL Visual Object Classes dataset, mean average precision dropped by only 4% compared to full-precision GPU models, but processing times were ~3× slower.
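
For context, experiments like the AKD1000 run above typically start from a quantized Keras model converted with BrainChip's MetaTF cnn2snn tools. The sketch below shows that general flow only; the function names follow cnn2snn's documented API, but exact signatures vary by release, and keras_model and images are assumed inputs, not the model or data used in this work.

from cnn2snn import quantize, convert

# Quantize a pre-trained Keras model (assumed to exist) to low bit widths
# compatible with Akida; quantization-aware fine-tuning would normally
# follow this step to recover accuracy.
quantized = quantize(keras_model, weight_quantization=4, activ_quantization=4)

# Convert the quantized model to an Akida-compatible model and run it;
# inference executes on an AKD1000 device when one is mapped.
akida_model = convert(quantized)
predictions = akida_model.predict(images)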

End-to-End System Evaluation

The detection time was estimated at 4 to 5 s once a faraway object entered the sensor's field of view. The following times were included: (1) microbolometer response and readout, estimated at 50 ms via COMSOL; (2) HARP at less than 1 s [12] (estimated at 12.5 ns to 8 µs with an application-specific integrated circuit [ASIC] or field-programmable gate array [FPGA], respectively); (3) detection and tracking at 2 s (the DL algorithms operate at 30 fps or more); and (4) other latencies at 1 s.

For power, the system’s estimated power draw is less than 10 mW for the microbolometer via COMSOL and 2 mW to 7 W for HARP with an ASIC or FPGA, respectively, based on Yoon et al. [10]. For detection and tracking, a tradeoff between power and speed is apparent. For targeting applications, GPUs are a better choice and would draw ~250 W, per Table 1. With a 50% margin of safety, the system would draw ~375 W. For less time-critical applications, neuromorphic processors are adequate and would draw ~1 W. With a 50% margin of safety, the system would draw ~1.5 W.
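
As a sanity check, the latency and power budgets above reduce to simple arithmetic; a minimal sketch using only the figures quoted in this section:

# Latency budget (seconds), per the end-to-end estimate above.
latencies = {
    "microbolometer response and readout": 50e-3,  # COMSOL estimate
    "HARP": 1.0,                                   # upper bound from [12]
    "detection and tracking": 2.0,                 # DL at >= 30 fps
    "other": 1.0,
}
print(sum(latencies.values()))  # 4.05 s, within the 4-5 s estimate

# Power budget with a 50% margin of safety.
margin = 1.5
print(250 * margin)  # GPU-based system: ~375 W
print(1 * margin)    # neuromorphic system: ~1.5 W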

Discussion

Overall, a bioinspired system was presented to autonomously detect tiny, fast-moving objects, such as missiles and aircraft, in infrared imagery. Detecting and tracking such objects is vitally important to several U.S. Department of Defense customers, and as threats continue to evolve, existing systems need updating and/or replacing. Unlike existing systems, the approach here offers the considerable SWaP-C advantages of bioinspired computing at several stages. Civilian applications include tracking launch or reentry vehicles, which is of interest to the National Aeronautics and Space Administration (NASA) as well as private sector companies. The current design encompasses every aspect from capturing the target through optics to the final processor outputting target reports. The system design effort has five main thrusts: (1) scene simulation, (2) a novel, highly sensitive MEMS microbolometer, (3) HARP, a ROIC that uses a unique saliency computation to remove uninteresting image regions, (4) DL detection and tracking algorithms, and (5) neuromorphic computing. The system can detect targets within 5 s of a fast-moving object entering the sensor's field of view. Further enhancements are possible, and some improvements that could be realized in future work are described next.

Scene Simulation

More realistic trajectories, such as those approximated by piecewise polynomial curves or physics-based models, could be used here. Furthermore, the movement of several objects within the same scene could be simulated, which would present a significant challenge to tracking algorithms.
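
As one example, cubic splines are piecewise cubic polynomials, so a spline through a few waypoints gives a simple smooth-trajectory generator of the kind suggested above. The waypoints and timings below are arbitrary illustrations:

import numpy as np
from scipy.interpolate import CubicSpline

knot_times = np.array([0.0, 1.5, 3.0, 5.0])                   # waypoint times (s)
waypoints = np.array([(0, 0), (40, 10), (90, 35), (160, 90)], dtype=float)
trajectory = CubicSpline(knot_times, waypoints)               # piecewise cubic, vector-valued
t = np.linspace(0.0, 5.0, 150)                                # 5 s sampled at 30 fps
path = trajectory(t)                                          # (150, 2) target positions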

MEMS Microbolometer

Simulating and fabricating 10×10 modular pixel array samples with a 20-µm pitch would provide valuable performance characterization information. The geometry of the microbolometer and the fabrication processes can be optimized for performance; for example, thinner, longer legs yield better sensitivity but must remain compliant with manufacturing and operational constraints. Some studies in this area are relevant: Erturk and Akin [21] illustrate absorption as a function of microbolometer thickness, and Dao et al. [9] model resistivity as a function of temperature and fabrication process. Additional fabrication parameters include temperature, curing steps, and deposition speed. Simulation tools like COMSOL can be used to theoretically optimize design characteristics; however, an initial fabrication run is necessary to validate these tools.
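
The leg-geometry trade-off follows from the standard lumped thermal model, in which the time constant is tau = C_th / G_th and responsivity scales roughly as 1/G_th: thinner, longer legs lower the thermal conductance G_th, improving sensitivity but slowing the response. A small illustration with assumed (not measured) values:

# Lumped thermal model of a microbolometer pixel; all values are assumed
# for illustration, not taken from the COMSOL simulations above.
C_th = 1.0e-9                       # heat capacity of the pixel, J/K
for G_th in (2e-7, 1e-7, 5e-8):     # leg thermal conductance, W/K
    tau_ms = 1e3 * C_th / G_th      # thermal time constant in ms
    print("G = %.0e W/K -> tau = %.1f ms, responsivity ~ %.1e" % (G_th, tau_ms, 1.0 / G_th))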

Lastly, even though microbolometer technology provides uncooled infrared thermal detection, microbolometer performance is generally limited by low sensitivity, high noise, slow video speed, and lack of spectral content. Ackerman et al. [22] and Xue et al. [23] have demonstrated fast, sensitive MWIR photodiodes based on mercury telluride (HgTe) colloidal quantum dots that can operate at higher temperatures, including room temperature. With sufficient maturation, these devices could be used as the sensing element in lieu of microbolometers. Cooling the microbolometer is another option for increasing sensitivity.

HARP

Work with FPGAs and simulated ASICs is expected to continue. Furthermore, to better handle dim moving objects, the SLPP can be tuned to be more sensitive to motion than to contrast differences. Because saliency is a weighted sum of several feature vectors, this amounts to weighting the motion features more heavily than the contrast features. This will also help disambiguate targets from clutter, such as slower-moving airplanes or the sun.

Detection and Tracking

Single-frame detection techniques are used here. Alternatively, multiple frames could be leveraged to produce more accurate results, although doing so adds latency. Another strategy is to increase the integration time of the cameras so that targets appear as streaks, which are larger features to detect and track.
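
A quick way to see the streak effect is to sum consecutive frames, which emulates a longer integration time; a toy numpy sketch with a hypothetical single-pixel target:

import numpy as np

frames = np.zeros((10, 64, 64))
for i in range(10):
    frames[i, 20 + i, 20 + 2 * i] = 1.0   # hypothetical target drifting diagonally

# Summing the stack emulates a 10x longer integration time: the moving
# point target appears as a streak, a larger feature to detect and track.
streak = frames.sum(axis=0)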

Neuromorphic Computing

Work continues using BrainChip and Intel products in cybersecurity and other domains. Both companies have released the second generation of their chips, and it will be interesting to see what speed improvements these products bring.

End-to-End System Evaluation

More work would be helpful. Examples include calculating the SNR of the microbolometer; tabulating HARP results across several different images; and characterizing detection performance by scene type (e.g., day vs. night and cloudy vs. clear). For tracking, track length and uncertainty quantification can be explored. Furthermore, the power, speed, and interface elements of the hardware components could be further detailed.

Conclusions

By leveraging prior experience and the current work in synthetic data generation, MEMS devices, specialized ROICs, DL, and neuromorphic systems, this work can continue and achieve new heights in autonomous detection and tracking in MWIR imagery, including detection and tracking of fast-moving pixel and subpixel objects. Bioinspired computing can produce tremendous savings in SWaP-C, as illustrated in Table 1 and described throughout this article.

In the short term, the microbolometer can be tested with pixel array samples, the HARP ROIC can be implemented on an FPGA, and the detection and tracking systems can be implemented on GPUs. In the long term, the microbolometer and ROIC can be bonded together as a flip chip and the GPUs replaced with neuromorphic ASICs. This will yield improvements in power and latency, allowing this system to be deployed on large or small platforms.
Thanks for posting @Tothemoon24!

Is it just me or is anyone else noticing these defense and space related applications are only really singling out two products: BrainChip's AKIDA and Intel's Loihi?

And we seem to be getting the contracts.

Nighty-nights. Sweet dreams and good luck to all!
 


KiKi

Regular
From a layman's perspective, I wish I was a fly on the wall to see the road map of DSIAC regarding their thinking on cybersecurity and whether it entails personal computers, phones, etc. I remember Rob's response to an email last year regarding this, in which he said he couldn't say too much as DSIAC were under NDA working in cybersecurity.
Maybe they will partner with some other large company using their tech and BRN to bring this to personal devices, hopefully soonish. 🤞 🤞
 

Diogenese

Top 20
Thanks for posting @Tothemoon24!

Is it just me or is anyone else noticing these defense and space related applications are only really singling out two products: BrainChip's AKIDA and Intel's Loihi?

And we seem to be getting the contracts.

Nighty-nights. Sweet dreams and good luck to all!
Funnily enough, a lot of Intel's spiking NN patents use a variable oscillator in a rate-based coding arrangement. That means the neurons are "active" in generating the oscillations, which would add to power consumption and latency.

It's almost as if they had not read Edgar Adrian's 1920s paper, which formed the basis of Thorpe's SpikeNet N-of-M coding.
 