BRN Discussion Ongoing

IloveLamp

Top 20
1000016679.jpg
 
  • Like
  • Fire
  • Love
Reactions: 14 users

davidfitz

Regular
I wonder if the 2 nodes that Renesas licensed from us are finally being used? Too much tech talk for me, but interesting anyway.


1719472597053.png



1719472623403.png

 
  • Like
  • Wow
  • Thinking
Reactions: 10 users

wilzy123

Founding Member
I blinked and someone took out 2mil at 22 just like that

Yep. I am sure it's significant and that all of the trade over the past few weeks is a legitimately accurate representation of sentiment.

4a2e68e0813de0788848dab6c3443c12.gif
 
  • Haha
  • Like
Reactions: 2 users
Everything I am hearing here sounds like BRN. I hope we are involved, as this partnership covers the edge in high-performance and lower-power devices across the board.
What does the term "synthetic" they use here mean, and could we be involved? And why is he wearing sunglasses, is this a covert operation!
How can they use our product and not sign an IP licence? Hello?
 
1719482232544.gif
 
  • Haha
  • Like
Reactions: 3 users

mrgds

Regular
Everything I am hearing here sounds like BRN. I hope we are involved, as this partnership covers the edge in high-performance and lower-power devices across the board.
What does the term "synthetic" they use here mean, and could we be involved? And why is he wearing sunglasses, is this a covert operation!
Synthetic data is data that is "made up" or fabricated, as opposed to real data, i.e. video/speech etc.
 
  • Like
Reactions: 2 users

Guzzi62

Regular
I wonder if the 2 nodes that Renesas licensed from us are finally being used? Too much tech talk for me, but interesting anyway.


View attachment 65559


View attachment 65560
I've been through all of Renesas's partners and BRN is not even mentioned?

I am not skilled enough to read and understand the white paper.

 
  • Thinking
  • Like
Reactions: 3 users

IloveLamp

Top 20
I've been through all of Renesas's partners and BRN is not even mentioned?

I am not skilled enough to read and understand the white paper.

That's because BRN isn't a partner. They licensed our IP, and as in most cases, it is in their best interests not to mention us, to get a leg up on the competition.

Imo, dyor.
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Frangipani

Top 20
Below is a link to a tutorial called “Edge AI in Action: Practical Approaches to Developing and Deploying Optimized Models” that several researchers from Jabra / GN Audio and GN Hearing 🇩🇰 gave last week at CVPR (Conference on Computer Vision and Pattern Recognition) 2024 in Seattle.

The slides presented at the conference are all available online, and a video of the tutorial will also be uploaded at some point and might provide interested viewers with even more detailed information, especially as the recording will presumably also cover the Q&A sessions. The tech-savvier among you will surely find those well-designed slides very intriguing!

I skimmed the presentation slides and picked out some less technical ones for everyone to enjoy. Although none of the tutorial’s practical applications involved any deployment on AKD1000, these Jabra / GN Audio and GN Hearing researchers are definitely aware of Akida, at the very least, as you can tell from the slide titled Edge AI Hardware.


83CB9E24-FC22-4A20-883B-96F9A62F292E.jpeg

F6387C70-DDE2-443A-88AB-0028BE95031D.jpeg


C1579928-B582-4B97-AA10-29802B95BAAA.jpeg




6F482C43-9F3D-4BE6-8D62-76E39BC5A867.jpeg



478AB281-A2A1-4370-A975-3746BDA64074.jpeg



(Anuj Datt used to be a Senior Software Engineer AI Systems with GN Audio until recently and now works for Adobe.
Fabricio Batista Narcizo is also a part-time lecturer at the IT University of Copenhagen, where Elizabete Sauthier Munzlinger Narcizo is an industrial PhD student - at Jabra GN, she is exploring ML to identify common hand gestures worldwide).


E5A701BB-1378-40D4-A555-4FAD5F352F02.jpeg

6558B818-3B8E-4818-97EB-9D566D2A0DA6.jpeg



265F5D1D-62A6-42EE-B6F6-92B8450BF861.jpeg


1322A929-138E-484C-8524-C7E6EE004DC6.jpeg
 

Attachments

  • 4A261D01-0294-4595-94EF-2D010CE2642B.jpeg
  • Like
  • Fire
  • Love
Reactions: 34 users

IloveLamp

Top 20

1000016684.jpg
 
  • Like
  • Fire
  • Love
Reactions: 17 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 4 users
Still here @Iseki, @DK6161, @itsol4605? We are missing your amazing contributions.
I guess we won't see them again until the SP is a fair bit higher, so they can restart their short campaign.
 
  • Like
Reactions: 2 users

Frangipani

Top 20
Has anyone ever come across this Swiss company called Viso.ai? Looks like they offer services similar to Edge Impulse, specifically for AI Vision applications?

Anyway, Akida gets a mention in the company’s “Milestones of Neuromorphic Engineering” overview (although they got the year wrong and don’t seem to be aware of Akida Gen 2).


Interesting catalogue of AI Vision applications, too.
Constant surveillance is of course a double-edged sword.

And I can already foresee certain forum members getting all worked up about some very specific use cases listed here… 3…2…1…😉


856B03DC-6233-4606-ABF5-9E44E54BC805.jpeg



(…)

F64C7958-C2D7-4AE2-8EAE-FB45813AC911.jpeg

F7465B71-EFF3-46E9-A563-0E5EB15D17A3.jpeg


14950A22-8237-4905-9AF6-D66FE6DDAECA.jpeg



165A784B-67C8-4172-87D4-19AFBEB61774.jpeg
7E73D282-EB3B-4593-8D6A-08FDDB748F31.jpeg

CEF60B2D-0049-4CA5-B297-87842FAB1F3E.jpeg
D696600E-DD42-4F3C-88B6-176C720C112A.jpeg

D810E3E5-93C8-4AAE-A288-19901A57C4A8.jpeg
 

Attachments

  • 0F867E24-022A-4924-9D11-37E14B45910A.jpeg
  • Like
  • Love
  • Fire
Reactions: 17 users

Frangipani

Top 20
(…) Oh, and Southwest Research Institute (SwRI) has been collaborating with Intel and experimenting with Loihi for a long time:

View attachment 60288

View attachment 60289


… and the recent posts by Dr. Steve Harbour tagging Intel researchers on LinkedIn strongly suggest they do not intend to end this collaboration any time soon. They have even begun research on developing a neuromorphic camera for flights to Mars. I noticed that Gregory Cohen was tagged as well, so I assume Western Sydney University’s ICNS will also be involved in this project.

View attachment 60290


View attachment 60293

Since we know that SwRI has been collaborating intensively with Intel on applying neuromorphic technology to EW over the past few years, it is pretty obvious which company SwRI research engineer David Brown is referring to in the following interview, when he says they have “a strong research collaboration agreement with a major chip manufacturer who is developing an advanced neuromorphic processor”.

Nevertheless, the interview is a worthwhile read, especially when we keep in mind that at the same time ISL is utilising Akida for cognitive EW, too.




06.25.2024


Neuromorphic Computing for Cognitive Electronic Warfare with SwRI’s David Brown

David Brown is a research engineer in the Defense & Intelligence Solutions Division at Southwest Research Institute where he is the lead engineer for cognitive electronic warfare system research and development.
667b00c9effa5e91a4a5cc5f_David%20Brown%20Interview%20Title%20Card_Blog%20Post.png

What is the Southwest Research Institute?

Our tagline is deep sea to deep space. We have multiple applied research programs going on across the spectrum—everything from chemical to petroleum to mechanical engineering to planetary and deep space science. We're a nonprofit independent applied research organization not affiliated with any university. SwRI was started by a philanthropist in the 1940s and was initially focused on research related to the automotive and petroleum industries. We've since evolved to focus our research across most scientific domains. My group within Southwest Research is specific to defense and intelligence systems. Most of our work is sensitive or classified.

What brought you to the Southwest Research Institute?

I dreamed of flying F-15 fighters when I was a teen. When I enrolled at Georgia Tech as an engineering undergrad, I immediately joined the National Guard because they had an F-15 unit. When I graduated I needed an engineering job while waiting to be picked up to go fly F-15s. Southwest Research Institute hired me directly out of school and a year or two later, I did get picked up to go fly in the National Guard, but the F-15 unit had changed aircraft to the B-1B, so I became a B-1B “wizzo” electronic warfare officer while working with the Southwest Research Institute on the same electronic warfare systems. It was a neat combination—performing research and development for the systems that I was flying into combat.

Southwest Research has always pushed the envelope on electronic warfare (EW) and using the latest technology to improve our processes. In the past few years, technology improved so that our systems started generating more data than could be processed using traditional signal processing methods, so our teams started looking at different ways of handling the data using newer technologies. We looked at compressive sensing and several other methodologies to handle that volume of data. We realized we were getting very good at throwing away data in an attempt to home in on the pieces that were most relevant to us. But the data that we were throwing away had useful information; we just didn’t have the capacity to process it. A few years ago, we started delving deep into how we could apply Artificial Intelligence (AI) to some EW problems that come from having to handle a large amount of data. That has since become a large internal research program that is now partially funded by the Department of Defense.

What is cognitive electronic warfare?

Let’s start with electronic warfare. It’s a broad spectrum that covers most radio frequency (RF) emitters out there today—we’re talking about things like radar and communication systems as well as everyday devices such as Bluetooth and wireless internet. However, most of my focus is on radar signals. You can think of radar as a technology that is trying to get a picture of the space around itself, and it’s usually used to see aircraft or other vehicles moving within that space. Electronic warfare focuses on ensuring that those radar systems cannot perform their job as intended. We’re on the defensive side of this so if you think about a US Air Force or Navy aircraft flying in a hostile area, for example, our intent is to keep US aircraft safe by making sure the adversary can’t develop a track on that aircraft for a targeting and firing solution.

As for cognitive EW, there are two definitions we could use. One is very loose, and it refers to any AI process that is applied to the EW problem set. But I like a stricter definition where you're looking at a system that can autonomously sense its environment, make decisions based on what it sees within the environment, and then affect the environment based on the decisions that it makes. Maybe it senses the emitters within the environment, makes a decision about what jamming technique to use on those emitters, and then autonomously generates that response and an estimate of its effectiveness so that it can control what it's doing.
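Brown's stricter definition describes a closed sense → decide → act → assess loop. Here is a minimal sketch of that loop in Python; every detail in it (the emitter fields, the technique names, the fixed effectiveness estimate) is invented purely for illustration, not taken from SwRI's actual system:

```python
# Toy sense/decide/act loop for the stricter definition of cognitive EW.
# All emitter attributes and technique names below are made up.

def sense(environment):
    """Detect which emitters are currently radiating."""
    return [e for e in environment["emitters"] if e["active"]]

def decide(emitters):
    """Pick a (hypothetical) jamming technique for each detected emitter."""
    return {e["id"]: "noise_jam" if e["band"] == "X" else "deception_jam"
            for e in emitters}

def act(decisions):
    """Apply each technique and return an effectiveness estimate per emitter,
    which the system would feed back into its next sensing pass."""
    return {emitter_id: 0.9 for emitter_id in decisions}  # placeholder estimate

env = {"emitters": [{"id": 1, "band": "X", "active": True},
                    {"id": 2, "band": "S", "active": False}]}
effect = act(decide(sense(env)))   # only the active X-band emitter is engaged
```

In a real cognitive system, the effectiveness estimate would of course come from re-sensing the environment rather than a constant, closing the loop Brown describes.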


You’ve been researching and using electronic warfare systems for decades. What’s changed over the past 10-20 years that has led to the need for cognitive systems?

If you go back to the 1980s, the technology that we had was very constrained. If we wanted to update a radar from one frequency to another or change the operating parameters, such as the timing that it uses, it was a lengthy process that might have taken our adversaries years. They had to develop new hardware, that hardware had to be tested, and you had very little digital control and processing capability within the system. And during that process, as our adversaries were developing a new system, our intelligence community would collect information on what they were doing, how they were radiating, what technologies were being employed, and so we'd have some indication years in advance that they were developing a new threat system that we had to react to. But our systems were constrained by the same technology problem and lengthy development process. Well, over time the acquisition process for these systems started speeding up. In addition, new hardware technologies have significant performance margins relative to the control process, meaning there is significant flexibility in digital software control. Thus, hardware isn't changing nearly as fast as our ability to control it with software. So, if you can imagine that an adversary is using a radar in a particular mode—using particular timing and modulation forms—and they detect we have a way of responding to that operating mode to introduce error into their system, they can very quickly—within weeks, days, or hours—change what they're doing. We must keep up with that rate of change and in many cases, the only viable way to respond to highly adaptive threat systems is to incorporate cognitive processes into our EW system. In addition, the idea with cognitive radar is that the radar system itself can sense changes in the RF environment and cognitively change how the radar is operating.
Our adversaries are continuously updating how they send out their pulses to adapt to the dynamic environment, and we have to be faster at updating on our end to predict what they’re going to do next and determine the right response.

Earlier this year SwRI was awarded a nearly $6.5M contract from the USAF to do R&D work on cognitive electronic warfare systems. What are you researching as part of this contract?

We have a strong research collaboration agreement with a major chip manufacturer who is developing an advanced neuromorphic processor. One of our distinctions is that we’re one of the few research organizations that has research agreements on using neuromorphic processing for cognitive EW. There are a few places in the US working on the general problem of applying AI techniques to this domain, but what makes our work stand out, I think, is that we don't approach neuromorphic computing for cognitive electronic warfare as an AI problem first. Instead, we approached it as an EW problem and then looked for the best tool available to solve our challenges. One of the newer technologies that is available is neuromorphic processing with spiking neural networks. As we apply that technology, we’re seeing incredible improvements and advantages within the electronic warfare context.

What makes neuromorphic architectures such a good solution for cognitive electronic warfare?

Neuromorphic architectures are much more generalizable than more traditional AI approaches. Radar signal data is very messy, which creates challenges around training the AI to make inferences on radar data. We started at the I/Q level, which is a direct measurement of the RF spectrum itself. That is the most information-rich form of data, but it’s also the messiest and most complex. With traditional signal processing techniques, we smooth I/Q data using transforms so that we can pull out features that are understandable to humans, but in that process, we’re actually obscuring data that’s in the I/Q stream. When we apply AI to that original I/Q data using traditional methods, we can make it work on a specific set of I/Q data, but it doesn’t generalize over various operating and environmental conditions. It’s more specific to one scenario. We find that it generalizes much better with spiking neural networks. We’re able to use it over a wider operating envelope and apply AI to more cases than we can with a traditional method.
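The point about transforms obscuring information can be seen in a few lines of NumPy: taking the magnitude spectrum of an I/Q stream keeps the frequency peak a human can read, but discards the phase of every sample. This is only a toy sketch; the sample rate, tone frequency, and noise level are invented for the example:

```python
import numpy as np

np.random.seed(0)
fs = 1_000_000                                   # 1 MHz sample rate (invented)
n = 1024
t = np.arange(n) / fs
iq = np.exp(2j * np.pi * 100_000 * t)            # a 100 kHz tone as raw I/Q
iq += 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))   # receiver noise

# A typical "smoothing" step: magnitude spectrum via the FFT.
mag = np.abs(np.fft.fft(iq))
peak_bin = int(np.argmax(mag))   # the human-readable frequency content survives

# ...but the per-sample phase is gone: `mag` alone cannot reconstruct `iq`,
# which is exactly the kind of information loss described above.
```

The tone lands near FFT bin 102 (100 kHz × 1024 / 1 MHz), so the peak survives the transform even with noise, while everything encoded in the phase does not.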

The second thing is that it uses much less power than traditional AI approaches. A lot of our EW systems are implemented on aircraft or small autonomous vehicles that have very limited power. In some of our tests, we are seeing performance out of our neuromorphic processor that’s equivalent to a bank of GPUs. Each of those GPUs is pulling quite a bit of power, but they also have to be cooled and that’s actually often the bigger challenge. But when we go to a neuromorphic processor, we’re using at least 3 orders of magnitude less power so we’re able to put these processors in a much more constrained environment.

What are the biggest challenges with using neuromorphic computing for cognitive electronic warfare?

One of the biggest challenges is obtaining suitable data to train the AI. We are constantly running into that issue. One way of mitigating this challenge is to augment real data with synthetic data. The problem with the real data taken from a radar is that the data was only collected in a limited number of environmental and operational conditions. There are only so many radar systems I can collect data from, and it turns out that most of our adversaries are not very cooperative when we do our testing. But with synthetic data, I can generate an almost unlimited number of scenario combinations and represent a wide range of environments and operating characteristics. The challenge is that synthetic data tends to be mathematically precise, which essentially allows the AI to pick up on features that aren't available in a real data set. So, the AI is looking at precision that's not there when we are flying in a real environment. The hardest problem that we’re working on solving right now with the DoD is ensuring that our algorithms are not biased by that synthetic data so we can graduate the algorithm to a real data set in an operational environment.
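One common way to address the precision bias Brown describes is to deliberately degrade synthetic samples, e.g. with timing jitter, amplitude ripple, and receiver noise, so a model cannot key on mathematically exact features that real collections never show. A hedged NumPy sketch of that idea, with all pulse parameters and noise levels invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_pulse(n=256, f=0.1):
    """Ideal complex pulse of a pure tone — unrealistically clean."""
    t = np.arange(n)
    return np.exp(2j * np.pi * f * t)

def degraded_pulse(n=256, f=0.1):
    """The same pulse with timing jitter, amplitude ripple, and additive
    noise, so a model trained on it cannot exploit perfect precision."""
    t = np.arange(n) + rng.normal(0, 0.05, n)    # sample-timing jitter
    amp = 1.0 + rng.normal(0, 0.02, n)           # amplitude ripple
    noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return amp * np.exp(2j * np.pi * f * t) + noise

clean = synthetic_pulse()
real_ish = degraded_pulse()
```

Whether such degradations actually match a real operating environment is the hard part; as the interview notes, validating that the model graduates cleanly to real data is the open problem.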

What does the future of electronic warfare look like to you?

What I'm seeing is the use of AI within specific domains. AI has largely been a domain unto itself where you had AI experts that did not necessarily understand the application domain such as EW. Now we're growing in AI knowledge and the AI experts are growing in the EW domain knowledge. We're seeing a real acceleration of what we're realizing we can do with this technology. For decades we’ve had these paradigms where we’re used to solving problems in a traditional linear way using non-AI methods, but I would expect that over the next five to ten years, you're going to see an acceleration as AI becomes more ubiquitous in the domain. Ten years from now people who are starting work in the electronic warfare domain are just going to understand these systems intrinsically. Forty years ago we would say we’re going to use a “digital controller” on an electronic warfare system. Now nobody says they’re going to use a digital controller; it’s just assumed. Similarly, in ten years, it’s going to be assumed that you have cognitive capabilities in your electronic warfare systems.




The Deep Tech Agency.

HAUS is a strategic communications agency in NYC. We specialize in marketing and public relations for deep tech startups. Check out our website, follow us on Twitter, or say hello@hausb.io

 
  • Like
  • Love
  • Fire
Reactions: 21 users

IloveLamp

Top 20
  • Like
  • Love
  • Thinking
Reactions: 11 users