BRN Discussion Ongoing

Frangipani

Regular
Below is a link to a tutorial called “Edge AI in Action: Practical Approaches to Developing and Deploying Optimized Models” that several researchers from Jabra/GN Audio and GN Hearing 🇩🇰 gave last week at CVPR (Conference on Computer Vision and Pattern Recognition) 2024 in Seattle.

The slides presented at the conference are all available online, and a video of the tutorial will also be uploaded at some point and might provide interested viewers with even more detailed information, especially as the recording will presumably also cover the Q&A sessions. The tech-savvier among you will surely find those well-designed slides very intriguing!

I skimmed the presentation slides and picked out some of the less technical ones for everyone to enjoy. Although none of the tutorial’s practical applications involved deployment on AKD1000, these Jabra/GN Audio and GN Hearing researchers are definitely aware of Akida, at the very least, as you can tell from the slide titled “Edge AI Hardware”.


[Slide images from the tutorial attached]



(Anuj Datt was until recently a Senior Software Engineer, AI Systems, at GN Audio and now works for Adobe.
Fabricio Batista Narcizo is also a part-time lecturer at the IT University of Copenhagen, where Elizabete Sauthier Munzlinger Narcizo is an industrial PhD student - at Jabra GN, she is exploring ML to identify common hand gestures worldwide.)


[More slide images attached]

Reactions: 33 users

IloveLamp

Top 20

[image attached]

Reactions: 17 users

IloveLamp

Top 20
Reactions: 4 users

Mws

Regular
Reactions: 8 users

Reactions: 2 users

Frangipani

Regular
Has anyone ever come across this Swiss company called Viso.ai? They appear to offer services similar to Edge Impulse, specifically for AI Vision applications.

Anyway, Akida gets a mention in the company’s “Milestones of Neuromorphic Engineering” overview (although they got the year wrong and don’t seem to be aware of Akida Gen 2).


Interesting catalogue of AI Vision applications, too.
Constant surveillance is of course a double-edged sword.

And I can already foresee certain forum members getting all worked up about some very specific use cases listed here… 3…2…1…😉


[Screenshots of the Viso.ai website attached]

Reactions: 15 users

Frangipani

Regular
(…) Oh, and Southwest Research Institute (SwRI) has been collaborating with Intel and experimenting with Loihi for a long time:

[Attached screenshots]


… and the recent posts by Dr. Steve Harbour tagging Intel researchers on LinkedIn strongly suggest they do not intend to end this collaboration any time soon. They have even begun research on developing a neuromorphic camera for flights to Mars. I noticed that Gregory Cohen was tagged as well, so I assume Western Sydney University’s ICNS will also be involved in this project.

[Attached LinkedIn screenshots]

Since we know that SwRI has been collaborating intensively with Intel on applying neuromorphic technology to EW over the past few years, it is pretty obvious which company SwRI research engineer David Brown is referring to in the following interview, when he says they have “a strong research collaboration agreement with a major chip manufacturer who is developing an advanced neuromorphic processor”.

Nevertheless, the interview is a worthwhile read, especially when we keep in mind that at the same time ISL is utilising Akida for cognitive EW, too.




06.25.2024


Neuromorphic Computing for Cognitive Electronic Warfare with SwRI’s David Brown

David Brown is a research engineer in the Defense & Intelligence Solutions Division at Southwest Research Institute where he is the lead engineer for cognitive electronic warfare system research and development.
[David Brown interview title card image]

What is the Southwest Research Institute?

Our tagline is deep sea to deep space. We have multiple applied research programs going on across the spectrum—everything from chemical to petroleum to mechanical engineering to planetary and deep space science. We're a nonprofit independent applied research organization not affiliated with any university. SwRI was started by a philanthropist in the 1940s and was initially focused on research related to the automotive and petroleum industries. We’ve since evolved to focused research across most scientific domains. My group within Southwest Research is specific to defense and intelligence systems. Most of our work is sensitive or classified.

What brought you to the Southwest Research Institute?

I dreamed of flying F-15 fighters when I was a teen. When I enrolled at Georgia Tech as an engineering undergrad, I immediately joined the National Guard because they had an F-15 unit. When I graduated I needed an engineering job while waiting to be picked up to go fly F-15s. Southwest Research Institute hired me directly out of school and a year or two later, I did get picked up to go fly in the National Guard, but the F-15 unit had changed aircraft to the B-1B, so I became a B-1B “wizzo” electronic warfare officer while working with the Southwest Research Institute on the same electronic warfare systems. It was a neat combination—performing research and development for the systems that I was flying into combat.

Southwest Research has always pushed the envelope on electronic warfare (EW) and using the latest technology to improve our processes. In the past few years, technology improved so that our systems started generating more data than could be processed using traditional signal processing methods so our teams started looking at different ways of handling the data using newer technologies. We looked at compressive sensing and several other methodologies to handle that volume of data. We realized we were getting very good at throwing away data in an attempt to hone in on the pieces that were most relevant to us. But, the data that we were throwing away had useful information, we just didn’t have the capacity to process it. A few years ago, we started delving deep into how we could apply Artificial Intelligence (AI) to some EW problems that come from having to handle a large amount of data. That has since become a large internal research program that is now partially funded by the Department of Defense.

What is cognitive electronic warfare?

Let’s start with electronic warfare. It’s a broad spectrum that covers most radio frequency (RF) emitters out there today—we’re talking about things like radar and communication systems as well as everyday devices such as Bluetooth and wireless internet. However, most of my focus is on radar signals. You can think of radar as a technology that is trying to get a picture of the space around itself, and it’s usually used to see aircraft or other vehicles moving within that space. Electronic warfare focuses on ensuring that those radar systems cannot perform their job as intended. We’re on the defensive side of this so if you think about a US Air Force or Navy aircraft flying in a hostile area, for example, our intent is to keep US aircraft safe by making sure the adversary can’t develop a track on that aircraft for a targeting and firing solution.

As for cognitive EW, there are two definitions we could use. One is very loose, and it refers to any AI process that is applied to the EW problem set. But I like a stricter definition where you're looking at a system that can autonomously sense its environment, make decisions based on what it sees within the environment, and then affect the environment based on the decisions that it makes. Maybe it senses the emitters within the environment, makes a decision about what jamming technique to use on those emitters, and then autonomously generates that response and an estimate of its effectiveness so that it can control what it's doing.
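
As a purely illustrative aside, the sense/decide/act/assess loop Brown describes might be sketched like this in Python. Nothing here comes from SwRI; every function name, class and threshold is a made-up placeholder for the concept only.

```python
# Purely illustrative sketch of the sense -> decide -> act -> assess loop
# described above. All names and parameters are hypothetical; nothing in
# this snippet comes from SwRI or any real EW system.

from dataclasses import dataclass
from typing import List


@dataclass
class Emitter:
    """A detected RF emitter, characterised by centre frequency and timing."""
    freq_hz: float
    pulse_interval_s: float


def sense_environment() -> List[Emitter]:
    """Placeholder: return the emitters currently observed in the RF scene."""
    return [Emitter(freq_hz=9.5e9, pulse_interval_s=1e-3)]


def choose_technique(emitter: Emitter) -> str:
    """Placeholder decision step: pick a countermeasure for one emitter."""
    return "noise" if emitter.pulse_interval_s > 5e-4 else "deception"


def apply_technique(emitter: Emitter, technique: str) -> None:
    """Placeholder action step: task the jammer with the chosen technique."""
    print(f"Jamming {emitter.freq_hz / 1e9:.1f} GHz with {technique} technique")


def estimate_effectiveness(emitter: Emitter) -> float:
    """Placeholder assessment step: how well did the response work (0..1)?"""
    return 0.8


def cognitive_ew_loop(max_cycles: int = 3) -> None:
    """One sense/decide/act/assess cycle per detected emitter."""
    for _ in range(max_cycles):
        for emitter in sense_environment():
            technique = choose_technique(emitter)
            apply_technique(emitter, technique)
            if estimate_effectiveness(emitter) < 0.5:
                # Feedback path: a poor result would trigger a different
                # decision on the next cycle in a real cognitive system.
                print("Low effectiveness, revisiting technique next cycle")


if __name__ == "__main__":
    cognitive_ew_loop()
```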


You’ve been researching and using electronic warfare systems for decades. What’s changed over the past 10-20 years that has led to the need for cognitive systems?

If you go back to the 1980s, the technology that we had was very constrained. If we wanted to update a radar from one frequency to another or change the operating parameters such as the timing that it uses it was a lengthy process that might have taken our adversaries years. They had to develop new hardware, that hardware had to be tested, and you had very little digital control and processing capability within the system. And during that process, as our adversaries were developing a new system, our intelligence community would collect information on what they were doing, how they were radiating, what technologies were being employed, and so we'd have some indication years in advance that they were developing a new threat system that we had to react to. But our systems were constrained by the same technology problem and lengthy development process. Well, over time the acquisition process for these systems started speeding up. In addition, new hardware technologies have significant performance margins relative to the control process, meaning there is significant flexibility in digital software control. Thus, hardware isn't changing nearly as fast as our ability to control it with software. So, if you can imagine that an adversary is using a radar in a particular mode—using particular timing and modulation forms—and they detect we have a way of responding to that operating mode to introduce error into their system, they can very quickly—within weeks, days, or hours—change what they're doing. We must keep up with that rate of change and in many cases, the only viable way to respond to highly adaptive threat systems is to incorporate cognitive processes into our EW system. In addition, the idea with cognitive radar is that the radar system itself can sense changes in the RF environment and cognitively change how the radar is operating. Our adversaries are continuously updating how they send out their pulses to adapt to the dynamic environment, and we have to be faster at updating on our end to predict what they’re going to do next and determine the right response.

Earlier this year SwRI was awarded a nearly $6.5M contract from the USAF to do R&D work on cognitive electronic warfare systems. What are you researching as part of this contract?

We have a strong research collaboration agreement with a major chip manufacturer who is developing an advanced neuromorphic processor. One of our distinctions is that we’re one of the few research organizations that has research agreements on using neuromorphic processing for cognitive EW. There are a few places in the US working on the general problem of applying AI techniques to this domain, but what makes our work stand out, I think, is that we don't approach neuromorphic computing for cognitive electronic warfare as an AI problem first. Instead, we approached it as an EW problem and then looked for the best tool available to solve our challenges. One of the newer technologies that is available is neuromorphic processing with spiking neural networks. As we apply that technology, we’re seeing incredible improvements and advantages within the electronic warfare context.

What makes neuromorphic architectures such a good solution for cognitive electronic warfare?

Neuromorphic architectures are much more generalizable than more traditional AI approaches. Radar signal data is very messy, which creates challenges around training the AI to make inferences on radar data. We started at the I/Q level, which is a direct measurement of the RF spectrum itself. That is the most information-rich form of data, but it’s also the messiest and most complex. With traditional signal processing techniques, we smooth I/Q data using transforms so that we can pull out features that are understandable to humans, but in that process, we’re actually obscuring data that’s in the I/Q stream. When we apply AI to that original I/Q data using traditional methods, we can make it work on a specific set of I/Q data, but it doesn’t generalize over various operating and environmental conditions. It’s more specific to one scenario. We find that it generalizes much better with spiking neural networks. We’re able to use it over a wider operating envelope and apply AI to more cases than we can with a traditional method.
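
As an aside for the technically curious: one common way to turn raw I/Q samples into input for a spiking neural network is delta-modulation encoding, where a spike fires whenever the signal moves by more than a threshold. The interview does not say which encoding SwRI actually uses, so the sketch below is only a generic illustration of the idea.

```python
# Generic illustration of delta-modulation spike encoding of an I/Q stream.
# This is NOT SwRI's method (the interview does not describe one); it is just
# one standard way to turn a continuous signal into spikes for an SNN.

import numpy as np


def delta_encode(signal: np.ndarray, threshold: float) -> np.ndarray:
    """Emit +1/-1 spikes whenever the signal moves more than `threshold`
    away from the last reconstructed level; 0 otherwise."""
    spikes = np.zeros_like(signal, dtype=np.int8)
    level = signal[0]
    for i, x in enumerate(signal):
        if x - level > threshold:
            spikes[i] = 1
            level += threshold
        elif level - x > threshold:
            spikes[i] = -1
            level -= threshold
    return spikes


# Toy I/Q stream: a pulsed carrier in noise (all numbers are arbitrary).
fs = 1e6                         # sample rate, Hz
t = np.arange(0, 1e-3, 1 / fs)   # 1 ms of samples
iq = np.exp(2j * np.pi * 50e3 * t) * (t > 0.4e-3) + 0.05 * (
    np.random.randn(t.size) + 1j * np.random.randn(t.size)
)

spikes_i = delta_encode(iq.real, threshold=0.2)
spikes_q = delta_encode(iq.imag, threshold=0.2)
print("I-channel spikes:", int(np.abs(spikes_i).sum()),
      "Q-channel spikes:", int(np.abs(spikes_q).sum()))
```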

The second thing is that it uses much less power than traditional AI approaches. A lot of our EW systems are implemented on aircraft or small autonomous vehicles that have very limited power. In some of our tests, we are seeing performance out of our neuromorphic processor that’s equivalent to a bank of GPUs. Each of those GPUs is pulling quite a bit of power, but they also have to be cooled and that’s actually often the bigger challenge. But when we go to a neuromorphic processor, we’re using at least 3 orders of magnitude less power so we’re able to put these processors in a much more constrained environment.
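
To put "at least 3 orders of magnitude" into rough numbers, here is a back-of-the-envelope comparison; the GPU count and per-GPU draw are my own assumptions, not figures from the interview.

```python
# Back-of-the-envelope only: the GPU count and per-GPU draw below are assumed
# values for illustration, not numbers quoted by SwRI.
gpus = 4
watts_per_gpu = 300                           # assumed draw of one data-centre GPU
gpu_bank_watts = gpus * watts_per_gpu         # ~1200 W for the whole bank
neuromorphic_watts = gpu_bank_watts / 1000    # "3 orders of magnitude" less
print(f"GPU bank ~{gpu_bank_watts} W vs neuromorphic ~{neuromorphic_watts} W")
```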

What are the biggest challenges with using neuromorphic computing for cognitive electronic warfare?

One of the biggest challenges is obtaining suitable data to train the AI. We are constantly running into that issue. One way of mitigating this challenge is to augment real data with synthetic data. The problem with the real data taken from a radar is that the data was only collected in a limited number of environmental and operational conditions. There are only so many radar systems I can collect data from, and it turns out that most of our adversaries are not very cooperative when we do our testing. But with synthetic data, I can generate an almost unlimited number of scenario combinations and represent a wide range of environments and operating characteristics. The challenge is that synthetic data tends to be mathematically precise, which essentially allows the AI to pick up on features that aren't available in a real data set. So, the AI is looking at precision that's not there when we are flying in a real environment. The hardest problem that we’re working on solving right now with the DoD is ensuring that our algorithms are not biased by that synthetic data so we can graduate the algorithm to a real data set in an operational environment.
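
A common mitigation for this "too clean" synthetic-data problem is to inject realistic impairments (receiver noise, phase jitter, carrier-frequency offset) before training, so the model cannot key on mathematically perfect features. Again, this is a generic illustration, not SwRI's actual pipeline; the impairment types and magnitudes are assumptions.

```python
# Generic illustration of roughing up synthetic I/Q data so a model cannot
# key on mathematically perfect features. Not SwRI's pipeline; the impairment
# types and magnitudes below are assumptions for illustration only.

import numpy as np


def impair(iq: np.ndarray, snr_db: float = 15.0, rng=None) -> np.ndarray:
    """Add receiver-like impairments to a clean synthetic I/Q vector."""
    rng = np.random.default_rng() if rng is None else rng

    # Additive white Gaussian noise at the requested SNR.
    sig_power = np.mean(np.abs(iq) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(iq.size) + 1j * rng.standard_normal(iq.size)
    )

    # Small random carrier-frequency offset and phase jitter.
    n = np.arange(iq.size)
    cfo = np.exp(2j * np.pi * rng.uniform(-1e-4, 1e-4) * n)
    jitter = np.exp(1j * 0.02 * rng.standard_normal(iq.size))

    return iq * cfo * jitter + noise


clean = np.exp(2j * np.pi * 0.05 * np.arange(1024))   # a "perfect" synthetic tone
augmented = impair(clean, snr_db=12.0)
```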

What does the future of electronic warfare look like to you?

What I'm seeing is the use of AI within specific domains. AI has largely been a domain unto itself where you had AI experts that did not necessarily understand the application domain such as EW. Now we're growing in AI knowledge and the AI experts are growing in the EW domain knowledge. We're seeing a real acceleration of what we're realizing we can do with this technology. For decades we’ve had these paradigms where we’re used to solving problems in a traditional linear way using non-AI methods, but I would expect that over the next five to ten years, you're going to see an acceleration as AI becomes more ubiquitous in the domain. Ten years from now people who are starting work in the electronic warfare domain are just going to understand these systems intrinsically. Forty years ago we would say we’re going to use a “digital controller” on an electronic warfare system. Now nobody says they’re going to use a digital controller; it’s just assumed. Similarly, in ten years, it’s going to be assumed that you have cognitive in your electronic warfare systems.





 
Reactions: 20 users

IloveLamp

Top 20
Reactions: 10 users

IloveLamp

Top 20
Reactions: 35 users

Frangipani

Regular



UPDATED 09:00 EDT / JUNE 27 2024
AI

Innatera books $21M in funding for its ultra-low-power AI chips

BY PAUL GILLIN

Netherlands-based microprocessor maker Innatera Nanosystems B.V. said it closed an oversubscribed $21 million Series A funding round, which included a $16 million investment the company announced in March and an additional $5 million from new investors.

The company’s Spiking Neural Processor T1, unveiled in January, is an energy-efficient artificial intelligence chip for sensor-edge applications. It incorporates a proprietary event-driven computing engine, a convolutional neural network accelerator and a RISC-V central processing unit (pictured) for running ultra-low-power AI applications on battery-powered devices.

Innatera, which was spun out of the Delft University of Technology in 2018, says it’s filling a gap in the market for AI-powered devices that require human-machine interaction. Its chip is “basically a brain-inspired processor that enables turnkey intelligence in applications where power is limited,” Chief Executive Officer Sumeet Kumar told SiliconANGLE in an interview. “It essentially allows you to analyze sensor data in real time by simulating how your brain recognizes patterns of interest.”

The company says the Spiking Neural Processor enables high-performance pattern recognition of images and spoken words at the sensor edge with submilliwatt power consumption and submillisecond latency. It claims its chips consume 500 times less energy and are 100 times faster than conventional microprocessors.

Always-on operation

The analog-mixed signal neuromorphic architecture allows for the always-on operation needed in applications like security cameras and listening devices within a narrow power envelope. The processors can be used as a dedicated sensor-handling engine that allows functions such as conditioning, filtering and classification to be offloaded from a central processor or sent to the cloud.
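
The offload pattern described here (a tiny always-on model gating the wake-up of the main processor or the cloud path) could be sketched roughly as below. Every function name is invented for illustration; this is not Innatera's API.

```python
# Illustrative sketch of the "always-on sensor-edge gate" pattern described
# above. Every function here is hypothetical; this is not Innatera's API.

import os
import time


def read_sensor_frame() -> bytes:
    """Placeholder: grab one low-resolution frame or audio window."""
    return os.urandom(64)          # stand-in for real sensor data


def tiny_classifier(frame: bytes) -> float:
    """Placeholder for the sub-milliwatt always-on model; returns a score."""
    return sum(frame) / (255 * len(frame))


def wake_main_processor(frame: bytes) -> None:
    """Placeholder: hand the event to the application processor (or cloud)."""
    print("Event detected, waking main processor for full analysis")


WAKE_THRESHOLD = 0.55              # arbitrary score above which we wake up

for _ in range(20):                # bounded loop for the sketch
    frame = read_sensor_frame()
    if tiny_classifier(frame) > WAKE_THRESHOLD:
        wake_main_processor(frame)
    time.sleep(0.05)               # the always-on path idles at low duty cycle
```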

“We tend not to focus on applications that require large format image processing,” Kumar said. “We are most useful when there is event data inside the data stream, or there is something temporal such as radar, low-resolution images, cameras and sensors.” A typical use case, he said, is a video doorbell that needs to be constantly awake but run on a rechargeable battery.

“It’s basically a neural network that understands time,” Kumar said. “By implementing this sort of computation using analog circuits and mimicking the brain’s algorithms for pattern recognition, we came up with a solution that is about 10,000 times more efficient at detecting patterns and sensor data compared to traditional microcontrollers.”

Similar to a field programmable gate array, he said, “it consists of computational elements whose connectivity and parameters can be programmed at runtime. It can flexibly implement any neural network that you can train on your desktop.”

AI framework support

As a microcontroller, the processor has no operating system, but Innatera has a software development kit and firmware that runs applications built in PyTorch, with support for additional AI frameworks planned. “You build a new training model in a familiar framework, and then once that model is trained, you can map it onto the chip without having to understand any of what goes on inside the chip,” Kumar said.
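
The "train in PyTorch, then map the trained model onto the chip" workflow might look roughly like the sketch below. The training half is standard PyTorch; the final mapping call is a hypothetical placeholder, since the article does not describe Innatera's actual SDK.

```python
# Standard PyTorch training of a small model, followed by a *hypothetical*
# deployment step. `innatera_sdk.map_to_chip` is invented for illustration;
# the article does not describe the real SDK API.

import torch
import torch.nn as nn

# A tiny pattern-recognition model trained on the desktop.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 64)           # stand-in sensor feature windows
y = torch.randint(0, 4, (256,))    # stand-in labels

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "model.pt")

# Hypothetical deployment step (placeholder name only, not a documented call):
# import innatera_sdk
# innatera_sdk.map_to_chip("model.pt")
```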

The processors, the result of six generations of silicon design, are expected to ship in limited volume by the end of the year and at full volume in 2025.
Innatera employs about 75 people today and built its first processors with less than $5 million of investment. “We’ve been tremendously capital-efficient,” Kumar said.

The company plans to use the funding to get its first product into large-scale production in 2025 and expand marketing and sales. A Series B funding round is planned within the next year.


The Series A extension was led by Innavest and Invest-NL N.V., who joined existing Series A investors, which included the European Commission’s EIC Fund, MIG Capital LLC, Matterwave Ventures Management GmbH and Delft Enterprises B.V.

Image: Innatera

 
Reactions: 11 users

Labsy

Regular
Good morning everyone.... It's going to be a cracker next couple of weeks .... Buckle up.
Just a blatant up-ramp... Or is it? ;)
Time will tell of course.
 
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!



[Quoting Frangipani’s post above: “Innatera books $21M in funding for its ultra-low-power AI chips”]



Innatera's T1 has no self-learning capabilities probably because it doesn't have enough neurons and synapses.





[Screenshots attached]
Reactions: 21 users

TheDrooben

Pretty Pretty Pretty Pretty Good

"Qualcomm's end goal for its AI technology is what it calls "embodied AI," which involves the complete integration of machine learning, multimodal AI, and LLM technology into a hybrid, always-on AI that is infused into every aspect of a device's capabilities."


We are absolutely in the right space at the right time

Happy as Larry
 
Reactions: 21 users

FiveBucks

Member
BRCHF up 10% overnight :unsure:
 
Reactions: 15 users

DK6161

Regular
BRCHF up 10% overnight :unsure:
Which is equivalent to 22 ozzy cents mate.
Go back to bed.
 
Reactions: 1 user