BRN Discussion Ongoing

Tothemoon24

Top 20





The Defense Department has long used artificial intelligence to detect objects in battlespaces, but the capability has been mainly limited to identification. New advancements in AI and data analysis can offer leaders new levels of mission awareness with insights into intent, path predictions, abnormalities, and other revealing characterizations.
The DoD has an extensive wealth of data. In today’s sensor-filled theaters, commanders can access text, images, video, radio signals, and sensor data from all sorts of assets. However, each data type is often analyzed separately, leaving human analysts to draw — and potentially miss — connections.

Using AI frameworks for multimodal data analysis allows different data streams to be analyzed together, offering decision-makers a comprehensive view of an event. For example, Navy systems can identify a ship nearby, but generative AI could zero in on the country of origin, ship class, and whether the system has encountered that specific vessel before.
With an object of interest identified, data fusion techniques and machine learning algorithms could review all the data available for other complementary information. Radio signals could show that the ship stopped emitting signals and no crew members are using cell phones. Has the vessel gone dark to prepare for battle, or could it be in distress? Pulling in recent weather reports could help decide the next move.
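As a rough sketch of that kind of rule-based fusion (the data fields, thresholds, and labels here are hypothetical, not any fielded system):

```python
# Illustrative only: jointly evaluate hypothetical RF, cell-signal, and
# weather feeds to characterize a vessel that has stopped emitting.
from dataclasses import dataclass

@dataclass
class VesselObservation:
    rf_emissions: int   # radio signals detected in the last hour
    cell_signals: int   # crew cell-phone signals detected in the last hour
    storm_nearby: bool  # from the most recent weather report

def assess(obs: VesselObservation) -> str:
    gone_quiet = obs.rf_emissions == 0 and obs.cell_signals == 0
    if not gone_quiet:
        return "normal"
    # Cross-referencing weather helps separate "in distress" from "gone dark".
    return "possible distress" if obs.storm_nearby else "gone dark"

print(assess(VesselObservation(rf_emissions=0, cell_signals=0, storm_nearby=False)))
# prints "gone dark"
```

A fielded system would use learned models over many more modalities; the point is only that the streams are evaluated together rather than in isolation.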
This enhanced situational awareness is only possible if real-time analysis happens at the edge instead of sending data to a central location for processing.

Keeping AI local is critical for battlefield awareness, cybersecurity, and healthcare monitoring applications that require timely responses. To prepare, the DoD must adopt solutions with significant computing power at the edge, find ways to reduce the size of its AI/ML models, and mitigate new security threats.
Because most new AI tools and models are open, meaning the information placed into them may become publicly available, agencies need to implement advanced security measures and protocols to ensure that critical data remains secure.

Pushing processing power​

Historically, tactical edge devices have collected information and sent it back to command data centers for analysis. Their limited computing and processing capabilities slow battlefield decision-making, but they don't have to. Processing at the edge saves time and avoids significant costs by allowing devices to upload analysis results to the cloud instead of vast amounts of raw data.
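A back-of-the-envelope illustration of that bandwidth saving (the frame size, frame rate, and detection record are made-up numbers):

```python
import json

# Shipping raw data: one second of 640x480 RGB video at 30 fps.
raw_bytes = 30 * 640 * 480 * 3  # 27,648,000 bytes per second

# Shipping an edge analysis result instead: the detection computed locally.
detections = [{"cls": "ship", "conf": 0.91, "bbox": [104, 220, 380, 460]}]
summary_bytes = len(json.dumps(detections).encode())

print(f"raw: {raw_bytes:,} B/s vs summary: {summary_bytes} B")
```

Even with generous allowances for metadata, the uploaded summary is several orders of magnitude smaller than the raw stream it replaces.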
However, AI at the edge requires equipment with sufficient computing power for today's and tomorrow's algorithms. Devices and sensors must be able to operate in a standalone manner to perform computing, analysis, learning, training, and inference in the field, wherever that may be. Whether on the battlefield or attached to a patient in a hospital, AI at the edge learns from scenarios to better predict and respond the next time. For the Navy crew, that could mean identifying what path a ship of interest may take based on previous encounters. In a hospital, sensors could flag the symptoms of a heart attack before arrest happens.

Connectivity will be necessary, but systems should also be able to operate in degraded or intermittent communication environments. Using 5G or other channels allows sensors to talk and collaborate while disconnected from headquarters or a command cloud.
Another consideration is orchestration: Any resilient system should include dynamic role assignments. For example, if multiple drones are flying and the leader gets taken out, another system component needs to assume that role.

Shrinking AI to manageable size​

A battlefield is not an ideal environment for artificial intelligence. AI models like ChatGPT operate in climate-controlled data centers on thousands of energy-hungry GPU servers. They train on massive datasets, and their computing demands remain substantial in the operational inference stage. The scenario presents a new size, weight, and power (SWaP) puzzle for what the military can deploy at the edge.
Some AI algorithms are now being designed for SWaP-constrained environments and novel hardware architectures. One option is miniaturizing AI models: researchers are experimenting with multiple ways to make smaller, more efficient models through compression, model pruning, and other techniques.
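As one concrete example, magnitude pruning zeroes out the smallest weights so the model becomes sparser and cheaper to store and run. A toy sketch (the weight values are arbitrary):

```python
# Illustrative magnitude pruning: keep only the largest-magnitude weights,
# setting the rest to zero.
def prune(weights: list[float], keep_fraction: float) -> list[float]:
    k = max(1, int(len(weights) * keep_fraction))
    cutoff = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

w = [0.91, -0.02, 0.40, 0.003, -0.75, 0.05]
print(prune(w, keep_fraction=0.5))  # [0.91, 0.0, 0.40, 0.0, -0.75, 0.0]
```

Production frameworks apply the same idea per layer or per channel, usually followed by fine-tuning to recover accuracy.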

Miniaturization has risks. A trained model could undergo "catastrophic forgetting," no longer recalling something it previously learned. Or it could increasingly generate unreliable information, called hallucinations, due to flaws introduced by compression techniques or by distilling a smaller model from a larger one.

Computers without borders​

While large data centers can be physically walled off with gates, barriers, and guards, AI at the edge presents new digital and physical security challenges. Putting valuable, mission-critical data and advanced analytics capabilities at the edge requires more than protecting an AI’s backend API.
Adversaries could feed bad or manufactured data in a poisoning attack to taint a model and its outputs. Prompt injections could lead a model to ignore its original instructions, divulge sensitive data, or execute malicious code. However, defense-in-depth tactics and hardware features like physical access controls, tamper-evident enclosures, secure boot, and trusted execution environments (confidential computing) can help prevent unauthorized access to sensitive equipment, applications, and data.
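One simple defense-in-depth layer against poisoning is range-checking incoming training data before it ever reaches the model. A toy sketch (the bounds and sensor readings are invented):

```python
# Illustrative poisoning filter: drop samples outside the plausible range
# so manufactured outliers never taint training or inference.
def filter_poisoned(samples: list[float], lo: float, hi: float) -> list[float]:
    return [s for s in samples if lo <= s <= hi]

feed = [12.1, 11.8, 250.0, 12.4, -90.0, 11.9]   # two implausible readings
print(filter_poisoned(feed, lo=0.0, hi=50.0))    # [12.1, 11.8, 12.4, 11.9]
```

Real pipelines layer statistical anomaly detection and provenance checks on top of static bounds, but the principle of validating data at ingestion is the same.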
Still, having AI capabilities at the tactical edge can provide a critical advantage during evolving combat scenarios. By enabling advanced analytics at the edge, data can be quickly transformed into actionable intelligence, augmenting human decision-making with real-time information and providing a strategic advantage over adversaries.
 
Reactions: 31 users

Esq.111

Fascinatingly Intuitive.
Reactions: 18 users

IloveLamp

Top 20
Reactions: 16 users

mrgds

Regular
I like it. This might be the understatement of the year.😝


@Bravo you're a credit to your species!

But why do you do this to me? Last night produced yet another "wet dream", i.e. ARM making a buy-out bid for BRN @ $5.50, with a 1 ARM for every 1,000 BRN scrip offer attached.

Set For Life

ARMKIDA BRAINLISTA
 
Reactions: 20 users

IloveLamp

Top 20
 
Reactions: 23 users
Reactions: 19 users

7für7

Top 20
Is it just me, or do articles about neuromorphic technology, no matter where they come from, fail to impress me anymore? The only thing that could catch my attention again is a PRICE-SENSITIVE ANNOUNCEMENT WITH A TRILLION-DOLLAR CONTRACT.


Everything else is nice to read but nothing more than the famous grasping at straws in a wild current.


P.S.: Take it with humor 😂

Come on Brn!!!!! COME ON
 
Reactions: 29 users

MDhere

Regular
Reactions: 15 users

Esq.111

Fascinatingly Intuitive.
Morning MDhere ,

Keep blinking 😃.

Regards,
Esq.
 
Reactions: 33 users

MrNick

Regular
Reactions: 7 users

MDhere

Regular
Morning MDhere ,

Keep blinking 😃.

Regards,
Esq.
ok hang on i am about to blink again .... maybe Bravo can go for a run too
 
Reactions: 15 users

IloveLamp

Top 20
Reactions: 14 users

davidfitz

Regular
I wonder if the 2 nodes that Renesas licensed from us are finally being used? Too much tech talk for me, but interesting anyway.



 
Reactions: 10 users

wilzy123

Founding Member
I blinked and someone took out 2mil at 22 just like that

Yep. I am sure it's significant and that all of the trade over the past few weeks is a legitimately accurate representation of sentiment.

 
Reactions: 2 users
 
Reactions: 3 users

mrgds

Regular
Everything I am hearing here sounds like BRN. I hope we are involved, as this partnership covers the edge in high-performance and lower-power devices across the board.
What does the term "synthetic" they use here mean, and could we be involved? And why is he wearing sunglasses? Is this a covert operation?!
"Synthetic" is data that is made up or fabricated, as opposed to real data, i.e. video/speech etc.
 
Reactions: 2 users

Guzzi62

Regular
I wonder if the 2 nodes that Renesas licensed from us are finally being used? Too much tech talk for me, but interesting anyway.


I've been through all of Renesas's partners and BRN is not even mentioned?

I'm not skilled enough to read and understand the white paper.

 
Reactions: 3 users

IloveLamp

Top 20
I've been through all of Renesas's partners and BRN is not even mentioned?

I'm not skilled enough to read and understand the white paper.

That's because BRN isn't a partner. They licensed our IP, and, as in most cases, it's in their best interests not to mention us so they can keep a leg up on the competition.

Imo, dyor.
 
Reactions: 15 users

Frangipani

Regular
Below is a link to a tutorial called "Edge AI in Action: Practical Approaches to Developing and Deploying Optimized Models" that several researchers from Jabra/GN Audio and GN Hearing 🇩🇰 gave last week at CVPR 2024 (Conference on Computer Vision and Pattern Recognition) in Seattle.

The slides presented at the conference are all available online, and a video of the tutorial will also be uploaded at some point, which might provide interested viewers with even more detailed information, especially as the recording will presumably also cover the Q&A sessions. The tech-savvier among you will surely find those well-designed slides very intriguing!

I skimmed the presentation slides and picked out some less technical ones for everyone to enjoy. Although none of the tutorial's practical applications involved any deployment on AKD1000, these Jabra/GN Audio and GN Hearing researchers are definitely aware of Akida, at the very least, as you can tell from the slide titled Edge AI Hardware.


(slide images attached)
(Anuj Datt used to be a Senior Software Engineer AI Systems with GN Audio until recently and now works for Adobe.
Fabricio Batista Narcizo is also a part-time lecturer at the IT University of Copenhagen, where Elizabete Sauthier Munzlinger Narcizo is an industrial PhD student - at Jabra GN, she is exploring ML to identify common hand gestures worldwide).


(more slide images attached)
 

Reactions: 34 users