BRN Discussion Ongoing

Learning

Learning to the Top 🕵‍♂️
To me that is the most obvious explanation for the delay in IP agreements and revenue. It makes perfect sense.
Personally, I don't believe that to be the case.

From memory, Antonio said at the AGM that it takes about three years just to test a product.

And BrainChip's relationship with Socionext dates back to mid-2019. It wasn't until early this year at CES (and December 2022) that we got the press releases.

"Advanced AI Solutions for Automotive
Socionext has partnered with artificial intelligence provider BrainChip to develop optimized, intelligent sensor data solutions based on Brainchip's Akida® processor IP.

BrainChip's flexible AI processing fabric IP delivers neuromorphic, event-based computation, enabling ultimate performance while minimizing silicon footprint and power consumption. Sensor data can be analyzed in real-time with distributed, high-performance and low-power edge inferencing, resulting in improved response time and reduced energy consumption.

Creating a proprietary chip requires a complex, highly structured framework with a complete support system for addressing each phase of the development process. With extensive experience in custom SoC development, Socionext uses state-of-the-art process technologies, such as 7nm and 5nm, to produce automotive-grade SoCs that ensure functional safety while accelerating software development and system verification."



So basically BrainChip has had an agreement with Socionext to develop a neuromorphic system-on-chip (NSoC) since 2019.
Socionext would have been working with AKD1000. Hence, at CES 2023, Socionext publicly told the world it will have a custom SoC for automotive: "Advanced AI Solutions for Automotive: Socionext has partnered with artificial intelligence provider BrainChip to develop optimized, intelligent sensor data solutions based on Brainchip's Akida® processor IP."

So, that said, why would Socionext waste close to three years of time and resources on AKD1000 R&D, only to let it go and wait for AKD1500 or Akida 2.0 just to start R&D again?

Hope that makes sense.

Just read the 21 December 2022 press release: "Socionext has partnered with artificial intelligence provider BrainChip to develop optimized, intelligent sensor data solutions based on Brainchip's Akida® processor IP."


Learning 🏖
 
  • Like
  • Love
  • Fire
Reactions: 36 users

TheFunkMachine

seeds have the potential to become trees.
https://www.linkedin.com/posts/luca...1-MXCt?utm_source=share&utm_medium=member_ios

I am not necessarily a fan of the WEF agenda for its ideal world, but in terms of revenue it could be massive for BrainChip/Akida if Prophesee gets traction in this sphere of influence.

I love BrainChip not only for its technology and its potential to make my family well off one day, but also because they have a code of ethics and want to use their technology for the benefit of mankind.

I hope they are able to stay true to this sentiment and still be successful in this world…it’s a tough one for sure.
 

Attachments

  • 7647EDEE-B039-4206-844E-DC4D80C2318C.png
  • Like
  • Fire
  • Love
Reactions: 11 users

GDJR69

Regular
Learning said:
Personally, I don't believe that to be the case. … Why would Socionext waste close to three years of time and resources on AKD1000 R&D, only to let it go and wait for AKD1500 or Akida 2.0 just to start R&D again?
You are assuming they have wasted time and resources and that they would have to start again. That may not be the case. However, it is possible that Akida 1500 offers a further use case they want to incorporate, and that, given the research and testing they have already conducted, a further reasonable delay is worth it to get the extra functionality. We don't know, but it's quite plausible.
 
  • Like
  • Love
Reactions: 11 users

Diogenese

Top 20
Learning said:
Personally, I don't believe that to be the case. … Why would Socionext waste close to three years of time and resources on AKD1000 R&D, only to let it go and wait for AKD1500 or Akida 2.0 just to start R&D again?
Is this not just putting our 2020 cooperation agreement on a more formal footing:
https://brainchip.com/brainchip-and...rm-for-ai-edge-applications-brainchip-230320/

23 March 2020

BrainChip and Socionext Provide a New Low-Power Artificial Intelligence Platform for AI Edge Applications​



ALISO VIEJO, Calif.–(BUSINESS WIRE)– BrainChip Holdings Ltd (ASX: BRN), a leading provider of ultra-low power high performance AI technology, today announced that Socionext Inc., a leader in advanced SoC solutions for video and imaging systems, will offer customers an Artificial Intelligence Platform that includes the Akida SoC, an ultra-low power high performance AI technology.
...
Socionext has played an important role in the implementation of BrainChip’s Akida IC, which required the engineering teams from both companies to work in concert. BrainChip’s AI technology provides a complete ultra-low power AI Edge Network for vision, audio, and smart transducers without the need for a host processor or external memory. The need for AI in edge computing is growing, and Socionext and BrainChip plan to work together in expanding this business in the global market.

Complementing the Akida SoC, BrainChip will provide training and technical customer support, including network simulation on the Akida Development Environment (ADE), emulation on a Field Programmable Gate Array (FPGA) and engineering support for Akida applications.

Socionext also offers a high-efficiency, parallel multi-core processor SynQuacer™ SC2A11 as a server solution for various applications. Socionext’s processor is available now and the two companies expect the Akida SoC engineering samples to be available in the third quarter of 2020.

In addition to integrating BrainChip’s AI technology in an SoC, system developers and OEMs may combine BrainChip’s proprietary Akida device and Socionext’s processor to create high-speed, high-density, low-power systems to perform image and video analysis, recognition and segmentation in surveillance systems, live-streaming and other video applications.

“Our neural network technology enables ultra-low power AI technology to be implemented effectively in edge applications”, said Louis DiNardo, CEO of BrainChip. “Edge devices have size and power consumption constraints that require a high degree of integration in IC solutions. The combination of BrainChip’s technology and Socionext’s ASIC expertise fulfills the requirements of edge applications. We look forward to working with the Socionext in commercial engagements.”

“As a leading provider of ASICs worldwide, we are pleased to offer our customers advanced technologies driving new innovations,” said Noriaki Kubo, Corporate Executive Vice President of Socionext Inc. “The Akida family of products allows us to stay at the forefront of the burgeoning AI market. BrainChip and Socionext have successfully collaborated on the Akida IC development and together, we aim to commercialize this product family and support our increasingly diverse customer base.”


The plan has always been to offer Akida on its own and combined with Synquacer.

As an edge server (DELL?), you would expect a full Akida P to be included at some time in the not too distant future, and Synquacer scales up to a cloud server.

Ubiquitous!
 
  • Like
  • Love
  • Fire
Reactions: 50 users

Learning

Learning to the Top 🕵‍♂️
You are assuming they have wasted time and resources and that they would have to start again. That may not be the case. However, it is possible that Akida 1500 offers a further use case they want to incorporate, and that, given the research and testing they have already conducted, a further reasonable delay is worth it to get the extra functionality. We don't know, but it's quite plausible.
I agree on the extra functionality, but my assumption was based on AKD1000 and AKD1500 being a little bit different.

AKD1000 is built on TSMC 28nm with an Arm Cortex-M CPU.

AKD1500 is designed on GlobalFoundries’ 22nm fully depleted silicon-on-insulator (FD-SOI) technology (without the Arm Cortex-M).

So if Socionext has been testing AKD1000, I would assume they will need to test AKD1500 as well to ensure everything works together. I don't believe it's as easy as plug and play.

But hey, at the end of the day, I am just a handyman. I am only voicing my personal opinion.

Learning 🏖
 
  • Like
  • Love
  • Fire
Reactions: 20 users

Learning

Learning to the Top 🕵‍♂️
Diogenese said:
Is this not just putting our 2020 cooperation agreement on a more formal footing? … The plan has always been to offer Akida on its own and combined with Synquacer. … Ubiquitous!
And that is exactly my point. The relationship is getting stronger and bearing fruit. 🌱🌲🍇

Socionext is BrainChip's first relationship involving AKD1000. Although word on the grapevine is that the relationship is complicated 😆, they are getting there with the custom SoC for "intelligent sensor data solutions based on Brainchip's Akida® processor IP." (👶)

Learning 🏖
 
  • Like
Reactions: 14 users

Diogenese

Top 20
Learning said:
I agree on the extra functionality, but my assumption was based on AKD1000 and AKD1500 being a little bit different. … So if Socionext has been testing AKD1000, I would assume they will need to test AKD1500 as well to ensure everything works together. I don't believe it's as easy as plug and play.
Hi Learning,

I don't think Akida 1500 provides any additional functionality over the vanilla Akida. In fact, apart from its FD-SOI makeup, it's an el cheapo version stripped of the Arm Cortex processor, since it can be used with any host processor.

Akida 2 is a different kettle of fish, brimming with bells and whistles.

As we've seen with Renesas, it takes time, even for those with the do-it-yourself kit.
 
  • Like
  • Love
  • Fire
Reactions: 48 users

miaeffect

Oat latte lover
AI REVOLUTION!


Something to watch during your arvo tea break.
 
  • Like
  • Fire
Reactions: 13 users

Boab

I wish I could paint like Vincent
From what I have seen and read about quantum computing (which is very little), room temperature is supposed to be the Holy Grail in computing. These little Aussie companies, Quantum Brilliance and Pawsey in WA, look like they may have pulled it off. Let's hope they achieve this amazing feat. My question is: will BRN be able to take advantage of this if and when the technology is ready for public use?



Archer Materials is also making claims but is still a long way off production. AXE is their ASX code.
 
  • Like
  • Fire
Reactions: 7 users

Getupthere

Regular
  • Like
  • Fire
Reactions: 8 users

JDelekto

Regular
Boab said:
From what I have seen and read about quantum computing (which is very little), room temperature is supposed to be the Holy Grail in computing. … My question is: will BRN be able to take advantage of this if and when the technology is ready for public use?


I just want to point out that quantum computing is not a panacea. Quantum algorithms, unlike classical algorithms, excel at certain operations, like unordered searches, factoring large numbers, finding primes, etc. The results of running a quantum algorithm are probabilistic, and it is quite possible to get the wrong answer, which is why one area, error correction, is getting a lot of attention in quantum computing.

Basically, the qubits in a quantum processor are put into a state of "superposition", where they can represent all values at once. A function called an "oracle" (where a start and end result are known) is applied one or more times to the qubits in order to steer the probability that the measured result will be correct.

"Grover's Algorithm" is a quantum search algorithm, which operates in O(sqrt(N)) time. In simple terms, it means that if it takes me "N" iterations to (let's say 10,000) to find a value in an unordered set, then it would take roughly 100 applications of the Oracle function against the qubits to ensure their probability of finding the correct answer. This scales well with extremely large amounts of data.

I imagine that one day there will be quantum algorithms that can train models or run inference at much better speeds than classical processors, but I still think that is a little way off, and the power required by quantum computers would not necessarily be suited for edge processing.

I think quantum computers and Akida are two different beasts, mutually exclusive technologies that are suited for different purposes.

To put my optimistic spin on it, I could see quantum computers being used for reducing the cost and time of training models that can be run on neuromorphic processors like Akida, which can in turn both inference and augment the model with its learning capabilities in edge computing applications.
 
  • Like
  • Fire
  • Love
Reactions: 42 users

GStocks123

Regular
Will Viana sell his shares tomorrow ????
 

rgupta

Regular

GStocks123

Regular

Attachments

  • IMG_2412.png
  • Like
Reactions: 2 users
Tesla and version 12 of FSD: do any of the 1,000 eyes have any idea why Tesla has had such rapid advancement and what technology they are utilising?
 
  • Like
Reactions: 5 users
Our "mates" TCS released a BFSI white paper from 22nd June.

Nice to see they at least still have a soft spot for neuromorphic... it would be bloody great if they actually committed something material to Akida, though, instead of just white papers.

The most annoying thing in the back of my mind is that all these handy partnerships are developing various things with us, and I trust they're not just piggybacking and cherry-picking bits of knowledge to further some parallel internal dev program, or to gain insight for a different tangent and process.


Neuromorphic computing: Ushering in AI innovation in BFSI​


SUKRITI JALALI​

Principal Consultant, BFSI, TCS​



HIGHLIGHTS
  • Firms in the banking, financial services, and insurance (BFSI) industry have embraced artificial intelligence (AI) and machine learning (ML).
  • These technologies use deep learning (DL) models that require massive sets of training data, consume enormous amounts of power, and fall short in adapting to the changing business environment.
  • Neuromorphic computing (NC) can help firms leverage the latest innovations in AI to address some of the existing challenges while paving the way for next-generation use cases.


ABSTRACT
The banking, financial services, and insurance (BFSI) industry has been at the forefront of embracing disruptive technologies.

Firms have adopted artificial intelligence (AI) and machine learning (ML) to recast customer experience, improve business operations, and develop futuristic products and services. Existing AI and ML technologies utilize deep learning (DL) models that run on compute-intensive data centers, require massive sets of training data, consume a large amount of power to train, and fall short in adapting to the changing business environment.
In our view, the BFSI industry can overcome these challenges by exploring neuromorphic computing (NC) for certain kinds of use cases. Spiking neural networks (SNNs), which take inspiration from the functioning of biological neural networks in the human brain, when run on NC hardware, perform on par with DL models but consume significantly lesser power. They are purpose-built for AI and ML and offer advantages such as speed of learning and faster parallel processing. We highlight how NC can help firms overcome inefficiencies in the existing AI and ML deployments in the BFSI industry and examine new use cases.

INTRODUCTION
The proliferation of connected devices in the BFSI industry has generated enormous amounts of data.

This data needs to be analyzed and insights delivered in real time to enable instant action. Firms have been making use of data derived from images, videos, text, audio, and IoT devices. In the insurance industry, the use cases span property damage analysis, driver sleep detection, elderly care, and predictive asset maintenance. Robo-advisory for investment and wealth management, customer sentiment analysis, and fraud detection are other critical areas in the BFSI sphere that benefit from AI and ML.

However, much of the data analysis is post-facto or after-the-event, which means firms do not receive a timely warning and cannot take action to avert adverse events or minimize their impact. In addition, the existing models use DL networks that consume massive amounts of energy, both for training and inference. Firms need voluminous data sets to train the models while processing data sequentially. Enhancing the models or modifying their parameters are complex and cumbersome tasks. All this has resulted in several negative impacts for BFSI firms: higher carbon footprint, increased time and effort to train models, processing delays, and high manual effort across the AI and ML lifecycle whenever there is a change in input parameters or training data.

NC TO THE RESCUE
In our view, the BFSI industry should explore third-generation AI systems powered by neuromorphic computing (NC) platforms and spiking neural networks (SNNs).

This will help them address the aforementioned shortcomings and improve the response time, while significantly lowering the carbon footprint. NC closely replicates how the human brain responds to complex external events and learns unsupervised while using minimal energy. We believe that these systems will facilitate a natural progression toward developing ultra-low energy adaptive AI applications by mimicking human cognitive capabilities. NC will also reduce cloud dependence, which means that edge applications can be enabled without compromising privacy and security.

Key features of NC include:
  • Sparsity – allows models to be trained with a lesser amount of data compared to the existing DL models. This dramatically reduces memory and input training data requirements. For instance, in touchless banking kiosks, cameras underpinned by NC can recognize and learn individual gestures much faster, enabling personalized customer experience.
  • Event-based processing – allows firms to detect and respond to events in real time. For instance, for parametric insurance policies, instant detection of a threshold breach is essential for immediate, frictionless pay-outs, which is key to superior customer experience.
  • Colocation of memory and compute – enables faster parallel processing of multiple data streams. An insurance use case in focus is the prevention of work-related injuries in hazardous environments, resulting in reduced accidents and workers’ compensation claims. Given the low energy use of NC, some of the models can easily run on handheld devices without the need for cloud connectivity.
These factors make NC a natural choice for BFSI use cases that require real-time insights and are time-sensitive in nature.
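(Not from the TCS paper: a minimal sketch, assuming only a textbook leaky integrate-and-fire neuron, to illustrate what "sparsity" and "event-based processing" mean in code. The neuron does work only at the rare timesteps where an input event pushes it over threshold.)

```python
# Toy leaky integrate-and-fire (LIF) neuron (illustrative only, no relation
# to Akida's internals): output spikes are sparse "events", so downstream
# computation is triggered occasionally rather than at every timestep.
import numpy as np

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train for a single LIF neuron."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = leak * potential + x   # leaky integration of input
        if potential >= threshold:         # threshold crossing = spike event
            spikes.append(1)
            potential = 0.0                # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
# Mostly-silent input with occasional events, mimicking an event-based sensor.
inputs = np.where(rng.random(200) > 0.9, rng.uniform(0.5, 1.5, 200), 0.0)
spikes = lif_neuron(inputs)
print(f"{spikes.sum()} spikes in {spikes.size} timesteps "
      f"({100 * spikes.mean():.1f}% activity)")
```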

PUTTING THEORY TO ACTION
The insurance industry is moving from a protection to a prevention and preservation paradigm.

And embracing NC will help insurers accelerate this shift. Currently, data from IoT devices – wearables, connected vehicles, or drones – is sent over a network to cloud servers, where pre-trained algorithms process, analyze, and respond to each event. The response needs to travel back to the edge, based on which action is taken. This causes delays, consumes significant processing power on the server, and requires all scenarios to be pre-trained. This is not the best approach where a real-time response is critical to prevent the occurrence of adverse events or minimize their impact.
With its in-situ processing capabilities and ability to offer real-time inferences, NC offers a superior alternative. In our view, there is tremendous scope for NC technology to improve edge AI applications (see Figure 1).

For example, real-time driver sleep detection is imperative to prevent an accident and the consequent insurance claims. Similarly, in home care, NC can prove to be a game changer for the remote monitoring of elderly patients. A fall or a sudden heart attack can be detected in real time. The connected ecosystem of family, doctors, ambulance, caregivers, and insurance providers can be alerted without delay. Insurance applications that need analytical insights at the edge span a wide range. They include usage-based vehicle insurance, real-time tracking of perishable cargo, predictive maintenance of critical equipment, elder care, early detection of anomalies in home insurance, and video-based claims processing. NC can also aid in faster detection of natural disasters such as floods, fires, or other calamities.

This information can be fed to the insureds in advance. Parametric insurance products that offer pre-specified payouts based upon a trigger are gaining traction in recent times. We believe that a combination of blockchain- and NC-based real-time event detection is superior to existing parametric claims processing mechanisms.


Figure 1: BFSI use cases that can benefit from neuromorphic computing

Time series data analysis is crucial for capital market firms for functions such as stock prices prediction, asset value fluctuation, derivative pricing, asset allocation, fraud detection, and high frequency trading. It requires learning and predicting patterns over a time period, where early experiments have found SNNs to be better than existing alternatives, especially for predicting future data points. NC can benefit each of these scenarios, but the actual gain will have to be evaluated on a case-by-case basis, depending on the number of model parameters, input datasets, the need for real-time predictions, and lower latency.

The most important benefit of NC will be in reducing the carbon footprint, especially as sustainability has become a boardroom agenda for BFSI firms, with the industry making net-zero commitments following the Paris agreement. With its key characteristic of lower power consumption, NC adoption will emerge as a priority for BFSI organizations given their reliance on IT infrastructure and ML applications, which contribute to higher emissions. As the integration of speech, video, images, generative AI, and facial recognition technologies into BFSI applications increases, reimagining the entire ML lifecycle from a sustainability perspective will become imperative. In early trials, NC has proved to be significantly more energy efficient while achieving accuracy that is comparable with DL models on a standard CPU or GPU. The limitations of existing models such as the need for multiple training cycles, hundreds of training examples, massive number crunching, and retraining due to information changes make the learning and inference process energy- and effort-intensive. NC can help overcome these challenges and accelerate green IT efforts.

In addition to reducing the carbon footprint, protecting property and communities from damage induced by climate change is also high on the regulatory agenda. For instance, to address wildfire risk intensified by climate change, the California Department of Insurance has issued ‘Safer From Wildfires’, a new insurance framework, which recommends actions that insurers should consider to mitigate their impact on communities. In our view, NC can help insurers enable the real-time audit of a slew of mitigation actions and features like Class-A fire rated roof, ember- and fire-resistant vents, and defensible space compliance.

Digital ecosystems are slowly but surely gaining traction in the BFSI industry as banks and insurers look for innovative business models to pursue new value streams and steal a march over the competition. Initiatives such as embedded lending, embedded investing, connected wellness, KYC automation, and parametric insurance will continue to push the boundaries of security and privacy. Existing techniques rely on pre-trained data sets and perform post-facto analysis to detect security breaches. NC can improve monitoring by detecting a new threat seconds before it evolves into a security ‘event.’ NC can enhance the in-situ processing of biometrics data in know your customer (KYC) verification and ensure that data from wearables is encrypted before it is sent over a network.

Digital banking transactions on smartphones can be monitored in real time and instant action can be taken to prevent a breach when anomalous patterns are detected.

WHAT LIES AHEAD
In our view, BFSI firms should adopt a use case-centric approach to NC adoption to understand the advantages it can bring to existing AI and ML deployments.
And the advantages span a wide spectrum – from providing real-time insights in a connected insurance ecosystem to instantly detecting anomalous user behavior in digital banking transactions or running specific time-sensitive calculations in capital markets. We believe that it will be advantageous for BFSI firms to identify specific use cases that can significantly benefit from NC and run early proofs of concepts to evaluate its transformational potential.

However, a word of caution: not all BFSI AI and ML use cases will gain from NC, and a careful analysis of the nature of the use case, latency, and the expected outcomes is key. We envisage the co-existence of traditional CPUs and/or GPUs, neural hardware and TPUs, as well as neuromorphic platforms. Having said that, we expect NC – with its ability to enhance customer experience, facilitate early risk detection, deliver inferences in real time, and lower carbon footprint – to emerge as the natural choice for the BFSI industry. We believe that BFSI firms must stay abreast of the evolution of NC and its potential applications in the industry—once the technology matures, quick action will be necessary to gain a lead.

Sukriti Jalali​

Sukriti Jalali is a principal consultant and thought leader in TCS’ Banking, Financial Services, and Insurance (BFSI) business unit. She is passionate about technology-enabled business transformation and helping customers achieve their growth and transformation objectives. Sukriti has presented at various industry forums and regularly publishes thought papers on digital transformation, IoT, a
 
  • Like
  • Love
  • Fire
Reactions: 22 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Just came across this article.....

Australian Gov. spending a few bob on secure telecommunications for the military.

If AKIDA is good enough for secure NASA / UN telecommunications (University of Thrace, Greece)... who knows.

Hope this link works.


If the link does not work...
If someone could go over to the smouldering orifice, aka HK.

Posted by
alconnor
30/6/23
@ 4.04am
Post no. #68544215
On the Archtis Limited company thread.


* Babcock is the company & A$1,900,000,000.00 is the contract value from the Australian Gov.

Fingers 🤞.

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 32 users

Gies

Regular

Looking for software engineer, proficiency in Python.

Our technology combines the latest in edge processing, deep learning and sensor technology. Through in-house development and research projects with leading universities around the world, we develop our agronomic intelligence products to serve growers throughout the world. Our products are validated by scientific institutes and are served globally through our cloud platform, but integrate well with most farm management systems.

Could they be working with AKIDA?
 
  • Like
  • Thinking
  • Love
Reactions: 5 users

IloveLamp

Top 20
  • Like
  • Thinking
  • Fire
Reactions: 9 users

Glen

Regular
 
  • Like
  • Thinking
  • Fire
Reactions: 9 users