BRN Discussion Ongoing

Gazzafish

Regular
Sorry if I missed something but people are referring to forecast revenue. Can someone please show me the link to where we have this forecast? Thanks in advance 👍
 
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

How to Maximize Cognitive Neuromorphic Computing in SRE​

Neuromorphic computing has the potential to redefine the future of digital system reliability and maintenance.
Yogesh Ramaswamy

Written by Yogesh Ramaswamy
Published on Sep. 03, 2024

Image: A futuristic-looking brain hovers above a hand. (Shutterstock / Built In)


Site reliability engineering (SRE) automates IT infrastructure tasks, thus improving the reliability of software applications. Cognitive neuromorphic computing, meanwhile, is a method of computer engineering in which elements of a computer are modeled after systems in the human brain and nervous system.

5 Ways Neuromorphic Systems Enhance SRE​

  1. Better monitoring and anomaly detection
  2. Faster processing and analysis of data
  3. Greater levels of automation in incident response
  4. Efficiency in energy consumption and processing power
  5. Parallel processing capabilities that can handle complex tasks more efficiently
When integrated, these two advanced technological fields deliver plenty of benefits: improved performance, effectiveness and reliability of computing systems. Consider cognitive neuromorphic computing’s strengths: Neuromorphic systems excel in pattern recognition, learning and adaptation. They are instrumental in handling unstructured data, making real-time decisions and executing parallel processing tasks. This article explores how neuromorphic computing stands to revolutionize SRE and delves into the integration of brain-inspired computing technologies within SRE practice.
Cognitive neuromorphic computing mimics the human brain’s structure and functionality and is poised to drastically improve how digital infrastructures self-manage and react to changes.
These cognitive technologies enable systems to process information and respond to incidents in a manner akin to human reflexes — fast, efficient and increasingly intelligent. The bottom line is that neuromorphic computing has the potential to redefine the future of digital system reliability and maintenance.
Related Reading: What Is Artificial Intelligence?


3 Trends and Technologies Driving Innovation​

Several advancements in SRE are improving the ability to maintain reliable, scalable and efficient systems. Understanding these foundational innovations sets the stage for integrating cutting-edge technologies like neuromorphic computing, which promises to elevate SRE practices even further. For example:

Automation, Artificial Intelligence and Machine Learning​

Automation tools, AI, and machine learning automate repetitive tasks, predict incidents and provide intelligent incident responses. AI-powered incident management platforms such as Moogsoft and BigPanda rely on ML to correlate events, detect anomalies and reduce alert fatigue.
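The anomaly-detection loop this paragraph describes can be sketched with a rolling z-score detector. This is a generic illustration of the idea, not a description of how Moogsoft or BigPanda actually work:

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flag metric samples that deviate sharply from a rolling baseline."""

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        is_anomaly = False
        if len(self.window) >= 10:  # need some history before judging
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly

detector = RollingAnomalyDetector()
normal = [100 + (i % 5) for i in range(30)]    # steady baseline traffic
flags = [detector.observe(v) for v in normal]  # none should trigger
spike = detector.observe(500)                  # sudden latency spike triggers
```

Production platforms layer event correlation and deduplication on top of detectors like this to reduce alert fatigue, but the core idea is the same: learn a baseline, flag departures from it.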

Improvements in Observability and Monitoring Enhancements​

Advances in observability tools have enhanced the ability to monitor complex, distributed systems, relying on metrics, logs and traces to provide richer insights into system health and performance. Tools like Prometheus, Grafana and OpenTelemetry provide real-time monitoring and enable insight into system metrics. Neuromorphic systems can further enhance these capabilities by enabling more intuitive and rapid pattern recognition, potentially identifying issues before they escalate.

Platforms That Define and Manage Infrastructure​

Platform tools like Terraform and Ansible allow for version control and automation of infrastructure deployments. Infrastructure as Code (IaC) facilitates the supervision and provisioning of computing infrastructure via machine-readable configuration files rather than interactive configuration tools or physical hardware configuration.
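The IaC idea of declaring desired state in a machine-readable form and reconciling reality toward it can be sketched in Python. This is a tool-agnostic sketch of the concept; real tools like Terraform use their own configuration language and plan/apply cycle:

```python
# Desired infrastructure declared as data, not as interactive steps.
desired = {
    "web-1": {"cpu": 2, "ram_gb": 4},
    "web-2": {"cpu": 2, "ram_gb": 4},
}

def reconcile(desired, actual):
    """Return the provisioning actions needed to reach the desired state."""
    actions = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions.append(("create_or_update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("destroy", name))
    return actions

# One node already matches, one is missing, one is no longer wanted.
actual = {"web-1": {"cpu": 2, "ram_gb": 4}, "old-db": {"cpu": 8, "ram_gb": 32}}
plan = reconcile(desired, actual)
```

Because the desired state is plain data, it can be version-controlled, reviewed and re-applied, which is the core benefit the paragraph describes.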
These trends and technological innovations are rapidly and significantly advancing the field of SRE, allowing the building of more resilient, scalable and efficient systems. With these technologies, SRE teams can better manage the complexity of modern cloud-native environments.

4 Challenges to Integration​

While the trends discussed earlier pave the way for integrating advanced technologies like neuromorphic systems, this integration comes with its own set of complexities. Here are some examples.

Compatibility With Existing Infrastructure​

Neuromorphic systems may require new hardware and software infrastructure that is incompatible with existing systems. This equates to significant financial outlays and disruption to operations throughout the integration process.
Overcoming this challenge requires taking a phased integration approach that steadily introduces neuromorphic components while ensuring backward compatibility. Train employees to work with both traditional and neuromorphic systems to maintain continuity from an operations standpoint.

Large Volumes of Data​

Neuromorphic systems also rely on large volumes of high-quality data for training and adaptation. Insufficient or poor data can translate to suboptimal performance and incorrect incident responses.
To overcome this challenge, organizations must put robust data validation and cleansing processes in place. This step ensures that data quality is maintained. Automated tools designed to provide real-time data monitoring and detect anomalies are useful in identifying and addressing issues quickly and accurately.
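A minimal sketch of such a validation-and-cleansing step, with hypothetical field names and thresholds:

```python
def validate_and_clean(records):
    """Split records into clean and rejected sets.

    Bad data is rejected and surfaced rather than silently repaired, so
    downstream training and incident-response logic only sees valid input.
    """
    cleaned, rejected = [], []
    for rec in records:
        temp = rec.get("temperature_c")
        if temp is None or not (-40.0 <= temp <= 125.0):
            rejected.append(rec)   # route to an anomaly/alert queue
        else:
            cleaned.append(rec)
    return cleaned, rejected

raw = [
    {"sensor": "a1", "temperature_c": 21.5},
    {"sensor": "a2", "temperature_c": None},    # missing reading
    {"sensor": "a3", "temperature_c": 900.0},   # physically impossible
]
good, bad = validate_and_clean(raw)
```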

Complexity and Unique Expertise​

Adopting neuromorphic systems also requires complex algorithms and specialized knowledge. As such, it’s important for organizations to employ and train specialized personnel. These steps will increase the initial implementation cost, but such measures will save time and money in the long run, ensuring smoother implementation.
While these systems are designed for efficiency, scaling them to handle enterprise-level operations can be daunting, especially in a heterogeneous environment where different systems and technologies are mixed. To deploy neuromorphic systems successfully, organizations must be sure they can scale without losing performance or accuracy.

Interoperability and Security Concerns​

Many organizations have legacy systems that may not integrate easily with new neuromorphic technologies. Careful planning and potentially significant modifications to existing systems can ensure interoperability. From a security standpoint, integrating advanced cognitive capabilities creates vulnerabilities within the organization, particularly with data integrity and system manipulation. Implementing robust security measures to protect neuromorphic systems from cyber threats is critical.
Related Reading: What Is Neuromorphic Computing?


How Neuromorphic Computing Improves SRE​

Neuromorphic systems offer many advantages, including enhanced monitoring and anomaly detection. Cognitive neuromorphic systems can improve anomaly detection in SRE by learning to recognize patterns of normal and abnormal system behavior more effectively than traditional systems. This means issues can be detected faster, reducing downtime and mean time to recovery (MTTR).
Neuromorphic systems’ ability to process and analyze data in real time improves SRE practices. This enables faster decision making and automated responses to incidents. It also improves organizations’ ability to achieve greater levels of automation in incident response, subsequently improving system resilience and reducing the need for manual intervention.
Scalability can be a concern when implementing neuromorphic systems, but these systems are highly efficient in energy consumption and processing power, which aids scaling operations without a proportional increase in resource usage. This greater efficiency also translates into cost savings and an increased ability to handle larger workloads effectively.
Ultimately, integrating these technologies can lead to significant performance improvements. Neuromorphic computing’s parallel processing capabilities can handle complex tasks more efficiently, resulting in faster response times and better overall system performance.

 
  • Like
  • Love
  • Fire
Reactions: 11 users

Kachoo

Regular
Sorry if I missed something but people are referring to forecast revenue. Can someone please show me the link to where we have this forecast? Thanks in advance 👍
It's in the half year report
 
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
Isn't this what Tony Lewis is focussed on? I remember he posted a reply on his LinkedIn saying something like we are the first he's aware of to be able to run SOTA SLMs on edge devices, or something like that.

Does anyone have his LinkedIn post handy?

IBM & NTT Explain How AI Works on Edge Computing​


by Senior Technology Journalist Ray Fernandez
Fact Checked by Eddie Wrenn
Updated on 3 September 2024



Trillion-plus-parameter AI models are already banging hard at our front door. These new artificial intelligence models, with unprecedented computing power, will become the norm in the coming months and years.
While the technological advancements of new generative AI models are promising and expected to benefit most sectors and industries, the world still has a big AI-size-edge-infrastructure problem (we will break this down in a moment).
How will giant AI models operate on the edge of networks while offering low latency and real-time services? It’s a ‘shrinking giant’ problem.
In this report, Techopedia talks with NTT and IBM experts to understand an approach to solving fast, resource-intensive artificial intelligence without over-burdening a network.

Key Takeaways​

  • Large AI models are often too resource-intensive for edge devices.
  • Smaller, more efficient AI models offer a practical solution for deploying AI at the edge.
  • Industries such as manufacturing need AI solutions that can operate effectively in distributed environments.
  • Successful edge AI deployment requires collaboration between silos of information, and it is beginning to happen today.
Table of Contents

Is the Solution an Edge Infrastructure Combined with Smaller AI Models?​

Compute power is rapidly moving from the data center to the edge. Organizations expect edge computing to significantly impact all aspects of operations. Meanwhile, worldwide spending on edge computing is expected to be $232 billion in 2024, an increase of 15.4% over 2023 — largely driven by AI.
Techopedia spoke to Paul Bloudoff, Senior Director, Edge Services, NTT DATA, which focuses on edge AI — particularly smaller, more efficient AI models. These models are use-case specific and sized to be simple to deploy and run.


Bloudoff explained how IT and Operational Technology (OT) teams can benefit from small AI models running on lightweight edge.
For example, on factory floors, these AI solutions can enhance maintenance by breaking down silos and bringing together a suite of information provided by the Internet of Things (IoT) to, for instance, track data on vibration, temperature, machine wear, and more.
Industrial predictive maintenance AI can save organizations thousands of dollars by reducing downtime.

Shrinking AI Giants into TinyAI to Drive Sustainable Development
Scientific researchers already advocate for TinyAI models as the solution to the complex transition of data center AI models into edge computing.
Scientists argue that TinyAI models can be deployed in healthcare, agriculture, and urban development and contribute to the development of the United Nations Sustainable Development Goals. Tiny models are designed for specific use cases and, therefore, are more cost-efficient, and consume less power, driving sustainability targets.
Nick Fuller, Vice President of AI and Automation at IBM Research, told Techopedia that for edge workloads requiring low latency, inferencing on devices is more than highly desirable; it is essential.

“Models, of course, can still be trained on-premise or on the cloud. To this end, lightweight AI models are very appealing to the edge market and especially to specific workloads where latency requirements are essential.”

Speaking the Many Languages of Edge Computing​

Many developers fear that edge infrastructure is constrained in terms of processing power, data storage, and memory. Techopedia asked Bloudoff from NTT how the company approached this problem.
Bloudoff explained that one of the challenges organizations face when deploying solutions at the edge is the silos between machines and devices across their network — machines and devices from different manufacturers do not always communicate well with each other.
“Think of it as different individuals speaking different languages who are all providing data that needs to be collected, analyzed, processed, and transformed into action,” Bloudoff said.
“What’s powerful about an Edge AI platform is that its software layer automatically discovers devices and machines across an IT or OT environment through its smaller, more efficient language learning model,” Bloudoff added.
NTT’s Edge AI platform runs auto-discovery and unifies and processes the data to provide a comprehensive diagnostic report that can be used for AI-powered solutions.

Speaking to Techopedia, IBM Fellow Catherine Crawford added that the challenge is not just in how to move AI to the edge.
“There are multiple existing use cases that can leverage edge using smaller AI models where the technical challenges still exist and assets can be developed for distributed, secure Edge to Cloud continuous development AIOps.”
Crawford explained that there is ongoing interest and research in understanding how task-tuned, smaller genAI and foundational models can be developed and leveraged for edge use cases considering constraints like compute, memory, storage, and power (battery life).

An AI Built for Your Edge​

The 2023 Edge Advantage Report — which surveyed 600 enterprises across multiple industries — found that approximately 70% of organizations use edge solutions to solve business challenges. Still, nearly 40% worry that their current infrastructure will not support advanced solutions.
Bloudoff from NTT said that the company is well aware of the market's concern and focuses on smaller, more efficient language models. These are easier to deploy and can run in real time without demanding advanced edge hardware updates.
Smaller AI models represent enormous savings in edge hardware costs for industries and businesses across the world. By going tiny, AI on the edge can maximize the edge computing power already in play through optimization.
The company is also moving forward by implementing dozens of proofs of concepts with customers across manufacturing, automotive, healthcare, power generation, logistics, and other industries.

The Bottom Line​

Supercomputers, quantum computers, and state-of-the-art AI racks hosted in massive data centers, are undoubtedly the tip of the spear of modern innovation. However, the real world works at the edge level.
From the smartphone in your pocket to the automation of machines in industrial environments, healthcare, and agriculture, modern edge networks connect us all.
As bigger, faster, harder, and stronger AI models roll out, NTT and IBM invest in small and tiny models. They believe it is the solution to the future of giant AIs in our edge world.

Because of Akida's 4-bit capability, it can have "home-made" models which are more compact than the standard 8-bit models, and, of course, Akida can handle the 8-bit models as well, with the accompanying reduction in efficiency advantage. And let's not forget Akida's 1 and 2-bit capability for super low power consumption.
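The compactness point is simple arithmetic: halving the bit width halves raw weight storage. A back-of-envelope sketch (generic, not Akida-specific, and ignoring activations and model overhead):

```python
def model_size_bytes(num_params, bits_per_weight):
    """Raw weight storage for a model at a given quantization width."""
    return num_params * bits_per_weight // 8

params = 1_000_000  # a hypothetical 1M-parameter edge model
sizes = {bits: model_size_bytes(params, bits) for bits in (8, 4, 2, 1)}
# 8-bit weights need twice the storage of 4-bit, eight times that of 1-bit.
```

Lower bit widths generally trade some accuracy for this footprint and power saving, which is why 4-bit is a middle ground and 1- or 2-bit suits the lowest-power cases.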

I think that developing Akida-specific models for Valeo, Mercedes and others is where a lot of our effort will be focussed. These will not be universal models. They will be more functionally biassed.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Esq.111

Fascinatingly Intuitive.
Sorry if I missed something but people are referring to forecast revenue. Can someone please show me the link to where we have this forecast? Thanks in advance 👍
Afternoon Gazzafish ,

Info from the latest APPENDIX 4D Half Year Financial Report, page 13.



Regards,
Esq.
 

Attachments

  • 20240904_131101.jpg
    3 MB
  • Like
Reactions: 11 users
Because of Akida's 4-bit capability, it can have "home-made" models which are more compact than the standard 8-bit models, and, of course, Akida can handle the 8-bit models as well, with the accompanying reduction in efficiency advantage. And let's not forget Akida's 1 and 2-bit capability for super low power consumption.

I think that developing Akida-specific models for Valeo, Mercedes and others is where a lot of our effort will be focussed. These will not be universal models. They will be more functionally biassed.
I concur Diogenese 👍
And he said we "will" be the first @Bravo.

However, KAIST claimed to be running GPT-2 in full on their neuromorphic research chip back in March this year, which post-dates TL's comments.


"Neuromorphic computing is a technology that even companies like IBM and Intel have not been able to implement, and we are proud to be the first in the world to run the LLM with a low-power neuromorphic accelerator," Yoo said.
 
Last edited:
  • Like
  • Wow
Reactions: 6 users

Diogenese

Top 20
Because of Akida's 4-bit capability, it can have "home-made" models which are more compact than the standard 8-bit models, and, of course, Akida can handle the 8-bit models as well, with the accompanying reduction in efficiency advantage. And let's not forget Akida's 1 and 2-bit capability for super low power consumption.

I think that developing Akida-specific models for Valeo, Mercedes and others is where a lot of our effort will be focussed. These will not be universal models. They will be more functionally biassed.
Come to think about it, maintaining and updating models could be a nice little ongoing earner.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

wilzy123

Founding Member
Forget the edge boxes guys, we will start to get traction in the next 3 quarters from auto, industrial and euro space industry!!! 🚀 🌌..... Yeah! ....
Caution, delusional up ramper... Who is creeping into top holder territory on pure faith and speculation....
God save us.

Autonomous Vehicles, Military and Defense, Unmanned Aerial Vehicles, Robotics
 
  • Like
Reactions: 10 users

Boab

I wish I could paint like Vincent
  • Like
  • Fire
Reactions: 7 users

Slade

Top 20
Nice update to our website.

 
  • Like
  • Fire
  • Love
Reactions: 37 users

SERA2g

Founding Member
The question that should have been raised at the AGM to Sean is: what is BRN’s strategy if none of the current client engagements are successful or extend? If there are 2-3 year lead times involved in evaluating Akida prior to any product involvement, where does it leave the business if these current engagements are unsuccessful? Clearly none have been successful to date and in my view the company has been poorly managed and Sean is not the right CEO to instil confidence for shareholders and steer BRN in the right direction.
Ok Bacon Lover 🤣
Can see your writing style from a mile away hahahahaah
 
  • Haha
  • Like
  • Wow
Reactions: 14 users
Wow, we never ended up red like the rest of the ASX.

 
  • Haha
  • Like
Reactions: 11 users

CHIPS

Regular
Been a few years since our last child was born, but it brings back so many happy memories after becoming a grandad.

Very cute :love: ... the baby I mean :LOL:!

Congratulations to the parents and grandparents 💐💐.
Have a great time with her (it's a girl?) but don't give her too many kisses 😁

 
  • Like
  • Love
  • Haha
Reactions: 6 users

manny100

Regular
Buy a few edge boxes, that will help your current holding Mate.

On a serious note it's quite interesting.

We have finally been given guidance of some growth numbers, be it what it is, but it's progress.

Edge boxes for sale again with a price increase to 1495 USD.

Clearly not indicative of what the company's SP reflects, in a way, just my opinion.

So from what I see it looks like Akida 1000 is and will still be produced, where the Akida 2.0 seems to be IP only for some reason, likely the higher cost to make and better performance. Maybe some other reasons one can speculate, with a non-competitive clause for a buyer as was mentioned
I think you might find there is a fair bit of interest in Gen 2 and engagements.
Chips on that basis are not necessary for Gen 2.
AKIDA 1000 and 1500 chips were produced to get the industry's head around this new-fangled contraption.
 
  • Like
Reactions: 3 users

rgupta

Regular
I concur Diogenese 👍
And he said we "will" be the first @Bravo.

However, Kaist claimed to be running ChatGPT2 in full, on their neuromorphic research chip, back in March this year, which post date's TL's comments.


"Neuromorphic computing is a technology that even companies like IBM and Intel have not been able to implement, and we are proud to be the first in the world to run the LLM with a low-power neuromorphic accelerator," Yoo said.
So they said they can do transformers as well on their SNN.
As per BrainChip we can also take transformer load with Akida 2000, but the same is doable because of TENNs.
On top, BrainChip also said earlier the results are very encouraging when comparing Akida with GPT-2.
Is that a coincidence??
On top, the processor used is Samsung on 28 nm.
 
  • Like
  • Fire
Reactions: 6 users
So they said they can do transformers as well on their SNN.
As per BrainChip we can also take transformer load with Akida 2000, but the same is doable because of TENNs.
On top, BrainChip also said earlier the results are very encouraging when comparing Akida with GPT-2.
Is that a coincidence??
On top, the processor used is Samsung on 28 nm.
I don't think Kaist, has anything to do with us, personally.

Process size, doesn't mean anything, it's just a good proven one, that doesn't cost as much as the smaller ones (nobody, is going to produce "research chips" in 7nm for example).

I don't think their chip is pure digital either (I think @Diogenese looked into it?)..

No surprise, that Samsung is involved and I believe they have a history with us, but that is one of their main foundries.

Any input from BrainChip, is inspired in my opinion.

And I'd Love for Samsung, to be onboard.
 
  • Like
  • Fire
  • Love
Reactions: 8 users

Diogenese

Top 20
I don't think Kaist, has anything to do with us, personally.

Process size, doesn't mean anything, it's just a good proven one, that doesn't cost as much as the smaller ones (nobody, is going to produce "research chips" in 7nm for example).

I don't think their chip is pure digital either (I think @Diogenese looked into it?)..

No surprise, that Samsung is involved and I believe they have a history with us, but that is one of their main foundries.

Any input from BrainChip, is inspired in my opinion.

And I'd Love for Samsung, to be onboard.
Yes. KAIST are into analog. The term "in-memory compute" is usually used in relation to analog, in that the calculations are performed by the memory circuits by accumulating a voltage whose amplitude is proportional to the number of input signals.
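That accumulation can be illustrated with a toy digital simulation of an analog crossbar column: each active input line contributes current through a stored conductance, and the column wire sums the contributions. Purely illustrative, not a model of KAIST's actual design:

```python
def crossbar_column_output(inputs, conductances):
    """Idealized analog in-memory MAC: active input lines inject current
    through each cell's conductance; the shared column wire sums them."""
    return sum(x * g for x, g in zip(inputs, conductances))

inputs = [1, 0, 1, 1]            # spikes/voltages on the input lines
weights = [[0.2, 0.9],           # conductance matrix: rows = input lines,
           [0.5, 0.1],           # columns = output neurons
           [0.3, 0.4],
           [0.0, 0.7]]
columns = list(zip(*weights))    # one conductance list per output column
outputs = [crossbar_column_output(inputs, col) for col in columns]
```

In real analog hardware this sum happens physically (Kirchhoff's current law), which is where the speed and power advantage comes from, and also where digital designs like Akida differ: they compute the same accumulation with digital logic, avoiding analog drift and noise.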
 
  • Like
  • Fire
Reactions: 10 users

KKFoo

Regular
Because of Akida's 4-bit capability, it can have "home-made" models which are more compact than the standard 8-bit models, and, of course, Akida can handle the 8-bit models as well, with the accompanying reduction in efficiency advantage. And let's not forget Akida's 1 and 2-bit capability for super low power consumption.

I think that developing Akida-specific models for Valeo, Mercedes and others is where a lot of our effort will be focussed. These will not be universal models. They will be more functionally biassed.
Hi Diogenese, I believe you are the best person for this question.. Is the VVDN edge box for research and development purposes only, or can it be used by end users? Let's say I want to set up a face recognition security system in my office. Can I just buy an edge box, plug it into my camera system, and it's ready to go, or do I still need to develop some software to interact with the edge box?
Thank you in advance if you can provide me with the answer..
 
  • Like
Reactions: 6 users

Diogenese

Top 20
Hi Diogenese, I believe you are the best person for this question.. Is the VVDN edge box for research and development purposes only, or can it be used by end users? Let's say I want to set up a face recognition security system in my office. Can I just buy an edge box, plug it into my camera system, and it's ready to go, or do I still need to develop some software to interact with the edge box?
Thank you in advance if you can provide me with the answer..
Hi KK,

The Edge Boxes are definitely suitable for end use. Of course they can also be used for R&D, but they are the real thing.

They come with pre-made models, but you can also develop your own model library or adapt the pre-made ones using on-chip learning.
 
  • Like
  • Love
  • Fire
Reactions: 25 users

rgupta

Regular
I don't think Kaist, has anything to do with us, personally.

Process size, doesn't mean anything, it's just a good proven one, that doesn't cost as much as the smaller ones (nobody, is going to produce "research chips" in 7nm for example).

I don't think their chip is pure digital either (I think @Diogenese looked into it?)..

No surprise, that Samsung is involved and I believe they have a history with us, but that is one of their main foundries.

Any input from BrainChip, is inspired in my opinion.

And I'd Love for Samsung, to be onboard.
The reason I try to relate them is not only the 28 nm chip, but also a report by BrainChip where they compare Akida with GPT-2. But the biggest shock to me is they said their chip can do transformers; the 1000 was not able to perform that function, but Akida 2 can do the same.
Anyway, in the end a lot of wires are entangled and only time will have all the answers. But one thing is for sure: there is a lot happening behind the scenes and still no concrete news.
 
  • Like
Reactions: 2 users