BRN Discussion Ongoing

Who knows really? This is why we are at 16 cents.

Your link says the hearing aid uses a DNN (deep neural network) as opposed to an SNN (spiking neural network). Not good.
But you can make your SNN have many hidden layers, which would then be called a deep (spiking) neural network.
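
To make that concrete, here's a toy sketch (mine, not BrainChip's) of spiking layers stacked like the hidden layers of a DNN, using simple leaky integrate-and-fire neurons in plain numpy:

```python
# Toy deep spiking network: several leaky integrate-and-fire (LIF) layers
# stacked like hidden layers in a DNN. Illustrative only -- real SNN
# frameworks (e.g. BrainChip's MetaTF) handle this very differently.
import numpy as np

class LIFLayer:
    def __init__(self, n_in, n_out, decay=0.9, threshold=1.0):
        self.w = np.random.randn(n_in, n_out) * 0.5  # synaptic weights
        self.v = np.zeros(n_out)                     # membrane potentials
        self.decay, self.threshold = decay, threshold

    def step(self, spikes_in):
        self.v = self.decay * self.v + spikes_in @ self.w  # leak + integrate
        spikes_out = (self.v >= self.threshold).astype(float)
        self.v[spikes_out == 1] = 0.0                # reset neurons that fired
        return spikes_out

# Three stacked spiking layers -> a "deep" spiking neural network.
layers = [LIFLayer(16, 32), LIFLayer(32, 32), LIFLayer(32, 4)]

rng = np.random.default_rng(0)
for t in range(20):                                  # simulate 20 timesteps
    spikes = (rng.random(16) < 0.2).astype(float)    # sparse input events
    for layer in layers:
        spikes = layer.step(spikes)
print("output spikes at final timestep:", spikes)
```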

Then they call their processor a "neuro processor".
So this also is (maybe purposefully) confusing. All the NVIDIA-type processors are referred to as "neural processing". Ours (and Loihi) are "neuromorphic processors."

So that is the trouble we have when trying to decipher what's happening through marketing blurbs. So really we can only go on what shares the board members are holding.

Therefore the answer to your question is: You are not allowed to know.
Their potential customer base will work it out soon enough. There are enough seed BRN customers that will have products on the market in 3-7 years for the others to start playing follow the leader.

The reason BRN is at 16c is because it was being priced as a multi-IP company with growing revenue. Then the market gradually realised BRN customers were not imminently getting products into the market and were 3-7 years away from decent market penetration, and re-rated BRN to being an R&D company with no commercially significant revenue to speak of. Hence 16c.

Nothing wrong with that unless management led you to believe more should be expected by now.
 
Reactions: 6 users

jtardif999

Regular

Business as Usual? Computing Security in the Age of AI​

Many existing foundational security technologies and standards will be more relevant and important than ever in the age of AI.
By Richard Grisenthwaite, EVP, Chief Architect & Fellow, Arm

As with any technology revolution, AI presents both opportunities and challenges to people’s digital experiences. Alongside its potentially transformative impact, AI also presents unique security threats, with a vast amount of sensitive data being collected, held, and then used to provide highly personalized technology experiences to the end-user. The focus on security is driving industry and government discussions as we work on solutions to maximize AI’s benefits and minimize any potential societal impact.
Security has always been in Arm's DNA. Addressing security challenges is fundamental to Arm being the technology foundation for AI everywhere. While AI is accelerating technology innovation on an unprecedented scale, Arm's foundational security technologies, deployed in our industry-leading IP and paired with standards, will continue to play a significant role in managing fresh security threats in the ongoing evolution of AI.

Security’s role in AI at the edge​

As AI becomes more ubiquitous, we expect significant growth in AI inference workloads being run at the edge of the network – on the devices themselves. Inference requires less compute power than training because it uses an already trained model, which supports the broader drive for more efficient AI computing at the edge. This provides quicker user experiences with fewer delays, as the processing of AI workloads happens closer to where the data is captured.
From a security perspective, this distribution of AI to the edge brings benefits to businesses and users. A key security benefit is that sensitive user data can be handled and processed on the actual device, rather than being sent to third parties to process. This allows both businesses and consumers to have more control of their data.

There are plenty of great AI-based security use cases currently, but a good example that really showcases the benefits of AI at the edge is smart vision. Intelligent cameras are being developed and deployed in homes, care homes and hospitals as a way of monitoring elderly relatives in case they fall. Being able to process the image and scene recognition on the actual device creates an inherently more secure system, removing the risk that comes from sending sensitive information to a third-party for processing. This also makes it far more acceptable to have these cameras in environments where they are most needed, which is often where significant privacy concerns exist.

Trusting the hardware​

However, businesses need to be able to trust this hardware, especially in the age of AI where they want to protect their expensively generated AI models from attacks. The demand for secure hardware was reflected in the recent PSA Certified 2023 Security report, which showed that 69 percent of technology decision makers are willing to pay a premium to secure devices, with 65 percent specifically looking for security credentials during purchasing decisions. It is fundamental that edge devices are effectively secured against malicious attackers who wish to steal the intellectual property of AI and machine learning (ML) based models.

Processor security​

The move towards AI at the edge is taking place on the CPU, whether it is handling workloads in their entirety or in combination with a co-processor like a GPU or NPU. With a significant amount of AI computing happening on the CPU, security in the age of AI depends on how secure the CPU is. This is why securing AI is very much dependent on the basics of securing compute.

Using AI and ML tools and frameworks when deploying code helps identify security vulnerabilities, but the same technologies can be used by attackers to identify areas to exploit in millions of lines of code. This means that computer architects need to continue their efforts to improve the security of computing systems. This is something Arm has done for years, continuously developing and investing in new security architecture features.

Memory Tagging Extension​

One of these features is Arm’s Memory Tagging Extension (MTE), which is built into the Arm architecture across Arm’s latest v9 CPUs. MTE allows for the dynamic identification of both spatial and temporal memory safety issues, with these accounting for 70 percent of all serious security bugs. These security threats will continue to persist as AI evolves.
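
As a rough mental model (not how MTE is actually programmed; the real mechanism lives in the AArch64 instruction set and the heap allocator), tag-checked memory can be pictured like this:

```python
# Toy model of memory tagging: each allocation gets a small tag, and the
# pointer carries a copy. An access through a pointer whose tag no longer
# matches the memory's tag (use-after-free, overflow into a neighbouring
# allocation) faults. Illustrates the idea behind MTE, not its implementation.
import random

memory_tags = {}                        # address -> current tag

def allocate(addr):
    tag = random.randrange(16)          # MTE uses 4-bit tags
    memory_tags[addr] = tag
    return (addr, tag)                  # "pointer" = address + tag bits

def free(addr):
    # Retag on free so stale pointers no longer match.
    memory_tags[addr] = (memory_tags[addr] + 1) % 16

def load(ptr):
    addr, tag = ptr
    if memory_tags.get(addr) != tag:
        raise MemoryError(f"tag check failed at {addr:#x}")
    return f"value at {addr:#x}"

p = allocate(0x1000)
print(load(p))                          # ok: tags match
free(0x1000)
try:
    load(p)                             # use-after-free
except MemoryError as e:
    print(e)                            # tag check failed at 0x1000
```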

MTE is already being embraced by the mobile market. MediaTek has implemented the technology on its Arm-based Dimensity 9300 system-on-chip (SoC) for flagship smartphones, while Google has enabled MTE in Android 14. Vivo, which is adopting Dimensity 9300 in its new X100 and X100 Pro flagship smartphones, recently announced a memory safety developer program which makes MTE available to its developer community. These commitments to enabling MTE across the mobile ecosystem will deliver better, more secure user experiences and a quicker time-to-market for millions of developers worldwide. It is likely that we will see MTE used beyond mobile in high-performance IoT markets that feature devices using Arm’s A-profile processors.

Arm security technologies​

As part of the Armv9 architecture, we announced the Realm Management Extension, which is the basis of the Arm Confidential Compute Architecture. This helps to secure the data in running virtual machines from attacks arising from a compromised hypervisor. There is a clear need for this technology in data centers that are being used to train advanced ML models, but it will also be important for securing edge computing systems across IoT markets where trained ML models will be deployed.
We have also introduced Pointer Authentication (PAC) and Branch Target Identification (BTI), security technologies built into the Armv9 architecture to provide far stronger protections against code-reuse attacks like Return-Oriented Programming (ROP) and Jump-Oriented Programming (JOP). This is important in the age of AI because attackers will be able to use AI and ML-based tools to develop sophisticated ways of reusing code that already exists. PAC and BTI are being deployed across the A-profile and M-profile Arm architectures that are used in consumer technology and IoT markets.
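
Again as a sketch rather than the real mechanism (hardware PAC uses dedicated instructions, keys managed by the OS, and packs the signature into unused pointer bits), pointer signing can be pictured like this:

```python
# Toy model of pointer authentication: sign a return address with a keyed
# MAC, verify it before use. Shows why a ROP-style attacker who overwrites
# the address without the key fails; not Arm's actual PAC algorithm.
import hmac, hashlib, os

KEY = os.urandom(16)                    # stand-in for a per-process PAC key

def sign(addr):
    mac = hmac.new(KEY, addr.to_bytes(8, "little"), hashlib.sha256).digest()[:8]
    return (addr, mac)

def authenticate(signed):
    addr, mac = signed
    good = hmac.new(KEY, addr.to_bytes(8, "little"), hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(mac, good):
        raise RuntimeError("pointer authentication failed")
    return addr

ret = sign(0x400800)                    # legitimate return address
print(hex(authenticate(ret)))           # ok

forged = (0xDEADBEEF, ret[1])           # overwritten address, old signature
try:
    authenticate(forged)
except RuntimeError as e:
    print(e)                            # pointer authentication failed
```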

Finally, Arm continues to work in partnership with the industry on our security framework and certification scheme, PSA Certified, with a mission to create a baseline of best practice for all connected devices. Built in from the core, this helps to improve the basic security hygiene of systems and fulfil the consumer expectation that devices should be secure as they scale. PSA Certified targets IoT devices built on the A-profile and M-profile Arm architectures.

The future of security with Morello​

Alongside these existing security features, Arm is always looking at new technologies, standards, and collaborations to advance security. Morello is one great example, with this program focusing on new ways to design CPU architecture that make processors more robust and deter certain key security breaches. In collaboration with the University of Cambridge and SRI International, this has led to a prototype technology that, if successful, could be implemented in future hardware.

Accelerating security in the age of AI

AI and ML-based technologies are becoming more pervasive across every corner of computing. This will bring opportunities and challenges for security, especially as more AI workloads move to the edge.
Alongside the fast-paced AI-based innovation, the fundamental principles of security hygiene will still be required. In fact, many Arm foundational security technologies that are already in place today will be more relevant than ever in the age of AI.
This is why we are fully committed to advancing the security of our architecture, IP and processors, and the supporting technology components and standards they generate. This will continue to accelerate as we add more AI and ML capabilities, with Arm being the secure compute platform for the world's AI-based experiences.

The CPU is the prime target for hackers who want to hold companies to ransom over their AI models. Akida 2.0 does not need a CPU; it can't be hacked in that way (as there is no software to hack) - Akida will eventually proliferate; it will be a must-have. AIMO.
 
Reactions: 24 users
Their potential customer base will work it out soon enough. There are enough seed BRN customers that will have products on the market in 3-7 years for the others to start playing follow the leader.

The reason BRN is at 16c is because it was being priced as a multi-IP company with growing revenue. Then the market gradually realised BRN customers were not imminently getting products into the market and were 3-7 years away from decent market penetration, and re-rated BRN to being an R&D company with no commercially significant revenue to speak of. Hence 16c.

Nothing wrong with that unless management led you to believe more should be expected by now.
That's OK mate, patience is required at times. Obviously you're not the patient type, in fact the stirring kind of guy. Why don't you go back to hot crapper?
 
Reactions: 2 users
Yep 7 years before humans become redundant and replaced by machines.

I suspect that some will become redundant as AKIDA 6, 7 & 8 roll out, but at least the collective IQ over at HC will start to increase.

Cannot wait.

Until then we will just have to content ourselves with the current science-fiction level of performance of AKIDA 1.0 and the mind-boggling performance of AKIDA 2.0 with TENNs and ViT.😂🤣😂

My opinion only DYOR
Fact Finder
 
Reactions: 38 users

AARONASX

Holding onto what I've got
Do we really have to wait that long? Do you know how old I am already?

[GIF: It's Been A Long Time Waiting]
We will not have to wait that long for Akida 1.0/2.0 to be seen in the wild; what Peter envisions is GI equal to a human at level 10.0.

As of today, 2.0 is the ape! Somewhat intelligent but needing a lot of training; 10.0 should train itself (I hope)... I don't think we humans are ready for completely general AI at the moment. IMO
 
Reactions: 11 users
Their potential customer base will work it out soon enough. There are enough seed BRN customers that will have products on the market in 3-7 years for the others to start playing follow the leader.

The reason BRN is at 16c is because it was being priced as a multi-IP company with growing revenue. Then the market gradually realised BRN customers were not imminently getting products into the market and were 3-7 years away from decent market penetration, and re-rated BRN to being an R&D company with no commercially significant revenue to speak of. Hence 16c.

Nothing wrong with that unless management led you to believe more should be expected by now.
It must be difficult to run this narrative when even on HC, posters such as Tom & Jerry and Dead Rise disagree with your deliberate misconstruing of what was said by Rob Telson.

It is plain to them that the 3 to 7 years relates to the introduction of artificial general intelligence.

But that is the mark of the true antagonist: never let the facts stop one from taking a contrary position; in fact, it causes maximum annoyance.

Congratulations well done.

My opinion only DYOR
(& fabrication)
Fact Finder
 
Reactions: 45 users
CES onsemi demo video

Along with further insight into the approach to market and general overview.

 
Reactions: 48 users
just posted on twitter🥰😘


BrainChip's "All Things AI" podcast: @nayampally_n sits with Yann Le Faou, Director of Touch MCU products and Machine Learning at Microchip Technology.



Key points:

- The purpose of the demo was to prove both the hardware and software are ready to go right now, even in 8-bit. Customers don't need to develop those aspects; they just need to choose the specific application they want to apply AI to.

- In September 2023, Microchip introduced a complementary ML modelling tool, aiming to simplify the adoption and streamline the process for end customers. There was mention of customers setting deadlines to launch products by the year 2025.

Given that the Microchip ML tool was only released three months ago, any customer or IP deal arising from this partnership may not surface for a fair while yet.

Sounded promising, and the Microchip presenter noted a significant level of interest in the ML tool. Presumably we aren't exclusively partnered to provide the IP, but even a slice of Microchip's extensive customer base, or a few big customers, would be enough.
 
Reactions: 32 users

Damo4

Regular
CES onsemi demo video

Along with further insight into the approach to market and general overview.



Great to see the demo, and especially to hear about the airbag deployment theories.

Weird bloke interviewing Todd though lol
 
Reactions: 17 users

Worker122

Regular
Nice post from ipXchange. (LinkedIn)

Brain-like hardware accelerator provides power-efficient AI inferencing by ignoring irrelevant data!

In ipXchange’s next AI-focussed interview from CES 2024, Guy chats with Todd Vierra about BrainChip ’s Essential AI computation architecture.

Where many ‘AI’ chips are simply a matrix multiplication engine – essentially just doing quick maths rather than truly intelligent thought – Brainchip’s solution attempts to replicate the way the human brain operates. By recognising important data – from a sensor, for example – and ignoring irrelevant data, Brainchip’s solution operates with much less memory and less computing power for higher efficiency than mathematical solutions that process the complete dataset.
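
A crude way to picture that difference (my toy illustration, not a model of Akida's hardware): a dense matrix-multiply engine does the full multiply regardless of content, while an event-based design only propagates the non-zero activations.

```python
# Toy contrast between dense matrix-multiply inference and event-driven
# processing that skips zero activations. Counts multiply operations only;
# purely illustrative, not a model of Akida's internals.
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((1024, 256))

x = rng.standard_normal(1024)
x[rng.random(1024) < 0.9] = 0.0          # 90% of inputs are "irrelevant" (zero)

# Dense engine: multiplies every weight by every input, zeros included.
dense_macs = w.size                      # 1024 * 256 multiply-accumulates

# Event-driven: only rows for non-zero (spiking) inputs contribute.
active = np.flatnonzero(x)
y = x[active] @ w[active]                # same result as x @ w
event_macs = active.size * w.shape[1]

assert np.allclose(y, x @ w)
print(f"dense MACs: {dense_macs}")
print(f"event MACs: {event_macs} ({event_macs / dense_macs:.0%} of dense)")
```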

Todd then shows us an example application with a Time-of-Flight (ToF) sensor performing distance and depth measurements to recognise users in a way that is much harder to trick than a 2D AI imaging system. In this example, Brainchip’s solution takes the data from the sensor – i.e. Nolan and Guy’s faces – and classifies them after quickly training the model to recognise the subtle differences of their features. This is done with respect to their shoulders for even greater certainty of user identity. This type of setup might be used in industrial and in-cabin automotive applications for user verification, safety – such as for user-specific airbag deployment – and security.
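
The "quickly training the model" step sounds like few-shot enrolment. A toy nearest-prototype sketch of that general idea (the names and the embedding step are made up for illustration; this is not Akida's actual learning rule):

```python
# Toy on-device enrolment: store one embedding per enrolled user, then
# classify new samples by nearest prototype. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def embed(depth_frame):
    # Stand-in for a pretrained feature extractor over ToF depth data.
    return depth_frame.reshape(-1)[:32]

prototypes = {}                          # user name -> enrolled embedding

def enrol(name, frames):
    prototypes[name] = np.mean([embed(f) for f in frames], axis=0)

def identify(frame):
    e = embed(frame)
    return min(prototypes, key=lambda n: np.linalg.norm(prototypes[n] - e))

# Enrol two users from a handful of frames each ("quick training").
enrol("Nolan", rng.normal(0.0, 1.0, size=(3, 8, 8)))
enrol("Guy", rng.normal(2.0, 1.0, size=(3, 8, 8)))

print(identify(rng.normal(2.0, 1.0, size=(8, 8))))   # -> "Guy"
```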

Brainchip’s solution works as a neuromorphic – i.e. brain-like – AI hardware accelerator in conjunction with a host processor. Brainchip also provides the software and IP solutions to put this architecture into fully integrated chipsets, but with development systems preloaded with standard demo use cases, design engineers can easily get started using this technology for low-power edge inferencing.

Learn about Brainchip’s Akida AKD1000 SoC by heading to the ipXchange website, where you can apply to evaluate this technology for use in a commercial project: https://lnkd.in/eFRn2nnf
 
Reactions: 33 users

Taproot

Regular
Great to see the demo, and especially to hear about the airbag deployment theories.

Weird bloke interviewing Todd though lol
The VVDN / BrainChip Edge Box uses an NXP processor.
VVDN + NXP have been long-time partners.
Qualcomm tried to take over NXP a few years back.
 
Reactions: 34 users

wilzy123

Founding Member
Reactions: 14 users

IiTrader

Emerged
Has BrainChip ever sold any "Akida AKD1000 AI Hardware" since its launch?
 
Reactions: 1 user

Sotherbys01

Regular


Nandan Nayampally, CMO at BrainChip, and Yann Le Faou, Director of Touch MCU products and Machine Learning at Microchip Technology.

Discussion focused on the:
  • widespread adoption of AI at CES and its essential role in corporate strategies.
  • market's progression from initial AI adoption to more widespread exploration and implementation.
  • shift from conceptual AI applications to their actual operational use.
  • challenges of simplifying AI access and usage, highlighting the role of developmental platforms.
  • need for cost-effective AI implementation, particularly in consumer electronics, to facilitate broader adoption.


So glad to see the tablecloth ironed and with our name on it...
 
Reactions: 2 users

TECH

Regular
CES onsemi demo video

Along with further insight into the approach to market and general overview.



Todd was pretty much on top of the questions being fired at him, but "we have just launched our VVDN Edge Box and Unigen Box"??

Did I hear that correctly?
 
Reactions: 22 users

Damo4

Regular
Reactions: 21 users

SERA2g

Founding Member
Open Neuromorphic Case Study Using Akida

32 mins…. :-(
I would take that with a grain of salt. "Might not be" coming from a university professor who oversees university projects funded by Intel.

The entire Open Neuromorphic community is super pro-Loihi because it's Intel, and they are always on the lookout for positions or funding from the big companies. The founder recently joined Qualcomm, for example.

Keep listening past your cherry-picked 32-minute section through to around the 40-minute mark. Glowing comments around Akida, the ease of use, the quick setup to convert a CNN to an SNN using MetaTF, boilerplate using Edge Impulse, and the exceptional results.
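
For anyone curious what that CNN-to-SNN setup looks like, a minimal sketch of the MetaTF flow as I recall it from BrainChip's cnn2snn docs (the quantize/convert names and their parameters are from memory, so verify against the current docs before relying on them):

```python
# Sketch of the CNN -> SNN flow using BrainChip's MetaTF cnn2snn package.
# API names (quantize, convert) and parameters are from my recollection of
# the MetaTF docs -- treat as assumptions and check the current docs.
import tensorflow as tf
from cnn2snn import quantize, convert   # pip install cnn2snn

# Any small trained Keras CNN stands in for the real model. Note MetaTF
# only supports certain layer types; the final softmax is typically omitted.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# 1. Quantize weights/activations to the low bit widths Akida expects.
model_q = quantize(model, weight_quantization=4, activ_quantization=4)

# 2. Convert the quantized Keras model to an Akida (spiking) model.
model_akida = convert(model_q)
model_akida.summary()
```

From there, my understanding is the converted model can be simulated on CPU or mapped onto AKD1000 hardware via the akida package.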

There's a section in there where he mentioned they came second in the hackathon, and the part the judges particularly liked was that they presented a $3,500 option as well as a $100 option. They go on to agree Akida can be implemented at a significantly reduced price point compared to other technologies.

They also say results on Akida are "orders of magnitude better" when compared to CPU platforms. There's also a question from one of the listeners around the fact that Akida is digital, so (paraphrasing heavily here) is there any point in having asynchronous or analog chips anymore? I couldn't fully understand the discussion, potentially due to a gap in my technical knowledge but also their English. It seemed to me they were discussing whether Akida would make some chips redundant (a why-bother-with-them type attitude, when Akida is digital and the power consumption results are so exceptional).

The presenter also mentions they didn't see accuracy improvements over existing technology, but they also didn't spend time adjusting the code and the dataset. So, putting accuracy aside, the power consumption results are a very strong argument for Akida's technology.

Strongly recommend people give this video a listen as it is a good indication of feedback from someone in the wild implementing akida and then discussing their findings.

FWIW I'm only about 40 minutes in so there could be more good or bad feedback I have not yet heard.
 
Reactions: 36 users

DK6161

Regular
CES onsemi demo video

Along with further insight into the approach to market and general overview.


It might be better to get someone who can explain the technology in much simpler terms, and with more enthusiasm, at our next demo.
No offence, but this was hard to watch.
 
Reactions: 2 users

DK6161

Regular
Our SP continues to fall. Not a good start to the year; it feels like writing off 2024 already. Come on BRN, improve your sales team and give us something to be proud of!
 
Reactions: 9 users

Rskiff

Regular
It might be better to get someone who can explain the technology in much simpler terms, and with more enthusiasm, at our next demo.
No offence, but this was hard to watch.
Really, I found it very easy to follow. Perhaps this stock isn't for you.
 
Reactions: 23 users