BRN Discussion Ongoing

McHale

Regular
A lot to think about McH,

Note to self:
A. models
B. Mercedes NAOMI4
C. s/w

A. Models are what the NN has to search through.

I'll confine my thoughts to images and speech, but other sensor inputs are treated on the same principles.

Images: Static (photos, drawings); Moving (Video)


Sound: keyword spotting, NLP; other sounds.

Each of these can be divided into several layers of sub-categories with increasing specificity. In a NN, the larger the model, the more power is consumed in making an inference/classification, because the processor must evaluate more weights for every input it classifies.

Thus it makes sense to have specific models for specific tasks. The narrower the task, the smaller the model can be.
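To make that concrete, here's a rough back-of-envelope sketch in Python. The per-MAC energy figure and the model sizes are illustrative assumptions, not measured numbers for Akida or any other chip:

```python
# Back-of-envelope: inference energy scales with the work done per inference.
# ENERGY_PER_MAC_J is an ILLUSTRATIVE assumption (~1 pJ per multiply-accumulate),
# not a measured figure for any particular processor.

ENERGY_PER_MAC_J = 1e-12

def inference_energy_j(macs_per_inference: float) -> float:
    """Rough energy estimate for one inference."""
    return macs_per_inference * ENERGY_PER_MAC_J

general_model_macs = 5e9   # hypothetical broad, general-purpose model
narrow_model_macs = 50e6   # hypothetical narrow, task-specific model

print(f"general model: {inference_energy_j(general_model_macs) * 1e3:.3f} mJ/inference")
print(f"narrow model:  {inference_energy_j(narrow_model_macs) * 1e3:.4f} mJ/inference")
```

On those toy numbers the narrow model uses one hundredth of the energy per inference, which is the whole argument for task-specific models at the edge.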

For example, with image classification in an ADAS/AV, images of astronomy or scuba diving are irrelevant. So ADAS models are compiled from millions of images captured by vehicle-mounted cameras and videos.

Akida excels at classifying static images, and can do this at many frames per second. However, Akida 1 relied on the associated CPU running software to process the classified images to determine an object's speed and direction. That's the genius of TENNs: it can perform the speed analysis in silicon, or in software, far more efficiently than conventional software.
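As an illustration of that conventional software step (not BrainChip's actual pipeline), estimating an object's speed and direction from its position in successive classified frames might look like this minimal sketch; the frame rate and positions are made-up values:

```python
import numpy as np

FRAME_RATE_HZ = 30.0  # assumed camera frame rate

# Hypothetical bounding-box centres (pixels) of one object over three frames
centres = np.array([[100.0, 200.0], [104.0, 198.0], [108.5, 196.2]])

# Displacement between consecutive frames, scaled to pixels per second
velocities = np.diff(centres, axis=0) * FRAME_RATE_HZ

speed = np.linalg.norm(velocities[-1])                        # px/s
heading = np.degrees(np.arctan2(velocities[-1][1], velocities[-1][0]))

print(f"speed ≈ {speed:.1f} px/s, heading ≈ {heading:.1f}°")
```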

I prefer to talk about images/video because natural language processing is something I struggle to comprehend, but apparently TENNs makes this a cakewalk too.

OpenAI tries to have everything in its model, but that burns a massive amount of energy for a single inquiry - a bit like biting off more than it can chew.

So now we have RAG (retrieval-augmented generation), where subject-specific material is retrieved and fed to the model at query time, depending on what the NN processor is intended to do.
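As a toy illustration of the RAG idea (the two-document corpus and word-overlap scoring below are stand-ins for a real document store and embedding search):

```python
# RAG in miniature: retrieve task-relevant text at query time and hand it
# to the model alongside the question, instead of baking all knowledge
# into the model's weights.

corpus = {
    "adas": "Vehicle-mounted cameras feed frames to the perception stack.",
    "astronomy": "Telescopes capture long-exposure images of deep-sky objects.",
}

def retrieve(query: str) -> str:
    """Toy retrieval: return the document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(corpus.values(),
               key=lambda doc: len(q_words & set(doc.lower().split())))

query = "How do cameras on a vehicle detect pedestrians?"
prompt = f"Context: {retrieve(query)}\n\nQuestion: {query}"
print(prompt)  # this augmented prompt is what gets sent to the model
```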

B. NAOMI4 - Yes, this is a German government-funded research project and will not produce a commercial outcome any time soon.

C. H/W v S/W

Valeo does not have Akida silicon in its SCALA 3. It uses software to process the lidar sensor signals. Because we've been working with them for several years in a JD (joint development), I'm hopeful that the software will include Akida 2/TENNs simulation software. Sean did mention that we now have an algorithm product line.

The rationale for this was explained in the Derek de Bono/Valeo podcast posted yesterday: software allows for continual upgrading. He also mentioned that provision for some H/W upgrades could be accommodated. Given TENNs' young age, it will have developed significantly in the last couple of years, so it could not be set in silicon at this early stage, although Anil did announce some now-deferred preparations for taping out some months ago.

Again, I am hopeful that Akida 2/TENNs will be included in the software of both Valeo and Mercedes SDVs (and in other EAP participants' products) because it produces real-time results at much lower power consumption.

Then there's PICO ... the dormant watchdog ...

Hi Dio, thanks for your response to my post from last Thursday, but I must admit to not having really put what I was trying to say in a clear or properly worded fashion.

When I was talking about models I was really meaning to speak to the different programming/coding languages that can be (need to be) used to interface with the various versions/iterations of Akida, for instance Python, PyTorch, Keras and a number of others I have seen mentioned.

So in my post I used "models" in an incorrect context, because although I do not understand a good deal of the pertinent technical niceties, I do, I believe, know that a model is like a library that can be used for certain applications/uses of Akida, which you also described.

Going back to the coding languages: why do the various iterations of Akida require the use of different coding languages, if that statement is in fact correct? Regardless, why are different coding languages required? I do know that several different languages are used.
 
  • Like
  • Love
Reactions: 8 users

Hi @McHale. I have no expertise in this field, but I thought one of the best things I learnt in the last year or so was that via Edge Impulse we have access to Nvidia's TAO library (which I expect would be bountiful), as well as our own smaller model library. And then of course it can be "Trained" for specific use cases using "Live" data.

This is one of my favourite photos because at some point there will be use cases at the far edge where the others can't go; that's where hopefully we can clean up and get a large market share. Even more so with Pico now.


[photo attachment]


Not sure if that’s answered your question or gone off on a tangent?
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Tothemoon24

Top 20
Apologies if already posted. Interesting listen via the YouTube link below.
[embedded video]
The automotive technology landscape is at a pivotal juncture, with its future lying in dynamic, adaptable systems that allow continuous improvement of the driving experience through software.

Today, a modern car contains up to 650 million lines of code; for comparison, there are 15 million lines of code in a Boeing 737. This number will only grow, and this transformation will revolutionize how drivers interact with their vehicles and redefine the relationship between vehicle manufacturers and owners.

What is a software-defined vehicle?

A software-defined vehicle (SDV) is a cohesive blend of hardware and software that enables a smoother interaction between a vehicle’s internal systems and the outside world. SDVs decouple network functions from proprietary hardware, allowing for parallel physical and digital development. This shift enables software to drive differentiation and commercialize vehicle functionalities, maximizing the lifecycle and value of vehicles.


As the automotive industry advances towards SDVs, the roles of vehicle manufacturers, software developers, and other stakeholders will transform. Their collaboration will lead to the creation of smarter, more efficient vehicles that cater to the evolving needs of drivers.

The integration of automotive software with advanced connectivity, artificial intelligence (AI), and user interfaces will further enhance SDV capabilities, ushering in a new era of safer, more sustainable, and enjoyable vehicles.

Top 7 benefits of software-defined vehicles

  1. Preventative maintenance: SDVs leverage real-time performance insights to enable predictive maintenance, reducing unexpected breakdowns and extending vehicle lifespan. This proactive approach can cut maintenance costs by 10 to 20 percent for the end user.
  2. Reduced manufacturing costs: In SDVs, various vehicle functions are consolidated into fewer chips, eliminating the need for multiple, separately sourced chips and silicon. SDVs are also expected to create over USD 650 billion in value for the automotive industry by 2030, attributed to reduced software R&D and development costs.
  3. Enhanced safety: Advanced Driver Assistance Systems (ADAS) are estimated to prevent up to 44 million crashes and 300,000 fatalities in the U.S. by 2050. The flexibility delivered by SDV technology allows ADAS to evolve even further. These advancements not only enhance individual vehicles but also reduce road accidents and fatalities, representing a major milestone on the path to fully autonomous driving.
  4. Over-the-Air (OTA) updates: OTA updates allow vehicle manufacturers to deliver new features, fix bugs, and enhance security remotely. This keeps vehicles up to date and functional, with some manufacturers providing monthly updates.
  5. Comfort and experience: Connected onboard infotainment systems in SDVs offer personalized music, video streaming, navigation, and climate control, adding to a state-of-the-art driving experience and opening the door to highly adaptable, personalized experiences that address shifting consumer preferences and demands.
  6. Smartphone integration: Seamless smartphone integration allows remote start, lock/unlock, and vehicle status checks. Some applications also provide real-time traffic updates and personalized navigation.
  7. Real-time connectivity: SDVs support vehicle-to-everything (V2X) communication, enabling real-time data exchange, navigation assistance, and remote diagnostics, which improves the efficiency and safety of driving.

How is Arm leading the software-defined vehicle revolution?

Arm has been at the forefront of the SDV revolution, offering essential technologies for the automotive applications needed to make SDVs a reality. Our Automotive Enhanced (AE) IP portfolio, which includes Cortex-A, Cortex-M, and Cortex-R CPUs, Mali GPUs, and ISPs with built-in functional safety features, forms the backbone of many automotive computing solutions for SDVs.

Key innovations and initiatives by Arm for SDVs include:

  • High-performance processors: The latest Armv9-based AE IP processors enhance security and performance in SDVs, featuring advanced security tools like Branch Target Identification (BTI), Pointer Authentication (PAC), and Memory Tagging Extension (MTE).
  • Scalable platforms: Arm’s scalable platforms support a wide range of automotive applications, from ADAS to in-vehicle infotainment (IVI), enabling seamless integration and upgradability.
  • Ecosystem collaborations and initiatives: Arm collaborates with numerous automotive manufacturers and technology companies to advance SDV technologies. These partnerships are crucial for accelerating the development and deployment of SDVs.
  • Comprehensive solutions: Arm is working with a wide range of partners, including Amazon Web Services (AWS) and Tata Technologies, to deliver comprehensive solutions for SDVs. These collaborations focus on integrating advanced technologies and ensuring that SDVs meet the highest standards of performance, safety, and security.
  • Virtual platforms: Arm and our partners including Cadence, Corellium and Siemens have developed virtual platforms that allow for the early evaluation and development of software without needing physical silicon. This significantly reduces the time-to-market for new automotive technologies.
  • SOAFEE: The Scalable Open Architecture for Embedded Edge (SOAFEE) is a collaborative effort with leaders across the automotive supply chain to create a standardized software architecture for SDVs. This initiative aims to streamline development processes and enhance interoperability across the industry.

The future of automotive is being built on Arm

The shift towards SDVs represents a transformative leap in the automotive industry. With advanced software capabilities and robust hardware, SDVs can significantly enhance safety, reduce costs, and provide a more personalized and connected driving experience.

Arm’s technological marvels, initiatives, and collaborations are paving the way for a new era of automotive innovation, where vehicles are not just modes of transport but sophisticated, software-driven platforms that offer enhanced user experiences and capabilities.
 

  • Like
  • Fire
  • Thinking
Reactions: 26 users

Diogenese

Top 20
Hi McH,

You've lifted the lid on a can of worms.

This is not within my pay grade, so I'll do a ChatGPT impression.

There are many programming languages which have evolved over time, an early example being Fortran, originally from the 1950s, with its penchant for scientific number crunching.

This article provides a potted history of Fortran’s fortunes:

Fortran: https://www.zdnet.com/article/this-...ain-but-its-future-is-still-far-from-certain/

Of course there are newer languages designed with the hindsight of experience and availability of new processor capabilities and with particular strengths:

C++ for GUIs and games

Java for web applications

Python and TensorFlow for machine learning.

Nvidia developed CUDA for its games GPUs, and by a stroke of fortune, these turned out to be adaptable for NNs, if not parsimonious on power usage.

Programmers have their favourite programming languages and know their strengths and weaknesses. It takes time and effort for a programmer to develop expertise in a new language. It seems that Brainchip decided to make Akida compatible with the systems most popular in the AI/ML field, such as Python and TensorFlow.
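For a taste of what that compatibility means in practice, here is a minimal Keras (Python/TensorFlow) model of the kind such a workflow starts from. As I understand it, BrainChip's MetaTF tooling converts quantized Keras models like this for Akida, but that conversion step is omitted here:

```python
import tensorflow as tf

# A small image classifier defined in Keras - the common currency of the
# Python/TensorFlow ML ecosystem mentioned above.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),          # small RGB input
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # feature extraction
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```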
 
  • Like
  • Love
  • Fire
Reactions: 25 users
CROM! ✊


(some gore and blood etc..)

We must crush the competition!

See them driven before us!

And hear the lamentations of their women!


IP announcement this week for certain! 👍
 
  • Haha
  • Fire
  • Like
Reactions: 14 users

IloveLamp

Top 20


 
  • Like
  • Fire
  • Thinking
Reactions: 22 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 6 users


View attachment 71543
Hopefully one of their 43 followers is in need of our tech and has some weight behind them! 😛

Every "little bit" of exposure helps, I guess..
 
  • Haha
  • Like
  • Thinking
Reactions: 8 users
This was from a few months ago on the Optimus gig.

Only posting it as I hadn't seen this guy before and a quick search didn't show it as posted previously.

It was Kyndryl that caught my eye. I'd never heard of them, but their website shows they're pretty well connected with their partners. It was good to see they're aware of us and the achievement... even better if they have ever spoken with us.

Bill Genovese CISSP ITIL
CIO Advisory Partner | CTO | Technology Strategy | Corporate Strategy Innovation Selection Committee Member | AI & ML | Senior/Principal Quantum Computing Team Leader
Kyndryl · CQF Institute · St Augustine, Florida, United States
27K followers · 500+ connections
7mo

BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY) has achieved a remarkable milestone in the field of space technology. The company, a leader in neuromorphic AI technology, has successfully launched its Akida AI system into low earth orbit on the Optimus-1 spacecraft, thanks to the Space Machines Company. This event marks a significant advancement in the use of AI for space technology applications.

The Akida technology is integrated into the ANT61 Brain computer, which operates as the main control unit for robots designed for the repair and maintenance of space vehicles. The space environment imposes unique challenges, including extreme energy, power, and thermal constraints. However, Akida's event-based, neuromorphic architecture addresses these issues by providing high performance with minimal power consumption.

One of the key features of Akida is its on-chip learning capability, which allows the ANT61 Brain to adapt to the changing conditions in space. This feature is crucial for the autonomy of space operations, where environmental variables are in constant flux.

Congratulations to BrainChip Holdings Ltd on this exciting achievement! #neuromorphicAI #space #technology #artificialintelligence
BrainChip Boosts Space Heritage with Launch of Akida into Low Earth Orbit (spacedaily.com)



Kyndryl info.


As the world's largest provider of IT infrastructure services, Kyndryl is committed to the health and continuous improvement of the vital systems at the heart of the digital economy. With our partners and thousands of customers worldwide, we co-create solutions to help enterprises reach their peak digital performance.

From a blog this year on their website in relation to automotive telematics: they consider neuromorphic computing to be among the technologies that will help revolutionise the field. The item below is #4 in their list, not #1 (formatting).



4 steps to prepare your company for telematics at scale

  4. Stay abreast of advances in technology, including analog, quantum and neuromorphic computing. These technologies, coupled with AI, will revolutionize how companies do business, so you'll need to adjust your telematics strategy accordingly.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

MegaportX

Regular
Will there be a report today ?
 
  • Like
  • Thinking
Reactions: 5 users

Tezza

Regular

MegaportX

Regular
Why today?
Tuesday last year.

BRN | Appendix 4C and Quarterly Activities Report | PRICE SENSITIVE | 24/10/23
 
  • Like
  • Thinking
Reactions: 7 users

Xray1

Regular
  • Like
  • Fire
Reactions: 10 users

Tezza

Regular
  • Like
  • Haha
Reactions: 6 users

MrNick

Regular


View attachment 71543
Akido eh… Japanese martial arts really are the future…🙄
 
  • Like
  • Haha
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
A must watch video on the link below.

BrainChip's technology would be the perfect fit for Qualcomm IMO. Yes, they have their own NPU, but it doesn't have TENNs or the ultra-low power consumption that Akida Pico offers.

As I've mentioned in the past, it would be great to have our Chief Technology Officer, Dr Tony Lewis, explain in detail how our technology could complement/improve Qualcomm's. Tony is a former Senior Director of Technology at Qualcomm and the creator of Qualcomm's Zeroth neural processing unit and its software API, which was subsequently incorporated into the Snapdragon processor. So who better to drill down on this topic?

I know, NDAs, blah, blah, blah... Sigh...

 
  • Like
  • Love
  • Fire
Reactions: 27 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
About 8 months ago, Tony Lewis said we were the first (to his knowledge) to run tiny language models at the edge with very low power.

However, it would appear from the article below that Qualcomm has pipped us to the post by integrating Personal AI's SLMs at the edge on their Snapdragon NPU.

I don't know what sort of power consumption is involved, although it does say all-day battery life.











Personal AI to Bring Enterprise-Grade AI to the NPU on PCs powered by Snapdragon X Series Processors

October 21, 2024 16:30 ET | Source: Personal AI


SAN DIEGO, Oct. 21, 2024 (GLOBE NEWSWIRE) --

[Image: co-branded marketing asset showing an HP laptop with Personal AI running on edge]


Highlights:
  • Personal AI announces the integration of Personal AI's Small Language Models (SLMs) into PCs powered by Snapdragon X Series processors.
  • This collaboration aims to enhance on-device AI, prioritizing accuracy, privacy, and security for critical enterprise users.
Personal AI announced a collaboration to bring AI directly to edge devices through Snapdragon® X Series platforms. This collaboration marks a step towards ubiquitous AI, combining Personal AI's expertise in private SLMs with Qualcomm Technologies' chipsets.
Showcased during Snapdragon Summit, Personal AI's SLMs were demonstrated on HP EliteBook Ultra Next-Gen AI PCs powered by Snapdragon X Series processors. Running on the Snapdragon NPU delivers enhanced privacy, security, and real-time AI processing capabilities to a wide range of devices, with a focus on enterprise-grade laptops.
"Our collaboration with Personal AI is a leap forward in on-device AI capabilities,” stated Rami Husseini, Director, Product Management at Qualcomm Technologies, Inc. “By utilizing Personal AI's technology on devices that contain our Snapdragon X Series platforms, we're enabling a new era of security-rich, privacy-focused, and powerful AI experiences directly on Windows PCs. Showcasing our commitment to driving innovation in AI and empowering users with cutting-edge technology that is designed to respect their privacy."
Suman Kanuganti, CEO and Co-founder of Personal AI, added, "Working with Qualcomm and HP allows us to bring our vision of highly efficient and scalable AI Personas to millions of users worldwide. By leveraging the NPU capabilities of Snapdragon, we're able to run our models directly on-device, ensuring that sensitive data never leaves the user's hardware. This collaboration underscores our mission to provide AI that enhances productivity, communication, and collaboration, while maintaining the highest standards of data protection and privacy."
Loretta Li-Sevilla, Global Head of Future of Work at HP, commented, "At HP, we're committed to delivering technology that empowers businesses to work smarter and more securely. HP next-generation AI PCs featuring Snapdragon X Series processors enable new compute capabilities. Personal AI Personas can now run offline for highly private inference use, offering our customers unparalleled AI performance and the all-day battery life they need to stay productive in today's fast-paced business environment."
The integration of Personal AI's technology into PCs powered by Snapdragon X Series Processors is already generating excitement among early adopters, who report significant improvements in productivity, accuracy, and data security.
The Chief Strategy Officer of a Personal AI customer shared his experience: "As a global group of iconic brands, solutions like Personal AI have the potential to completely transform how we operate. In today's fast-paced environment, we deal with vast amounts of data from every corner of the globe—market trends, consumer behaviors, competitive movements, pricing fluctuations. Traditional methods of analyzing this information simply can’t keep up with the speed at which we need to act. We can easily load our available public and enterprise information into the Personal AI engine, where we can interact with it like talking to a smart analyst about our meeting conclusions, action items, and capital markets reactions to recent industry movements. I see this as the beginning of adopting AI in a much easier way in our industry.”
About Personal AI
Personal AI develops a horizontal AI training and collaboration platform, focused on private, Small Language Models (SLMs) that multiply the capabilities of enterprise teams. Their technology enables organizations to build networks of AI Personas, each representing key roles within companies. These AI Personas are exclusively trained on proprietary data, ensuring unparalleled accuracy, transparency, and privacy. For more information, please visit https://personal.ai
Contact: jonathan.bikoff@personal.ai

 
  • Like
  • Fire
  • Wow
Reactions: 21 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers,

One for Diogenese to unravel, and possibly enlighten us.

Thank you in advance, Diogenese.

Nanoveu, ASX: NVU


[screenshots attached]
Regards,
Esq.
 

  • Thinking
  • Like
  • Love
Reactions: 12 users