BRN Discussion Ongoing

McHale

Regular
Totally agree with your post ........... what some posters here are failing to realise is that at the last AGM there were ~174 million votes cast against the remuneration report being adopted, thus resulting in the "Strike 1" outcome ................ As all here should well and truly appreciate, outvoting or negating that previous ~174 million vote is, IMO, going to be a mammoth task, both for us shareholders and for the Co.
Does anyone here have any idea(s) on who actually cast those 174 million votes?

Was it retail holders? If it was, was it a group of retail holders networked on a platform or forum?

Was it an institutional holder (or holders)? What might their motive have been?

Or was it a mixture of retail and institutional? The bottom line for me is, and has been, that unless we know who the holders were that voted down the remuneration package, all that can be done is speculate, with no clear understanding of what or who motivated that vote.

Now on another matter, and I am working on a distant memory here of a previous AGM where there was a large vote against PvdM. I can't remember what it was in relation to - perhaps his (re)nomination to the Board - but I do have a recollection of a large vote against PvdM at a previous AGM. There was talk about it over at HC at the time; my recollection is that after some discussion it remained a mystery.
 
  • Like
  • Fire
  • Thinking
Reactions: 11 users

FJ-215

Regular
Hi McH,

There was a time under the ebullient Lou "Hockey Sticks" Dinardo when reference was made to "a major North American vehicle maker" which drew a "please explain" from ASX and caused a possible breach of an NDA when we were forced to disclose Ford as an EAP.

Not only that, but it was a blot on our escutcheon, which has the potential to turn around and bite us on the hindquarters when we seek NYSE registration - our first yellow card.

Because the ASX, being gun-shy of tech companies since getting their fingers burnt in the dotcom bubble, is red hot on tech startups. (Don't you just love a metaphor salad?)

The company is well aware of the disgruntlement of the roiling mass of commentators on TSEx in relation to announcements, but they have chosen to be like Caesar's wife in this regard.

Being a bit of a share market ingenue back in 2018 when I first came across BRN, my first purchase was in FOMO mode because I read that PvdM was about to make a presentation to a gaggle of geeks in the US in the following week, and I expected that would launch us to the moon and beyond.* Not a sausage ...

So I think that, as far as management's concern for shareholders goes, they have their eye on the main prize, NYSE registration, and doing anything to pump the ASX price would earn a second yellow card.

*(I still expect that every day now.)
FJ-215 said: (quoted in full above)
WHAT!!!!??????????????????
 
  • Thinking
Reactions: 1 users

The Pope

Regular
I received an email from LinkedIn today on a Mercedes update. Refer below.

Anyone get good or bad vibes that BRN is not in this current vehicle update, as its AI refers to chip-to-cloud?

NEWSLETTER ON LINKEDIN​


Applications and Opportunities for AI @ Mercedes-Benz​

You can hardly read an article about technology these days without mention of artificial intelligence (AI). For example, you can read that Mercedes-Benz AG early adopted guiding principles for AI that include information on responsible use, privacy, security and reliability. We have been integrating various AI solutions throughout the company for several years now, focusing on continuous improvement.
Our goal is to build the most desirable cars and we believe intelligent software enables the industry-leading features that our customers expect. While we are the architects of our own software, there are times when we choose to partner with industry leaders to gain a competitive advantage.
Here are several ways we use AI to help us create the best cars in the world.

How MB.OS uses AI to enhance our user experience​

The chip-to-cloud Mercedes-Benz Operating System (MB.OS) allows us to decouple our software releases from vehicle hardware, which makes our software updates an order of magnitude faster than before. Thanks to integration with the AI-enhanced Mercedes-Benz Intelligent Cloud (MIC), our vehicles will always remain up to date. The ability to roll out software optimisations globally in a matter of hours also significantly increases our vehicles' overall safety and quality.
The MIC uses AI to assist in vehicle development, production and fleet operations, and manages our over-the-air (OTA) software update process. Given the consent of our customers, the MIC enables us to monitor and analyse the current software status of any vehicle in the network using predictive failure analysis. This means our system detects potential errors using an intelligent algorithm to ensure software quality from end to end.
For instance, our MBUX Zero Layer update offers a more personalised and intuitive user experience, with the most relevant functions always in view. Media, telephony and navigation are the most important functions for most customers, so they always remain visible. We use AI to automatically offer dozens of additional functions relevant to each customer. These could include comfort programmes, status from connected smart home devices, or a suggestion to call your mother on her birthday.
A preview of the upcoming MB.OS recently launched in the new E-Class demonstrated its ability to create user-defined "routines" that link together a series of the customer's favourite vehicle functions. We are currently developing AI "routines", meaning the system will be able to learn how vehicle occupants use different features so it can automate those functions for a more personalised experience. The new E‑Class also has the onboard technology to park itself with our INTELLIGENT PARK PILOT feature.
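For the tinkerers here, the "offer the most relevant functions" idea boils down to context-based ranking by usage frequency. Below is a minimal Python sketch of that idea; the context key, function names and log format are all invented for illustration, and none of this is Mercedes' actual MBUX implementation.

```python
from collections import Counter

# Toy usage log: (context, function used). In a real system the context would
# come from time, location, weather, occupancy, etc. (all hypothetical here).
usage_log = [
    ("winter_morning", "seat_heating"), ("winter_morning", "navigate_to_work"),
    ("winter_morning", "seat_heating"), ("friday_evening", "ambient_light"),
    ("friday_evening", "massage_programme"), ("winter_morning", "seat_heating"),
]

def suggest_functions(context: str, top_k: int = 2) -> list:
    """Rank optional functions by how often this driver used them in context."""
    counts = Counter(fn for ctx, fn in usage_log if ctx == context)
    return [fn for fn, _ in counts.most_common(top_k)]

print(suggest_functions("winter_morning"))  # ['seat_heating', 'navigate_to_work']
```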

Strong partners for AI-driven applications​

While we define and control our own hardware and software architecture, the integration of industry-leading technology provides our customers access to best-in-class services, content and features. Our key partners include NVIDIA, which contributes AI and automated driving systems expertise; Google, for our next-generation navigation experience and cloud services; plus Microsoft and OpenAI, as featured in our recent ChatGPT beta programme, which enhanced our "Hey Mercedes" MBUX voice assistant.

NVIDIA​

We are currently building an enhanced automated driving system with NVIDIA, which will be fully updatable and makes the most of NVIDIA’s SoC. It will be capable of conducting 254 trillion operations per second to process driving automation data in real time. We are also harnessing NVIDIA’s competence in AI for our digital twins, which simulate production runs in software, digitalise manufacturing and detect potential errors earlier. AI enables cost-saving opportunities over conventional programmable logic controllers (PLCs) by enormously reducing the required processing time. The successful application of AI-controlled process engineering is an important step in the digitalisation of our vehicle production.

Google​

We’re using Google’s open infrastructure, AI, machine learning, cloud, maps and data capabilities to securely innovate and scale production, accelerate our sustainability efforts, analyse fleet data, advance autonomous driving and create an enhanced customer experience.
Mercedes-Benz builds its own navigation experience enhanced by the Google Maps Platform, offering detailed location information, predictive and real-time traffic data and enhanced routing directly embedded into MB.OS. This year, we enabled access to Place Details by Google over-the-air for all MBUX customers, providing detailed information about more than 200 million locations worldwide. Our latest release was in Japan. Our integrated Google Maps data can even be used for assisted-driving features, such as automatic speed adjustments at intersections. Our partnership goes beyond navigation as we recently introduced YouTube in our current vehicle fleet.

Microsoft​

To make vehicle production more efficient, resilient and sustainable, we have connected our MO360 Data Platform to the Microsoft Cloud. This digital-production ecosystem helps identify potential supply chain bottlenecks and enables dynamic prioritisation of production resources. Microsoft Azure provides the flexibility and cloud computing power to run AI and analytics at global scale, while addressing cybersecurity and compliance standards at a regional level.
Our software engineers are committed to open collaboration and work with a global community of internal and external developers using Free and Open Source Software (FOSS), including GitHub to improve both the quality of the software and the speed of delivery. They use Azure Data Lake, Databricks and Purview to process and govern huge amounts of data and run AI and analytics and use Azure DevOps for software deployment.
Earlier this year, we integrated ChatGPT through Azure OpenAI Service within our MO360 digital production ecosystem to optimise production processes and evaluate our processes and data in real-time. We leveraged the enterprise-grade capabilities of Microsoft’s cloud using intelligent tools in the MO360 digital-production ecosystem to accelerate error identification, analysis and quality management.
We also launched a ChatGPT beta programme in the U.S. for more than 900,000 vehicles. That complements the existing Hey Mercedes voice control by leveraging a large-language model to massively improve natural language understanding, enhancing our validated data with the more natural dialogue format of ChatGPT. Through Azure OpenAI Service, we’re tapping into large-scale generative AI models, enterprise-grade security, privacy and reliability, while retaining complete control over the IT processes. The collected voice command data is stored in the Mercedes-Benz Intelligent Cloud, where it is anonymised and analysed.

What’s next?​

The future of AI presents opportunities in everything from software development, vehicle production and environmental responsibility to enhanced user experiences, including automated driving, navigation, comfort features and voice communications. We are at the forefront of this technology shift and bringing it to your next Mercedes-Benz.

What do you think the next big AI breakthrough will be? Let me know in the comments.
 
  • Like
  • Thinking
  • Fire
Reactions: 11 users

Vladsblood

Regular
FJ-215 said: (quoted in full above)
Nasdaq$$$$$ Listing YES !! Vlad
 
  • Fire
  • Like
Reactions: 2 users
Going back to the Cyber-Neuro RT project from the other day.

I noticed that, while most authors were from Quantum Ventura, Penn State Uni was also involved.

Worth remembering how overlaps between groups, unis, industry etc. occur in the background. It would be interesting to see whether any of these newer papers, when available, also involved Akida, or just Loihi 2 or something else as a research chip.


Cyber-Neuro RT: Real-time Neuromorphic Cybersecurity
Wyler Zahm (a), Tyler Stern (a), Malyaban Bal (b), Abhronil Sengupta (b), Aswin Jose (a), Suhas Chelian (a), Srini Vasan (a)
(a) Quantum Ventura, 1 South Market Street, Suite 1715, San Jose, CA 95113, USA
(b) Penn State University, University Park, PA 16802, USA
Available online 26 November 2022, Version of Record 26 November 2022.

Wyler Zahm has also authored another paper that was recently presented at the ICONS conference.

Wyler Zahm
B.S.E. Computer Engineering, B.S.E. Data Science
Quantum Ventura Inc. University of Michigan College of Engineering
Denver Metropolitan Area
534 followers 500+ connections

LinkedIn HERE

Neuromorphic Low Power Cybersecurity Attack Detection
Proceedings of the International Conference on Neuromorphic Systems 2023 (Journal Publication Pending)

Another of the authors, Prof. Abhronil Sengupta, also had a recent paper presented at the same conference, co-authored with Malyaban Bal, another Cyber-Neuro RT author.

LinkedIn HERE

Work on semi-supervised neuromorphic cybersecurity to be presented at ICONS 2023



Also nice to see the QV Principal Scientist happy to post who he's working with.

HERE


Suhas Chelian
Lead in machine learning, data science
Quantum Ventura Inc. Boston University
San Francisco Bay Area
2K followers 500+ connections

About​

Team lead in machine learning, data science.

I have captured and executed projects for DARPA, NASA, and several international clients (GM, Toyota, Fujitsu).

* 12 projects transitioned--$10M revenue captured (31+ publications, 32+ patents)
* 9 awards including those from NASA, GM, and HRL Laboratories
* I also have startup, contracting and consulting experience
* US citizen (authorized to work)


Last updated: Aug 30, 2023

Experience​

  • Quantum Ventura Inc.

    Principal Scientist​

    Quantum Ventura Inc.

    Jun 2020 - Present · 3 years 6 months
    San Jose, California, United States
    • Team lead in deep learning and computer vision projects for several government customers (DOD, DOE, etc.).
    • Managing 6+ team members and subcontractors.

    Sample projects:
    o Missile detection using infrared imagery and bio-inspired computing (technologies: U-Net, Siamese net, Keras).
    o Contraband detection with hyperspectral imaging and neuromorphic computing (technologies: spectral spatial ResNet, BrainChip, Keras).
    o UAV detection using multispectral/hyperspectral imaging (technologies: spectral spatial ResNet, Keras).
    o Verification and validation of deep learning systems; time series prediction, adversarial attacks and hardening (technologies: LSTM, fast gradient sign method, Keras).
    o Network cybersecurity with deep learning using GPUs and neuromorphic computing (technologies: BrainChip, Intel Loihi, Keras).



  • Didn't realise DOE moved to Phase II... wonder if that is part of the work with Akida?
  • Quantum Ventura Inc. on LinkedIn (2mo):

    We're proud to share a recent paper from our group on machine learning for cybersecurity, done under a DOE Phase 2 contract. Using semi-supervised spiking networks, detecting malicious network traffic was shown with good accuracy. Please contact Aaron Goldberg for more details! Thank you Malyaban Bal, Max (Joji) Nishibuchi, Suhas Chelian, srini vasan and Abhronil Sengupta for the awesome work!
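For anyone wondering what "semi-supervised spiking networks" for traffic detection can look like in the small, here's a toy Python sketch: rate-code flow features into spikes, push them through a fixed random leaky integrate-and-fire (LIF) layer, fit a spike-count signature on unlabeled traffic assumed to be mostly benign, and flag outliers. Every detail (features, scaling, thresholds, LIF parameters) is an assumption for illustration; this is the general idea only, not Quantum Ventura's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_hid = 50, 8, 16                 # timesteps, flow features, LIF neurons
W = rng.normal(0.0, 0.5, (n_in, n_hid))    # fixed random input weights

def lif_signature(x):
    """Spike-count signature of one feature vector x (values scaled to [0, 1])."""
    spikes_in = rng.random((T, n_in)) < x  # Poisson-ish rate coding
    v = np.zeros(n_hid)
    counts = np.zeros(n_hid)
    for t in range(T):
        v = 0.9 * v + spikes_in[t] @ W     # leaky integration
        fired = v >= 1.0                   # fire ...
        counts += fired
        v[fired] = 0.0                     # ... and reset
    return counts / T

# "Semi-supervised" in the loosest sense: fit a centroid and a distance
# threshold on unlabeled traffic assumed to be mostly benign.
benign = rng.random((200, n_in)) * 0.4
sigs = np.array([lif_signature(x) for x in benign])
centroid = sigs.mean(axis=0)
dists = np.linalg.norm(sigs - centroid, axis=1)
threshold = dists.mean() + 3 * dists.std()

def looks_malicious(x):
    return np.linalg.norm(lif_signature(x) - centroid) > threshold

print(looks_malicious(rng.random(n_in) * 0.4))        # typically False
print(looks_malicious(0.6 + 0.4 * rng.random(n_in)))  # typically True
```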
 
  • Like
  • Fire
  • Love
Reactions: 15 users
Just saying.
The Pope said: (Mercedes-Benz newsletter post quoted in full above)
Hi Pope,
In my view there will be significant share price appreciation if Akida is inside. I believe that is what was being priced in during early 2022: that it was in, and at a future growing scale.

I'm now sceptical that it's in the next line of Mercedes cars, because I'd have expected more buying pressure if it were.

It just seems that market penetration with Akida is still a bit of a patience exercise for now. In my view of course, purely based on how the market is valuing Brainchip.
 
  • Like
Reactions: 4 users
McHale said: (quoted in full above)
Off the top of my head, I don't think anyone of note is up for re-election at the 2024 AGM..

In Gen 2 we trust, with its expanding use cases.. Time is needed, and for Rob and Nandan to weave some magic. It appears Nandan is looking at Tata Elxsi and VVDN..
 
  • Like
Reactions: 2 users

Diogenese

Top 20
Zounds! never was I so bethumped with words since I called my brother's father Dad.
He cudgels our ears.
He gives the bastinado with his tongue ...

If my memory serves me well, Syntiant goes in for Frankenstein NNs.

To understand what Syntiant are claiming as their invention, it is necessary to look at the claims, claim 1 beginning at the bottom of column 14.

They are claiming:

an IC for detecting signals off-line, including:
a host processor with
a co-processor
(presumably the co-processor?) to receive a streaming signal from a sensor,
the co-processor having a recognition network to perform recognition tasks,
the co-processor transmitting result (presumably of the recognition tasks?) to the host processor,
wherein
the co-processor includes a NN,
the NN being adapted to identify target signals in the signal stream,
the target signals being detectable (NB: not identified) using a set of weights (are we doing this in the gym?) while not being on-line,
the host processor being adapted to receive weighting signals from the co-processor (where did the co-processor get the weighting signals from?),
the host processor transmitting the target signals (where to?) indicating detection of user-specified signals (produced out of the user's hat or anatomical recess).

So really, they are attempting to claim an IC including a host processor and a co-processor having a NN with weights for use in identifying key words or other specified characteristics, the co-processor notifying the host processor, and the host processor generating an output signal in response to the co-processor notifying the host processor of a hit.
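For the code-minded, here is a deliberately literal toy of that claimed arrangement: a co-processor runs a weighted "recognition network" over a streaming signal and notifies the host on a hit, and the host emits the detection. All names and numbers are invented; this illustrates the breadth of the claim, not any real Syntiant (or BrainChip) implementation.

```python
import queue
import threading

hits = queue.Queue()  # channel from co-processor to host

def coprocessor(stream, weights, threshold=1.0):
    """Co-processor: apply a weighted 'recognition network' to a streaming
    sensor signal, offline, and notify the host when a target is detected."""
    for frame in stream:
        score = sum(w * x for w, x in zip(weights, frame))  # stand-in for a NN
        if score > threshold:
            hits.put(frame)          # "transmit result to the host processor"

def host():
    """Host processor: emit an output indicating a user-specified signal."""
    frame = hits.get()
    print("host: user-specified signal detected:", frame)

sensor_stream = [(0.1, 0.0), (0.9, 0.8), (0.2, 0.1)]
t = threading.Thread(target=host)
t.start()
coprocessor(sensor_stream, weights=(1.0, 1.0))
t.join()
```

Strip away the threading and it really is just "NN on a co-processor tells the host when it sees something" - which is rather the point Diogenese is making about how broad the claim reads.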

Now why didn't PvdM think of that?

Claim 6 relates to a method of generating a weight file, an entirely different invention. A patent is only permitted to claim a single invention.

I do really like the precision of the definition in claim 10.

This is characteristic of the degeneration of examination standards in USPTO.

Perhaps you would like to bring this patent to the attention of Milind Joshi, Brainchip's patent attorney in Perth.
 
  • Like
  • Fire
  • Thinking
Reactions: 27 users

Tothemoon24

Top 20

Arm Collaborates with Industry Leaders to Build AI Foundations of the Future​


By Richard Grisenthwaite, EVP, Chief Architect & Fellow, Arm
Artificial Intelligence (AI)Company NewsInternet of Things (IoT)
Share
GettyImages-1283240410-1400x788.jpg

The proliferation of artificial intelligence (AI) relies on continuous alignment between hardware and software innovation. It’s this combination which will improve AI capabilities at every technology touchpoint, from the smallest sensor running workloads at the edge to the biggest server handling complex workloads to train large language models (LLMs). As the ecosystem works to realize the true potential of AI, it’s clear that security, sustainability, and data ceilings will all be a challenge. Therefore, it’s vital that we continue to explore industry-wide collaboration, so that we can achieve AI at scale, including more inference at the edge. Arm is engaged in several new strategic partnerships that will fuel AI innovation and bring AI-based experiences to life.

In addition to our own technology platforms where AI development is already happening, Arm is working with leading technology companies, including AMD, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Technologies, Inc. on a range of initiatives focused on enabling advanced AI capabilities for more responsive and more secure user experiences. These partnerships will create the foundational frameworks, technologies, and specifications required for the 15 million and counting Arm developers to deliver next-generation AI experiences across every corner of computing.

Powering AI at the edge​

While generative AI and LLMs may be capturing headlines today, Arm has been at the forefront of delivering AI at the edge for years, with 70% of third-party AI applications running on Arm CPUs in the smartphone space. However, as we explore how to deliver AI in a sustainable way, and move data around efficiently, the industry needs to evolve to run AI and machine learning (ML) models at the edge, which is challenging as developers are working with increasingly limited computing resources.

Arm is working with NVIDIA to adapt NVIDIA TAO, a low-code open-source AI toolkit, for Ethos-U NPUs, which helps to create performance-optimized vision AI models for deployment on these processors. NVIDIA TAO provides an easy-to-use interface for building on top of TensorFlow and PyTorch, which are leading free, open-source AI and ML frameworks. For developers, this means easy and seamless development and deployment of their models, while also bringing more complex AI workloads to edge devices for enhanced AI-based experiences.

Advancing neural networks across all devices and markets​

A vital aspect of the continued growth of AI is advancing the deployment of neural networks at the edge. Arm and Meta are working to bring PyTorch to Arm-based mobile and embedded platforms at the edge with ExecuTorch. ExecuTorch makes it far easier for developers to deploy state-of-the-art neural networks that are needed for advanced AI and ML workloads across mobile and edge devices. Moving forward, the collaboration between Arm and Meta will ensure AI and ML models can be easily developed and deployed with PyTorch and ExecuTorch.
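For the developers here, the ExecuTorch flow the article alludes to looks roughly like the sketch below: capture the model graph with torch.export, lower it to the Edge dialect, then serialize a .pte program for the on-device runtime. The API names are as of the initial ExecuTorch preview release and may shift, so treat this as a sketch, not gospel.

```python
import torch
from executorch.exir import to_edge

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example_inputs = (torch.randn(1, 4),)

exported = torch.export.export(model, example_inputs)  # capture the graph
edge = to_edge(exported)                               # lower to Edge dialect
et_program = edge.to_executorch()                      # build the runtime program

with open("tinynet.pte", "wb") as f:                   # file deployed to device
    f.write(et_program.buffer)
```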

The work with Meta builds on significant investments that we have already made in the Tensor Operator Set Architecture (TOSA), which provides a common framework for AI and ML accelerators and supports a broad range of workloads employed by deep neural networks. TOSA will be the cornerstone of AI and ML for a diverse range of processors and billions of devices that are built on the Arm architecture.

Industry-wide scalable AI​

Supporting the wide deployment of data formats is crucial for scaling AI at a relatively low cost. Arm has been working hard to support a variety of emerging small data types focused on AI workloads.
Last year, in a joint collaboration, Arm, Intel, and NVIDIA published a new 8-bit floating point specification, ‘FP8’. Since then, the format has gained momentum and the group of companies expanded to AMD, Arm, Google, Intel, Meta, and NVIDIA, who together created the official OCP 8-bit Floating Point Specification (OFP8). In our latest A-profile architecture update, we’ve added OFP8 consistent with this standard to support its rapid adoption in neural networks across the industry. OFP8 is an interchange 8-bit data format that allows the software ecosystem to share neural network models easily, facilitating the continuous advancement of AI computing capabilities across billions of devices.
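To get a feel for what an OFP8 value actually is, here is a small decoder for the E4M3 variant (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits; no infinities, and a single NaN pattern per sign). This matches my reading of the published OFP8 spec; verify against the OCP document before relying on it.

```python
def decode_ofp8_e4m3(byte: int) -> float:
    """Decode one OFP8 E4M3 value: 1 sign, 4 exponent (bias 7), 3 mantissa bits.
    E4M3 has no infinities; S.1111.111 is the only NaN pattern per sign."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0xF and man == 0x7:
        return float("nan")
    if exp == 0:                                  # subnormal: (man/8) * 2^(1-7)
        return sign * (man / 8.0) * 2.0 ** -6
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)

assert decode_ofp8_e4m3(0b0_1111_110) == 448.0    # largest finite E4M3 value
assert decode_ofp8_e4m3(0b0_0111_000) == 1.0      # exponent 7 - bias 7 = 0
```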

Open standards are critical to driving innovation, consistency, and interoperability in the AI ecosystem. In continuing our work to support industry collaboration efforts on these standards we recently joined the Microscaling Formats (MX) Alliance, which includes AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Technologies, Inc. The MX Alliance recently collaborated on the specification for a new technology known as microscaling, which builds on a foundation of years of design space exploration and research, and is a fine-grained scaling method for narrow-bit (8-bit and sub 8-bit) training and inference of AI applications. This specification standardizes these narrow-bit data formats to remove fragmentation across the industry and enable scalable AI.
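The core microscaling trick is one shared power-of-two scale per small block of narrow elements. Below is a toy block quantizer using INT8 elements so the idea stays visible; the actual MX spec fixes a block size of 32 and pairs an E8M0 shared scale with FP8/FP6/FP4/INT8 element formats, so treat this as a simplified illustration rather than a spec-conformant codec.

```python
import numpy as np

def mx_quantize_block(x: np.ndarray):
    """One shared power-of-two scale for a block of 32 values, INT8 elements."""
    assert x.size == 32
    qmax = 127
    amax = float(np.max(np.abs(x)))
    # Smallest power of two such that the largest element fits the INT8 range.
    scale = 2.0 ** np.ceil(np.log2(amax / qmax)) if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def mx_dequantize_block(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

block = np.random.default_rng(1).normal(size=32).astype(np.float32)
q, s = mx_quantize_block(block)
print("max abs error:", np.abs(block - mx_dequantize_block(q, s)).max())
```

The payoff is storage: 32 elements share one scale byte instead of each carrying full float precision, which is what makes narrow-bit training and inference cheap enough to scale.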

In the spirit of collaboration, the MX Alliance released this MX specification in an open, license-free format through the Open Compute Project, which consists of hyperscale data center operators and other industry players in computing infrastructure, to encourage broad industry adoption. This is in recognition of the need to provide equitable access to scalable AI solutions across the ecosystem.

Unprecedented AI innovation​

Arm is already foundational to AI deployments in the world today and these collaborations are just some of the ways we are providing the technologies needed for developers to create advanced, complex AI workloads. From sensors, smartphones, and software defined vehicles, to servers and supercomputers, the future of AI will be built on Arm.
 
  • Like
  • Fire
  • Love
Reactions: 18 users

Perhaps

Regular
Nasdaq$$$$$ Listing YES !! Vlad
Hold your horses, you don't know what you're talking about. The lowest form of Nasdaq listing requires an SP of US$1. To meet this, and to have some reserve in case of a falling SP, most new entrants start on the Nasdaq with an SP between $3 and $5.
For Brainchip shares this would mean a consolidation (reverse split) of 30:1 to 50:1 as of today. Fresh food for shorting campaigns.
Any idea what happens on the Nasdaq to companies with no revenue and no financial news? They get eaten alive until you cry for mercy and crawl back to the ASX.
That whole bundle is a guarantee for losing lots of money, so better not to beg for it. Not before there is a steady revenue flow and a higher SP.
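For what it's worth, the 30:1 to 50:1 figure is just the ratio of a $3 to $5 listing price to the current SP. A quick back-of-envelope (the ~US$0.10 current price is my assumption for illustration, not a quote):

```python
# Consolidation ratio = target listing price / current share price.
current_sp = 0.10                       # assumed BRN price in USD terms
for target in (3.0, 5.0):
    print(f"target US${target:.0f}: ~{target / current_sp:.0f}:1 consolidation")
# target US$3: ~30:1 consolidation
# target US$5: ~50:1 consolidation
```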
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Tothemoon24 said: (Arm article post quoted in full above)
Is this the pet that gets you excited about this news release ⬇️⬇️

1698872568261.png
 
  • Like
  • Fire
Reactions: 13 users

IloveLamp

Top 20
  • Like
Reactions: 3 users

Xray1

Regular
McHale said: (quoted in full above)
FYI, the vote against Peter's re-election as a director was 109,789,182 votes, ~27.94% of votes cast:

"Tuesday, 24 May 2022:
In accordance with Listing Rule 3.13.2 and Section 251AA(2) of the Corporations Act, BrainChip Holdings Ltd (ASX:BRN) provides the following information with respect to the results of its Annual General Meeting held today. Resolutions voted on at the meeting If decided by poll Proxies received in advance of the meeting Resolution Result/ Resolution Type Voting method If s250U applies Voted for Voted against Abstained For Against Abstain Discretion No Short description Number % Number % Number Number Number Number Number 1 Adoption of Remuneration Report Carried Ordinary Poll Yes 262,519,106 88.12 35,383,729 11.88 94,770,967 195,936,865 35,042,053 94,166,812 12,950,370 2 Re-election of Peter Van Der Made as Director Carried Ordinary Poll N/A 283,100,194 72.06 109,789,182 27.94 172,514,611 223,140,439 101,974,925 172,511,110 12,554,811 3 Elect
 
  • Like
  • Fire
Reactions: 3 users
An AI arms race in mobile phones is on its way... have a read of this and tell me we are not perfectly placed for the perfect storm that is approaching: generative AI in your hand with low power consumption. Just as the phone became our stereo, TV, radio, camera, laptop etc., it will soon be our personal assistant, teacher, fitness coach, personal doctor, therapist, best friend, lover? Lol https://www.cnet.com/tech/mobile/ai-is-coming-for-your-phone-in-a-big-way/
LOL. Too much time on vibrate drains the battery you know.

SC
 
  • Haha
  • Like
Reactions: 4 users

Cgc516

Regular
Closing in on the end of 2023, it seems every partner has their own baby 'with Akida inside'; however, there is still no confirmation of any kind. Only the 1000 Eyes with endless guessing!
Keep it up.
 
  • Like
Reactions: 5 users

Vladsblood

Regular
  • Like
  • Fire
  • Thinking
Reactions: 12 users

IloveLamp

Top 20
Screenshot_20231102_082039_LinkedIn.jpg
Screenshot_20231102_082323_Chrome.jpg
Screenshot_20231102_082429_LinkedIn.jpg
 
  • Like
  • Fire
  • Wow
Reactions: 17 users

toasty

Regular
From memory, the strike was the consequence of a minority vote which achieved the minimum number necessary to pass, and it certainly doesn't represent myself or the massive majority... your thoughts basically don't represent the general consensus. Let us not forget this.
Thanks for your reply but I was, as stated, voicing my opinion, not stating any fact. I think it salient to remember that the 1st/2nd strike rule was put in place for good reason.
 
  • Like
  • Fire
  • Haha
Reactions: 6 users