BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!

Arm's Next Chips Should Bring Faster On-Device AI to Future Phones​

The chip is built with efficiency in mind to handle the power drain spikes of using generative AI.

David Lumb
May 29, 2024 8:00 a.m. PT
2 min read

A stylized photo of a man using a smartphone, reflected left-to-right and top-and-bottom to show the image four times.


Chip designer Arm on Wednesday introduced its new set of processors that will boost efficiency and speed of phones to come -- and they're built especially to empower on-device generative AI.

Chipmakers like Qualcomm incorporate Arm processors into their full chipsets -- for instance, there are eight Arm Cortex CPU cores in the latest Snapdragon 8 Gen 3, which powers premium phones like the Samsung Galaxy S24 series. Arm's next generation of CPUs and GPUs promises speed and efficiency advancements and will better empower on-device AI, like Samsung's Galaxy AI and tools that add or remove elements of images.

The trio of new chips runs on the company's latest Armv9.2 architecture. The top-end Cortex-X925, built on a 3-nanometer process, scored 36% higher on Geekbench than an unnamed premium Android phone from 2023. The mid-range Cortex-A725 is 35% more efficient than its predecessor, the Cortex-A720, and the lower-end Cortex-A525 is 15% more efficient than the Cortex-A520.

Arm says these chips are made for AI: running the TinyLlama language model, the Cortex-X925 delivers a 41% faster time to first token -- a responsiveness metric measuring how long it takes to produce the first output after a prompt is entered -- than the Cortex-X4. While Arm acknowledged that AI processing drains more power than less computationally intensive uses (like browsing the internet), the company said its processors' efficiency gains balance out the higher power drain from AI, which would be welcome for phone- and device-makers keen to ride the AI wave with ever more implementations.
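(Side note on how that metric is actually taken: a minimal Python sketch, assuming a hypothetical generate_stream callable that stands in for whatever streaming on-device LLM runtime is being benchmarked; this is not Arm's own test harness.)

```python
import time

def time_to_first_token(generate_stream, prompt: str) -> float:
    """Seconds from prompt submission until the first streamed token appears.
    generate_stream is a hypothetical callable yielding tokens one at a time,
    standing in for whatever on-device LLM runtime is being benchmarked."""
    start = time.perf_counter()
    for _ in generate_stream(prompt):
        # The first yielded token marks the end of prompt processing (prefill);
        # everything after it is steady-state decoding.
        return time.perf_counter() - start
    raise RuntimeError("model produced no output")

# Stand-in generator that fakes a 120 ms prefill delay before streaming.
def fake_stream(prompt):
    time.sleep(0.12)
    yield "Hello"
    yield ","

print(f"TTFT: {time_to_first_token(fake_stream, 'What is Arm?'):.3f} s")
```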

The company is also announcing its new Kleidi software libraries, focused on AI and computer vision, for chipset manufacturers to pick and choose from when implementing its processors. But potentially more exciting for other devices is Arm's commitment to getting more Windows apps to run natively on its processors -- this time adding Spotify, Chrome, Audacity and a couple of others.

Arm also introduced its newest high-end GPU to pair with its CPU chips, the 14-core Immortalis G925, which is the third generation of chips to include ray tracing to render realistic light and reflections, and now includes Unreal's Lumen Raytracing on the chip. Arm says this GPU has 52% better ray tracing performance than its predecessor, the 12-core Immortalis G720.

Arm noted that these graphical improvements lead to better video efficiency on Android, such as 10% power savings when watching YouTube videos.

 
  • Like
  • Fire
  • Love
Reactions: 13 users
Further to my previous QV post... it appears they also set up a new company in Japan in the middle of last year.

This site explains the product and architecture in more detail, and where we sit in the system integration flow.



View attachments 64015 to 64020

This came up while looking further into QV stuff. It's from last year, but they're hiring for work in image processing and still using Akida.

Interestingly, I picked it up off a repost by Aswin Jose, as below. Nice to see GF around this as well. Liking the web of connections.


Aswin Jose
GlobalFoundries | Anna University | Singapore

Aswin Jose reposted this:

Quantum Ventura Inc.
1y
We're hiring for a part-time or full-time position in deep learning AND/OR metamaterial optical (meta-optic) image processing and classification technology! Open for remote or those near San Jose, CA

 
  • Like
  • Fire
  • Love
Reactions: 27 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
View attachment 62407

Here's a patent from 2015*.

Oleg Sinyavski (currently Principal Applied Scientist at Wayve) was one of the inventors of this patent. The other inventor was... Olivier Coenen, currently Senior Research Scientist at... BRAINCHIP!!!!

Could someone please call Tony ASAP and ask him to call Olivier to see if he can get on the blower to his old mate Oleg! Best to strike while the iron is 🔥 I say!



View attachments 62399 and 62398



*Systems and apparatus for implementing task-specific learning using spiking neurons
US9146546B2 · Issued Sep 29, 2015

And it's not like Wayve couldn't benefit massively from our help, IMO. Just look at the list of main challenges they're facing. Without blowing our trumpet too much, I believe we are literally the PANACEA for these woes, like Hydrozole is for ringworm, or like a toupee is for a pate (pate = bald head).

View attachment 62408

Can someone please ask Tony if Olivier Coenen has had a chance to talk to Oleg Sinyavski from Wayve yet?



Riding the Wayve of AV 2.0, Driven by Generative AI​

Startup Wayve develops autonomous driving technologies capable of decision-making in dynamic, real-world environments.
May 29, 2024 by Norm Marks


Generative AI is propelling AV 2.0, a new era in autonomous vehicle technology characterized by large, unified, end-to-end AI models capable of managing various aspects of the vehicle stack, including perception, planning and control.
London-based startup Wayve is pioneering this new era, developing autonomous driving technologies that can be built on NVIDIA DRIVE Orin and its successor NVIDIA DRIVE Thor, which uses the NVIDIA Blackwell GPU architecture designed for transformer, large language model (LLM) and generative AI workloads.
In contrast to AV 1.0’s focus on refining a vehicle’s perception capabilities using multiple deep neural networks, AV 2.0 calls for comprehensive in-vehicle intelligence to drive decision-making in dynamic, real-world environments.



Wayve, a member of the NVIDIA Inception program for cutting-edge startups, specializes in developing AI foundation models for autonomous driving, equipping vehicles with a “robot brain” that can learn from and interact with their surroundings.
“NVIDIA has been the oxygen of everything that allows us to train AI,” said Alex Kendall, cofounder and CEO of Wayve. “We train on NVIDIA GPUs, and the software ecosystem NVIDIA provides allows us to iterate quickly — this is what enables us to build billion-parameter models trained on petabytes of data.”
Generative AI also plays a key role in Wayve’s development process, enabling synthetic data generation so AV makers can use a model’s previous experiences to create and simulate novel driving scenarios.
The company is building embodied AI, a set of technologies that integrate advanced AI into vehicles and robots to transform how they respond to and learn from human behavior, enhancing safety.
Wayve recently announced its Series C investment round — with participation from NVIDIA — that will support the development and launch of the first embodied AI products for production vehicles. As Wayve’s core AI model advances, these products will enable manufacturers to efficiently upgrade cars to higher levels of driving automation, from L2+ assisted driving to L4 automated driving.
As part of its embodied AI development, Wayve launched GAIA-1, a generative AI model for autonomy that creates realistic driving videos using video, text and action inputs. It also launched LINGO-2, a driving model that links vision, language and action inputs to explain and determine driving behavior.
“One of the neat things about generative AI is that it allows you to combine different modes of data seamlessly,” Kendall said. “You can bring in the knowledge of all the texts, the general purpose reasoning and capabilities that we get from LLMs and apply that reasoning to driving — this is one of the more promising approaches that we know of to be able to get to true generalized autonomy and eventually L5 capabilities on the road.”

 
Last edited:
  • Like
  • Love
Reactions: 3 users

FJ-215

Regular
Cigarless once again.


US2024166165A1 - FACIAL RECOGNITION ENTRY SYSTEM WITH SECONDARY AUTHENTICATION (filed 2022-11-21, published 2024-05-23)

View attachment 64014



[0023] FIG. 2 shows authentication controller 16 in greater detail. A main processor 30 includes logic 31 which directs operation according to the processes described herein. A program block 32 performs facial recognition and/or gesture recognition by comparing captured images to prestored templates (biometric and nonbiometric).

It looks like Ford are using software to compare stored images with the camera output (processor 30; program block 32). There is no mention of NNs or AI.

Still, that does not conclusively rule out the use of Akida simulation software, but it does not closely describe an NN application. And doing an old-fashioned image comparison would use a lot of power. Wouldn't it be funny if the car recognized the driver but then had a flat battery from the effort?
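For illustration, the "old fashioned" approach paragraph [0023] describes boils down to brute-force template comparison: every captured frame is scored against every prestored template. A minimal sketch below, with made-up array sizes and a hypothetical threshold (certainly not Ford's actual code), shows the per-frame work involved.

```python
import numpy as np

def match_face(frame: np.ndarray, templates: dict, threshold: float = 12.0):
    """Naive template comparison: mean absolute pixel difference against
    every enrolled template; accept the best match if it beats a threshold.
    Every template is scanned in full for every captured frame, which is the
    compute (and power) cost being discussed above."""
    best_name, best_score = None, float("inf")
    for name, template in templates.items():
        score = np.abs(frame.astype(np.float32) - template.astype(np.float32)).mean()
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score < threshold else None

# Toy example: two enrolled 64x64 grayscale "faces" and one noisy capture.
rng = np.random.default_rng(0)
alice = rng.integers(0, 255, (64, 64), dtype=np.uint8)
bob = rng.integers(0, 255, (64, 64), dtype=np.uint8)
capture = np.clip(alice + rng.normal(0, 5, (64, 64)), 0, 255).astype(np.uint8)
print(match_face(capture, {"alice": alice, "bob": bob}))  # -> "alice"
```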

TENNs?

After all, there is the hypothesis that Valeo and Mercedes could be using Akida simulation software ...
Hi Dio,

Quick question,
What is the possibility of TENNs becoming a separate product in its own right?

As we haven't produced a reference chip for Akida 2.0, we are running simulations for customers on legacy hardware. Could the algorithm be tuned to run on say, FPGA, CPU/GPU & alternative neuromorphic h/ware? (Loihi?)

OK, that is two questions.
 
Last edited:
  • Like
Reactions: 6 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 18 users

IloveLamp

Top 20

 
  • Like
  • Fire
Reactions: 4 users

buena suerte :-)

BOB Bank of Brainchip
  • Haha
Reactions: 8 users

buena suerte :-)

BOB Bank of Brainchip
I've been offline for a while. Just got out of hospital after major kidney surgery. The pain has been quite staggering, to be honest.
Anyway, for a bit of therapy, I thought I'd take a quick look here, and it really has helped. The pain of the operation has paled into insignificance and is now a distant memory.
Take care buddy :)
 
  • Like
  • Fire
Reactions: 3 users

Diogenese

Top 20
Hi Dio,

Quick question,
What is the possibility of TENNs becoming a separate product in its own right?

As we haven't produced a reference chip for Akida 2.0, we are running simulations for customers on legacy hardware. Could the algorithm be tuned to run on say, FPGA, CPU/GPU & alternative neuromorphic h/ware? (Loihi?)

OK, that is two questions.
Hi FJ,

I have no inside information, just rumors picked up mainly here and from the BRN webpage ...

My understanding is that it can be run separately from Akida, but it is better with Akida. Maybe chiplets or software simulation?

Yes, it would be relatively easy to build on FPGA, and since Akida is processor agnostic, I assume any old CPU/GPU would do for the software simulation.
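
Purely to illustrate the "processor agnostic" point: a minimal PyTorch sketch of a stand-in temporal model (hypothetical, not BrainChip's actual TENNs architecture) whose definition runs unchanged on CPU or GPU just by switching the device string.

```python
import torch
import torch.nn as nn

class TinyTemporalNet(nn.Module):
    """Hypothetical stand-in for a TENN-style temporal model: a small stack
    of 1-D convolutions over a time series. Illustration only."""

    def __init__(self, channels: int = 16, classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Linear(channels, classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time_steps)
        return self.head(self.features(x).squeeze(-1))

# The same model object runs on whatever processor is available;
# only the device string changes.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyTemporalNet().to(device)
dummy = torch.randn(2, 1, 256, device=device)   # two dummy 256-step streams
print(model(dummy).shape)                        # torch.Size([2, 10])
```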
 
  • Like
  • Love
  • Fire
Reactions: 23 users

IloveLamp

Top 20
“The rapid evolution of AI has sparked a revolutionary transformation across various industries we cater to. We’re transitioning from conventional methods to dynamic, AI-driven solutions that enhance efficiency and transform customer experiences. The pace of AI innovation is accelerating, reshaping industries, streamlining operations, and empowering quicker, data-driven decision-making. While AI has been in existence for decades, we’re currently witnessing an unprecedented surge in its capabilities and practical real-world applications.

Bosch Global Delivery Services – Srinivasulu Nasam, Senior Technical Director – GenAI for Systems and Software Engineering at Bosch Global Software Technologies

“We are witnessing rapid advancements in the Technology industry, underscored by the emergence of Generative AI—a revolutionary tool fueled by massive AI models, igniting creativity and problem-solving.

 
  • Like
  • Love
Reactions: 11 users

keyeat

Regular
  • Like
  • Fire
  • Love
Reactions: 59 users

FJ-215

Regular
Hi FJ,

I have no inside information, just rumors picked up mainly here and from the BRN webpage ...

My understanding is that it can be run separately from Akida, but it is better with Akida. Maybe chiplets or software simulation?

Yes, it would be relatively easy to build on FPGA, and since Akida is processor agnostic, I assume any old CPU/GPU would do for the software simulation.
Cheers,
I had wondered why TENNs hadn't just been demoed on AKD 1000; instead, they seem to have been shown as individual ingredients making up Akida 2.0.

Anyway, Intel seem to be aiming Loihi 2 at data centres, and our man Tony Lewis did mention that TENNs could be scaled up to work at data centre scale.

Did I mention my parents were big band/jazz fans...... Glen, Benny, Louis this, Cab that......

Heard this song a million times on scratchy 78s before I could put a face to the name when the Blues Brothers movie came out.


 
  • Like
  • Love
  • Fire
Reactions: 11 users

FJ-215

Regular
Cheers,
I had wondered why TENNs hadn't just been demoed on AKD 1000; instead, they seem to have been shown as individual ingredients making up Akida 2.0.

Anyway, Intel seem to be aiming Loihi 2 at data centres, and our man Tony Lewis did mention that TENNs could be scaled up to work at data centre scale.

Did I mention my parents were big band/jazz fans...... Glen, Benny, Louis this, Cab that......

Heard this song a million times on scratchy 78s before I could put a face to the name when the Blues Brothers movie came out.



Actually.........that seems to be a very PC version...........

God I hate this cleansed reality........

This is what I remember.......probably makes my parents......well.....bad parents. Wouldn't swap them for anyone (except billionaires, obviously!).

1931 HITS ARCHIVE: Minnie The Moocher - Cab Calloway (original version), uploaded by The78Prof
 
  • Like
  • Fire
Reactions: 6 users
Lol

 
  • Haha
Reactions: 8 users

CHIPS

Regular
  • Like
  • Haha
Reactions: 5 users

CHIPS

Regular
  • Like
  • Love
  • Fire
Reactions: 31 users

JoMo68

Regular
View attachment 64002


This is the compute platform for the future of AI. 👇

🆕 Introducing Arm Compute Subsystems (CSS) for Client.

Designed for AI smartphones and AI PCs, CSS for Client delivers production ready physical implementations of our new CPUs and GPUs to deliver next-gen AI experiences quickly and easily.

It includes:
🔹 Our latest Armv9.2 CPU cluster, including the Arm Cortex-X925 which delivers the highest year-on-year performance uplift in the history of Cortex-X

🔹 The Arm Immortalis-G925 GPU, our most performant and efficient GPU to date with a 37% uplift in graphics performance

We are also launching new KleidiAI software to provide the simplest way for developers to get the best performance out of Arm CPUs.

So whether you want more AI, more performance or more advanced silicon, you can rely on our new solution to provide the foundation for AI-powered experiences on consumer devices. https://okt.to/PkzCt3
Has anyone noticed that if you rearrange the letters (KleidiAI), you get ‘Akida’ …. almost 😄 It’s a sign!!
 
  • Haha
  • Like
  • Wow
Reactions: 23 users