BRN Discussion Ongoing

Diogenese

Top 20
Wonder if Akida will introduce Intel and NVIDIA properly to the wonderful world of 1 - 4 bit instead ;)



Nvidia, Intel develop memory-optimizing deep learning training standard
Paper: FP8 can deliver training accuracy similar to 16-bit standards

Ben Wodecki
September 20, 2022

2 Min Read

Nvidia, Intel and Arm have joined forces to create a new standard designed to optimize memory usage in deep learning applications.

The 8-bit floating point (FP8) standard was developed across several neural network architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Transformer-based models.

The standard is also applicable to language models up to 175 billion parameters, which would cover the likes of GPT-3, OPT-175B and Bloom.

“By adopting an interchangeable format that maintains accuracy, AI models will operate consistently and performantly across all hardware platforms, and help advance the state of the art of AI,” Nvidia’s Shar Narasimhan wrote in a blog post.

Optimizing AI memory usage
When building an AI system, developers need to consider the weights of the model, which govern how effectively the system learns from its training data.

There are several standards in use today, including FP32 and FP16, but reducing the memory required to train a system with them typically comes at the expense of accuracy.

The new approach uses fewer bits than prior formats so that memory is used more efficiently; the less memory a system uses, the less computational power is needed to run an application.

The trio outlined the new standard in a paper, which covers training and inference evaluation using the standard across a variety of tasks and models.

According to the paper, FP8 achieved “comparable accuracy” to the FP16 format across use cases and applications, including computer vision.

Results on transformer and GAN networks, such as OpenAI’s DALL-E, showed FP8 reaching training accuracy similar to 16-bit precision while delivering “significant speedups.”

In testing with the MLPerf Inference benchmark, Nvidia’s Hopper architecture running FP8 delivered 4.5x faster inference on the BERT model for natural language processing.

“Using FP8 not only accelerates and reduces resources required to train but also simplifies 8-bit inference deployment by using the same datatypes for training and inference,” according to the paper.
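
For anyone curious what an 8-bit float actually looks like next to Akida's 1-4 bit weights, here is a rough Python sketch that rounds an ordinary float32 value onto an FP8-style grid with 4 exponent bits and 3 mantissa bits (along the lines of the E4M3 variant the paper describes). The helper name and constants are my own, and edge cases such as subnormals and NaNs are ignored, so treat it as an illustration rather than the standard itself.

```python
# Minimal sketch: round a float32 value onto a coarse FP8-like grid
# (4 exponent bits, 3 mantissa bits). Illustrative only -- subnormals,
# NaNs and exact exponent-range handling are ignored.
import math

FP8_MAX = 448.0          # largest finite value in the E4M3 format
MANTISSA_BITS = 3        # explicit mantissa bits

def quantize_fp8(x: float) -> float:
    """Return the nearest value on the FP8-like grid (saturating at FP8_MAX)."""
    if x == 0.0:
        return 0.0
    x = max(-FP8_MAX, min(FP8_MAX, x))      # clamp to the representable range
    m, e = math.frexp(x)                    # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** (MANTISSA_BITS + 1)        # keep 1 implicit + 3 explicit bits
    m = round(m * scale) / scale
    return math.ldexp(m, e)

if __name__ == "__main__":
    for w in (0.1234, -1.7, 3.14159, 500.0):
        print(f"{w:10.5f} -> {quantize_fp8(w):10.5f}")
```

Running it shows how coarse the grid gets (3.14159 lands on 3.25, and 500 saturates at 448), which hints at why careful scaling matters in practice.
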
Hi Fmf,

That just triggered a couple of obscure dots ... Akida works on probability, what does the image most closely resemble?

In fact, I reckon we are on the path of the Infinite Improbability Drive. How many heads does PvdM have?
 
  • Like
  • Haha
  • Fire
Reactions: 30 users

Diogenese

Top 20
Hi Fmf,

That just triggered a couple of obscure dots ... Akida works on probability, what does the image most closely resemble?

In fact, I reckon we are on the path of the Infinite Improbability Drive. How many heads does PvdM have?
Bugger!

I had something really serious to say, but Zaphod hijacked my synapses.

It was about Mercedes and Nvidia and maybe Sony and Prophesee ...
 
  • Haha
  • Like
Reactions: 20 users
Hi Fmf,

That just triggered a couple of obscure dots ... Akida works on probability, what does the image most closely resemble?

In fact, I reckon we are on the path of the Infinite Improbability Drive. How many heads does PvdM have?
SNN ;)
 
  • Like
  • Haha
Reactions: 8 users

Diogenese

Top 20
Has anyone done a study of the half-life of good news before short selling was invented versus after?
 
  • Like
  • Haha
  • Love
Reactions: 10 users

Diogenese

Top 20
Wonder if Akida will introduce Intel and NVIDIA properly to the wonderful world of 1 - 4 bit instead ;)



Nvidia, Intel develop memory-optimizing deep learning training standard
Paper: FP8 can deliver training accuracy similar to 16-bit standards
I think the archaeology department in Cairo may be able to make use of this.
 
  • Haha
  • Like
  • Love
Reactions: 8 users

Dozzaman1977

Regular
Just stumbled on to this article, sorry if it has already been posted .
Fingers crossed Akida will be included.



1671085120655.png
 
  • Like
  • Love
  • Fire
Reactions: 53 users

Diogenese

Top 20
Wonder if Akida will introduce Intel and NVIDIA properly to the wonderful world of 1 - 4 bit instead ;)



Nvidia, Intel develop memory-optimizing deep learning training standard
Paper: FP8 can deliver training accuracy similar to 16-bit standards
Reminds me of when they merged the International Patent Classification (IPC) system with the US Patent Classification system to form the dubiously named Cooperative Patent Classification (CPC) system.

The IPC was a reasonably structured hierarchy, but the US system, developed after and influenced by the French Revolution, was more of a laissez-faire FIFO system.

Some of the US classes had approximate IPC equivalents, but those that didn't were tacked on the end of the nearest group.
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 7 users

HopalongPetrovski

I'm Spartacus!
Bugger!

I had something really serious to say, but Zaphod hijacked my synapses.

It was about Mercedes and Nvidia and maybe Sony and Prophesee ...
panic-mainwaring.gif
 
  • Haha
  • Like
  • Love
Reactions: 13 users
Reminds me of when they merged the International Patent Classification (IPC) system with the US Patent Classification system to form the dubiously named Cooperative Patent Classification (CPC) system.

The IPC was a reasonably structured hierarchy, but the US system, developed after and influenced by the French Revolution, was more of a FIFO system.

Some of the US classes had approximate IPC equivalents, but those that didn't were tacked on the end of the nearest group.
Interesting to look at some hairy coconut vacancies.



Though there are references to DNNs, there are also some keyword crossovers like neural acceleration, new neural ops, sparsity etc.

These few are from the second half of this year.

Machine Learning Engineer, Training and Acceleration


Seattle, Washington, United States
Machine Learning and AI

Description​

We’re looking for strong software engineers/leads to build a next generation Deep Learning technology stack to accelerate on-device machine learning capabilities and emerging innovations. You’ll be part of a close-knit team of software developers and deep learning experts working in the area of hardware-aware neural network optimization, algorithms, and neural architecture search. We’re looking for candidates with strong software engineering skills who are passionate about machine learning, computational science and hardware. Responsibilities:
  • Design and develop APIs for common and emerging deep learning primitives: layers, tensor operations, optimizers and more specific hardware features.
  • Implement efficient tensor operations and DNN training algorithms.
  • Train and evaluate DNNs for the purpose of benchmarking neural network optimization algorithms. Our framework reduces latency and power consumption of neural networks found in many Apple products.
  • Perform research in emerging areas of efficient neural network development including quantization, pruning, compression and neural architecture search, as well as novel differentiable compute primitives.
  • We encourage publishing novel research at top ML conferences.

Camera Machine Learning Engineer - ISP Algorithms​


Santa Clara Valley (Cupertino), California, United States
Machine Learning and AI

Key Qualifications​

  • Self driven and passionate for image quality excellence!
  • Strong machine learning and deep learning fundamentals, ideally in fields related to image processing and restoration, such as de-noising, super-resolution, semantic segmentation, GANs, saliency
  • Strong proficiency in Python and at least one major deep learning framework (Pytorch or Tensorflow preferred)
  • A keen interest towards real-time performance optimization, previous experience taking approaches from research papers and successfully deploying them in a resource-constrained, mobile computing environment
  • Understanding of the Physics and Math behind the digital imaging formation process, from image capture and imaging sensor characteristics, optics fundamentals, image signal processing, and their influence on final image and video quality would be a plus
  • Solid programming skills in C / C++, Matlab is a bonus
  • Previous experience with network compression, quantization, performance and memory profiling and optimization is a bonus

Description​

In the Camera ML Algorithm Engineer role, you will develop and ship features in one or more of the following fields: pixel processing and image restoration (de-noising, de-blurring, super-resolution, style transfer, SDR to HDR mapping), scene understanding (object detection and tracking, semantic segmentation, scene analysis for auto-focus, exposure and white balance, saliency detection), real-time optical flow, image registration and fusion, optimization for low latency and low power consumption.

AI/ML - Deep Learning Software Engineer, CoreML, Machine Learning Platform & Technology​


Seattle, Washington, United States
Machine Learning and AI

Key Qualifications​

  • Strong C/C++ programming skills
  • Experience with Python programming
  • Excellent in API design, software architecture and data structures
  • Excellent problem solving and debugging skills
  • Experience, or deep interest, in deep learning libraries such as TensorFlow, PyTorch, JAX etc.

Description​

In this role, you will work on the CoreML framework and the underlying compiler stack that powers it. You will work closely with the compiler team, including the hardware-specific compiler teams for CPU, GPU and ANE. You will get an opportunity to work on different levels of the ML stack at Apple, by contributing to the core C++ libraries and to the Python bridge connecting it to external frameworks such as TensorFlow and PyTorch. In addition, you will:
  • Work closely with ANE/GPU/CPU hardware backend teams and the ML compiler team at Apple to co-design features for the neural network inference stack
  • Design and implement new neural network ops: CPU C++ implementations and Python bridge to TensorFlow/PyTorch via CoreMLTools
  • Design and implement new deep learning quantization features across the stack: affine quantization, pruning, sparsity etc.
  • Work closely with Apple researchers and app developers to optimize their deep learning model deployments on device, by implementing new NN ops, optimizations, graph passes etc. in the ML stack
  • Design and develop APIs for common and emerging deep learning primitives: ops, tensor operations, optimizers and more specific hardware features

C++ SWE (Machine Learning Acceleration: Infrastructure and Frameworks)​


Seattle, Washington, United States
Machine Learning and AI

Key Qualifications​

  • 2+ years of experience developing ML frameworks and software solutions in industry or academia.
  • Experience using modern machine learning frameworks like TensorFlow or PyTorch.
  • Experience with modern IR for ML workloads (MLIR).
  • Strong fundamentals in problem solving and algorithm design
  • Passion for software architecture, API and development tool design
  • Ability to write flawless, readable and maintainable code in C++
  • Strong communication skills, and ability to present deep technical ideas to audience with different skillsets.
  • Collaborative team player who can work well across multiple teams and organizations.
  • Understanding of compiler development
  • Understanding of hardware acceleration for ML workloads

Description​

Responsibilities include:
  • Developing machine learning infrastructure that will be used by product teams for developing, evaluating and deploying machine learning models
  • Developing and maintaining a large code base by writing readable, modular and well-tested code
  • Providing technical support to product and algorithm teams on the best practices for developing efficient machine learning models, and analyzing failure cases
  • Interacting with high-level ML frameworks such as CoreML
  • Interacting with the compiler for Apple's proprietary Neural Engine Accelerator to expose/enable new features of the Neural Engine Accelerator
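
Since these listings keep circling back to quantization, pruning and sparsity, here is a hedged NumPy sketch of what two of those keywords actually do to a weight matrix: magnitude pruning followed by 8-bit affine quantization. The function names and numbers are my own illustration, not Apple's (or anyone else's) API.

```python
# Toy illustration of two keywords from the job ads: unstructured magnitude
# pruning and 8-bit affine quantization of a weight matrix. Not a real API.
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured sparsity)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def affine_quantize(w: np.ndarray, bits: int = 8):
    """Map float weights to unsigned integers via a scale and zero point."""
    qmax = 2 ** bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / qmax if w_max > w_min else 1.0
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4)).astype(np.float32)
    w_sparse = magnitude_prune(w, sparsity=0.5)   # half the weights become zero
    q, scale, zp = affine_quantize(w_sparse)      # 4x smaller than float32
    err = np.abs(dequantize(q, scale, zp) - w_sparse).max()
    print(f"max reconstruction error after int8 round trip: {err:.4f}")
```
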
 
  • Like
  • Fire
  • Haha
Reactions: 15 users

Diogenese

Top 20
Interesting to look at some hairy coconut vacancies.



... and in the wheel department, we are looking for anyone who has any ideas about how to reduce the wear on the corners of our basalt square tyres.

We have also had some reports of mal de mer on our rectangular tyres, but only when they get out of synch.
 
Last edited:
  • Haha
  • Like
Reactions: 12 users

Aretemis

Regular
Hi Fmf,

That just triggered a couple of obscure dots ... Akida works on probability, what does the image most closely resemble?

In fact, I reckon we are on the path of the Infinite Improbability Drive. How many heads does PvdM have?
Not if the Vogons get their hands on it first.
 
  • Like
  • Haha
Reactions: 4 users
Recall someone (sorry, haven't searched who) had mentioned Cambridge Consultants previously.

Excerpt from an Oct article by their VP referencing the need for energy considerations. Right up our alley.



Efficient AI means businesses can achieve more with less​

Cambridge Consultants’ Ram Naidu outlines how to pick the right technique for your AI needs.
October 20, 2022


Every business now has a concern over the rising cost of energy, linked to the carbon cost and the need for sustainable solutions. The more we can get away from the greedy energy demands of large compute costs and adopt efficient AI methods the better. This leads to an exploration of energy efficiency, whether through neuromorphic methods or using low-bit encoding. Again, there will not be a universal off-the-shelf solution to cutting energy costs. But it is a parameter we must consider and find where the right compromise can be made.

So, what does a successful AI solution look like? Its approach must depend on data quantity, labeled data availability, and energy cost implementation amongst a host of other considerations. Looking at one component of this in isolation isn’t the path to success. A successful AI solution requires a holistic approach to cover the needs and costs with a mature view of all the competing drivers. This was inevitable - AI had so much success so soon with low-hanging fruit. As the field matures, so must our ability to approach AI with a clear eye on the value it can bring. If you’re all set for the summit, great. I look forward to seeing you, and perhaps continuing the conversation, at the IoT World & The AI Summit in Austin, Texas, on Nov. 2-3, 2022.
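
The "low-bit encoding" Naidu mentions is easy to put rough numbers on. A back-of-the-envelope Python sketch, assuming a hypothetical 10-million-parameter edge model (my number, purely for illustration), shows how far weight precision alone moves the memory footprint, one of the levers behind the energy savings he is alluding to:

```python
# Back-of-the-envelope: how weight precision changes a model's memory footprint.
# The 10M parameter count is an arbitrary example, not any particular product.
def model_size_mb(num_params: int, bits_per_weight: int) -> float:
    return num_params * bits_per_weight / 8 / 1e6  # bits -> bytes -> megabytes

if __name__ == "__main__":
    params = 10_000_000
    for bits in (32, 16, 8, 4, 1):
        print(f"{bits:2d}-bit weights: {model_size_mb(params, bits):6.2f} MB")
```

That works out to 40 MB at FP32 down to 1.25 MB at 1-bit weights, before any pruning or sparsity is even considered.
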
 
  • Like
  • Love
  • Fire
Reactions: 20 users

TopCat

Regular
Recall someone (sorry not searched who) had mentioned Cambridge Consultants previously.


Efficient AI means businesses can achieve more with less
They’re a design partner with ARM and they’ve worked with Prophesee to develop PureSentry, a way of detecting contamination in cell therapy.
 
  • Like
  • Fire
Reactions: 10 users
1671090440375.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Mt09

Regular
  • Like
Reactions: 7 users

Diogenese

Top 20
I don’t remember seeing or reading this article before, but Rob Telson has stated that they see Nvidia more as a partner than as a competitor, and Nvidia, through Mercedes-Benz at least, is fully aware of AKIDA Science Fiction, so I would like to think that in their role as a consultant to Sony EV they may have mentioned Brainchip:

Computing Hardware Underpinning the Next Wave of Sony, Hyundai, and Mercedes EVs​

January 30, 2022 by Tyler Charboneau

Major automakers Sony, Hyundai, and Mercedes-Benz have recently announced their EV roadmaps. What computing hardware will appear in these vehicles?

With electric vehicles (EVs) becoming increasingly mainstream, automakers are engaging in the next great development war in hopes of elevating themselves above their competitors. Auto executives expect EVs, on average, to account for 52% of all sales by 2030. Accordingly, investing in new computing technologies and EV platforms is key.
While the battery is the heart of the EV, intelligently engineering the car's “brain” is equally important. The EV’s computer is responsible for controlling a plethora of functions—ranging from regenerative-braking feedback, to infotainment operation, to battery management, to instrument cluster operation. Specifically, embedded chips like the CPU enable these features.

Diagram of some EV subsystems. Image used courtesy of MDPI

Modernized solutions like GM’s Super Cruise and Ultra Cruise claim to effectively handle 95% of driving scenarios. Ultra Cruise alone will leverage a new AI-capable 5nm processor. Drivers are demanding improved safety features like advanced lane centering, emergency braking, and adaptive cruise control. In fact, Volkswagen’s ID.4 EV received poor marks from buyers because it lacked such core capabilities.
What other hardware-level developments have manufacturers unveiled?

Sony Enters the EV Fray​

At CES 2022, Sony announced its intention to form a new company called Sony Mobility. This offshoot will be dedicated solely to exploring EV development—building on Sony’s 2020 VISION-S research initiative. While Sony unveiled its coupe EV prototype two years ago, dubbed VISION-S 01, this year’s VISION-S 02 prototype is an SUV. However, the company hasn’t committed to bringing these cars to mass-market consumers themselves.
It’s said that both Qualcomm and NVIDIA have been involved throughout the development process. However, the two prominent electronics manufacturers haven’t made their involvement with Sony clear (and vice versa). Tesla has adopted NVIDIA hardware to support its machine-learning algorithms; it’s, therefore, possible that Sony has taken similar steps.
Additionally, NVIDIA has long touted its DRIVE Orin SoC, DRIVE Hyperion, and DRIVE AGX Pegasus SoC/GPU. These are specifically built to power autonomous vehicles. The same can be said for its DRIVE Sim program, which enables self-driving simulations based on dynamic data.

The NVIDIA DRIVE Atlan. Image used courtesy of NVIDIA

The Sony VISION-S 02 features a number of internal displays and driver-monitoring features. This is where Qualcomm’s involvement may begin. The chipmaker previously introduced the Snapdragon Digital Chassis, a hardware-software suite that supports the following:
  • Advanced driver-assistance feature development
  • 4G, 5G, Wi-Fi, and Bluetooth connectivity
  • Virtual assistance, voice control, and graphical information
  • Car-to-Cloud connectivity
  • Navigation and GPS
It’s unclear if any of Sony’s EVs are reliant on either supplier for in-cabin functionality or overall development. However, both companies have a vested interest in the EV-AV market, and at least have held consulting roles with Sony for two years.

Hyundai and IonQ Join Forces​


Since Hyundai unveiled its BlueOn electric car in 2010, the company has been hard at work developing improved EVs behind the scenes. These efforts have led to recent releases of the IONIQ EV and Kona Electric. However, the automaker concedes that battery challenges have plagued the ownership experience of EVs following their market launch. Batteries continue to suffer wear and tear from charge and discharge cycling. Capacities have left something to be desired, as have overall durability and safety throughout an EV’s lifespan.
A recent partnership with quantum-computing experts at IonQ aims to solve many of these problems. Additionally, the duo hopes to lower battery costs while improving efficiency along the way. IonQ’s quantum processors are doing the legwork here—alongside the company’s quantum algorithms. The goal is to study lithium-based battery chemistries while leveraging Hyundai’s data and expertise in the area.

One of IonQ’s ion-trap chips announced in August 2021. Image used courtesy of IonQ

By 2025, Hyundai is aiming to introduce more than 12 battery electric vehicles (BEVs) to consumers. Batteries remain the most expensive component in all EVs, and there’s a major incentive to reduce their costs and pass savings down to consumers. This will boost EV uptake. While the partnership isn’t supplying Hyundai vehicles with hardware components at scale, the venture could help Hyundai design better chip-dependent battery-management systems in the future.

Mercedes-Benz Delivers Smarter Operation​

Stemming from time in the lab, including contributions from Formula 1 and Formula E, Mercedes-Benz has developed its next-generation VISION EQXX vehicle. A major selling point of Mercedes’ newest EV is the cockpit design—which features displays and graphics spanning the vehicle’s entire width. The car is designed to be human-centric and actually mimic the human mind during operation.
How is this possible? The German automaker has incorporated BrainChip’s Akida neural processor and associated software suite. This chipset powers the EQXX’s onboard systems and runs spiking neural networks. This operation saves power by only consuming energy during periods of learning or processing. Such coding dramatically lowers energy consumption.

Diagram of some of Akida's IP. Image used courtesy of Brainchip

Additionally, it makes driver interaction much smoother via voice control. Keyword recognition is now five to ten times more accurate than it is within competing systems, according to Mercedes. The result is described as a better driving experience while markedly reducing AI energy needs across the vehicle’s entirety. The EQXX and EVs after it will think in much more humanistic ways and support continuous learning. By doing so, Mercedes hopes to continually refine the driving experience throughout periods of extended ownership, across hundreds of thousands of miles.
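
For anyone wondering what "only consuming energy during periods of learning or processing" looks like in code, here is a minimal Python sketch of a leaky integrate-and-fire neuron, the basic unit of a spiking network: the potential leaks a little each step, but integration only happens when an input event (spike) arrives. The parameters are made up for illustration and this is not Akida's actual implementation.

```python
# Minimal leaky integrate-and-fire neuron to illustrate event-driven compute:
# charge is only added when an input spike arrives. Illustrative only.
from dataclasses import dataclass

@dataclass
class LIFNeuron:
    threshold: float = 1.0   # membrane potential needed to emit an output spike
    decay: float = 0.9       # leak factor applied each time step
    potential: float = 0.0

    def step(self, input_spike: bool, weight: float = 0.4) -> bool:
        self.potential *= self.decay           # cheap passive leak
        if input_spike:                        # event-driven: integrate only on events
            self.potential += weight
        if self.potential >= self.threshold:   # fire and reset
            self.potential = 0.0
            return True
        return False

if __name__ == "__main__":
    neuron = LIFNeuron()
    spikes_in = [True, False, True, True, False, True]
    spikes_out = [neuron.step(s) for s in spikes_in]
    print(spikes_out)   # fires once the accumulated events cross the threshold
```
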

The Future of EV Electronics​

While companies have achieved Level 2+ autonomy through driver-assistance packages, upgradeable EV software systems may eventually unlock fully-fledged self-driving. Accordingly, chip-level innovations are surging forward to meet future demand.
It’s clear that EV development has opened numerous doors for electrical engineers and design teams. The inclusion of groundbreaking new components rooted in AI and ML will help drivers connect more effectively with their vehicles. Interestingly, different automakers are taking different approaches on both software and hardware fronts.
Harmonizing these two facets of EV computing will help ensure a better future for battery-powered cars—making them more accessible and affordable to boot.


Brainchip’s stated ambition in automotive is to first make every automotive sensor smart and later to take control by becoming the central processing unit to which all these smart sensors report.

My opinion only so DYOR
FF

AKIDA BALLISTA

PS: As we approach the festive season, when hopefully there will be time for reflection, please use some of that time to decide upon a plan if you have been too busy to do so, as 2023 is shaping up as a breakout year for Brainchip.

If it was not clear to you from the MF article, it should be: manipulators are already planning their activities for 2023 and will be out in force. Even if the price is rising off the back of price-sensitive announcements, they will claim that whatever income starts to appear does not justify the share price, hoping to manipulate retail.

The only way to avoid being manipulated is to have a plan locked in before emotion comes into play and hasty decisions are made which later become a cause for regret.

I always find that it is useful to look at the timing of events such as collaborations, product launches ...

Sony Mobility was announced as a concept at CES 2022.

Sony Enters the EV Fray​

At CES 2022, Sony announced its intention to form a new company called Sony Mobility. This offshoot will be dedicated solely to exploring EV development—building on Sony’s 2020 VISION-S research initiative. While Sony unveiled its coup EV prototype two years ago, dubbed VISION-S 01, this year’s VISION-S 02 prototype is an SUV. However, the company hasn’t committed to bringing these cars to mass-market consumers themselves.
It’s said that both Qualcomm and NVIDIA have been involved throughout the development process. However, the two prominent electronics manufacturers haven’t made their involvement with Sony clear (and vice versa). Tesla has adopted NVIDIA hardware to support its machine-learning algorithms; it’s, therefore, possible that Sony has taken similar steps.
Additionally, NVIDIA has long touted its DRIVE Orin SoC, DRIVE Hyperion, and DRIVE AGX Pegasus SoC/GPU. These are specifically built to power autonomous vehicles. The same can be said for its DRIVE Sim program, which enables self-driving simulations based on dynamic data.


CES 2022 was in January, 6 months before the Prophesee/Akida reveal and 6 months after the Sony/Prophesee collaboration was announced (but a couple of years after the collaboration commenced ... NDA?)


https://www.prophesee.ai/2021/09/09/sony-event-based-vision-sensors-prophesee-co-development/

20210909:

Sony to Release Two Types of Stacked Event-Based Vision Sensors with the Industry’s Smallest*1 4.86μm Pixel Size for Detecting Subject Changes Only​


Atsugi, Japan — Sony Semiconductor Solutions Corporation (“Sony”) [SSS] today announced the upcoming release of two types of stacked event-based vision sensors. These sensors designed for industrial equipment are capable of detecting only subject changes, and achieve the industry’s smallest*1 pixel size of 4.86μm.

These two sensors were made possible through a collaboration between Sony and Prophesee, by combining Sony’s CMOS image sensor technology with Prophesee’s unique event-based vision sensing technology.
This enables high-speed, high-precision data acquisition and contributes to improve the productivity of the industrial equipment.
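
As a rough picture of "detecting only subject changes": an event-based sensor emits a per-pixel event whenever the log-brightness change crosses a contrast threshold, instead of streaming full frames. The Python sketch below fakes that with a frame difference; the threshold and function name are my own, and this is not Prophesee's actual sensor pipeline, just the idea.

```python
# Frame-difference approximation of an event camera: emit (y, x, polarity)
# only for pixels whose log-brightness changed by more than a threshold.
import numpy as np

def frame_to_events(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.15):
    eps = 1e-6                                   # avoid log(0)
    diff = np.log(curr + eps) - np.log(prev + eps)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs]).astype(np.int8)   # +1 brighter, -1 darker
    return ys, xs, polarity

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prev = rng.random((4, 4)).astype(np.float32)
    curr = prev.copy()
    curr[1, 2] *= 2.0                            # one pixel brightens -> one event
    ys, xs, pol = frame_to_events(prev, curr)
    print(list(zip(ys.tolist(), xs.tolist(), pol.tolist())))
```

A static scene produces no events at all, which is where the bandwidth and power savings come from.
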

So clearly, Prophesee and Sony had been collaborating for quite a while before September 2021, going through initial feasibility analysis, software simulation, silicon design & tapeout, engineering samples ... That looks like maybe 2 years of collaboration. Presumably iCatch/VeriSilicon were in at the start as well.

... and thanks to @Fullmoonfever we know that the logic was from iCatch using CNN hardware from VeriSilicon.

... and thanks to Prophesee (June 2022), we know Akida beat the socks off VeriSilicon and any other pretenders (including Qualcomm and Nvidia).

So maybe Sony Mobility can save itself a couple of years futile tinkering ...

... if only they talk to SSS.

@Diogenese

Maybe the answer.

In 2020 iCatch started using VeriSilicon's NPU for their latest V37 at the time.

Would presume has just been iterations of same since.

Haven't looked into the VeriSilicon capabilities yet. Time for 💤



VeriSilicon VIP9000 and ZSP are Adopted by iCatch Next Generation AI-powered Automotive Image Processing SoC

Shanghai, China, May 12, 2020 – VeriSilicon today announced that iCatch Technology, Inc. (TPEX: 6695), a global leader in low-power and intelligent image processing SoC solutions, has selected VeriSilicon VIP9000 NPU and ZSPNano DSP IP. Both will be utilized in the iCatch’s next generation AI-powered image processing SoC with embedded neural network accelerators powered by VeriSilicon’s NPU for applications such as automotive electronics, industrial, appliance, consumer electronics, AIoT, smart home, commercial and more.
 
  • Like
  • Fire
  • Love
Reactions: 32 users
The International VLSI Design & Embedded Systems conference will be held in Hyderabad, India from 8-12 Jan 2023. A lot of the big-name semiconductor companies will be there.

Here's the link https://vlsid.org/

"International VLSI Design & Embedded Systems conference is a Premier Global conference with legacy of over three and half decades. This Global Annual technical conference that focusses on latest advancements in VLSI and Embedded Systems, is attended by over 2000 engineers, students & faculty, industry, academia, researchers, bureaucrats and government bodies.

Semiconductors are the intangible backbone of every industry across the globe. Silicon took the lion’s share over the past decades and remained the primary enabler for digitization of the world. With scaling reaching its fundamental limits, it is time to look at addressing technological challenges at higher levels of abstraction in CMOS-based design and, at the same time, look beyond silicon for further performance enhancement.

VLSID 2023 – the first physical conference post-pandemic – acts as a platform for industry and academia alike to discuss, deliberate and explore the frontiers of the semiconductor eco-system that could eventually enable disruptive technologies for global digitalization."
 
  • Like
  • Fire
  • Love
Reactions: 21 users

equanimous

Norse clairvoyant shapeshifter goddess
  • Like
  • Fire
  • Love
Reactions: 39 users

TechGirl

Founding Member
Word is getting out there (y)

Little 6 min podcast from yesterday. In the first 2 mins he talks about the OTC stock, worth listening to as no doubt our tech will help what they are doing grow. The next 2 mins he talks about BRN recently joining IFS, and at minute 5 he talks about "the market for Machine Learning is projected to grow from $21.5 billion USD in 2021 to $276.58 billion by 2028". The last minute is just his disclosures.


AI Eye Podcast 744: Stocks discussed: (OTCPINK: GTCH) (ASX: BRN) (NasdaqGS: INTC)​





Listen to today's podcast:

https://www.investorideas.com/Audio/Podcasts/2022/121422-AI-Eye.mp3


Vancouver, Kelowna, Delta, BC - December 14, 2022 (Investorideas.com Newswire) Investorideas.com, a global investor news source covering Artificial Intelligence (AI) brings you today's edition of The AI Eye - watching stock news, deal tracker and advancements in artificial intelligence - featuring technology company GBT Technologies Inc. (OTCPINK:GTCH).




Hear the AI Eye on Spotify
Today's Column -
The AI Eye - Watching stock news, deal tracker and advancements in artificial intelligence

GBT Files Continuation Application for AI-Powered Facial/Body Recognition Patent, and BrainChip Joins Intel Foundry Services




Link to website:
 
  • Like
  • Love
  • Fire
Reactions: 61 users
... and in the wheel department, we are looking for anyone who has any ideas about how to reduce the wear on the corners of our basalt square tyres.

We have also had some reports of mal de mer on our rectangular tyres, but only when they get out of synch.
That’s easily solved, just fit square pneumatic rubber compound tyres.

Had exactly the same problem with my wheel barrow. Fixed it instantly.😂🤣🤡😂😂🤓
 
  • Haha
  • Like
  • Wow
Reactions: 9 users