Townyj
Ermahgerd
LOL!! GL getting away now
Following on from the above, I saw an article today discussing BMW’s Facebook pages and the brand’s main Instagram account, both of which had their profile pictures changed on Tuesday. Here's some of what the article had to say.
BMW Turns Into "DeeMW" on Social Media (on Purpose, Not Because They Were Hacked)
The Bavarian automaker startled many of its fans in the online world after the white and blue logo turned into a black-and-yellow “Dee” image. (www.autoevolution.com)
His follow list is getting smaller and smaller
Found this article very interesting about Sony
Sony is putting edge AI sensors that use far less power into vehicles. Hmm, I have some pretty speculative opinions about the IP being used.
Sony is working on new sensors for self-driving that it claims use 70% less electricity.
The sensors would help significantly extend the range of electric vehicles with autonomous capabilities.
According to a report in Nikkei Asia, they will be made by Sony Semiconductor Solutions and be paired with software developed by Japanese start-up Tier IV.
The companies aim to deliver Level 4 tech, as defined by the Society of Automotive Engineers, by 2030. This means that the car drives itself, with no requirement for human intervention.
To achieve Level 4, autonomous vehicles (AVs) need a wide array of hardware, including sensors and cameras, that transmit massive amounts of data, requiring vast amounts of power.
Sony is hoping to reduce electricity usage via edge computing, with as much data as possible processed through artificial intelligence-equipped sensors and software on the vehicles themselves, rather than being transmitted to external networks.
This approach would potentially make AVs safer, too, by cutting communication lags.
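To make the edge-computing idea concrete, here is a toy Python sketch (my own illustration, not Sony's or Tier IV's design; the frame size, threshold and the detect_event helper are all made up) of processing frames on the vehicle and transmitting only compact event records instead of streaming raw sensor data:

```python
import numpy as np

# Toy illustration only: run a crude "detector" on the vehicle and transmit
# small event records, instead of streaming every raw frame off-board.

FRAME_SHAPE = (120, 160)      # illustrative resolution
MOTION_THRESHOLD = 12.0       # arbitrary threshold for this sketch

def detect_event(prev_frame, frame, threshold=MOTION_THRESHOLD):
    """Stand-in for an on-sensor AI model: flag a frame only when the
    mean absolute change between consecutive frames exceeds a threshold."""
    return float(np.abs(frame - prev_frame).mean()) > threshold

rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(100,) + FRAME_SHAPE)
frames[40:43] += 50.0         # inject a short "event" into the stream

raw_bytes = frames.nbytes     # what streaming every raw frame would cost
events = []
for i in range(1, len(frames)):
    if detect_event(frames[i - 1], frames[i]):
        # transmit only a tiny event record (frame index plus a score)
        events.append({"frame": i,
                       "score": float(np.abs(frames[i] - frames[i - 1]).mean())})

event_bytes = len(events) * 32   # rough size of the event records
print(f"raw stream: {raw_bytes / 1e6:.1f} MB, events sent: {len(events)} (~{event_bytes} bytes)")
```

The point is simply that on-device inference shrinks what has to leave the vehicle, which is where the claimed power and latency savings come from.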
It’s also claimed that Sony will incorporate image recognition and radar technologies into the new sensor, which would assist self-driving in rain and other adverse weather conditions.
The company currently controls around 50% of the global market for image sensors, and also has strong experience in edge computing, having commercialized technology in chips for retailers and industrial equipment.
Tier IV, meanwhile, provides open-source self-driving software. Among its partners are Taiwan consumer electronics company Foxconn, which is planning to challenge car makers with an EV platform of its own, and Japanese company Yamaha, with whom it is developing autonomous transport solutions for factories.
In recent years, Sony has become a much more visible presence in the automotive arena. In 2020, the company displayed an electric sedan concept called the VISION-S at CES in Las Vegas and at the 2022 event it revealed an SUV version, the VISION-S 02.
Earlier this year, it announced it was teaming up with automaker Honda to form a new company to build electric vehicles and “provide services for mobility,” Sony Honda Mobility Inc.
The VISION-S featured a total of 40 sensors – 18 cameras, 18 radar/ultrasonic and four lidar – suggesting automation will have a key role to play in the new company.
Planning ahead: I don’t remember seeing or reading this article before, but Rob Telson has stated that they see Nvidia more as a partner than as a competitor, and Nvidia (through Mercedes-Benz at least) is fully aware of AKIDA Science Fiction. So I would like to think that, in their role as a consultant to Sony’s EV effort, Nvidia may have mentioned Brainchip:
Computing Hardware Underpinning the Next Wave of Sony, Hyundai, and Mercedes EVs
January 30, 2022 by Tyler Charboneau
Major automakers Sony, Hyundai, and Mercedes-Benz have recently announced their EV roadmaps. What computing hardware will appear in these vehicles?
With electric vehicles (EVs) becoming increasingly mainstream, automakers are engaging in the next great development war in hopes of elevating themselves above their competitors. Auto executives expect EVs, on average, to account for 52% of all sales by 2030. Accordingly, investing in new computing technologies and EV platforms is key.
While the battery is the heart of the EV, intelligently engineering the car's “brain” is equally important. The EV’s computer is responsible for controlling a plethora of functions—ranging from regenerative-braking feedback, to infotainment operation, to battery management, to instrument cluster operation. Specifically, embedded chips like the CPU enable these features.
Diagram of some EV subsystems. Image used courtesy of MDPI
Modernized solutions like GM’s Super Cruise and Ultra Cruise claim to effectively handle 95% of driving scenarios. Ultra Cruise alone will leverage a new AI-capable 5nm processor. Drivers are demanding improved safety features like advanced lane centering, emergency braking, and adaptive cruise control. In fact, Volkswagen’s ID.4 EV received poor marks from buyers because it lacked such core capabilities.
What other hardware-level developments have manufacturers unveiled?
Sony Enters the EV Fray
At CES 2022, Sony announced its intention to form a new company called Sony Mobility. This offshoot will be dedicated solely to exploring EV development—building on Sony’s 2020 VISION-S research initiative. While Sony unveiled its coupe EV prototype two years ago, dubbed VISION-S 01, this year’s VISION-S 02 prototype is an SUV. However, the company hasn’t committed to bringing these cars to mass-market consumers itself.
It’s said that both Qualcomm and NVIDIA have been involved throughout the development process. However, the two prominent electronics manufacturers haven’t made their involvement with Sony clear (and vice versa). Tesla has adopted NVIDIA hardware to support its machine-learning algorithms; it’s, therefore, possible that Sony has taken similar steps.
Additionally, NVIDIA has long touted its DRIVE Orin SoC, DRIVE Hyperion, and DRIVE AGX Pegasus SoC/GPU. These are specifically built to power autonomous vehicles. The same can be said for its DRIVE Sim program, which enables self-driving simulations based on dynamic data.
The NVIDIA DRIVE Atlan. Image used courtesy of NVIDIA
The Sony VISION-S 02 features a number of internal displays and driver-monitoring features. This is where Qualcomm’s involvement may begin. The chipmaker previously introduced the Snapdragon Digital Chassis, a hardware-software suite that supports the following:
- Advanced driver-assistance feature development
- 4G, 5G, Wi-Fi, and Bluetooth connectivity
- Virtual assistance, voice control, and graphical information
- Car-to-Cloud connectivity
- Navigation and GPS

It’s unclear if any of Sony’s EVs are reliant on either supplier for in-cabin functionality or overall development. However, both companies have a vested interest in the EV-AV market, and at least have held consulting roles with Sony for two years.
Hyundai and IonQ Join Forces
Since Hyundai unveiled its BlueOn electric car in 2010, the company has been hard at work developing improved EVs behind the scenes. These efforts have led to recent releases of the IONIQ EV and Kona Electric. However, the automaker concedes that battery challenges have plagued the ownership experience of EVs following their market launch. Batteries continue to suffer wear and tear from charge and discharge cycling. Capacities have left something to be desired, as have overall durability and safety throughout an EV’s lifespan.
A recent partnership with quantum-computing experts at IonQ aims to solve many of these problems. Additionally, the duo hopes to lower battery costs while improving efficiency along the way. IonQ’s quantum processors are doing the legwork here—alongside the company’s quantum algorithms. The goal is to study lithium-based battery chemistries while leveraging Hyundai’s data and expertise in the area.
One of IonQ’s ion-trap chips announced in August 2021. Image used courtesy of IonQ
By 2025, Hyundai is aiming to introduce more than 12 battery electric vehicles (BEVs) to consumers. Batteries remain the most expensive component in all EVs, and there’s a major incentive to reduce their costs and pass savings down to consumers. This will boost EV uptake. While the partnership isn’t supplying Hyundai vehicles with hardware components at scale, the venture could help Hyundai design better chip-dependent battery-management systems in the future.
Mercedes-Benz Delivers Smarter Operation
Stemming from time in the lab, including contributions from Formula 1 and Formula E, Mercedes-Benz has developed its next-generation VISION EQXX vehicle. A major selling point of Mercedes’ newest EV is the cockpit design—which features displays and graphics spanning the vehicle’s entire width. The car is designed to be human-centric and actually mimic the human mind during operation.
How is this possible? The German automaker has incorporated BrainChip’s Akida neural processor and associated software suite. This chipset powers the EQXX’s onboard systems and runs spiking neural networks. This operation saves power by only consuming energy during periods of learning or processing. Such coding dramatically lowers energy consumption.
Diagram of some of Akida's IP. Image used courtesy of Brainchip
Additionally, it makes driver interaction much smoother via voice control. Keyword recognition is now five to ten times more accurate than it is within competing systems, according to Mercedes. The result is described as a better driving experience while markedly reducing AI energy needs across the vehicle’s entirety. The EQXX and EVs after it will think in much more humanistic ways and support continuous learning. By doing so, Mercedes hopes to continually refine the driving experience throughout periods of extended ownership, across hundreds of thousands of miles.
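The article describes Akida as running spiking neural networks that only spend energy when there is activity to process. As a rough, generic illustration of that event-driven idea (this is not BrainChip's implementation; the neuron model, weight, leak and spike rate below are arbitrary), here is a minimal leaky integrate-and-fire neuron in Python:

```python
import numpy as np

# Minimal, generic leaky integrate-and-fire (LIF) neuron sketch.
# NOT BrainChip's Akida implementation; it only illustrates the event-driven
# idea: the expensive synaptic work happens when a spike arrives, not on
# every time step.

def lif_run(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Return output spike times and a count of spike-driven updates."""
    potential = 0.0
    out_spikes = []
    updates = 0                      # proxy for "energy spent"
    for t, spike in enumerate(input_spikes):
        potential *= leak            # cheap passive decay each step
        if spike:                    # event-driven work only on an input spike
            potential += weight
            updates += 1
            if potential >= threshold:
                out_spikes.append(t)
                potential = 0.0      # reset after firing
    return out_spikes, updates

rng = np.random.default_rng(1)
spikes = rng.random(1000) < 0.02     # sparse input: ~2% of steps carry a spike
out, updates = lif_run(spikes)
print(f"input spikes: {int(spikes.sum())}, output spikes: {len(out)}, synaptic updates: {updates}")
```

Because synaptic work is only done when an input spike arrives, the number of updates tracks the sparsity of the input rather than the length of the recording, which is the intuition behind the power claims.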
The Future of EV Electronics
While companies have achieved Level 2+ autonomy through driver-assistance packages, upgradeable EV software systems may eventually unlock fully-fledged self-driving. Accordingly, chip-level innovations are surging forward to meet future demand.
It’s clear that EV development has opened numerous doors for electrical engineers and design teams. The inclusion of groundbreaking new components rooted in AI and ML will help drivers connect more effectively with their vehicles. Interestingly, different automakers are taking different approaches on both software and hardware fronts.
Harmonizing these two facets of EV computing will help ensure a better future for battery-powered cars, making them more accessible and affordable to boot.
BrainChip's stated ambition in automotive is first to make every automotive sensor smart, and later to take control by becoming the central processing unit to which all these smart sensors report.
My opinion only so DYOR
FF
AKIDA BALLISTA
PS: As we approach the festive season, when hopefully there will be time for reflection, please use some of that time to settle on a plan if you have been too busy to decide on one, because 2023 is shaping up as a breakout year for Brainchip.
If it was not clear to you from the MF article, it should be: manipulators are already planning their activities for 2023 and will be out in force. Even if the price is rising off the back of price-sensitive announcements, they will claim that whatever income starts to appear does not justify the share price, hoping to manipulate retail holders.
The only way to avoid being manipulated is to have a plan locked in before emotion comes into play and hasty decisions are made which later become a cause for regret.
Wonder if Akida will introduce Intel and NVIDIA properly to the wonderful world of 1-4 bit instead.
Nvidia, Intel develop memory-optimizing deep learning training standard
Paper: FP8 can deliver training accuracy similar to 16-bit standards
Ben Wodecki
September 20, 2022
Nvidia, Intel and Arm have joined forces to create a new standard designed to optimize memory usage in deep learning applications.
The 8-bit floating point (FP8) standard was developed across several neural network architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Transformer-based models.
The standard is also applicable to language models up to 175 billion parameters, which would cover the likes of GPT-3, OPT-175B and Bloom.
“By adopting an interchangeable format that maintains accuracy, AI models will operate consistently and performantly across all hardware platforms, and help advance the state of the art of AI,” Nvidia’s Shar Narasimhan wrote in a blog post.
Optimizing AI memory usage
When building an AI system, developers need to consider the system's weights, the parameters that encode what it learns from its training data, and the numeric precision used to store them.
Several formats are in common use today, including FP32 and FP16; lower-precision formats reduce the memory required to train a system, but usually at some cost to accuracy.
The new format uses fewer bits per value than prior methods, so memory is used more efficiently; less memory used by a system means less computational power is needed to run an application.
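As a rough illustration of what dropping to 8 bits per value means, here is a small NumPy sketch that emulates an FP8-style format by keeping only a few mantissa bits and saturating the range. It is not bit-exact with the Nvidia/Intel/Arm FP8 spec (which defines E4M3 and E5M2 encodings with their own special-value rules); the mantissa_bits and max_value parameters are simplifications chosen for illustration:

```python
import numpy as np

def quantize_lowprec(x, mantissa_bits=3, max_value=448.0):
    """Crude emulation of an FP8-style (E4M3-like) format: keep a few
    mantissa bits and saturate at a small maximum magnitude.
    Not bit-exact with the published FP8 spec -- illustration only."""
    x = np.asarray(x, dtype=np.float64)
    mant, exp = np.frexp(x)                    # x = mant * 2**exp, |mant| in [0.5, 1)
    scale = 2.0 ** (mantissa_bits + 1)         # +1 for the implicit leading bit
    mant = np.round(mant * scale) / scale      # drop precision beyond mantissa_bits
    y = np.ldexp(mant, exp)
    return np.clip(y, -max_value, max_value)   # saturate to the format's range

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=10_000)   # toy layer weights
q = quantize_lowprec(weights)

rel_err = np.abs(q - weights) / (np.abs(weights) + 1e-12)
print(f"median relative error: {np.median(rel_err):.3%}")
print(f"storage: {weights.size * 4} bytes at FP32 vs {weights.size} bytes at 8 bits")
```

Even this crude emulation shows the basic trade-off: a 4x storage saving over FP32 in exchange for a small rounding error on typical weight distributions.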
The trio outlined the new standard in a paper, which covers training and inference evaluation using the standard across a variety of tasks and models.
According to the paper, FP8 achieved “comparable accuracy” to FP16 format across use cases and applications including computer vision.
Results on transformers and GAN networks, like OpenAI’s DALL-E, saw FP8 achieve training accuracy similar to 16-bit precisions while delivering “significant speedups.”
Testing using the MLPerf Inference benchmark, Nvidia Hopper using FP8 achieved 4.5x faster times using the BERT model for natural language processing.
“Using FP8 not only accelerates and reduces resources required to train but also simplifies 8-bit inference deployment by using the same datatypes for training and inference,” according to the paper.
Hi Fmf,
That just triggered a couple of obscure dots ... Akida works on probability, what does the image most closely resemble?
In fact, I reckon we are on the path of the Infinite Improbability Drive. How many heads does PvdM have?
SNN
I think the archaeology department in Cairo may be able to make use of this.
Reminds me of when they merged the International Patent Classification (IPC) system with the US Patent Classification system to form the dubiously named Cooperative Patent Classification (CPC) system.
The IPC was a reasonably structured hierarchy, but the US system, developed after and influenced by the French Revolution, was more of a FIFO system.
Some of the US classes had approximate IPC equivalents, but those that didn't were tacked on the end of the nearest group.
Bugger!
I had something really serious to say, but Zaphod hijacked my synapses.
It was about Mercedes and Nvidia and maybe Sony and Prophesee ...
Interesting to look at some hairy coconut vacancies.
... and in the wheel department, we are looking for anyone who has any ideas about how to reduce the wear on the corners of our basalt square tyres.
Search Jobs - Multiple filters applied. - Jobs - Careers at Apple
Explore all jobs at Apple. Search by keyword, location, and other criteria. Create a profile and apply today. (jobs.apple.com)
They mostly reference DNNs, though there are also some keyword crossovers like neural acceleration, new neural network ops, sparsity, etc.
These few are from the second half of this year.
Machine Learning Engineer, Training and Acceleration
Seattle, Washington, United States
Machine Learning and AI
Description
We’re looking for strong software engineers/leads to build a next-generation deep learning technology stack to accelerate on-device machine learning capabilities and emerging innovations. You’ll be part of a close-knit team of software developers and deep learning experts working in the areas of hardware-aware neural network optimization, algorithms, and neural architecture search. We’re looking for candidates with strong software engineering skills who are passionate about machine learning, computational science and hardware. RESPONSIBILITIES:
- Design and develop APIs for common and emerging deep learning primitives: layers, tensor operations, optimizers and more specific hardware features.
- Implement efficient tensor operations and DNN training algorithms.
- Train and evaluate DNNs for the purpose of benchmarking neural network optimization algorithms. Our framework reduces latency and power consumption of neural networks found in many Apple products.
- Perform research in emerging areas of efficient neural network development, including quantization, pruning, compression and neural architecture search, as well as novel differentiable compute primitives.
- We encourage publishing novel research at top ML conferences.
Camera Machine Learning Engineer - ISP Algorithms
Santa Clara Valley (Cupertino), California, United States
Machine Learning and AI
Key Qualifications
- Self-driven and passionate about image quality excellence!
- Strong machine learning and deep learning fundamentals, ideally in fields related to image processing and restoration, such as de-noising, super-resolution, semantic segmentation, GANs, saliency
- Strong proficiency in Python and at least one major deep learning framework (Pytorch or Tensorflow preferred)
- A keen interest towards real-time performance optimization, previous experience taking approaches from research papers and successfully deploying them in a resource-constrained, mobile computing environment
- Understanding of the physics and math behind the digital image formation process, from image capture and imaging sensor characteristics, optics fundamentals, and image signal processing, and their influence on final image and video quality would be a plus
- Solid programming skills in C / C++, Matlab is a bonus
- Previous experience with network compression, quantization, performance and memory profiling and optimization is a bonus
Description
In the Camera ML Algorithm Engineer role, you will develop and ship features in one or more of the following fields: pixel processing and image restoration (de-noising, de-blurring, super-resolution, style transfer, SDR to HDR mapping), scene understanding (object detection and tracking, semantic segmentation, scene analysis for auto-focus, exposure and white balance, saliency detection), real-time optical flow, image registration and fusion, optimization for low latency and low power consumption.
AI/ML - Deep Learning Software Engineer, CoreML, Machine Learning Platform & Technology
Seattle, Washington, United States
Machine Learning and AI
Key Qualifications
- Strong C/C++ programming skills
- Experience with Python programming
- Excellent in API design, software architecture and data structures
- Excellent problem solving and debugging skills
- Experience, or deep interest, in deep learning libraries such as TensorFlow, PyTorch, JAX etc.
Description
In this role, you will work on the CoreML framework and the underlying compiler stack that powers it. You will work closely with the compiler team, including the hardware-specific compiler teams for CPU, GPU and ANE. You will get an opportunity to work on different levels of the ML stack at Apple, by contributing to the core C++ libraries and to the Python bridge connecting it to external frameworks such as TensorFlow and PyTorch. In addition, you will...
- Work closely with ANE/GPU/CPU hardware backend teams and the ML compiler team at Apple, to co-design features for the neural network inference stack
- Design and implement new neural network ops: CPU C++ implementations and Python bridge to TensorFlow/PyTorch via CoreMLTools
- Design and implement new deep learning quantization features across the stack: affine quantization, pruning, sparsity etc.
- Work closely with Apple researchers and app developers to optimize their deep learning model deployments on device, by implementing new NN ops, optimizations, graph passes etc. in the ML stack
- Design and develop APIs for common and emerging deep learning primitives: ops, tensor operations, optimizers and more specific hardware features.
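Since this posting name-checks affine quantization, pruning and sparsity, here is a generic NumPy sketch of those two techniques (textbook versions, not Apple's CoreML implementation; the function names and the 8-bit/50% settings are just illustrative):

```python
import numpy as np

# Generic sketches of two techniques named in the posting; NOT Apple's
# CoreML implementation, just textbook versions for illustration.

def affine_quantize(w, num_bits=8):
    """Map float weights to unsigned ints via a scale and zero-point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def affine_dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights to reach the target sparsity."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

q, s, zp = affine_quantize(w)
w_hat = affine_dequantize(q, s, zp)
print("quantization MSE:", float(np.mean((w - w_hat) ** 2)))

w_sparse = magnitude_prune(w, sparsity=0.5)
print("fraction of zeros after pruning:", float(np.mean(w_sparse == 0.0)))
```

A production stack would handle per-channel scales, calibration and structured sparsity, but the core arithmetic looks like this.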
C++ SWE (Machine Learning Acceleration: Infrastructure and Frameworks)
Seattle, Washington, United States
Machine Learning and AI
Key Qualifications
- 2+ years of experience developing ML frameworks and software solutions in industry or academia.
- Experience using modern machine learning frameworks like TensorFlow or PyTorch.
- Experience with modern IR for ML workloads (MLIR).
- Strong fundamentals in problem solving and algorithm design
- Passion for software architecture, API and development tool design
- Ability to write flawless, readable and maintainable code in C++
- Strong communication skills, and ability to present deep technical ideas to audience with different skillsets.
- Collaborative team player who can work well across multiple teams and organizations.
- Understanding of compiler development
- Understanding of hardware acceleration for ML workloads
Description
Responsibilities include:
- Developing machine learning infrastructure that will be used by product teams for developing, evaluating and deploying machine learning models.
- Developing and maintaining a large code base by writing readable, modular and well-tested code.
- Providing technical support to product and algorithm teams on best practices for developing efficient machine learning models, and analyzing failure cases.
- Interacting with high-level ML frameworks such as CoreML.
- Interacting with the compiler for Apple's proprietary Neural Engine Accelerator to expose and enable new features of the accelerator.