BRN Discussion Ongoing

Glad to hear you had a great day, and I do hope your son receives an offer for his preferred course. I actually work at La Trobe. When BRN announced they were starting their university accelerator program, my first thought was to let the IT/Engineering department know. Clearly it must have fallen on deaf ears, and/or I need to do a better job of spruiking it. I can't think of anything cooler than having something cutting-edge like Akida be part of the curriculum if I were a student again.

Thanks for bringing this up on Open Day with that lecturer. I will also follow this up at work again too.
"I actually work at La Trobe"

Hmm, doxing is against the rules isn't it?..


Sorry Wilzy (Willie?) the sledge was too good to pass up..
Some might say there are a few similarities..
Sorry.. 😔
 
  • Haha
  • Like
Reactions: 11 users

McHale

Regular
Brainchip’s links to Tenstorrent through our partner SiFive. As mentioned they will license the X280 processor. 🌍🍻

And don't forget SAMSUNG and HYUNDAI have just given $100M to Tenstorrent to fast-track their AI chips.

Mentioned above, under the heading "About SiFive", is that SiFive is backed by SK Hynix.

SK Hynix (according to Wikipedia) is the world's second-largest memory chip maker after Samsung and the world's third-largest semiconductor company.

This is the same "SK" that @Frangipani posted about regarding possible links with BRN in post #61,554 last Friday, on South Korean players which may be flying under the radar.

SK is a massive South Korean conglomerate with fingers in many pies, including the manufacture of batteries for EVs, as well as its large interest in chip making.
 
  • Like
  • Fire
  • Love
Reactions: 56 users

Sam

Nothing changes if nothing changes
"I actually work at La Trobe"

Hmm, doxing is against the rules isn't it?..

Sorry Wilzy (Willie?) the sledge was too good to pass up..
Some might say there are a few similarities..
Sorry.. 😔
Straight to the flaming bin with you😂
 
  • Haha
  • Love
Reactions: 7 users

wilzy123

Founding Member
"I actually work at La Trobe"

Hmm, doxing is against the rules isn't it?..
Yes it is, however on this occasion it is not, as I sought and obtained approval from myself to release this information.
 
  • Haha
  • Like
Reactions: 21 users

equanimous

Norse clairvoyant shapeshifter goddess
Yes it is, however on this occasion it is not, as I sought and obtained approval from myself to release this information.
So you're not Homer Simpson...
 
  • Haha
  • Like
Reactions: 7 users

Fox151

Regular
  • Haha
Reactions: 6 users

wilzy123

Founding Member
  • Haha
  • Like
Reactions: 14 users

suss

Regular
Mentioned above, under the heading "About SiFive", is that SiFive is backed by SK Hynix.

SK Hynix (according to Wikipedia) is the world's second-largest memory chip maker after Samsung and the world's third-largest semiconductor company.

This is the same "SK" that @Frangipani posted about regarding possible links with BRN in post #61,554 last Friday, on South Korean players which may be flying under the radar.

SK is a massive South Korean conglomerate with fingers in many pies, including the manufacture of batteries for EVs, as well as its large interest in chip making.
Any known sales people (or other employees) from Brainchip in SK?
 
  • Like
Reactions: 2 users
A bit of revision (March 12th 2023) on Sunday morning is always refreshing. Perhaps this is where some of that customer revenue came from?

“In order to serve the diverse and growing IoT market, developers require a new standard of secure, high-performance microcontrollers, combined with endpoint ML capabilities,” said Paul Williamson, SVP and GM, IoT Line of Business at Arm. “The integration of Arm’s highest performance microcontroller with the Akida portfolio enables our partners to deliver on this potential and efficiently handle advanced machine learning workloads.”

The integration of Akida and the Arm Cortex-M85 processor is an important step forward for BrainChip, demonstrating the company’s commitment to developing cutting-edge AI solutions that deliver exceptional performance and efficiency.

“We are delighted Akida can now seamlessly integrate with the Arm Cortex-M85 processor, which is one of the most advanced and efficient MCU platforms available for intelligent IoT,” said Nandan Nayampally, CMO of BrainChip. “This is a significant milestone for BrainChip, as it drives new possibilities for high-performance, low-power AI processing at the edge. We are excited to offer our mutual customers the benefits of this powerful combination of technologies.”
This is as significant a statement as any in the history of Brainchip in my view
 
  • Like
  • Love
  • Fire
Reactions: 31 users
Xray1
Regular
Jul 24, 2023
#60,903
I think, and strongly hope, that Sean H is being extra careful these days about what he says, to keep his credibility intact as CEO:

"The Company is currently experiencing its highest ever level of commercial engagements, the volume and quality of which are improving rapidly as a larger number of customers learn about BrainChip and our 2nd Generation technology which will be available in late Q3."
The opinions and research I share are my own and I am not licensed. External links are not recommended. To be safe, conduct your own research or seek advice from a licensed financial advisor.

Stock Disclosure:
Held.
Speaking of X-ray
I’m not sure radiologists will be big fans of AI-related tech. It may well mean the evaporation of a specialty.
 
  • Like
  • Fire
Reactions: 4 users

equanimous

Norse clairvoyant shapeshifter goddess
Research fellow for Spiking Neural Networks (SNN) / Neuromorphic Computing
Erlangen

The Fraunhofer-Gesellschaft (www.fraunhofer.com) currently operates 76 institutes and research institutions throughout Germany and is the world’s leading applied research organization. Around 30 000 employees work with an annual research budget of 2.9 billion euros.
One task of our Broadband and Broadcast Department is driving the development of new technologies for artificial intelligence and communication systems. In the research field "Neuromorphic Computing", our focus is on the energy-efficient implementation of neural networks on highly specialized accelerator architectures. We want to further expand our ambitious team and are looking for you as a passionate expert in the field of spiking neural networks.
Do you know how to incorporate the latest scientific findings into your search for usable solutions? Do you evaluate new ideas independently, and do you have a structured way of working? Are you already well connected in the scientific community in the above-mentioned topics, or would you like to build up and expand your network in the future? Then we have the perfect position for you!
What you will do
  • You actively shape the strategy and research roadmap in our research field neuromorphic computing with a focus on spiking neural networks.
  • You will take over the technical project management in our research and development projects and guide the project members in technical questions.
  • As a research associate, you will research and develop tools and algorithms for spiking neural networks and their implementation in energy-efficient embedded systems, especially neuromorphic hardware, and contribute to the development of neuromorphic AI accelerators.
  • With your expertise, you will support the group management in the acquisition of projects and elaborate the technical aspects of project proposals.
  • In addition, you will support and guide our young scientific talents, especially in publications.
  • Last but not least, you will publish your own research results in journals and present them at scientific conferences.
What you bring to the table
  • Completed scientific university studies with a doctorate in the field of machine learning, deep learning, artificial intelligence, theoretical neuroscience, neuroinformatics or any comparable discipline
  • In-depth knowledge of machine learning and neural networks with a focus on SNN
  • Good knowledge of the properties and application of different classes and architectures of neuromorphic hardware
  • Good programming skills in Python, plus knowledge of other programming languages (e.g. C++, C#, Julia or similar)
  • Knowledge of project management and/or mixed-signal hardware architectures is an advantage
  • Fluent English and good German skills, or a willingness to learn German
What you can expect
  • Through your collaboration in strategic planning and in versatile projects with a high practical relevance at the interface between research and industry, you can actively shape the future topics of neuromorphic computing and edge AI.
  • You can expect an open and collegial environment as well as individual development tailored to your needs through a comprehensive range of further qualifications and the opportunity to develop into a senior research associate.
  • We offer you a broad range of networking opportunities, visibility within and outside of the Fraunhofer-Gesellschaft, as well as contacts to numerous academic and industrial partners.
  • We support your work-life balance with flexible working hours, mobile work, and various support services to better balance private and professional life.
  • If you are interested and highly self-motivated, a postdoctoral lecture qualification is also possible.
  • We will be happy to tell you about further benefits in a personal interview.
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Gazzafish

Regular
Wonder if we are involved here?


NPU IP family for generative and classic AI with highest power efficiency, scalable and future proof​

NeuPro-M™ redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing workloads.
NeuPro-M is a highly power-efficient and scalable NPU architecture with an exceptional power efficiency of up to 350 Tera Ops Per Second per Watt (TOPS/Watt).
NeuPro-M provides a major leap in performance thanks to its heterogeneous coprocessors that demonstrate compound parallel processing, firstly within each internal processing engine and secondly between the engines themselves.
Ranging from 4 TOPS up to 256 TOPS per core, and fully scalable to above 1,200 TOPS in multi-core configurations, NeuPro-M covers a wide range of AI compute needs, enabling it to fit a broad range of end markets including infrastructure, industrial, automotive, PC, consumer, and mobile.
With various orthogonal memory-bandwidth reduction mechanisms and a decentralized architecture of NPU management controllers and memory resources, NeuPro-M ensures full utilization of all its coprocessors while maintaining stable, concurrent data tunneling, eliminating bandwidth-limited performance, data congestion and processing-unit starvation. These mechanisms also reduce dependency on the external memory of the SoC in which the NeuPro-M NPU IP is embedded.
NeuPro-M AI processor builds upon CEVA’s industry-leading position and experience in deep neural networks applications. Dozens of customers are already deploying CEVA’s computer vision & AI platforms along with the full CDNN (CEVA Deep Neural Network) toolchain in consumer, surveillance and ADAS products.
NeuPro-M was designed to meet the most stringent safety and quality compliance standards like automotive ISO 26262 ASIL-B functional safety standard and A-Spice quality assurance standards and comes complete with a full comprehensive AI software stack including:
  • NeuPro-M system architecture planner tool – allows fast and accurate neural network development over NeuPro-M and ensures final product performance
  • Neural network training optimizer tool – allows a further performance boost and bandwidth reduction, still in the neural network domain, to fully utilize every NeuPro-M optimized coprocessor
  • CDNN AI compiler & runtime – composes the most efficient flow scheme within the processor to ensure maximum utilization with minimum bandwidth per use-case
  • Compatibility with common open-source frameworks, including TVM and ONNX
The NeuPro-M NPU architecture supports secure access in the form of an optional root of trust, authentication against IP/identity theft, secure boot and end-to-end data privacy.


BENEFITS​

The NeuPro-M AI processor family is designed to reduce the high barriers to entry into the AI space in terms of both NPU architecture and software stack, enabling an optimized, cost-effective and scalable AI platform that can be utilized for a multitude of AI-based inferencing workloads.

Self-contained heterogeneous NPU architecture that concurrently processes diverse AI workloads using mixed precision MAC array, True sparsity engine, Weight and Data compression

Scalable performance of 4 to 1,200 TOPS in a modular multi-engine/multi-core architecture, at both SoC and chiplet levels, for diverse application needs: up to 32 TOPS for a single-engine NPU core and up to 256 TOPS for an 8-engine NPU core

Future proof using a programmable VPU (Vector Processing Unit), supporting any future network layer

MAIN FEATURES​

  • Highly power-efficient with up to 350 TOPS/Watt at 3nm
  • Supports a wide range of activation & weight data types, from 32-bit floating point down to 2-bit Binary Neural Networks (BNN)
  • Unique mixed-precision neural engine MAC array microarchitecture to support data-type diversity with minimal power consumption
  • Unstructured and structured sparsity engine to avoid operations on zero-value weights or activations at every layer of the inference process. Besides up to 4x in performance, sparsity also reduces memory bandwidth and power consumption.
  • Simultaneous processing of the Vector Processing Unit (VPU), a fully programmable processor for handling any future new neural network architectures
  • Lossless Real-time Weight and Data compression/decompression, for reduced external memory bandwidth
  • Scalability by applying different memory configuration per use-case and inherent single core with 1-8 multiengine architecture system for diverse processing performance
  • Secure boot and neural network weights/data against identity theft
  • Memory hierarchy architecture to minimize power consumption attributed to data transfers to and from an external SDRAM as well as optimize overall bandwidth consumption
  • Management controllers decentralized architecture with local data controller on each engine to achieve optimized data tunneling for low bandwidth and maximal utilization as well as efficient parallel processing schema
  • Supports the latest NN architectures, such as transformers, fully-connected (FC), FC batch, RNN, 3D convolution and more
  • The NeuPro-M NPU IP family includes the following processor options:
    • NPM11 – A single engine core, with processing power of up to 32 TOPS
    • NPM12 – A 2-engine core, with processing power of up to 64 TOPS
    • NPM14 – A 4-engine core, with processing power of up to 128 TOPS
    • NPM18 – An 8-engine core, with processing power of up to 256 TOPS
  • Matrix Decomposition for up to 10x enhanced performance during network inference
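As a rough illustration of what the sparsity engine above buys, here is a software sketch only — NeuPro-M does this in dedicated hardware, and the 75% pruning ratio and function name below are invented for the example:

```python
import numpy as np

def sparse_mac(weights, activations):
    """Multiply-accumulate that skips zero-valued weights,
    counting how many multiplies were actually performed."""
    acc, macs = 0.0, 0
    for w, a in zip(weights, activations):
        if w == 0.0:  # the sparsity engine skips these entirely
            continue
        acc += w * a
        macs += 1
    return acc, macs

rng = np.random.default_rng(0)
weights = rng.normal(size=1000)
weights[rng.random(1000) < 0.75] = 0.0  # prune ~75% of weights to zero
activations = rng.normal(size=1000)

acc, macs = sparse_mac(weights, activations)
print(f"MACs performed: {macs} of {len(weights)} "
      f"(~{len(weights) / macs:.1f}x fewer operations)")
```

With ~75% zeros, roughly 4x fewer multiplies actually run, which is plausibly where the quoted "up to 4x in performance" comes from — and the skipped weights never need to be fetched, which is the bandwidth saving the datasheet mentions.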
 
  • Like
Reactions: 6 users

Learning

Learning to the Top 🕵‍♂️
I wonder if Teksun is using Akida for 'Predictive Maintenance Solutions' also?




We know Brainchip partnered with AI Labs for this purpose earlier in the year.


So has Teksun also seen the opportunity in this area with Akida?

"We look forward to supporting their future customers with the rapid adoption of BrainChip’s Akida as a key differentiator for their edge AI product offerings. The Embedded Vision Summit is the perfect venue to showcase how our joint efforts will bring tremendous advancements to the AI and ML markets.” Rob Telson


Learning 🏖
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 47 users

MrRomper

Regular
Not trying to get @Bravo excited with Snapdragon but.......
Prophesee’s neuromorphic Metavision sensors PLUS Qualcomm’s Snapdragon = Smartphones

https://www.electrooptics.com/artic...-high-speed-low-light-photography-smartphones

Event imaging brings super-high-speed, low-light photography to smartphones​

Event-based imaging improves the capture of dynamic scenes (Prophesee)
Event-based imaging will soon exist in smartphones, thanks to a new collaboration between vision firm Prophesee and semiconductor giant Qualcomm.
The partnership, established in February, will see Prophesee’s neuromorphic Metavision sensors optimised for use with Qualcomm’s Snapdragon mobile platforms.
The move will enable OEMs to integrate event-based vision in mobile devices, improving camera performance in capturing both dynamic and low-light scenes.
A development kit offering compatibility between the Snapdragon platform and Metavision sensors is expected this year.
Event-based vision sensors, also known as neuromorphic or dynamic vision sensors, do not capture image data the same way as conventional sensor technology, instead using a system that mimics the function of a biological retina.
Rather than each pixel monitoring a scene at a fixed rate, they instead operate independently and asynchronously at extreme speeds, only recording changes within the scene (i.e. fluctuations in brightness) as they occur. This prevents the sensor having to capture redundant data if a scene remains unchanged, and means its acquisition speed always matches the scene dynamics.
This new vision category enables significant reductions of power, latency and data processing requirements, achieving an exceptional trade-off between acquisition speed and power consumption – up to three orders of magnitude better than conventional imaging technologies. Event-based sensors are improving efficiency in applications across industrial automation and monitoring, mobility, medicine and AR/VR.
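The pixel-level principle described above (only brightness changes above a contrast threshold produce events; unchanged pixels stay silent) can be sketched in a few lines of Python. This is purely illustrative — real event sensors such as Metavision operate asynchronously in analog hardware, and the function name and threshold here are my own assumptions:

```python
import numpy as np

def generate_events(prev_frame, curr_frame, threshold=0.15):
    """Emit (x, y, polarity) events only where the log-brightness change
    exceeds a contrast threshold -- unchanged pixels produce nothing."""
    # Event pixels respond to relative (logarithmic) brightness change.
    diff = np.log1p(curr_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarities = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return list(zip(xs.tolist(), ys.tolist(), polarities.tolist()))

# A static scene produces no events; only the changed pixel fires.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200
print(generate_events(prev, curr))  # [(2, 1, 1)]
print(generate_events(prev, prev))  # []
```

This is exactly why a static scene generates almost no data: redundant pixels never report, so acquisition speed tracks scene dynamics rather than a fixed frame rate.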
 
  • Like
  • Thinking
  • Fire
Reactions: 21 users

Diogenese

Top 20
Hi MrRomper,

We all want @Bravo to be right.

One issue is that this product could be tied up in the October 2021 Prophesee/Synsense agreement to produce the sensor/AI chip.

Our partnership with Prophesee dates from mid-2022. Luca Verre has been quoted as saying that the relationship with BrainChip was in its infancy.

Let's hope that the multi-year Prophesee/Qualcomm deal has room for Akida.

In the following, Prophesee mentions both Qualcomm and Sony together.

Camera chip startup Prophesee and Qualcomm sign multi-year deal | Reuters

February 28, 2023, 6:47 AM GMT+11 · Last updated a month ago

Camera chip startup Prophesee and Qualcomm sign multi-year deal

By Jane Lee

OAKLAND, Calif., Feb 27 (Reuters) - Paris-based startup Prophesee, a maker of camera chips inspired by the way the human eye works, said on Monday it has signed a multi-year deal with Qualcomm Inc (QCOM.O) to be used with the smartphone technology giant's product.

While today's camera chips continuously process the full frame of images, Prophesee's chip will only process changes in the scene, such as light or movement, which makes it faster and requires less computing power, said Luca Verre, co-founder and chief executive at Prophesee.

The technology works with pixels on the sensor that only send information to the processor when there is change, while pixels that perceive no change stay muted. There are a million pixels on Prophesee's latest chips.

Manufacturing of the chip will be outsourced to Sony Group Corp (6758.T). "So we are really combining both key players in the space," said Verre, referring to both Qualcomm and Sony, without disclosing financial terms of the deal.

Verre said the Prophesee chip will be used in addition to conventional camera chips in a blueprint for smartphones that will be released this week at Mobile World Congress in Barcelona. Mass production of the chips is planned for next year when they would be integrated into phones, he said.

The additional Prophesee chip will help correct some of the blurry imagery in existing smartphone camera systems, said Verre.




DYNAP-CNN — the World’s First 1M Neuron, Event-Driven Neuromorphic AI Processor for Vision Processing | SynSense


Computation in DYNAP-CNN is triggered directly by changes in the visual scene, without using a high-speed clock. Moving objects give rise to sequences of events, which are processed immediately by the processor. Since there is no notion of frames, DYNAP-CNN’s continuous computation enables ultra-low-latency of below 5ms. This represents at least a 10x improvement from the current deep learning solutions available in the market for real-time vision processing.



SynSense and Prophesee develop one-chip event-based smart sensing solution

/2021/10/15/synsense-prophesee-neuromorphic-processing-and-sensing/

Oct 15, 2021 – SynSense and Prophesee, two leading neuromorphic technology companies, today announced a partnership that will see the two companies leverage their respective expertise in sensing and processing to develop ultra-low-power solutions for implementing intelligence on the edge for event-based vision applications.

… The sensors facilitate machine vision by recording changes in the scene rather than recording the entire scene at regular intervals. Specific advantages over frame-based approaches include better low light response (<1lux) and dynamic range (>120dB), reduced data generation (10x-1000x less than conventional approaches) leading to lower transfer/processing requirements, and higher temporal resolution (microsecond time resolution, i.e. >10k images per second time resolution equivalent).
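For scale, the >120 dB dynamic-range figure above converts to an intensity ratio via the standard 20·log10 rule. A quick check (the ~60 dB figure for a typical frame sensor is my own ballpark, not from the article):

```python
def dynamic_range_ratio(db):
    """Convert a sensor dynamic-range figure in dB to a max/min
    intensity ratio, using the 20*log10 convention."""
    return 10 ** (db / 20)

print(f"{dynamic_range_ratio(120):,.0f}:1")  # event sensor (>120 dB): 1,000,000:1
print(f"{dynamic_range_ratio(60):,.0f}:1")   # typical frame sensor (~60 dB): 1,000:1
```

So >120 dB means the sensor can resolve scene regions a million times brighter than its darkest usable level, which is why event cameras cope with low light and harsh backlight so well.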
 
  • Like
  • Love
  • Fire
Reactions: 31 users
I guess Sony AI at least likes to hire people with a neuromorphic interest and background.

It's better than nothing that they have more than a passing interest, rather than just being a manufacturing partner for Prophesee's chips, I suppose :unsure:

We know ETH Zurich is into neuromorphic as well.



Raphaela
Kreiser
Google Scholar

Profile​

Raphaela grew up in Hamburg, Germany. After high school, she moved to Osnabrück, Germany to study Cognitive Science. This became her first step into the field of AI. For her M.S. and Ph.D. studies, she moved to Zurich to join the Institute of Neuroinformatics at the University of Zurich / ETH Zurich. There she worked on Spiking Neural Networks for localization and mapping, published her research at IEEE CAS and robotics conferences, and obtained the Swiss research grant "Forschungskredit".

Message​

“I am a research scientist at the Sony AI team in Zurich. My interests lie in computer vision, neuromorphic hardware, biologically inspired computing, and robotics. My aim is to understand the computational principles used in biological systems in order to create power-efficient and real-time AI. At Sony AI I want to contribute to sensing that goes beyond human capabilities in order to enable richer and enhanced entertainment experiences.”

Edit:

Another AI researcher into neuromorphic at Sony R&D.


Can only dream :LOL:
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users

cosors

👀
Tomorrow it will be decided whether TSMC will build a plant in Dresden, Germany. This is already considered a done deal, and the land has already been purchased.
 
  • Like
  • Fire
  • Wow
Reactions: 16 users

FKE

Regular
Job advertisement from Mercedes

Shared on LinkedIn, liked by Chris Stevens



Development engineer in the field of artificial intelligence with a focus on neuromorphic computing and ADAS (f/m/x)







Edit:

Fun fact - different Mercedes employee, different post on LinkedIn, same job, I think - liked by Anil

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 59 users

Diogenese

Top 20
This adds more weight to the use of Akida in ADAS (Valeo) as well as in the in-cabin communication.
 
  • Like
  • Love
  • Fire
Reactions: 51 users
Top Bottom