Chocolate - the Duracell of 4-year-olds. "Mum, I'm bored." "Go and eat some chocolate biscuits." "Do I have to?" "Alright, have some cake."
Now we're talking… add 20 drops of live Fulvic Acid from Optimally Organics containing the correct amount of Humic Acid and it's alive lol. Vlad
Could this be us in there, given they are partnered with Nvidia, Arm, Renesas, etc.?
Diogenese, you may find the paper below of interest. Looks like they could use Akida:
US2022269910A1 METHOD AND SYSTEM FOR DETERMINING AUTO-EXPOSURE FOR HIGH-DYNAMIC RANGE OBJECT DETECTION USING NEURAL NETWORK
View attachment 33404
An auto-exposure control is proposed for high dynamic range images, along with a neural network for exposure selection that is trained jointly, end-to-end with an object detector and an image signal processing (ISP) pipeline. Corresponding method and system for high dynamic range object detection are also provided.
[0023] … a method for determining an auto-exposure value of a low dynamic range (LDR) sensor for use in high dynamic range (HDR) object detection, the method comprising:
employing at least one hardware processor for:
forming an auto-exposure neural network for predicting exposure values for the LDR sensor driven by a downstream object detection neural network in real time;
training the auto-exposure neural network jointly, end-to-end together with the object detection neural network and an image signal processing (ISP) pipeline, thereby yielding a trained auto-exposure neural network; and
using the trained auto-exposure neural network to generate an optimal exposure value for the LDR sensor and the downstream object detection neural network for the HDR object detection.
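The core of the claim above is that a detection loss is back-propagated through a differentiable capture model into the thing that chooses the exposure. A radically simplified toy sketch of that idea, assuming NumPy only: a single learnable exposure scalar stands in for the auto-exposure network, and a mean-squared "keep the capture mid-range" term stands in for the object-detection loss. None of this is the patent's actual pipeline; the scene statistics, loss, and sensor model are all made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "HDR scenes": per-frame mean radiance spanning a wide range.
scenes = rng.uniform(0.1, 4.0, size=256)

def capture(s, e):
    """LDR sensor model: scale radiance by exposure, clip to [0, 1]."""
    return np.clip(s * e, 0.0, 1.0)

e = 1.0       # learnable exposure (stand-in for the exposure network)
lr = 0.05     # gradient-descent step size
target = 0.5  # "detector" proxy: captured signal should sit mid-range

losses = []
for _ in range(500):
    y = capture(scenes, e)
    losses.append(np.mean((y - target) ** 2))
    # Gradient of the loss w.r.t. e; clipped pixels contribute zero slope.
    inside = (scenes * e > 0.0) & (scenes * e < 1.0)
    grad = np.mean(2.0 * (y - target) * scenes * inside)
    e -= lr * grad
```

With this setup the exposure drifts down from 1.0 toward a value that keeps most captures unclipped and centred, and the loss falls accordingly. In the patent, the same gradient would instead flow into the weights of a real auto-exposure network, through a full ISP pipeline and object detector.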
They are hiding SNN very well in this job opening:
Computer Vision Engineer (C++)
Embedded Software · Montreal, Quebec
Algolux is a globally recognized computer vision company addressing the critical issue of safety for advanced driver assistance systems and autonomous vehicles. Our machine-learning tools and embedded AI software products enable existing and new camera designs to achieve industry-leading performance across all driving conditions. Founded on groundbreaking research at the intersection of deep learning, computer vision, and computational imaging, Algolux has been repeatedly recognized at industry and academic conferences and has been named to the 2021 CB Insights AI 100 List of the world’s most innovative artificial intelligence startups.
We believe in interdisciplinary research at Algolux and candidates will be working with a diverse team of imaging, computer vision, optimization, physics, and optics experts.
As a Deep Learning Engineer, you will contribute to Deep Learning based Computer Vision applications on a variety of software and hardware platforms. The ideal candidate is a Computer Scientist/Software Engineer with a proven ability to write production-quality code as well as experience in Computer Vision.
Key responsibilities:
- Implement computer vision algorithms in Python
- Port computer vision, image processing, and deep learning algorithms to Modern C++/CUDA for x86/GPU and ARM64/GPU embedded platforms.
- Validate algorithms and models, following best practices
- Validation of deep learning models in TensorFlow and PyTorch
- Validation of computer vision implementations in Python and/or C++
- Visualization of implemented algorithms
- Perform model conversion from TensorFlow and PyTorch to ONNX and TensorRT.
- Validation of target hardware inference accuracy against ground-truth models.
- Participate in the design of the perception stack’s infrastructure:
- Support deployable, maintainable code for highly critical software systems (e.g. automotive safety).
- Develop in Linux environments and Docker containers.
- Participate in peer design collaboration and code reviews
- Participate in continuous improvement of group development practices and processes.
Requirements:
- Good C++ development skills:
- Strong exposure to modern C++ standards (C++14 or more recent).
- Familiarity with object-oriented software design patterns in C++.
- Comfortable using language features like STL, smart pointers, move semantics, etc.
- Understand memory structures and storage.
- Experience with debugging and using tools such as GDB/LLDB, Valgrind, etc.
- Familiarity with CMake.
- Strong computer vision skills:
- Good familiarity with frameworks like TensorFlow and PyTorch and deep learning topologies
- Good familiarity with computer vision concepts such as object detection, multi-object tracking, segmentation, etc.
- Good familiarity with single-view, multi-view geometry, camera calibration, camera intrinsic and extrinsic parameters, etc.
- Good familiarity with deep learning models validation and testing approaches
- Excel at working in a highly collaborative environment:
- Familiarity with Agile development practices.
- Comfortable using collaborative development tools such as Git and Jira.
- Ability to adhere to company coding standards.
- Bachelor's or Master's degree in a STEM-related field, and at least 2-3 years of industry work experience as a Software Developer with computer vision specialization.
- Proven dedication to writing production-quality code that is robust, efficient, portable, maintainable, and bug-free.
Nice to have:
- Understanding of parallel computing and optimization:
- Understanding of GPU architectures and how to optimize code for different GPU-based platforms
- Understanding of multi-threaded programming and thread safety
- Automotive or Embedded Platforms, such as NVIDIA Drive or NVIDIA Jetson
- Experience with other relevant NVIDIA libraries and frameworks, such as cuBLAS, cuDNN, NPP
They received a Gold Star from the teacher in 2021 for their AI approach, and yet there is only old-school von Neumann in this job ad. Just seems strange, and of course they even nominated the $30,000 alternative as the area of expertise.
They left out:
Nice to Know: a $50 Akida at 300 MHz does vision as well as a $30,000 Nvidia at 900 MHz.
Check this out Brain Fam!
Here's an article describing Qualcomm's latest chipset, the Snapdragon 8102. It says it integrates AI in every single capacity and can run large language models like ChatGPT locally. While trying to find out more about this new chipset, I came across a video, recorded about a month ago, with Qualcomm CEO Cristiano Amon, in which he discusses the ability to bring AI language models to smartphones and the next frontier of "mixed reality."
Cristiano says this is "the milestone we've been waiting for" and mentions that Qualcomm, Samsung, Google and Meta are all working together to build the next generation of mixed-reality devices; he expects the next computing platform will be glasses. At roughly 35 seconds into the interview he says, "the ability to create that much processing power in a smartphone and run that without compromising the battery life is something that only Qualcomm can do!"
When is Qualcomm going to let the cat out of the bag because I'm itching to bust a move?
View attachment 33413
View attachment 33409
View attachment 33410