BRN Discussion Ongoing

stuart888

Regular
25 FPS? Interesting, just seems slow.

 
  • Like
Reactions: 4 users
Suffice to say, I'M SO EXCITED!!! 🥳🥳🥳


[gif: "show me the money"]

I will be happy to see the money in the BrainChip account
Just make it happen……
 
  • Like
Reactions: 4 users

stuart888

Regular
Sure hope this is not a repeat. Game changer, and all good for Brainchip!

The world now knows how to run ChatGPT-style AI on a laptop or phone, no internet required. Generative AI, in my opinion, is very good for Brainchip smarts on the Edge. It puts awareness of AI in front of all the decision makers.

This guy is sharp. Did it all in a week!



[video attached]

On Saturday, they are releasing a commercial, downloadable version of a whole OpenAI-style chat brain. They had to port some components over to an Apache license. It is optimized for the M1 chip, but runs on everything. Any kid with a laptop can grab this entire stack. He explains it all and documents it.

Within 10 minutes, masses of people can download and run this thing. Next up is text-to-video.
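
For anyone wondering what grabbing and running that stack actually looks like, here is a minimal, hedged sketch using the llama-cpp-python bindings (one common way to run a downloaded chat model offline); the model file name is a placeholder, not something from the video:

```python
# Minimal local chat inference, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a chat model has been downloaded.
from llama_cpp import Llama

# Path is a placeholder: point it at any locally saved GGML/GGUF model.
llm = Llama(model_path="./models/chat-model.gguf")

# Runs entirely on-device: no internet connection needed after download.
out = llm("Q: What is edge AI? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```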

I thought this was absolutely informative. ☄️☄️☄️
 
  • Like
  • Fire
Reactions: 12 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 8 users
Now we’re talking… add 20 drops of live Fulvic Acid from Optimally Organics containing the correct amount of Humic and it’s alive lol. Vlad
 
  • Like
  • Love
  • Haha
Reactions: 4 users

stuart888

Regular
It would be nice to come here and ask:

Please summarize the key Brainchip TSE points from the last week, in 200 words or less.
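
Just to show how close that is to reality, a hedged sketch of such a summarizer using the OpenAI chat API of the time; the model name and the posts list are placeholders, not a working forum feature:

```python
# Hypothetical TSE summarizer sketch: feed last week's posts to a chat
# model and ask for a 200-word summary. Requires the openai package and
# an API key; "posts" stands in for scraped thread text.
import openai

posts = ["<forum post 1>", "<forum post 2>"]  # last week's BRN posts

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Summarize the key BrainChip points from these posts "
                   "in 200 words or less:\n" + "\n---\n".join(posts),
    }],
)
print(resp["choices"][0]["message"]["content"])
```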
 
  • Like
  • Love
Reactions: 9 users
  • Haha
Reactions: 1 users

TECH

Regular
Good morning all,

This post on LinkedIn will have been viewed by many, which is fine, but I'd just like to say this:

Kris Carlson has been with us, as Anil's right-hand man in the US, for over 5 years now. If you read the last part of
his comments, it is just another way of appreciating the quality of the staff working for the company, and for us in general.

Brainchip is a family, reflecting in this case integrity, work ethic, and warm respect for fellow team members, who are
all going to be personally rewarded, financially or spiritually, as we ALL move towards commercial success.


Kristofor Carlson • 1st
Manager of Applied Research at BrainChip
3w


Hey all, I don't post on LinkedIn much, but I thought this deserved a post. We are introducing Akida 2.0! In this version, we extend support for object-detection and segmentation models, add support for vision transformers, and introduce temporal event-based neural networks (TENNs). It has been extremely fun to iterate over Akida 1.0 and I'm really happy with what we're coming out with. Everyone at BrainChip has been working extremely hard on this, so I just want to thank my co-workers for their hard work!
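
For anyone wanting to see what developing for Akida looks like in practice, here is a hedged sketch of the publicly documented MetaTF flow (quantize a Keras CNN, then convert it for Akida). The cnn2snn calls and the toy layer stack are assumptions drawn from BrainChip's public docs, not anything specific to Akida 2.0 or TENNs:

```python
# Hedged MetaTF sketch: quantize a small Keras CNN and convert it to an
# Akida model. Layer choices are illustrative; real models must stick
# to Akida-compatible layers. Assumes the cnn2snn package from MetaTF.
import tensorflow as tf
from cnn2snn import quantize, convert

keras_model = tf.keras.Sequential([
    tf.keras.Input((32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# 4-bit weights/activations, 8-bit first-layer weights (typical settings).
quantized = quantize(keras_model, weight_quantization=4,
                     activ_quantization=4, input_weight_quantization=8)

akida_model = convert(quantized)  # ready for the Akida runtime/simulator
akida_model.summary()
```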


Regards.......Tech :coffee:(y)
 
  • Like
  • Love
  • Fire
Reactions: 91 users

Doz

Regular
[screenshots attached]
 
  • Like
  • Wow
  • Fire
Reactions: 25 users

Dougie54

Regular
  • Like
Reactions: 6 users

Diogenese

Top 20


Looks like they could use Akida:

US2022269910A1 METHOD AND SYSTEM FOR DETERMINING AUTO-EXPOSURE FOR HIGH-DYNAMIC RANGE OBJECT DETECTION USING NEURAL NETWORK

[patent figure]


An auto-exposure control is proposed for high dynamic range images, along with a neural network for exposure selection that is trained jointly, end-to-end with an object detector and an image signal processing (ISP) pipeline. Corresponding method and system for high dynamic range object detection are also provided.

[0023] … a method for determining an auto-exposure value of a low dynamic range (LDR) sensor for use in high dynamic range (HDR) object detection, the method comprising:

employing at least one hardware processor for:

forming an auto-exposure neural network for predicting exposure values for the LDR sensor driven by a downstream object detection neural network in real time;

training the auto-exposure neural network jointly, end-to-end together with the object detection neural network and an image signal processing (ISP) pipeline, thereby yielding a trained auto-exposure neural network; and

using the trained auto-exposure neural network to generate an optimal exposure value for the LDR sensor and the downstream object detection neural network for the HDR object detection.
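
To make the end-to-end idea concrete, here is a hedged toy sketch of that joint training loop: an exposure network whose gradients flow back through a differentiable ISP and an object detector, so the detection loss drives exposure selection. Every module and shape here is illustrative, not the patent's implementation:

```python
# Toy end-to-end sketch of the patent's idea: detection loss trains the
# exposure predictor through a differentiable ISP. Illustrative only.
import torch
import torch.nn as nn

class ExposureNet(nn.Module):
    """Predicts a per-image log-exposure scalar from raw input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, raw):
        return self.net(raw)

def isp(raw, log_exposure):
    # Toy differentiable ISP: exposure gain, then clip to the LDR range.
    gain = torch.exp(log_exposure)[..., None, None]
    return torch.clamp(raw * gain, 0.0, 1.0)

exposure_net = ExposureNet()
detector = nn.Conv2d(3, 5, 1)  # stand-in for a real detection head
opt = torch.optim.Adam(
    list(exposure_net.parameters()) + list(detector.parameters()), lr=1e-4)

raw = torch.rand(2, 3, 64, 64)      # fake HDR-ish raw frames
target = torch.rand(2, 5, 64, 64)   # fake detection targets

# One joint step: the same loss updates detector AND exposure net.
loss = nn.functional.mse_loss(detector(isp(raw, exposure_net(raw))), target)
opt.zero_grad()
loss.backward()
opt.step()
```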
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Doz

Regular
Diogenese said:
Looks like they could use Akida: US2022269910A1 METHOD AND SYSTEM FOR DETERMINING AUTO-EXPOSURE FOR HIGH-DYNAMIC RANGE OBJECT DETECTION USING NEURAL NETWORK … [full post quoted above]
Diogenese, you may find the paper below of interest.


 
  • Love
  • Like
  • Fire
Reactions: 5 users

IloveLamp

Top 20


[LinkedIn screenshot attached]
 
  • Like
Reactions: 5 users

Steve10

Regular
PCE figures from the US overnight.

Personal Income MOM = 0.3% vs 0.6% last reading.

PCE Price Index MOM = 0.3% vs 0.6% last reading.

PCE Price Index YOY = 5% vs 5.3% last reading.

Personal Spending MOM = 0.2% vs 2% last reading.

Core PCE Price Index MOM = 0.3% vs 0.5% last reading.

Core PCE Price Index YOY = 4.6% vs 4.7% last reading.

Chicago PMI = 43.8 vs 43.6 last reading. Slightly up, but still under 50, in contraction territory.

Michigan Consumer Sentiment = 62 vs 67 last reading. Consumers are feeling the high interest rates.

US 2 year bond yield lower today after rising a little past week.

Nasdaq up 1.68% overnight & S&P 500 up 1.44%.

[chart attached]
 
  • Like
  • Haha
  • Fire
Reactions: 24 users

Realinfo

Regular
’When I joined the company, the whole thing felt like a science project’

Lou suggested this during a lunch we had back in early 2019 at Foys Kirribilli. Sitting above the Sydney Flying Squadron's clubhouse, Lou was captivated by the crews preparing for an afternoon's racing. Around eight months earlier, at the 2018 AGM, we were a long way from agreeing to have lunch together. But somewhat like my tractor challenge to Fact, a little humour, and not always needing to have the last word about something, generally defuses whatever it is one disagrees on.

We are told by people who presumably know that it takes around three years, from go to whoa, when it comes to incorporating new technology into products. Lou told us about the importance of getting into the product development cycle at the right time.

There has been much discussion about Valeo this week. I would simply like to add that our battler signed a joint development agreement with them back in June 2020, just a smidge under three years ago. We continue to work with them, so one would think we must be getting close to an earner.

The Ford agreement was signed a month earlier. Some here have written this off, but correct me if I'm wrong…it was a price-sensitive ASX announcement, so if the agreement's done 'n dusted, should there not have been an ASX announcement saying as much?

There was a similar price-sensitive ASX announcement in August 2020 about a partnership agreement with Magik Eye.

We discovered via a similar ASX announcement that Vorago had signed an EAP agreement in September 2020, and NASA in December 2020…both presumably still going strong.

Then the IP license agreement with Renesas in December 2020…voila, two years later they're into tape-out.

These deals, all announced via the ASX, were done more than two years ago...three years will tick over during the next few months.

In November 2021, we had the MegaChips license agreement announced via the ASX. There is a great deal of expectation about what this will bring, especially after Peter suggested that people are yet to understand how important this agreement is.

Since the close of 2021 there has been a plethora of partnership agreements and arrangements, if not identical then very similar to the above, announced in all ways except via the ASX. I'm not suggesting this devalues them in any way; it's simply an observation.

I come back to wiser heads than mine and the three year rule.

In June 2019 (almost FOUR years ago) our battler signed an agreement with Socionext. Almost four years ago, Vice President Noriaki Kubo said, 'We are excited to join Brainchip in the design, development and introduction of Akida SoC...bringing AI to edge applications is a major industry development, and also a strategic application segment for Socionext.' Ten months later, in April 2020 (three years ago), Mr Kubo added the words COMMERCIAL PARTNERS to how Socionext saw their relationship with our battler. Mr Kubo remains VP of Socionext to this day.

Me myself personally thinks Socionext is the sleeper in the pack !!!
 
  • Like
  • Fire
  • Love
Reactions: 80 users
Diogenese said:
Looks like they could use Akida: US2022269910A1 METHOD AND SYSTEM FOR DETERMINING AUTO-EXPOSURE FOR HIGH-DYNAMIC RANGE OBJECT DETECTION USING NEURAL NETWORK … [full post quoted above]
They are hiding SNN very well in this job opening:
Algolux

Computer Vision Engineer (C++)​

Embedded Software · Montreal, Quebec

Algolux is a globally recognized computer vision company addressing the critical issue of safety for advanced driver assistance systems and autonomous vehicles. Our machine-learning tools and embedded AI software products enable existing and new camera designs to achieve industry-leading performance across all driving conditions. Founded on groundbreaking research at the intersection of deep learning, computer vision, and computational imaging, Algolux has been repeatedly recognized at industry and academic conferences and has been named to the 2021 CB Insights AI 100 List of the world’s most innovative artificial intelligence startups.

We believe in interdisciplinary research at Algolux and candidates will be working with a diverse team of imaging, computer vision, optimization, physics, and optics experts.
As a Deep Learning Engineer, you will contribute to Deep Learning based Computer Vision applications on a variety of software and hardware platforms. The ideal candidate is a Computer Scientist/Software Engineer with a proven ability to write production-quality code as well as experience in Computer Vision.

Key responsibilities:
  • Implement computer vision algorithms in Python
  • Port computer vision, image processing, and deep learning algorithms to Modern C++/CUDA for x86/GPU and ARM64/GPU embedded platforms.
  • Validate algorithms and models, following best practices
    • Validation of deep learning models, in TensorFlow and PyTorch
    • Validation of computer vision implementations in Python and/or C++
    • Visualization of implemented algorithms
  • Perform model conversion from TensorFlow and PyTorch to ONNX and TensorRT (a sketch of this step follows the listing).
    • Validation of target hardware inference accuracy against ground-truth models.
  • Participate in the design of the perception stack’s infrastructure:
    • Support deployable, maintainable code for highly critical software systems (e.g. automotive safety).
    • Develop in Linux environments and Docker containers.
    • Participate in peer design collaboration and code reviews
    • Participate in continuous improvement of group development practices and processes.

Requirements:
  • Good C++ development skills:
    • Strong exposure to modern C++ standards (C++14 or more recent).
    • Familiarity with object-oriented software design patterns in C++.
    • Comfortable using language features like STL, smart pointers, move semantics, etc.
    • Understand memory structures and storage.
    • Experience with debugging and using tools such as GDB/LLDB, Valgrind, etc.
    • Familiarity with CMake.
  • Strong computer vision skills:
    • Good familiarity with frameworks like TensorFlow and PyTorch and deep learning topologies
    • Good familiarity with computer vision concepts such as object detection, multi-object tracking, segmentation, etc.
    • Good familiarity with single-view, multi-view geometry, camera calibration, camera intrinsic and extrinsic parameters, etc.
    • Good familiarity with deep learning models validation and testing approaches
  • Excel at working in a highly collaborative environment:
    • Familiarity with AGILE development practices.
    • Comfortable using collaborative development tools such as Git and Jira.
    • Ability to adhere to company coding standards.
  • Bachelor's or Master's degree in a STEM-related field, and at least 2-3 years of industry work experience as a Software Developer with computer vision specialization.
  • Proven dedication to writing production-quality code that is robust, efficient, portable, maintainable, and bug-free.

Nice to have:
  • Understanding of parallel computing and optimization:
  • Understanding of GPU architectures and how to optimize code for different GPU-based platforms
  • Understanding of multi-threaded programming and thread safety
  • Automotive or Embedded Platforms, such as NVIDIA Drive or NVIDIA Jetson
  • Experience with other relevant NVIDIA libraries and frameworks, such as CUBLAS, CuDNN, NPP
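
On that model-conversion bullet, here is a minimal, hedged sketch of the PyTorch-to-ONNX half of the step, using the standard torch.onnx.export API; the resnet18 stand-in and file names are assumptions, not Algolux's models (the ONNX file would then feed a TensorRT build step):

```python
# Hedged sketch of TensorFlow/PyTorch -> ONNX conversion (PyTorch side).
# resnet18 is a stand-in; a real perception model would replace it.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input defines the graph

torch.onnx.export(
    model, dummy, "detector.onnx",
    opset_version=13,
    input_names=["image"], output_names=["logits"],
)
# "detector.onnx" can then be validated against the ground-truth model
# and handed to TensorRT (e.g. trtexec) for target-hardware inference.
```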
 
  • Like
Reactions: 6 users

IloveLamp

Top 20
.....16,000 customers............not consumers.................

CUSTOMERS......

Distilling the technology strategy, Amon called out wireless connectivity, high-performance/low-power computing and artificial intelligence (AI) as applicable “to every industry. We can scale for a phone, all the way to an autonomous car, and to a number of different industrial applications.” Qualcomm counts more than 16,000 industrial IoT customers, he said.

As companies are looking to make commitments to reduce energy consumption, technologies provided by Qualcomm (as opposed to created? 🤔) that provide high-performance, efficient computing become essential. "And we're very excited about the opportunity of growing in all of those markets with Qualcomm innovation."

 
  • Like
  • Fire
  • Wow
Reactions: 13 users

Diogenese

Top 20
Quoted:
They are hiding SNN very well in this job opening: Algolux — Computer Vision Engineer (C++), Embedded Software · Montreal, Quebec … [full posting quoted above]
They left out:
Nice to Know: $50 Akida at 300 MHz does vision as well as $30,000 Nvidia at 900 MHz.
 
  • Like
  • Haha
  • Fire
Reactions: 21 users
Diogenese said:
They left out: Nice to Know: $50 Akida at 300 MHz does vision as well as $30,000 Nvidia at 900 MHz.
They received a Gold Star from the teacher in 2021 for their AI approach, and yet there is only old-school von Neumann in this job ad. It just seems strange, and of course they even nominated the $30,000 alternative as the area of expertise.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Thinking
  • Haha
Reactions: 8 users