Doing the usual random BRN searches & this came up.
Did a word search on TSE and didn't see it posted, so someone may already know about it or it may have been mentioned before and my word search just missed it.
Leapmind?
I know it states CNN, and wondered if maybe
@Diogenese wanted to dig and see if there's either an analog set-up of some sort, or possibly & hopefully a CNN2SNN conversion hidden in the background, or something using Edge Impulse or MegaChips, given LeapMind & MC are both domiciled in Japan?
All sounds very familiar to me anyway
Some info below.
About LeapMind
LeapMind Inc. was founded in 2012 with the corporate mission "To create innovative devices with machine learning and make them available everywhere." Total investment in LeapMind to date has reached 4.99 billion yen (as of May 2021). The company's strength is in extremely low bit quantization for compact deep learning solutions. It has a proven track record with over 150 companies, many of them in manufacturing, including the automobile industry. It is also developing its Efficiera semiconductor IP, based on its experience in the development of both software and hardware.
Head office: Shibuya Dogenzaka Sky Building 3F, 28-1 Maruyama-cho, Shibuya-ku, Tokyo 150-0044
Representative: Soichi Matsuda, CEO
Established: December 2012
URL:
https://leapmind.io/en/
From their May 22 press release.
Focusing on visual inspection, edge AI enables fast and secure manufacturing DX
Features of “Efficiera Anomaly Detection model”
•Training completes with only normal data
- Annotation work, which is often outsourced, is not required, so there is no need to conclude non-disclosure agreements.
- Avoids the need for a large amount of defective data, and training completes in just a few seconds, enabling rapid introduction of AI to the manufacturing site
•Completes both inference and training on a small edge device with FPGA
- No need to send image data outside the company, so the risk of information leakage is reduced and complicated non-disclosure agreements are not required, which can lead to labor savings
- Can be used in environments without a network
•Re-training can be done without an AI engineer
- Visualization of abnormal areas by heat map.
- AI training results can be saved and reloaded (redo and version management) easily on site, which makes it easy to use for companies that have difficulty securing IT human resources.
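For anyone curious how "training completes with only normal data" can work in principle, here's a minimal, hypothetical sketch, not LeapMind's actual method: fit a PCA model on patches of defect-free images, then flag regions of a test image with high reconstruction error to get a per-patch heat map. The patch size, component count, and file name below are my own assumptions for illustration.

```python
# Toy patch-based anomaly detector trained only on "normal" images.
# This is an illustrative sketch, NOT the Efficiera Anomaly Detection model.
import numpy as np

PATCH = 8          # patch size in pixels (assumed)
N_COMPONENTS = 16  # PCA components kept (assumed)

def extract_patches(img):
    """Split a grayscale image (H, W) into non-overlapping PATCH x PATCH patches."""
    h, w = img.shape
    h, w = h - h % PATCH, w - w % PATCH
    patches = img[:h, :w].reshape(h // PATCH, PATCH, w // PATCH, PATCH)
    return patches.transpose(0, 2, 1, 3).reshape(-1, PATCH * PATCH)

def train(normal_imgs):
    """Fit mean + principal components on patches from normal images only."""
    x = np.concatenate([extract_patches(im) for im in normal_imgs])
    mean = x.mean(axis=0)
    # SVD of the centered data gives the principal directions.
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    return {"mean": mean, "components": vt[:N_COMPONENTS]}

def heatmap(model, img):
    """Per-patch reconstruction error: high error = region unlike the normal data."""
    x = extract_patches(img) - model["mean"]
    recon = (x @ model["components"].T) @ model["components"]
    err = np.square(x - recon).mean(axis=1)
    return err.reshape(img.shape[0] // PATCH, img.shape[1] // PATCH)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = [rng.normal(0.5, 0.05, (64, 64)) for _ in range(20)]
    model = train(normal)                  # trains in well under a second
    np.savez("model.npz", **model)         # save / reload the trained state on site
    test = normal[0].copy()
    test[16:32, 16:32] += 0.6              # inject a synthetic defect
    print(heatmap(model, test).round(2))   # defect patches show high error
```

The save/reload step mirrors the "redo and version management" point above: the whole trained state is just a couple of small arrays, so no AI engineer is needed to re-train or roll back.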
About Efficiera®
Efficiera is an ultra-low power AI inference accelerator IP specialized for CNN inference processing that runs as a circuit on FPGA or ASIC devices.
The "extremely low bit quantization" technology minimizes the number of quantization bits to 1 - 2 bits, maximizing the power and area efficiency of convolution, which accounts for most of the inference processing, without the need for advanced semiconductor manufacturing processes or special cell libraries. By using this product, deep learning functions can be incorporated into a variety of edge devices, including consumer electronics such as home appliances, industrial equipment such as construction machinery, surveillance cameras, broadcasting equipment, as well as small machines and robots that are constrained by power, cost, and heat dissipation, which has been technically difficult in the past. LeapMind provides three Deep Learning models - Efficiera Object Detection model, Efficiera Noise Reduction model and Efficiera Anomaly Detection model. Visit product website at
https://leapmind.io/business/ip/
Extremely low bit quantization
Quantizing deep learning models to extremely low bits makes them significantly lighter and faster without sacrificing their performance (Figure 1).
Specifically, we achieve weight savings by replacing the parameters in an inference model with 1- or 2-bit values instead of the usual single-precision floating-point numbers (32 bits).
The limit of quantization without performance degradation is generally believed to be 8 bits, but LeapMind has succeeded in achieving negligible performance degradation even with a combination of 1-bit weights (weight factors) and 2-bit activations (inputs), which is well below 8 bits (Figure 2).
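To make that arithmetic concrete, here is a minimal, hypothetical sketch of what 1-bit weight / 2-bit activation quantization can look like: a plain sign-based weight binarizer with a per-channel scale, and a uniform 4-level activation quantizer. These specific choices are my assumptions for illustration; LeapMind's actual quantization scheme isn't described in the release. The sketch also shows the storage arithmetic from the text: replacing 32-bit floats with 1-bit weights is a 32x reduction in weight memory.

```python
# Illustrative extremely-low-bit quantization sketch (assumed scheme, not LeapMind's).
import numpy as np

def quantize_weights_1bit(w):
    """1-bit weights: keep only the sign, plus one float scale per output channel."""
    scale = np.abs(w).mean(axis=1, keepdims=True)   # per-channel scale (assumption)
    return np.sign(w), scale

def quantize_activations_2bit(a, a_max):
    """2-bit activations: clip to [0, a_max], map to 4 levels {0, 1, 2, 3}."""
    codes = np.round(np.clip(a, 0.0, a_max) / a_max * 3.0)
    return codes, a_max / 3.0                        # integer codes and step size

def quantized_matmul(a_codes, a_step, w_sign, w_scale):
    """The convolution/matmul core reduces to sign flips and adds plus a rescale."""
    return (a_codes * a_step) @ (w_sign * w_scale).T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(8, 64))          # 8 output channels, 64 inputs
    a = rng.uniform(0, 1, size=(1, 64))   # one activation vector

    w_sign, w_scale = quantize_weights_1bit(w)
    a_codes, a_step = quantize_activations_2bit(a, a_max=1.0)

    exact = a @ w.T
    approx = quantized_matmul(a_codes, a_step, w_sign, w_scale)
    print("max abs error:", np.abs(exact - approx).max())

    # Storage arithmetic from the text: 32-bit float -> 1-bit weight is a 32x
    # reduction in weight memory (ignoring the small per-channel scales).
    print("weight bits, float32:", w.size * 32)
    print("weight bits, 1-bit  :", w.size * 1)
```

The toy example will show a noticeable approximation error; the press release's claim is that with proper training-aware quantization the accuracy loss is negligible, which this sketch doesn't attempt to reproduce.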