BRN Discussion Ongoing

Ahh, the old Elizabethan Collar hey, who needs a Christmas hat when you're going for the 'royal' look 🤣 The young lad is looking much perkier today Rise 👍
😄 I've had a few people call him Count Barkula. Yeah, he's looking much better; got another vet visit soon so they can check up on him and see if all is good. MERRY CHRISTMAS mate. 🎄🍻
 
Reactions: 12 users

HopalongPetrovski

I'm Spartacus!
It was probably too much to expect anyone to open my present and read to the end where it has an exciting conclusion but also a hidden gem as this research was internally funded by:

“Riverside Research Wins 5-Year, $49.5M AFRL MESA II Contract​

Dec 08, 2022
Riverside Research, an independent nonprofit national security company, announces win of the Air Force Research Laboratory (AFRL) Microelectronics and Embedded System Assurance (MESA) II Research and Development contract.

The $49.5M, five-year contract allows Riverside Research to continue its breakthrough research and long-standing technical leadership in this critical mission area.
"We are honored to continue our support for advancing scientific research in support of AFRL and our national security missions," said Riverside Research Vice President, Engineering and Systems Integration, Mary Barefoot.
Riverside Research has conducted research and development on behalf of AFRL for over twenty years, providing independent and unbiased technical R&D in microelectronics, open architecture, electromagnetics, PNT, materials and plasma physics. This contract enables Riverside Research to continue supporting AFRL in its advancement of technologies for the warfighter.”


My opinion only DYOR
FF

AKIDA BALLISTA


Thank you again for sharing.
What warmed the cockles of this old heart on a hot Christmas afternoon lay at the very end of their conclusions wherein they state........

"Future work will focus on the implementation of the algorithm on an FPGA or on a neuromorphic chip for hardware acceleration and testing in an infrared object detection task potentially with edge maps as features together with a pre-processing layer to remove any distortions, enhance contrast, remove blur, etc., and a spiking neuron layer as a final layer to introduce a machine learning component. An extension of the SNN architecture of the Canny edge detector with additional processing layers for object detection in LiDAR point clouds would be another interesting new direction of research [39]."


"Reference 39. Choi, Y.; Kim, N.; Hwang, S.; Park, K.; Yoon, J.S.; An, K.; Kweon, I.S. KAIST Multi-Spectral Day/Night Data Set for Autonomous and Assisted Driving. IEEE Trans. Intell. Transp. Syst. 2018, 19, 934–948."

Beyond citing us in the introduction, they are basically saying they will be using us going forward in this fundamental and wide-ranging application research. No doubt everyone from NASA scientists to doorbell manufacturers everywhere is wetting themselves in anticipation. 🤣
I know I am.
Happy Santa day all. 😂
 
Reactions: 32 users

Earlyrelease

Regular
Merry Xmas Chippers

See the Perth crew around March for the next little tipple.
 

Reactions: 23 users
Congratulations you are the first one to find the surprise at the end.
As a reward you get a Christmas Cracker joke:

Q. What do Santa’s little helpers learn at school?

A. The elf-abet.

FF

AKIDA BALLISTA
 
Reactions: 31 users

RobjHunt

Regular
You’re such a Dad 😉
 
Reactions: 11 users
This link to part of the website at Riverside makes interesting reading:


“Ready Anytime, Anywhere​

The goal of Riverside Research Institute’s Artificial Intelligence and Machine Learning (AI/ML) lab is to put the best of teamed human and machine intelligence on tap for the warfighter, anytime and anywhere. Our lab works “at the edge” near the sensor with AI and ML solutions, back at the control station where military assets are being deployed and controlled, and in the rear echelon data lake or in space networks where analytics persistently mine information for insight. We help warfighters do their jobs more efficiently and effectively.”

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 22 users
Just snorted a glass of Christmas bubbly when I saw this

https://www.msn.com/en-au/money/listdetails/Thematic Watchlist Idea: Artificial Intelligence/fl-3f847a0f9cb8?src=fxins&listSrc=pipeline&miid=AA14Jpsf&ocid=msedgntp&cvid=ea861c7732bc4f85be76801a10896c24

Can machines think? Here are the companies working to say YES!

Watchlist Idea from Microsoft Start • 24/12/2022
IN THIS LIST

AAPL  Apple Inc        131.86   -0.28%
AMZN  Amazon.com Inc    85.25   +1.74%
GOOG  Alphabet Inc      89.81   +1.76%
Artificial intelligence (AI) uses computers to perform tasks that normally require human intelligence. Many believe that AI will have a profound and pervasive impact on our economy and society in the years ahead, potentially transforming everything from healthcare to transportation. Securities within this list are investing in modern technologies to develop AI products. We use an algorithm that helps us determine which securities have high impact to this domain. This list's performance is calculated on an equally weighted method.
This list has performed -41.63% over the past year. By comparison, S&P/ASX 200 is -4.21% over the same period. The beta of this list, which is a measure of volatility, is Moderately High at 1.18. List Beta is calculated using an equally weighted average beta of the securities within this list. This list includes 90.00% of Technology stocks, 10.00% of Consumer Cyclicals stocks.
List performance is calculated using an equal-weight methodology. This list is generated by scanning the web and using our algorithms to surface potentially relevant securities to the topic. The list is intended to be educational and includes securities that may be suitable for a watchlist. It is not intended for investment or trading purposes. Microsoft does not recommend using the data and information provided as the basis for making any investment decision.
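The equal-weight arithmetic described above is straightforward to reproduce. A minimal Python sketch, using made-up returns and betas rather than the list's actual constituents:

```python
# Equal-weight list metrics as described above: every security contributes
# equally (arithmetic mean). The returns and betas below are illustrative
# placeholders, not real holdings data.

def equal_weight(values):
    return sum(values) / len(values)

returns = [-25.20, -50.17, -38.96]  # hypothetical one-year returns, %
betas = [1.25, 1.30, 0.99]          # hypothetical betas

list_return = equal_weight(returns)
list_beta = equal_weight(betas)
print(f"List return: {list_return:.2f}%  beta: {list_beta:.2f}")
```

With three securities each weighted one third, a single large loser drags the list as much as any other constituent, which is why thematic lists like this can swing far more than the index they are compared against.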

APPLE INC.​

AAPL. Apple Inc. (Apple) designs, manufactures and markets smartphones, personal computers, tablets, wearables and accessories and sells a range of related services. The Company’s products include iPhone, Mac, iPad, AirPods, Apple TV, Apple Watch, Beats products, HomePod, iPod touch and accessories. The Company operates various platforms, including the App Store, which allows customers to discover and download applications and digital content, such as books, music, video, games and podcasts. Apple offers digital content through subscription-based services, including Apple Arcade, Apple Music, Apple News+, Apple TV+ and Apple Fitness+. Apple also offers a range of other services, such as AppleCare, iCloud, Apple Card and Apple Pay. Apple sells its products and resells third-party products in a range of markets, including directly to consumers, small and mid-sized businesses, and education, enterprise and government customers through its retail and online stores and its direct sales force.
Apple Inc. is -12.72% over the past month and -25.20% over the past year, underperforming the S&P/ASX 200 by 10.86% over the past month and 20.99% over the past year.

AMAZON.COM, INC.​

AMZN. Amazon.com, Inc. provides a range of products and services to customers. The products offered through its stores include merchandise and content that it purchased for resale and products offered by third-party sellers. It also manufactures and sells electronic devices, including Kindle, Fire tablet, Fire TV, Echo, and Ring, and it develops and produces media content. It operates through three segments: North America, International and Amazon Web Services (AWS). The AWS segment consists of global sales of compute, storage, database, and other services for start-ups, enterprises, government agencies, and academic institutions. It provides advertising services to sellers, vendors, publishers, authors, and others, through programs, such as sponsored advertisements, display, and video advertising. It serves consumers through its online and physical stores. Customers access its offerings through websites, mobile applications, Alexa, devices, streaming, and physically visiting its stores.
Amazon.com Inc. is -9.43% over the past month and -50.17% over the past year, underperforming the S&P/ASX 200 by 7.58% over the past month and 45.95% over the past year.

ALPHABET INC.​

GOOG. Alphabet Inc. is a holding company. The Company's segments include Google Services, Google Cloud, and Other Bets. The Google Services segment includes products and services such as ads, Android, Chrome, hardware, Google Maps, Google Play, Search, and YouTube. The Google Cloud segment includes Google's infrastructure and platform services, collaboration tools, and other services for enterprise customers. The Other Bets segment includes earlier stage technologies that are further afield from its core Google business, and it includes the sale of health technology and Internet services. Its Google Cloud provides enterprise-ready cloud services, including Google Cloud Platform and Google Workspace. Google Cloud Platform enables developers to build, test, and deploy applications on its infrastructure. The Company's Google Workspace collaboration tools include applications, such as Gmail, Docs, Drive, Calendar, Meet, and various others. The Company also has various hardware products.
Alphabet Inc. is -9.12% over the past month and -38.96% over the past year, underperforming the S&P/ASX 200 by 7.27% over the past month and 34.75% over the past year.

NVIDIA CORPORATION​

NVDA. NVIDIA Corporation is a technology company serving the personal computer (PC) gaming market, among other segments. The Company’s segments include Graphics and Compute & Networking. The Graphics segment includes GeForce graphics processing units (GPUs) for gaming and PCs, the GeForce NOW game streaming service and related infrastructure, and solutions for gaming platforms; Quadro/NVIDIA RTX GPUs for enterprise workstation graphics; virtual GPU software for cloud-based visual and virtual computing; automotive platforms for infotainment systems, and Omniverse software for building three-dimensional (3D) designs and virtual worlds. The Compute & Networking segment includes Data Center platforms and systems for artificial intelligence (AI), high-performance computing (HPC), and accelerated computing; Mellanox networking and interconnect solutions; automotive AI Cockpit, autonomous driving development agreements, and autonomous vehicle solutions; cryptocurrency mining processors (CMP); Jetson for robotics, and NVIDIA AI Enterprise.
NVIDIA Corp. is -7.95% over the past month and -48.70% over the past year, underperforming the S&P/ASX 200 by 6.10% over the past month and 44.48% over the past year.

BRAINCHIP HOLDINGS LTD​

BRN. BrainChip Holdings Ltd is an Australia-based technology company. The principal activity of the Company is the development of software and hardware accelerated solutions for advanced artificial intelligence (AI) and machine learning applications with a primary focus on the development of its Akida Neuromorphic Processor to provide a complete ultra-low power and fast AI Edge network for vision, audio, olfactory and smart transducer applications. Its products include MetaTF, Akida1000 reference chip and Akida Enablement Platforms. The MetaTF development environment is a machine learning framework used for the creation, training, and testing of neural networks, supporting the development of systems for Edge AI on its Akida event domain neural processor. The Akida1000 reference chip is fully functional and enables working system evaluation. Its Akida Enablement Platforms include Akida PCIe Board, Akida Development Kit Shuttle PC and Akida Development Kit Raspberry Pi.
BrainChip Holdings Ltd. is -5.76% over the past month and +0.77% over the past year, underperforming the S&P/ASX 200 by 3.90% over the past month but outperforming it by 4.98% over the past year.


...
Because they also know:

“Mark Cuban: The world’s first trillionaire will be an artificial intelligence entrepreneur​

Published Mon, Mar 13 2017 9:54 AM EDT · Updated Mon, Mar 13 2017 10:01 AM EDT

Catherine Clifford

Bill Gates is the richest man in the world right now, with more than $85 billion to his name, and, according to one estimate, if he makes it to his mid-80s, he will likely be the world's first trillionaire. But self-made billionaire Mark Cuban predicts that the world's first trillionaires will actually be entrepreneurs working with artificial intelligence.
"I am telling you, the world's first trillionaires are going to come from somebody who masters AI and all its derivatives and applies it in ways we never thought of," says the star investor of ABC's "Shark Tank," speaking to a packed house in Austin at the SXSW Conference and Festivals on Sunday night.”
 
Reactions: 39 users
Even King's College London knew about AKIDA and STDP training advantages back in 2019:

1671950765443.png
 
Reactions: 26 users
I recently posted part of an interview between the former CEO of BrainChip and Alan Kohler, in which Alan Kohler asked about the opportunities for AKIDA in the quantum computing space and Mr. Dinardo answered that there was scope for AKIDA to work beside quantum computing:

Edge to quantum: hybrid quantum-spiking neural network image classifier​

A Ajayan and A P James
Published 9 September 2021 • © 2021 The Author(s). Published by IOP Publishing Ltd
Neuromorphic Computing and Engineering, Volume 1, Number 2, Focus Issue on Extreme Edge Computing. Citation: A Ajayan and A P James 2021 Neuromorph. Comput. Eng. 1 024001. DOI: 10.1088/2634-4386/ac1cec
Abstract​

The extreme parallelism property warrants convergence of neural networks with quantum computing. As the size of the network grows, the classical implementation of neural networks becomes computationally expensive and not feasible. In this paper, we propose a hybrid image classifier model using spiking neural networks (SNN) and quantum circuits that combines the dynamic behaviour of SNN with the extreme parallelism offered by quantum computing. The proposed model outperforms spiking neural networks in classical computing and hybrid convolution neural network-quantum circuit models in terms of various performance parameters. The proposed hybrid SNN-QC model achieves an accuracy of 99.9% in comparison with the CNN-QC model accuracy of 96.3% and the SNN model accuracy of 91.2% in the MNIST classification task. The tests on KMNIST and CIFAR-10 also showed improvements.
 
Reactions: 19 users
Extremely interesting work on artificial tactile response for robotics and the importance of feedforward spiking neural networks to advance this research:


“Conclusion​

The functional modeling of the tactile pathway from the cutaneous mechanoreceptors (first layer), to the cuneate nucleus (second layer) up to the somatosensory area 3b (third layer), provides a mechanistic tool for understanding the role of different neuronal networks in tactile information processing. The current research highlights the importance of each stage in neuronal population coding in the detection of edge orientation. It also provides a deeper understanding of how the response of cortical neurons to edge stimuli changes as the mechanoreceptor innervation mechanisms and receptive fields are changed. The simulated spiking neural networks are functionally compatible with physiological observations across a wide range of conditions sampled from literature. Indeed, many recent neurophysiological findings have been embedded in the proposed model and its performance—based on spiking responses of cortical neurons—has been demonstrated for decoding of edge orientations.
One of the key features of the human fingertip is its ability to recognize edge orientation. In this way, it was illustrated that the random innervation of the mechanoreceptors by the primary afferents allows the encoding of orientation information through the spatiotemporal spiking pattern. This structure organizes a peripheral neural mechanism for extraction and then transmission of geometric features of the touched objects. The proposed hierarchical spiking neural network successfully discriminated edge orientation stimuli irrespective of edge location. It was shown that using the first spikes of cortical neurons, the orientation of stimuli (scanned or indented edge) was recognizable. The effect of afferent receptive field size was compared in two different experiments (scanned and indented edge). Orientation detection of the scanned edge stimuli in the first spikes of cortical neurons was improved when the afferents’ receptive field size was increased. Nevertheless, for the indented edge experiment, the situation was reversed and increasing the size of the afferents’ receptive field resulted in the reduction of correct detection. The findings showed that the importance of receptive field size depends on the specific tasks and experiments. Recent studies have shown that the main connections in neuronal pathways are formed during the developmental process [38,39,40]. However, the exact cortical dynamics and function have not been studied yet. Here, we investigated edge orientation detection through the cortical neurons as a biomimetic classifier. We showed that the intensity of a neuron’s response would signal edge orientation because its firing rate would increase with the degree of spatial coincidence between the neuron’s highly sensitive zones (excitatory region of receptive field) and the local skin deformations formed by edge indentation. That is, for a given neuron, some edge orientations exhibit more spatial coincidence than others and thus stronger responses are produced.
Also, the role of the inhibitory current which forms the lateral inhibition within the cuneate nucleus was studied. Indeed, the simulation results suggest that when lateral inhibition is increased, the process of spike filtering is amplified. This leads to the reduction in “noise” within the system and hence the third-order neurons are activated by a strong and consistent signal. This also increases the spatial resolution of the receptive fields and gives them a more distinct border which improves discrimination between two separate points of simultaneous stimulation. Although other forms of lateral inhibition are also observed, the “feedforward” type of lateral inhibition is likely the most significant [41]. Various aspects of tactile sensitivity have been related to different forms of neuronal inhibitory function. Impaired reactions to tactile stimuli in children with autism spectrum disorder (ASD) are frequently reported symptoms. Indeed, impairments in filtering of or adaptation to tactile inputs have been described in ASD [42]. Under the assumption that the inhibitory mechanism is altered in ASD [43,44], it can be suggested that dysfunction in lateral inhibition of the second layer of tactile processing or malfunction in the formation of the inhibitory sub-regions of the cortical neurons may also have a role. Understanding the specific mechanisms underlying sensory symptoms in ASD is still under investigation, which may allow for more specific therapeutic approaches in the future.
The main limitation of the proposed spiking model is the lack of neural recordings for all network layers. Nevertheless, the model is based on the significant literature and published data for model building and validation. The proposed spiking network for a tactile system can be employed in the design and implementation of sensory neuroprostheses applications [45,46,47,48]. Additionally, the broad significance of this work is that the biomimetic tactile sensing and edge encoding are useful in robotic applications for shape recognition and object grasping and palpation [49,50,51].”
 
Reactions: 13 users
BMW will be doing a reveal at CES 23 and the videos in the article link below indicate it may be an in-car personal assistant. It'll probably be worth watching to see if there is anything to indicate features like sentence comprehension in real time (think AKD2000).


Earlier this month, BMW gave some fans the impression its main social media accounts were hacked after mysterious messages and weird graphics popped up on Facebook, Instagram, Twitter, and even LinkedIn. However, it didn’t fool everyone as the probability of having all pages hacked concomitantly is close to zero. We still don’t know what “Dee” is all about, but we will have an answer in less than two weeks from now.

According to another cryptic video released on social media, BMW announces we’ll get to meet Dee on January 5 during the first day of CES 2023. It likely has something to do with artificial intelligence and may or may not be linked to the already confirmed concept car the luxury brand will showcase in Las Vegas. During the meeting held in early November to present the Q3 2022 quarterly report, CFO Dr. Nicolas Peter said a new Vision concept is coming to CES.

The concept is intriguing not just because of Dee, but also due to the reason BMW said it’ll provide an early look at the Neue Klasse platform. The electric-first architecture with next-generation round battery cells won’t debut on a production car until 2025 with an all-new EV that will be manufactured at the Debrecen plant in Hungary. Previously, the German brand confirmed NE will premiere with two models in the 3 Series segment, so expect an i3 Sedan and an iX3 crossover.
 
Reactions: 28 users
Has someone already posted this post from Renesas - with 3 new videos dated 12/12/2022? (video links below)


1671955322697.png


Links to videos

Vision and Strategy of Renesas Automotive Business

Renesas MCU and SoC Strategy for Automotive Solutions

Renesas Analog and Power Strategy for Automotive
 
Reactions: 26 users
Found this from 20/12/2022

“Applying such a conversion process enables real-time processing on an automotive SoC.”

1671957714700.png



Background

Model transformation of deep learning for real-time processing for automotive SoCs

Deep learning is developed using underlying software (deep learning frameworks) such as TensorFlow and PyTorch.

If the models trained in a deep learning framework are simply ported as-is, real-time processing on an in-vehicle SoC such as R-Car is impossible, because the inference process of deep learning requires a great deal of computation and memory.

Therefore, it is necessary to apply non-equivalent model compression such as quantization and pruning to the trained model, and performance optimization using a deep learning compiler.

First, let us discuss model compression. In quantization, the inference process, which is usually computed in floating point, is converted to approximate integer operations such as 8-bit.

Pruning reduces computation and memory usage by setting weights that contribute little to the recognition result to zero and skipping the computation for those weights. Both of these transformations are non-equivalent algorithmic transformations to the original inference process and are likely to degrade recognition accuracy.
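The two non-equivalent compressions described above can be sketched generically. This is the textbook version of the techniques, not Renesas' actual R-Car CNN tooling; the scale, zero point, and threshold values are arbitrary:

```python
import numpy as np

# Generic sketches of 8-bit quantization and magnitude pruning.
# Not Renesas code; parameter values below are arbitrary examples.

def quantize_int8(x, scale, zero_point):
    """Approximate floats on an 8-bit integer grid: q = round(x/scale) + zp."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Map the integers back to (approximate) floats."""
    return (q.astype(np.float32) - zero_point) * scale

def prune_by_magnitude(w, threshold):
    """Zero the weights that contribute little, so their ops can be skipped."""
    return np.where(np.abs(w) < threshold, 0.0, w)

w = np.array([0.8, -0.02, 0.5, 0.01, -0.6], dtype=np.float32)
w_pruned = prune_by_magnitude(w, threshold=0.05)
q = quantize_int8(w_pruned, scale=0.007, zero_point=0)
w_approx = dequantize(q, scale=0.007, zero_point=0)
# w_approx only approximates w: both steps are lossy, which is why the text
# warns that recognition accuracy can degrade.
```

Running the round trip shows the point made above: the dequantized weights are close to, but not identical with, the originals, so accuracy has to be re-checked after compression.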

To optimize performance, the deep learning compiler transforms the program for the inference process of the trained model so that it can be processed faster by a deep learning accelerator, or it applies memory optimization such that fast and small SRAM allocated to output data in one layer can be re-used for output data in another layer.
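The SRAM re-use optimization can be illustrated with a toy greedy planner. This is a simplified model under the assumption that only the immediately previous layer's output is still live, not the deep learning compiler's real allocator:

```python
# Toy greedy planner for the SRAM re-use described above: once a layer's
# output has been consumed by the next layer, its buffer may be recycled.
# Simplifying assumption: only the previous layer's output is live.

def plan_buffers(layer_sizes):
    buffers = []   # capacity of each allocated buffer
    assign = []    # which buffer each layer's output goes into
    for size in layer_sizes:
        live = set(assign[-1:])  # buffer holding the previous layer's output
        free = [j for j, cap in enumerate(buffers)
                if j not in live and cap >= size]
        if free:
            assign.append(free[0])           # recycle an idle buffer
        else:
            buffers.append(size)             # otherwise allocate a new one
            assign.append(len(buffers) - 1)
    return buffers, assign

# Four layer outputs needing 100, 50, 80 and 60 units of SRAM:
buffers, assign = plan_buffers([100, 50, 80, 60])
# buffers == [100, 50, 60], assign == [0, 1, 0, 2]: 210 units instead of 290.
```

Even this toy version shows why the optimization matters: re-using layer 1's buffer for layer 3 keeps the total footprint well under the naive sum of all layer outputs, which is what makes fast on-chip SRAM viable.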

Applying such a conversion process enables real-time processing on an automotive SoC.

Inference flow in R-Car using Renesas tools and software​

CNN-IP, the H/W accelerator in Renesas' R-Car, can perform inference operations using integer values for reasons of computational efficiency. For this reason, the user must use the R-Car CNN tool provided by Renesas to perform quantization, one of the model transformations described above.

First, before actually performing the quantization, calibration must be performed to calculate the quantization parameters (scale and zero point) used to convert the floating-point numbers to integers. For this purpose, an external tool (TFMOT, ONNX Runtime, etc.), depending on the format of the network model, is used to obtain the maximum and minimum output values for each layer from a large number of input images. From these maximum/minimum values, quantization parameters such as scale/zero point can be calculated, and the R-Car CNN tool uses these quantization parameters to quantize the parameters for each layer.
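A generic min/max calibration step might look like the following. The exact formulas inside the R-Car CNN tool are not given in the text, so treat this as the standard asymmetric uint8 scheme it alludes to:

```python
import numpy as np

# Standard asymmetric min/max calibration (0..255 grid). An assumed,
# textbook scheme -- not the R-Car CNN tool's actual formulas.

def calibrate(batches):
    """Track the min/max a layer's output reaches over many input images,
    then derive a scale and zero point from that observed range."""
    lo = min(float(b.min()) for b in batches)
    hi = max(float(b.max()) for b in batches)
    scale = (hi - lo) / 255.0             # spread the range over 0..255
    zero_point = int(round(-lo / scale))  # the integer that represents 0.0
    return scale, zero_point

# Pretend layer outputs observed while running calibration images:
batches = [np.array([-1.0, 0.5]), np.array([0.2, 1.55])]
scale, zero_point = calibrate(batches)
# A float x then maps to round(x / scale) + zero_point on the integer grid.
```

Because scale and zero point depend only on the observed range, they (like the Command List built from them) need to be computed once, ahead of time, rather than per image.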

The R-Car CNN tool then creates a Command List from the network model and the quantized parameters of each layer. The Command List is a binary data file that tells CNN-IP which commands to execute and which parameters to set. By giving this Command List to the CNN-IP, inference can be performed.

Since the Command List is uniquely determined from the network model and quantization parameters, it only needs to be created once in advance. By executing the aforementioned Command List for each image, inference can be performed on the actual device.

Figure 1 shows a block diagram of inference on R-Car V4H using Renesas tools and software.

Figure 1. Block diagram of inference with Renesas tools and software

About each simulator​

Overview and features of each simulator​

Renesas has prepared simulators to solve the following two user challenges:
A) Before developing an application, the user wants to check the change in accuracy due to quantization.
B) To check and debug user applications using Command List without using actual devices.
There are three types of Renesas simulators, each of which addresses different tasks and has different features. The features of each are shown in Table 1. Each has different accuracy and processing speed. For each, we will introduce the details of the features and use cases, referring to the block diagram.

Table 1. Overview and Features of Each Simulator
  • Instruction Set Simulator (ISS) (challenge A). Speed: slow. Accuracy: exact match with the device. Input: input image, Command List (*1). Output: inference result.
  • Accurate Simulator (challenge B). Speed: medium. Accuracy: exact match with the device. Input: input image, network model, quantized parameters of each layer (*2). Output: inference result.
  • Fast Simulator (challenge B). Speed: fast. Accuracy: contains small errors. Input: input image, network model, quantization parameters. Output: inference result.

(*1) The Command List is created using the R-Car CNN tool based on the network model and quantization parameters, following the same procedure for inference on the actual device as described above.
(*2) Accurate Simulator runs within the R-Car CNN tool. When the user provides the R-Car CNN tool with the network model and quantization parameters, the tool automatically calculates the quantized parameters for each layer, which are then input to Accurate Simulator.

ISS​

This simulator is designed to debug output results using the same software configuration and input data (Command List, mainly register settings) as the actual device as much as possible. It does not reproduce timing and is not intended for timing verification.

The results are exactly the same as on the actual device, and the speed is slower than the Accurate Simulator because the output is reproduced on an instruction basis.

Figure 2. Block diagram of a system using ISS

Accurate Simulator​

This simulator takes a network model as input and is used to verify accuracy without using actual devices. For each layer, an algorithm is implemented such that the output is a perfect match to the device's calculation algorithm.

Since it is about 10 times faster than the ISS, it is the better choice when only accuracy needs to be verified.

Figure 3. Block diagram of a system using Accurate Simulator

Fast Simulator​

This simulator is used to check the quantization error for a large number of images.

Fast Simulator adds a pseudo-quantization function to the deep learning framework (TensorFlow, in the case of R-Car V4H) after each layer of floating-point inference operations.

Pseudo-quantization reproduces quantization error by adding to the floating-point values the same amount of error that quantization would introduce, while keeping the computation in floating point.

Because only the pseudo-quantization function is added to TensorFlow, which itself runs at high speed, the simulator runs at a speed similar to TensorFlow's.

Also, since the input/output interface is the same as the deep learning framework's, it is easy for users to check quantization errors while switching between the Fast Simulator and the framework.

However, since inference operations and pseudo-quantization in each layer generate slight floating-point arithmetic errors, the results are not in perfect agreement with the results of the actual operations.
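The pseudo-quantization idea described above (quantize-then-dequantize, often called "fake quantization") can be sketched in a few lines of Python. This is an illustrative sketch only — the function name, the symmetric signed-int8 scheme, and the per-tensor scale are my assumptions, not the actual R-Car CNN tool implementation:

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Quantize-then-dequantize: injects the rounding/clipping error that
    fixed-point hardware would introduce, while values stay floating point.
    Illustrative only -- not the actual R-Car CNN tool implementation."""
    qmax = 2 ** (num_bits - 1) - 1                        # e.g. 127 for signed int8
    scale = max(float(np.max(np.abs(x))) / qmax, 1e-12)   # per-tensor scale (assumed)
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)     # snap to the integer grid
    return q * scale                                      # back to float, error included

# Checking the error a layer's output would pick up from quantization:
x = np.array([0.10, -0.37, 0.92], dtype=np.float32)  # stand-in for a layer output
xq = fake_quantize(x)
err = np.abs(x - xq)  # per-element pseudo-quantization error
```

Running a network with each layer's output passed through a function like this yields floating-point results that carry approximately the device's quantization error — which is why Fast Simulator can screen many images quickly but cannot match the device bit-for-bit.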
Figure 4. Block diagram of a system using Fast Simulator


 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 18 users

Terroni2105

Founding Member
Just came across the In-Cabin tech Review written in December, you need to fill in your details and give your email before you can download the report. It features the “first ever in cabin market map” (below picture).


it has BrainChip in there (bottom left hand corner) and the report gives a bit of an overview of the sector and players
1671962972618.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 49 users

equanimous

Norse clairvoyant shapeshifter goddess
Just came across the In-Cabin tech Review written in December, you need to fill in your details and give your email before you can download the report. It features the “first ever in cabin market map” (below picture).


it has BrainChip in there (bottom left hand corner) and the report gives a bit of an overview of the sector and players
View attachment 25292
I fixed it

1671967607188.png
 
  • Like
  • Haha
  • Fire
Reactions: 33 users

Sproggo

Member
Where is Uiux? Would love to see that dude back👍
 
  • Like
  • Fire
Reactions: 18 users

FJ-215

Regular
Hi All,

Promised myself I wouldn't look in today!! But, well, you know.

I'm not religious in any way, shape or form but I do love Xmas. A time for family, friends and peace on Earth. Best wishes to all who frequent these forums. Such a great place to come and visit. A special heartfelt thanks to all who research and contribute here.

Did I mention peace on Earth.......

Merry Christmas all (even if it's not your thing)



It's a pretty thing isn't it!!
 
Last edited:
  • Love
  • Like
  • Fire
Reactions: 36 users

FJ-215

Regular
Yeah,

Boxing Day.....

For the year ahead......

Love this.... hope we won't need it but so good.......

 
  • Like
  • Love
Reactions: 8 users

FJ-215

Regular
@Diogenese

A very merry Christmas, With apologies to Eta....

A decent version......



I think I have done this before????
 
  • Like
  • Love
Reactions: 5 users

Foxdog

Regular
BMW will be doing a reveal at CES 23 and the videos in the article link below indicate it may be an in-car personal assistant. It'll probably be worth watching to see if there is anything to indicate features like sentence comprehension in real time (think AKD2000).


Earlier this month, BMW gave some fans the impression its main social media accounts were hacked after mysterious messages and weird graphics popped up on Facebook, Instagram, Twitter, and even LinkedIn. However, it didn’t fool everyone as the probability of having all pages hacked concomitantly is close to zero. We still don’t know what “Dee” is all about, but we will have an answer in less than two weeks from now.

According to another cryptic video released on social media, BMW announces we’ll get to meet Dee on January 5 during the first day of CES 2023. It likely has something to do with artificial intelligence and may or may not be linked to the already confirmed concept car the luxury brand will showcase in Las Vegas. During the meeting held in early November to present the Q3 2022 quarterly report, CFO Dr. Nicolas Peter said a new Vision concept is coming to CES.

The concept is intriguing not just because of Dee, but also due to the reason BMW said it’ll provide an early look at the Neue Klasse platform. The electric-first architecture with next-generation round battery cells won’t debut on a production car until 2025 with an all-new EV that will be manufactured at the Debrecen plant in Hungary. Previously, the German brand confirmed NE will premiere with two models in the 3 Series segment, so expect an i3 Sedan and an iX3 crossover.
Wouldn't it be unreal if BMW announced that they are working with Brainchip to bring Dee into reality. Despite all of the other dots joined, for me this would be irrefutable confirmation that AKIDA is the only and best solution in the market and if you don't have it then you will fall behind into obscurity. Surely if MERC and BMW are both on board we should see $2.34 eclipsed post CES.
 
  • Like
  • Fire
  • Thinking
Reactions: 28 users
Top Bottom