When the Qualcomm processor was being developed, AKIDA 2nd Gen would not have been available. Any phones containing AKIDA are likely two-plus years away; the lead times are pretty big, and if it comes at all it would likely come via Prophesee, who have a multi-year deal with Qualcomm.

In the wee small hours, I mis-posted this on the Talga thread: #947 :
For Qualcomm advocates:
https://www.qualcomm.com/products/mobile/snapdragon/smartphones/mobile-ai
Our fastest and most advanced Qualcomm AI Engine has at its heart the powerful Hexagon processor.
The Qualcomm Hexagon processor is the most essential element of the Qualcomm AI engine. This year we added new architectural features to the heart of our AI Engine. Let’s dive into them.
With a dedicated power delivery system, we can provide power to Hexagon adaptively to its workload, scaling performance all the way up for heavy workloads or down to extreme power savings.
We also added special hardware to improve group convolution and activation function acceleration, and doubled the performance of the Tensor accelerator.
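For anyone unfamiliar with group convolution: it splits the input channels into independent groups, each convolved with its own filters, which cuts the multiply-accumulate count by the group factor. A minimal NumPy sketch of the idea (a 1x1 case with names of my own choosing, not Qualcomm's hardware implementation):

```python
import numpy as np

def group_conv1x1(x, w, groups):
    """1x1 group convolution: channels split into independent groups,
    cutting multiply-accumulates by a factor of `groups`.
    x: (C_in, H, W) feature map; w: (C_out, C_in // groups) weights."""
    c_in, h, wd = x.shape
    c_out = w.shape[0]
    gin, gout = c_in // groups, c_out // groups
    y = np.zeros((c_out, h, wd), dtype=x.dtype)
    for g in range(groups):
        xg = x[g * gin:(g + 1) * gin]    # this group's input channels
        wg = w[g * gout:(g + 1) * gout]  # this group's filters
        # mix only the in-group channels at every pixel
        y[g * gout:(g + 1) * gout] = np.einsum('oc,chw->ohw', wg, xg)
    return y

x = np.random.rand(8, 4, 4).astype(np.float32)
w = np.random.rand(8, 2).astype(np.float32)  # 8 in / 8 out, 4 groups
print(group_conv1x1(x, w, groups=4).shape)   # (8, 4, 4)
```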
Our unique approach to accelerating complex AI models is to break neural networks down into micro tiles to speed up the inferencing process. This allows the scalar, vector and tensor accelerators to work at the same time without having to engage the memory each time, saving power and time. [#### ViT? ####]
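As I read it, "micro tiles" means processing the feature map one small block at a time so the matmul and activation can be fused while the tile is still in local memory, rather than writing a full layer's output to DRAM and re-reading it. A toy NumPy sketch of that scheduling idea, assuming a simple matmul + ReLU layer (my illustration, not Qualcomm's actual micro-tile mechanism):

```python
import numpy as np

def tiled_layer(x, w, tile=4):
    """Process the input one small tile at a time: each tile is
    multiplied by the weights and passed through the activation while
    still 'local', instead of materialising the full matmul result and
    re-reading it for a separate activation pass."""
    rows, _ = x.shape
    out = np.empty((rows, w.shape[1]), dtype=x.dtype)
    for r0 in range(0, rows, tile):
        t = x[r0:r0 + tile]                     # fetch one micro-tile
        acc = t @ w                             # tensor-style matmul on the tile
        out[r0:r0 + tile] = np.maximum(acc, 0)  # fused ReLU before the tile leaves
    return out

x = np.random.randn(16, 8).astype(np.float32)
w = np.random.randn(8, 8).astype(np.float32)
print(np.allclose(tiled_layer(x, w), np.maximum(x @ w, 0)))  # True
```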
We are now enabling seamless multi-IP communication with Hexagon using a physical bridge. This link drives high-bandwidth, low-latency use cases like the Cognitive-ISP or upscaling of low-resolution content in gaming scenarios.
We successfully enabled transformation of several DL models from FP32 to INT16 to INT8 while not compromising on accuracy, with the added advantage of higher performance at lower memory consumption. Now we are pushing the boundaries with INT4 for even higher power savings without compromising accuracy or performance.
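To see what the FP32 → INT8 → INT4 step actually buys, here is a minimal sketch of symmetric post-training quantization in NumPy. The function name is my own, and real toolchains (e.g., Qualcomm's AIMET) are far more sophisticated:

```python
import numpy as np

def quantize_symmetric(w, bits=8):
    """Map FP32 weights to signed integers with a per-tensor scale.
    Halving the bit width (INT8 -> INT4) halves weight memory again,
    at the cost of a coarser grid (INT4 kept in an int8 container
    here for simplicity)."""
    qmax = 2 ** (bits - 1) - 1  # 127 for INT8, 7 for INT4
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.random.randn(256).astype(np.float32)
for bits in (8, 4):
    q, scale = quantize_symmetric(w, bits)
    err = np.abs(w - q.astype(np.float32) * scale).mean()
    print(f"INT{bits}: mean abs error {err:.4f}")
```

The whole game, as the quote says, is keeping that reconstruction error small enough that model accuracy doesn't move.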
I was sprung by @Proga.
I guess the main implication for BRN is that Qualcomm will not be adopting Akida any time soon. That said, their ViT patent seems very clunky:
WO2023049655A1 TRANSFORMER-BASED ARCHITECTURE FOR TRANSFORM CODING OF MEDIA 2021-09-27
Systems and techniques are described herein for processing media data using a neural network system. For instance, a process can include obtaining a latent representation of a frame of encoded image data and generating, by a plurality of decoder transformer layers of a decoder sub-network using the latent representation of the frame of encoded image data as input, a frame of decoded image data. At least one decoder transformer layer of the plurality of decoder transformer layers includes: one or more transformer blocks for generating one or more patches of features and determining self-attention locally within one or more window partitions and shifted window partitions applied over the one or more patches; and a patch un-merging engine for decreasing a respective size of each patch of the one or more patches.
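For what it's worth, the "window partitions and shifted window partitions" language reads like Swin-Transformer-style local attention. A minimal NumPy sketch of just the window-partition step (the attention itself and the patch un-merging are omitted; this is my illustration, not the patent's code):

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into non-overlapping win x win
    windows; self-attention is then computed independently inside
    each window instead of globally over the whole frame."""
    h, w, c = x.shape
    x = x.reshape(h // win, win, w // win, win, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, c)

x = np.random.randn(8, 8, 16)
windows = window_partition(x, win=4)
print(windows.shape)  # (4, 16, 16): 4 windows of 16 tokens each

# The "shifted window" pass offsets the map by half a window so tokens
# near window borders can attend across the previous partition's seams.
shifted = np.roll(x, shift=(-2, -2), axis=(0, 1))
print(window_partition(shifted, win=4).shape)  # (4, 16, 16)
```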
[Attachment: patent figure]
[0112] As previously noted, systems and techniques are described herein for performing image and/or video coding (e.g., low latency encoding and decoding) using one or more transformer neural networks. The transformer neural networks can include transformer blocks and/or transformer layers that are organized according to, for example, the hyperprior architecture of FIG. 4 and/or the scale-space flow (SSF) architecture of FIG. 6B described below. For example, the four convolutional networks g_a, g_s, h_a, and h_s that are depicted in FIG. 4 can instead be provided as a corresponding four transformer neural networks, as will be explained in greater depth below.
[0113] In some examples, one or more transformer-based neural networks described herein can be trained using a loss function that is based at least in part on rate distortion. Distortion may be determined as the mean square error (MSE) between an original image (e.g., an image that would be provided as input to an encoder sub-network) and a decompressed/decoded image (e.g., the image that is reconstructed by a decoder sub-network). In some examples, a loss function used in training a transformer-based media coding neural network can be based on a trade-off between distortion and rate with a Lagrange multiplier. One example of such a rate-distortion loss function is L = D + β·R, where D represents distortion, R represents rate, and different β values represent models trained for different bitrates and/or peak signal-to-noise ratios (PSNR).
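A minimal sketch of how that loss might be computed, assuming MSE distortion and a bits-per-pixel rate term (the function name and the rate estimate are my own illustration):

```python
import numpy as np

def rate_distortion_loss(original, reconstructed, bits_used, beta):
    """L = D + beta * R from the patent's loss: D is the MSE between
    the original and the decoded image, R the bitrate; beta trades one
    against the other (larger beta -> lower-bitrate, blurrier models)."""
    d = np.mean((original - reconstructed) ** 2)  # distortion (MSE)
    r = bits_used / original.size                 # rate in bits per pixel
    return d + beta * r

orig = np.random.rand(64, 64)
recon = orig + np.random.randn(64, 64) * 0.01     # stand-in decoder output
print(rate_distortion_loss(orig, recon, bits_used=4096, beta=0.05))
```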
[0115] … a backpropagation training process can be used to adjust weights (and in some cases other parameters, such as biases) of the nodes of the neural network (e.g., an encoder and/or decoder sub-network, such as those depicted in FIGS. 5A and 5B, respectively). Backpropagation includes a forward pass, a loss function, a backward pass, and a weight update. In some examples, the loss function can include the rate-distortion-based loss function described above. The forward pass, loss function, backward pass, and parameter update can be performed for one training iteration. The process is repeated for a certain number of iterations for each set of training data until the parameters of the encoder or decoder sub-network are accurately tuned.
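The forward / loss / backward / update cycle in [0115] is ordinary gradient descent. A self-contained toy version on a linear encoder-decoder pair, using only the distortion (MSE) part of the loss (illustrative, not the patent's networks):

```python
import numpy as np

# One hundred iterations of the forward / loss / backward / update cycle
# on a toy linear encoder-decoder (8 -> 4 latent -> 8 reconstruction).
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 8))          # batch of "images"
enc = rng.standard_normal((8, 4)) * 0.1   # encoder weights
dec = rng.standard_normal((4, 8)) * 0.1   # decoder weights
lr = 1e-2

for step in range(100):
    z = x @ enc                   # forward pass: encode
    x_hat = z @ dec               # forward pass: decode
    err = x_hat - x
    loss = np.mean(err ** 2)      # distortion term of the loss
    # backward pass: gradients of the MSE w.r.t. each weight matrix
    g_out = 2 * err / err.size
    g_dec = z.T @ g_out
    g_enc = x.T @ (g_out @ dec.T)
    dec -= lr * g_dec             # weight update
    enc -= lr * g_enc
print(f"loss after 100 iterations: {loss:.4f}")
```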
So, when Akida 2 with ViT and TeNNs hits the streets, Qualcomm may need to review their isolationist policy.
At the 'Understanding BRN role in the AI revolution' interview in April 2023, around the 11:20 minute mark, Sean talked about the different types of partners. He said that if customers want a Prophesee camera, they want to know BrainChip works well with it. This was said in relation to technical partners. He also said that if you want to buy a processor from ARM or SiFive, you want to know BrainChip works well with them.
He also explained in simple terms what enablement partners means: BRN enables workloads, performance, power etc. for clients who sell an end product. Worth another listen.
Back to Qualcomm: near term, no, but medium/long term there's a good chance via Prophesee.
A lot of holders/posters do not fully grasp how long the lead times are to get AKIDA into products that are released to the market.