QUALCOMM / BRN

M_C

Founding Member
Hey all,

New Qualcomm thread.............think we're going to need it. To kick it off, I came across this interesting article linking Qualcomm to a few car manufacturers we have been linking to BRN, including Toyota, Honda and Mercedes...............🤩

Nothing definitive here but given everything we have on Qualcomm to date, I'd say there is a favourable connection here..............after all "we see these companies as partners not competitors...." Right?

"The key is that Qualcomm is not going it alone, announcing partnerships with companies including Honda, Volvo and Renault, noting that it is also working with most major automakers, including Mercedes-Benz, BMW, Toyota and Chevrolet."


This article also talks about Qualcomm's recent partnership with Ferrari.

 
  • Like
  • Fire
Reactions: 54 users
D

Deleted member 118

Guest
 
  • Like
  • Fire
Reactions: 7 users
  • Like
  • Fire
  • Thinking
Reactions: 10 users

M_C

Founding Member
Interesting announcement here from Qualcomm on their collab with BMW.................

Nakul Duggal is the Senior VP and GM of Automotive at Qualcomm. I find it interesting that he decided to specify a software collaboration, and chose not to mention Qualcomm hardware (even though clearly Qualcomm hardware is being used with BMW). Why? Because in essence it isn't really Qualcomm technology??? Very strange............and yet interesting

May need to translate link -


 
  • Like
  • Fire
Reactions: 10 users

Slade

Top 20
Everything in the linked article sounds like a road to Akida. Why would Qualcomm burn huge amounts of money, time and effort when they can purchase BrainChip's IP? With BrainChip's patents in place, could they even find a way of producing the edge solution that they so desperately desire? Would they want to wait that long? The plot thickens.


Some parts taken from the linked article:

On-device training capability would enhance the user experience while protecting privacy.

We’ve all had those frustrating experiences with our mobile phones when the voice assistant seems to possess artificial stupidity instead of artificial intelligence. The real problem here is that these assistants require cloud connectivity (increasing latency) and do not continuously learn from interactions with you over time. Qualcomm Technologies AI Research is exploring how to change that through on-device learning for mobile and edge devices, enabling personalization while preserving privacy.

Few-shot Learning

Learning from limited labeled data will be crucial to achieving the required accuracy when training on edge devices. The algorithmic approaches for efficient on-device learning can hinge on adapting models to deal with fewer labeled data samples, such as in few-shot learning or using unlabeled data in unsupervised learning in certain data domains. One can also target models with user-labeled data with limited sample sizes. Getting users to help label samples can also improve accuracy by using cleaner data than is usually available from the environment. An excellent example of few-shot learning is improving keyword spotting (KWS). By adapting the model using data collected from user enrollment, one can personalize the model and significantly improve results.
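As a toy illustration of the enrollment idea in that last paragraph, here is a prototypical-network-style sketch: average the embeddings of a few user-enrolled utterances per keyword, then classify new audio by nearest prototype. Everything here (the embedding projection, the feature shapes, the keyword names) is invented for illustration; it is not Qualcomm's or BrainChip's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(audio_features):
    # Stand-in for a pretrained KWS embedding network: here just a fixed
    # projection of 16-dim input features down to 8 dims.
    W = np.linspace(-1.0, 1.0, 16 * 8).reshape(16, 8)
    return audio_features @ W

def build_prototypes(enrollment):
    # One prototype per keyword: mean embedding of the few user samples.
    return {kw: embed(np.stack(samples)).mean(axis=0)
            for kw, samples in enrollment.items()}

def classify(prototypes, audio_features):
    # Nearest-prototype decision.
    emb = embed(audio_features)
    return min(prototypes, key=lambda kw: np.linalg.norm(emb - prototypes[kw]))

# Hypothetical enrollment: 3 utterances per keyword, 16-dim features each.
enrollment = {
    "hey_phone": [rng.normal(1.0, 0.1, 16) for _ in range(3)],
    "stop":      [rng.normal(-1.0, 0.1, 16) for _ in range(3)],
}
protos = build_prototypes(enrollment)
print(classify(protos, rng.normal(1.0, 0.1, 16)))  # expected: "hey_phone"
```

The point of the sketch is that "adapting the model using data collected from user enrollment" can be as cheap as averaging a handful of embeddings, which is why it is plausible on-device.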

Conclusions

On-device learning on a mobile or edge device could allow developers to clear hurdles preventing some AI applications from achieving their full potential. But training on battery-operated power-efficient devices is tough to accomplish. Qualcomm AI Research believes it has meaningful solutions in flight, using not one silver bullet but a range of techniques that can be applied in isolation or harmony. This holistic approach is already showing promise in the lab and could revolutionize edge AI applications in the not-so-distant future.
 
  • Like
  • Fire
  • Love
Reactions: 26 users
D

Deleted member 118

Guest
 
  • Like
  • Love
  • Fire
Reactions: 8 users

ndefries

Regular

This is a must watch on the idea of mobile connectivity. The CEO states 'we are going to be the company that will be connecting all those devices at the edge', and describes a desire to build a digital chassis for car companies to build on.

They are still very focused on sending everything back to the cloud. They want to connect everything to everything so AI can access data from every relevant point... other cars, telegraph poles, bikes, etc. That is going to be a significant amount of data.

You get a sense that a whole new way of living and getting around is not far away. Every industry is going to need to invest in edge processing.
 
  • Like
  • Love
  • Fire
Reactions: 16 users
This is a must watch on the idea of mobile connectivity. The CEO states 'we are going to be the company that will be connecting all those devices at the edge', and describes a desire to build a digital chassis for car companies to build on.

They are still very focused on sending everything back to the cloud. They want to connect everything to everything so AI can access data from every relevant point... other cars, telegraph poles, bikes, etc. That is going to be a significant amount of data.

You get a sense that a whole new way of living and getting around is not far away. Every industry is going to need to invest in edge processing.
I would think that would defeat the purpose. We develop EVs to help save the planet, then try to connect everything to data centres and burn much more coal. There goes the planet, IMO. Just put Akida in everything and it won't need to be that way.

SC
 
  • Like
Reactions: 12 users
D

Deleted member 118

Guest
There are 3 videos on the page to watch; I'm unable to link them individually.

 
  • Like
Reactions: 2 users

Perhaps

Regular
  • Like
Reactions: 5 users

M_C

Founding Member
@Diogenese........................... integer?



Today, some AI processing has to be offloaded from the mobile device to a cloud service. However, faster response times, improved privacy, better personalization, and improved understanding within the context of the request will require on-device learning.

 
  • Like
  • Fire
Reactions: 10 users

Interesting technology that will need the type of AI on smartphones that Qualcomm is touting.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 6 users

Diogenese

Top 20
@Diogenese........................... integer?



Today, some AI processing has to be offloaded from the mobile device to a cloud service. However, faster response times, improved privacy, better personalization, and improved understanding within the context of the request will require on-device learning.

Hi FU_B,

Back in 2016, Qualcomm thought that training NNs was a software thing:

An apparatus for training a neural network with back propagation, comprising:
a memory; and
at least one processor coupled to the memory, the at least one processor configured:

1) to generate error events representing a gradient of a cost function for the neural network based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal; and

2) to update the weights of the neural network based on the error events.
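Read plainly, the claim is ordinary gradient-descent back propagation. A minimal numerical sketch of that reading, with a single linear layer standing in for the network (the data, target mapping and learning rate are invented for the example):

```python
import numpy as np

# Minimal reading of the claim: "error events" are the gradient of a squared-
# error cost computed from a forward pass on "input events" x against a
# "target signal" t; the "weights" W are then updated from those error events.

rng = np.random.default_rng(1)
x = rng.normal(size=(50, 3))                  # input events
t = x @ np.array([[1.0], [2.0], [-1.0]])      # target signal (known mapping)
W = np.zeros((3, 1))                          # weights of the network

for _ in range(200):
    y = x @ W                                 # forward pass through the network
    grad = x.T @ (y - t) / len(x)             # 1) error events: cost gradient
    W -= 0.5 * grad                           # 2) update weights from error events

print(np.round(W.ravel(), 2))                 # → [ 1.  2. -1.]
```

Nothing in that loop needs dedicated hardware, which is presumably why the apparatus is claimed as a processor coupled to memory rather than as a neuromorphic circuit.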


[0060] FIG. 5 is a block diagram illustrating the run-time operation 500 of an AI application on a smartphone 502 . The AI application may include a pre-process module 504 that may be configured (using for example, the JAVA programming language) to convert the format of an image 506 and then crop and/or resize the image 508 . The pre-processed image may then be communicated to a classify application 510 that contains a SceneDetect Backend Engine 512 that may be configured (using for example, the C programming language) to detect and classify scenes based on visual input. The SceneDetect Backend Engine 512 may be configured to further preprocess 514 the image by scaling 516 and cropping 518 . For example, the image may be scaled and cropped so that the resulting image is 224 pixels by 224 pixels. These dimensions may map to the input dimensions of a neural network. The neural network may be configured by a deep neural network block 520 to cause various processing blocks of the SOC 100 to further process the image pixels with a deep neural network. The results of the deep neural network may then be thresholded 522 and passed through an exponential smoothing block 524 in the classify application 510 . The smoothed results may then cause a change of the settings and/or the display of the smartphone 502 .
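Paragraph [0060] describes a fixed pipeline: pre-process, scale/crop to 224x224, run the DNN, threshold, then exponentially smooth. A Python sketch of that flow, with a stub standing in for the deep neural network block 520 (the scene labels and scores are invented):

```python
import numpy as np

def scale_and_crop(img, size=224):
    # Nearest-neighbour scale of the short side to `size`, then centre-crop,
    # matching the 224x224 input dimensions described in [0060].
    h, w = img.shape[:2]
    s = size / min(h, w)
    ys = (np.arange(int(h * s)) / s).astype(int)
    xs = (np.arange(int(w * s)) / s).astype(int)
    img = img[ys][:, xs]
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def dnn_classify(img):
    # Stand-in for deep neural network block 520: per-scene scores.
    return {"indoor": 0.7, "outdoor": 0.2, "text": 0.1}

def smooth(prev, scores, alpha=0.3):
    # Exponential smoothing block 524 over successive frames.
    return {k: alpha * v + (1 - alpha) * prev.get(k, v) for k, v in scores.items()}

frame = np.zeros((480, 640, 3), dtype=np.uint8)
x = scale_and_crop(frame)                              # preprocess 514/516/518
scores = dnn_classify(x)                               # DNN block 520
scores = {k: v for k, v in scores.items() if v > 0.5}  # threshold 522
state = smooth({}, scores)
print(x.shape, state)   # shape (224, 224, 3), smoothed scores above threshold
```

Note that the neural network is just one callable in an otherwise hand-written software pipeline, which is Diogenese's point below.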

[Patent Fig. 5 block diagram attachment]


The SceneDetectEngine 512 is a software element including DNN 520.

In fact, looking at Fig 4, they weren't even sure NPUs were a thing at all:

[Patent Fig. 4 attachment]

I couldn't find anything to suggest that Qualcomm knew anything about SNNs apart from what they had read. So it's not impossible that they have seen the light ...
 
  • Like
  • Fire
  • Wow
Reactions: 10 users

M_C

Founding Member
Hi FU_B,

Back in 2016, Qualcomm thought that training NNs was a software thing:

An apparatus for training a neural network with back propagation, comprising:
a memory; and
at least one processor coupled to the memory, the at least one processor configured:

1) to generate error events representing a gradient of a cost function for the neural network based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal; and

2) to update the weights of the neural network based on the error events.


[0060] FIG. 5 is a block diagram illustrating the run-time operation 500 of an AI application on a smartphone 502 . The AI application may include a pre-process module 504 that may be configured (using for example, the JAVA programming language) to convert the format of an image 506 and then crop and/or resize the image 508 . The pre-processed image may then be communicated to a classify application 510 that contains a SceneDetect Backend Engine 512 that may be configured (using for example, the C programming language) to detect and classify scenes based on visual input. The SceneDetect Backend Engine 512 may be configured to further preprocess 514 the image by scaling 516 and cropping 518 . For example, the image may be scaled and cropped so that the resulting image is 224 pixels by 224 pixels. These dimensions may map to the input dimensions of a neural network. The neural network may be configured by a deep neural network block 520 to cause various processing blocks of the SOC 100 to further process the image pixels with a deep neural network. The results of the deep neural network may then be thresholded 522 and passed through an exponential smoothing block 524 in the classify application 510 . The smoothed results may then cause a change of the settings and/or the display of the smartphone 502 .


The SceneDetectEngine 512 is a software element including DNN 520.

In fact, looking at Fig 4, they weren't even sure NPUs were a thing at all:

I couldn't find anything to suggest that Qualcomm knew anything about SNNs apart from what they had read. So it's not impossible that they have seen the light ...
Well if the great Mr. BS can't rule it out then THAT'S GOOD ENOUGH FOR ME !!! 😁 Thanks for the input @Diogenese. What do they mean by integer math?
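For reference, "integer math" in this context most likely means quantized inference: float weights and activations are mapped to small integers, the multiply-accumulates run in integer hardware, and a single rescale recovers an approximate float result. A toy symmetric int8 sketch (illustrative only, not anyone's actual implementation):

```python
import numpy as np

def quantize(x):
    # Symmetric int8 quantization: map the float range onto [-127, 127].
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

w = np.array([0.5, -1.0, 0.25])   # toy weights
a = np.array([2.0, 1.0, -4.0])    # toy activations

wq, w_scale = quantize(w)
aq, a_scale = quantize(a)

# Integer multiply-accumulate (int32 accumulator), then one float rescale.
acc = np.dot(wq.astype(np.int32), aq.astype(np.int32))
approx = acc * w_scale * a_scale

print(float(np.dot(w, a)), float(approx))  # exact vs close int8 approximation
```

The appeal at the edge is that integer MACs are far cheaper in silicon area and power than floating-point ones, at the cost of a small approximation error.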
 
  • Like
  • Haha
  • Fire
Reactions: 11 users
D

Deleted member 118

Guest
Qualcomm and the metaverse

 
  • Like
Reactions: 5 users

Diogenese

Top 20
Well if the great Mr.BS can't rule it out then THAT"S GOOD ENOUGH FOR ME !!! 😁 Thanks for the input @Diogenese what do they mean by integer math?
Just did a quick skim through Qualcomm's neuromorphic patents:
https://worldwide.espacenet.com/pat... AND nftxt = "neuromorphic" AND nftxt = "NPU"

There's no doubt that they could benefit from adopting Akida, but their recycling bins would be full of their old inventions.

Here's a couple of examples:

US2020104690A1 NEURAL PROCESSING UNIT (NPU) DIRECT MEMORY ACCESS (NDMA) HARDWARE PRE-PROCESSING AND POST-PROCESSING

[0072] FIG. 7 is a block diagram illustrating an NPU 700 , including an NPU DMA (NDMA) core 710 and interfaces configured to provide hardware pre-processing and post-processing of NDMA data, according to aspects of the present disclosure. The NDMA core 710 includes a read engine 720 configured to provide a first memory interface to a read client (RCLT) and a write engine 730 configured to provide a second memory interface to a write client (WCLT). The memory interfaces to the client side (e.g., RCLT, WCLT) are memory read/write interfaces using a request/valid handshake. In aspects of the present disclosure, the read client RCLT and the write client WCLT may refer to an array to compute elements of the NPU 700 , which may support, for example, 16-NDMA read channels and 16-NDMA write channels for the various compute units of the NPU 700 .

[US2020104690A1 Fig. 7 attachment]



US2020073636A1 MULTIPLY-ACCUMULATE (MAC) OPERATIONS FOR CONVOLUTIONAL NEURAL NETWORKS

[US2020073636A1 figure attachment]


An integrated circuit device, comprising:
a lookup table (LUT) configured to store a plurality of values; and
a compute unit, comprising:
an accumulator,
a first multiplier configured to receive a first value of a padded input feature and a first weight of a filter kernel, and
a first selector configured to select an input to supply to the accumulator between an output from the first multiplier and an output from the LUT.

2. The integrated circuit device of claim 1, in which the plurality of values comprise precomputed products of a plurality of input features.

3. The integrated circuit device of claim 1, further comprising:
an off-cell selector configured to select an input to supply to the LUT between an external source or the first multiplier, in which the off-cell selector further comprises a select line configured to select the input to supply to the LUT based on a padding type of the padded input feature.
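A behavioural sketch of what claim 1 seems to describe: a MAC cell whose selector feeds the accumulator either a fresh multiplier product or a precomputed product held in the LUT. The class and method names are invented; this is one reading of the claim, not the patent's design.

```python
# The selector supplies the accumulator with either the first multiplier's
# output or a precomputed product from the lookup table, e.g. when the same
# (padded input feature, weight) pair recurs.

class LutMacCell:
    def __init__(self):
        self.lut = {}          # precomputed products of input features
        self.acc = 0           # accumulator

    def step(self, feature, weight, use_lut=False):
        if use_lut:
            value = self.lut[(feature, weight)]   # output from the LUT
        else:
            value = feature * weight              # first multiplier output
            self.lut[(feature, weight)] = value   # cache product for reuse
        self.acc += value                         # selector feeds accumulator

cell = LutMacCell()
cell.step(3, 2)                 # multiply path: product 6
cell.step(3, 2, use_lut=True)   # LUT path reuses the precomputed product
cell.step(0, 5)                 # padded zero input contributes nothing
print(cell.acc)                 # 12
```

The design choice being claimed is essentially trading a table lookup for a multiply when the product is already known, which saves multiplier energy on repetitive padded inputs.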

Sounds messy (Mary Shelley again?)
WO2020069239A1 EXPLOITING ACTIVATION SPARSITY IN DEEP NEURAL NETWORKS

A method of exploiting activation sparsity in deep neural networks, comprising:
retrieving an activation tensor and a weight tensor where the activation tensor is a sparse activation tensor;
generating a compressed activation tensor comprising non-zero activations of the activation tensor, where the compressed activation tensor has fewer columns than the activation tensor; and
processing the compressed activation tensor and the weight tensor to generate an output tensor.

2. The method of claim 1, in which generating the compressed activation tensor comprises:
painting the non-zero activations of the activation tensor into a memory buffer; and
redistributing the non-zero activations within the memory buffer to a location in the memory buffer mapped to an empty vector lane of a multiply-accumulate (MAC) hardware during a clock cycle.
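A sketch of what claim 1 appears to cover: drop the all-zero columns of a sparse activation tensor, keep only the matching weight rows, and multiply-accumulate the rest, which yields the same output tensor with fewer operations. Toy data and an invented function name:

```python
import numpy as np

def sparse_matmul(activations, weights):
    # Columns holding at least one non-zero activation survive compression.
    nz = np.any(activations != 0, axis=0)
    compressed = activations[:, nz]            # compressed activation tensor
    # Multiply the compressed tensor by the matching weight rows only.
    return compressed @ weights[nz], int(nz.sum())

act = np.array([[1., 0., 0., 2.],
                [3., 0., 0., 4.]])             # columns 1 and 2 are all zero
wts = np.arange(8.0).reshape(4, 2)

out, cols = sparse_matmul(act, wts)
print(cols)                                    # 2 of 4 columns kept
print(np.array_equal(out, act @ wts))          # True: same output tensor
```

The MAC array then only cycles over the kept columns, which is where the claimed speed/energy win comes from on sparse (e.g. ReLU) activations.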
 
  • Like
  • Haha
  • Fire
Reactions: 12 users
Just did a quick skim through Qualcomm's neuromorphic patents:
https://worldwide.espacenet.com/patent/search/family/054006939/publication/US10140573B2?q=pa = "qualcomm" AND nftxt = "neuromorphic" AND nftxt = "NPU"

There's no doubt that they could benefit from adopting Akida, but their recycling bins would be full of their old inventions.

Here's a couple of examples:

US2020104690A1 NEURAL PROCESSING UNIT (NPU) DIRECT MEMORY ACCESS (NDMA) HARDWARE PRE-PROCESSING AND POST-PROCESSING

[0072] FIG. 7 is a block diagram illustrating an NPU 700 , including an NPU DMA (NDMA) core 710 and interfaces configured to provide hardware pre-processing and post-processing of NDMA data, according to aspects of the present disclosure. The NDMA core 710 includes a read engine 720 configured to provide a first memory interface to a read client (RCLT) and a write engine 730 configured to provide a second memory interface to a write client (WCLT). The memory interfaces to the client side (e.g., RCLT, WCLT) are memory read/write interfaces using a request/valid handshake. In aspects of the present disclosure, the read client RCLT and the write client WCLT may refer to an array to compute elements of the NPU 700 , which may support, for example, 16-NDMA read channels and 16-NDMA write channels for the various compute units of the NPU 700 .



US2020073636A1 MULTIPLY-ACCUMULATE (MAC) OPERATIONS FOR CONVOLUTIONAL NEURAL NETWORKS


An integrated circuit device, comprising:
a lookup table (LUT) configured to store a plurality of values; and
a compute unit, comprising:
an accumulator,
a first multiplier configured to receive a first value of a padded input feature and a first weight of a filter kernel, and
a first selector configured to select an input to supply to the accumulator between an output from the first multiplier and an output from the LUT.

2. The integrated circuit device of claim 1, in which the plurality of values comprise precomputed products of a plurality of input features.

3. The integrated circuit device of claim 1, further comprising:
an off-cell selector configured to select an input to supply to the LUT between an external source or the first multiplier, in which the off-cell selector further comprises a select line configured to select the input to supply to the LUT based on a padding type of the padded input feature.

Sounds messy (Mary Shelley again?)
WO2020069239A1 EXPLOITING ACTIVATION SPARSITY IN DEEP NEURAL NETWORKS

A method of exploiting activation sparsity in deep neural networks, comprising:
retrieving an activation tensor and a weight tensor where the activation tensor is a sparse activation tensor;
generating a compressed activation tensor comprising non-zero activations of the activation tensor, where the compressed activation tensor has fewer columns than the activation tensor; and
processing the compressed activation tensor and the weight tensor to generate an output tensor.

2. The method of claim 1, in which generating the compressed activation tensor comprises:
painting the non-zero activations of the activation tensor into a memory buffer; and
redistributing the non-zero activations within the memory buffer to a location in the memory buffer mapped to an empty vector lane of a multiply-accumulate (MAC) hardware during a clock cycle.
Maybe we can make a BIN to SNN converter.

SC
 
  • Haha
  • Like
Reactions: 12 users

M_C

Founding Member

“We have diversified beyond the smartphone, and IoT is actually a huge part of that whole diversification story,” said Siddhartha Franco, director, business development, Qualcomm Technologies. “There’s a big shift that's happening today in the security industry, and that's a shift from the traditional model of VMS, which stands for video management software, to video surveillance-as-a-service (VSaaS).”

That evolution is enabled in part by “plug-and-play” cameras that allow users to greatly and quickly expand smart camera networks, sometimes with multi-camera devices that have additional sensors collecting their own data. There are now about 1 billion surveillance cameras operating worldwide, Qualcomm said. With all of the data capable of being collected by increasingly large smart camera networks, “we see camera architectures evolving to being able to do more multi-sensing sensor fusion on the edge,” Franco said.

The new SoC makes it easier to set up and manage these increasingly large and complex networks, representing a migration away from having to set up smart camera networks by deploying devices and software one site at a time. Franco said the new processor fits with Qualcomm’s concept of using an “AI box,” an edge appliance that processes data from cameras.

“This AI box will take the streams from the cameras, decode them, understand them, and run analytics on each of these,” he said. “These boxes are capable of handling multiple cameras together. We are helping businesses to quickly transition from the VMS to the VSaaS models.”
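The "AI box" pattern Franco describes (take the streams, decode them, run analytics on each) can be sketched in a few lines. The camera sources, decoder and analytics below are trivial stand-ins, not Qualcomm APIs:

```python
# One edge appliance pulls frames from several cameras, decodes each frame,
# and runs per-stream analytics, collecting alerts centrally.

def decode(raw):
    return raw.upper()                 # stand-in for video decoding

def analyze(frame):
    return "person" in frame.lower()   # stand-in for per-stream analytics

class AIBox:
    def __init__(self, cameras):
        self.cameras = cameras         # camera_id -> iterable of raw frames

    def run(self):
        alerts = []
        for cam_id, stream in self.cameras.items():
            for raw in stream:         # take the stream from the camera
                frame = decode(raw)    # decode it
                if analyze(frame):     # run analytics on it
                    alerts.append((cam_id, frame))
        return alerts

box = AIBox({"lobby": ["person at door", "empty hall"],
             "dock":  ["truck arriving"]})
print(box.run())                       # [('lobby', 'PERSON AT DOOR')]
```

The VSaaS shift described above is essentially moving this loop (and its management) off per-site servers and behind a service interface.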
 
  • Like
  • Fire
Reactions: 14 users

Perhaps

Regular
So Qualcomm has finally arrived. I still believe the Qualcomm/SiFive/BrainChip connection is hot.


Nice take: in the interaction between driver and vehicle, the vehicle should be able to adjust automatically to the mood and wishes of the driver (and passengers) and, for example, play appropriate music.
 
  • Like
  • Fire
Reactions: 6 users