BRN Discussion Ongoing

KiKi

Regular
I asked ChatGPT the question below with slightly wrong grammar and got the first answer. Then I asked the same question correctly, with an S at the end of "partner", and got a longer and noticeably different reply. It seems a plural S makes a great difference:


ChatGPT: list all brainchip partner

BrainChip Holdings Ltd has partnerships with a number of companies and organizations across different industries. Here are some of the notable partners of BrainChip:
  1. Vorago Technologies: Semiconductor solutions for extreme environments
  2. Socionext: ASIC solutions for AI applications
  3. Tata Consultancy Services: AI-powered business solutions
  4. Magik Eye: 3D sensing technology for AR/VR applications
  5. VORAGO Technologies: Semiconductor solutions for extreme environments
  6. Koniku: Neurocomputation technology for the security industry
  7. Linxens: Biometric authentication technology for smart cards and wearables
  8. NXP Semiconductors: Semiconductor solutions for automotive, industrial and IoT applications
  9. Hagiwara Solutions: IoT solutions and industrial computing
  10. Novo Technologies: AI-based video analytics solutions for public safety and security.
This is not an exhaustive list of all the partners of BrainChip, as the company may have partnerships with other organizations as well.
________________________________________________

ChatGPT: list all brainchip partners

Sure, here is a list of BrainChip's partners:
  1. Socionext: ASIC solutions for AI applications
  2. Magik Eye: 3D sensing technology for AR/VR applications
  3. Koniku: Neurocomputation technology for the security industry
  4. VORAGO Technologies: Semiconductor solutions for extreme environments
  5. Tata Consultancy Services: AI-powered business solutions
  6. Hagiwara Solutions: IoT solutions and industrial computing
  7. NXP Semiconductors: Semiconductor solutions for automotive, industrial and IoT applications
  8. Linxens: Biometric authentication technology for smart cards and wearables
  9. Novo Technologies: AI-based video analytics solutions for public safety and security
  10. Wiliot: IoT semiconductor and software solutions
  11. Ayar Labs: Photonic interconnect solutions for high-performance computing and AI applications
  12. Aeolus Robotics: Robotics solutions for the service industry
  13. Quantum Silicones: High-performance silicone products for the electronics industry
  14. Infineon Technologies: Semiconductor solutions for automotive, industrial and security applications
Please note that this list may not be exhaustive and may change over time, as BrainChip continues to form new partnerships with other companies and organizations.
_________________________________________

Maybe somebody finds something interesting in these replies and wants to dig deeper?
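If anyone wants a starting point for digging, here's a quick diff of the two replies (a small Python sketch; the partner names are transcribed from the answers above):

```python
# Partner names transcribed from the two ChatGPT replies above.
reply_singular = {
    "VORAGO Technologies", "Socionext", "Tata Consultancy Services",
    "Magik Eye", "Koniku", "Linxens", "NXP Semiconductors",
    "Hagiwara Solutions", "Novo Technologies",
}
reply_plural = reply_singular | {
    "Wiliot", "Ayar Labs", "Aeolus Robotics",
    "Quantum Silicones", "Infineon Technologies",
}

# Names only the plural prompt produced; note the singular reply also
# listed Vorago twice (items 1 and 5), which a set collapses to one entry.
extra = sorted(reply_plural - reply_singular)
print(extra)
# ['Aeolus Robotics', 'Ayar Labs', 'Infineon Technologies', 'Quantum Silicones', 'Wiliot']
```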
 
  • Like
  • Thinking
Reactions: 5 users

Straw

Guest
I missed out on this one -> Vorago Technologies! NASA was clear, but Vorago?
Thank you, ChatGPT!
Strange, though, that this is all the news I get to see.


ChatGPT:
Please show me news from brainchip

BrainChip Holdings Ltd is a company specializing in the development of artificial intelligence and neural network technology. Here are some of the latest BrainChip news:

In February 2022, the company announced that it had partnered with technology company Vorago Technologies to integrate its AI technology into Vorago's semiconductor products. This is intended to help improve the performance of Vorago's products while reducing power consumption.

In September 2021, BrainChip announced that it had completed its acquisition of VORAGO Technologies. VORAGO is a leading provider of semiconductor solutions for extremely harsh environments such as those found in aerospace, military and industry. The integration of VORAGO's semiconductor technology with BrainChip's AI technology aims to increase the performance of AI applications.

In July 2021, BrainChip announced that its Akida™ Neuromorphic System-on-Chip platform has successfully passed a specific test for use in image processing. The test was carried out by an independent test laboratory and confirmed the Akida platform's ability to recognize objects and analyze images.

In March 2021, the company announced that it had signed an agreement with Valeo AI, a subsidiary of French auto parts maker Valeo. The agreement provides for BrainChip's AI technology to be integrated into Valeo's vehicle systems to improve vehicle safety and efficiency.

This news shows that BrainChip continues to strive to integrate its AI technology into various industries and applications to improve the performance and efficiency of systems.

What does Vorago Technologies do?

VORAGO Technologies is a company specializing in the design and manufacture of semiconductor solutions, especially for extremely harsh environments such as aerospace, defense, industrial and medical.

VORAGO's products include microprocessor-based solutions based on Silicon-On-Insulator (SOI) technology, which are more resilient in harsh environments than traditional semiconductors. These solutions are used in aerospace to increase the reliability of spacecraft and satellites. In the defense industry, they are used for applications such as navigation, communications and surveillance. In industry, they are used for applications such as industrial automation, robotics and monitoring of manufacturing processes.

VORAGO also has a partnership with NASA to develop robust semiconductor solutions for spacecraft and satellites, as well as with other companies and institutions in the aerospace and defense industries. The company is headquartered in Austin, Texas and has a global network of sales and support partners.

What are Vorago technologies doing today?

As an AI language model, I don't have access to real-time data on Vorago Technologies' current activities. However, according to the company's website, Vorago is still focused on providing reliable semiconductor solutions for extreme environments in industries such as aerospace, defense, industrial, and medical.
The company continues to offer microprocessor-based solutions that are based on silicon-on-insulator (SOI) technology, which makes them more resilient than conventional semiconductors. Vorago's products are designed to operate in harsh conditions of high temperatures, radiation, and other environmental stresses, and have applications in spacecraft, satellites, navigation, communication, and industrial automation.
Vorago Technologies also maintains partnerships with various companies and institutions in the aerospace and defense industries, as well as the medical field. The company is committed to advancing the development of innovative and reliable semiconductor solutions for extreme environments.
Pretty sure Vorago was not acquired by BrainChip. Vorago is an Early Access Program participant/partner.
ChatGPT needs some logic lessons. Besides being, to my knowledge, totally wrong: why would the company buy Vorago in 2021 and then partner with it in 2022?
Maybe we could ask ChatGPT if ChatGPT is any good and give it esteem issues lol
(still far more likely to become humble than 95% of our leaders)
 
Last edited by a moderator:
  • Like
Reactions: 17 users
Pretty sure Vorago was not acquired by BrainChip. Vorago is an Early Access Program participant/partner.
ChatGPT needs some logic lessons. Besides being, to my knowledge, totally wrong: why would the company buy Vorago in 2021 and then partner with it in 2022?
Maybe we could ask ChatGPT if ChatGPT is any good and give it esteem issues lol
Definitely correct, we did not acquire Vorago.
ChatGPT is great on many fronts but also dismal on many others.
 
  • Like
Reactions: 11 users

Diogenese

Top 20
Pretty sure Vorago was not acquired by BrainChip. Vorago is an Early Access Program participant/partner.
ChatGPT needs some logic lessons. Besides being, to my knowledge, totally wrong: why would the company buy Vorago in 2021 and then partner with it in 2022?
Maybe we could ask ChatGPT if ChatGPT is any good and give it esteem issues lol
(still far more likely to become humble than 95% of our leaders)
42.
 
  • Haha
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I’ll probably get the left field post of the day with this one

Has anyone come across Open Robotics or TurtleBot3 before?

Could just be a student project?

View attachment 33549

View attachment 33551

View attachment 33552

From ChatGPT, so it needs further verification

View attachment 33553
View attachment 33554

View attachment 33555


He’s definitely got nice hair. I think we can all agree on that. I’m just not too sure what a turtlebot burger is. Are you supposed to order fries with it?
 
  • Haha
  • Like
Reactions: 6 users

The Pope

Regular
Wait! I found something!!!!

This was published online 2 hours ago.

You can't have a truly intelligent Connected Intelligent Edge without AKIDA, IMO, so I don't know what to think.


Qualcomm flaunts its ability in AI and Connected Intelligent Edge


By Dashveenjit Kaur | 3 April, 2023

  • A few months after Qualcomm shared how the company is driving the emergence of the Connected Intelligent Edge, the US chip designer shared further updates in a media briefing in Kuala Lumpur.
Back at Qualcomm Investor Day in 2021, President and CEO Cristiano Amon shared how the company is leading the realization of the “connected intelligent edge,” where the convergence of wireless connectivity, efficient computing, and distributed AI will power devices and experiences. “We are working to enable a world where everyone and everything is intelligently connected,” Amon said.
He meant that Qualcomm’s mobile heritage and DNA put the company in “an incredible position to provide high-performance, low-power computing, on-device intelligence, all wireless technologies, and leadership across AI processing and connectivity and camera graphics and sensors.” For Qualcomm, Amon sees this as an opportunity to engage a US$700 billion addressable market in the next decade.


Fast forward to today, Qualcomm, as part of its ongoing campaign, “The Edge of Possible,” highlighted the significance of driving edge computing innovations to transform industries, power new consumer experiences, and fuel the growth of the digital economy. In a media briefing held in Kuala Lumpur on March 29, Qualcomm shared its progress in the space.

Connected Intelligent Edge with Qualcomm

For starters, the Connected Intelligent Edge, according to Qualcomm, is the convergence of 5G networks, edge computing, and AI. “This integration allows data to be processed and analyzed in real time, allowing faster decision-making and response times. As a result, various industries, including manufacturing, transportation, and security, can use this technology,” said Alicia Lim, head of marketing for Qualcomm Southeast Asia.
In healthcare, for instance, Connected Intelligent Edge devices monitor patients in real time, tracking vital signs such as blood pressure and heart rate and sending the information to healthcare providers as it is collected. This allows medical professionals to identify issues and take appropriate action quickly.
In transportation, Connected Intelligent Edge devices are used to make vehicles safer and more efficient. “For example, sensors placed throughout a car can monitor its performance and provide real-time data to the driver, alerting them to any issues. The information can help prevent accidents and improve fuel efficiency, reducing costs and environmental impact,” Lim told the press during the briefing.

From a client’s perspective, Malaysia’s telco provider YTL Communications shared how Qualcomm’s Connected Intelligent Edge devices have helped the company maintain a seamless operation. “We are fortunate to have been given early access to a Connected Intelligent Edge device known as the AI Box by Qualcomm to be deployed in our building,” CEO Wing K Lee shared.
The AI Box is an intelligent edge device that allows traditional IP camera video streams to be offloaded and processed via the box, powered by a system-on-chip (SoC) designed for high-compute, high-concurrency machine-learning-based analytics across multiple camera streams.
“With the AI box, we have applied localized processing intelligence from an analytics standpoint. The AI Box approach is aligned with the industry’s shift from video management software (VMS) to Video software as a service (VsaaS) implementation, enabling us to optimize our operations,” Lee noted.
Separately, Lim also shared how Qualcomm and Microsoft collaborated on Snapdragon to enable these new AI-accelerated experiences for Windows 11 — without impacting performance and power efficiency. “Instead, Snapdragon intelligently offloads these computationally intensive tasks to a dedicated AI engine, which frees up the CPU and GPU resources,” she said.
To top it off, adding on-device AI processing to powerful CPU and GPU capabilities adds the ability to offload compute-intensive processes. “As a result, performance is dramatically increased, delivering a better user experience. Snapdragon compute platforms offer AI-accelerated user experiences to reach a new level of mobile computing performance,” Lim shared.

If BRN is not involved, then these are interesting developments in edge AI by Qualcomm. There appears to be no way of knowing whether Qualcomm has a form of edge AI comparable to BRN's without infringing on BRN's patents. Are they using Akida with Snapdragon now, or is an upgraded Akida linked to this announcement? I assume only the money trail via the 4Cs will give answers.

Then, if Qualcomm has some tech similar to BRN's, as they are saying, and it's not us, then we may be in some poo poo. But then Rob Telson likes a couple of posts on Twitter from companies linked to Qualcomm. Surely he isn't just saying kudos to direct competition unless BRN is involved in some way. Hmmm

Then, if it's kudos only, there is a concern, because Akida was meant to be three years ahead of its competitors. Surely they couldn't have underestimated the competition that badly.
The big jump in performance suggests they introduced something new. Hopefully it's Akida. 😀

The above might jump-start some healthy chat on Bravo's post, as it has many likes but no one (maybe one) appears to have provided posts supporting such an announcement from Qualcomm, assuming BRN is linked in some way.

FF, Dio, etc., do you have any comments to support BRN involvement in Qualcomm's recent news?
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Haha
  • Sad
Reactions: 11 users

stuart888

Regular
Micro tile inferencing... so this is how Qualcomm is reducing its power consumption?

"The company also announced a new technology called micro tile inferencing, which breaks a neural network into many small pieces so that it can be processed all at once, rather than layer by layer, and very efficiently, saving a great deal of power."
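As a toy illustration of what "processed all at once, rather than layer by layer" can mean (purely my own sketch, not Qualcomm's implementation; the layers and shapes are invented, and real convolutional layers would also need overlapping "halo" regions at tile borders):

```python
import numpy as np

def layer1(x):
    # Toy pointwise "layer": scale then ReLU.
    return np.maximum(x * 0.5, 0.0)

def layer2(x):
    # Second toy layer: offset then clip.
    return np.clip(x + 1.0, 0.0, 4.0)

x = np.arange(16, dtype=np.float32).reshape(4, 4) - 8.0

# Layer-by-layer: the whole intermediate activation map is materialised
# between layers.
ref = layer2(layer1(x))

# Tile-by-tile ("micro tile" style): each 2x2 tile runs through *all*
# layers before the next tile starts, so only a tiny activation buffer
# is live at any moment - that is where the memory/power saving comes from.
out = np.empty_like(x)
for i in range(0, 4, 2):
    for j in range(0, 4, 2):
        tile = x[i:i + 2, j:j + 2]
        out[i:i + 2, j:j + 2] = layer2(layer1(tile))

assert np.allclose(ref, out)  # both schedules compute the same result
```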

Unlocking unexpected new AI use cases with the Connected Intelligent Edge​


March 7, 2023 6:20 AM
Presented by Qualcomm Technologies, Inc.


To data scientists, the raw potential of AI and complex neural networks was clear from the start, or close to it. But it’s only in the past five years that device hardware has become sophisticated enough to make good on the full promise, and bring AI all the way to the edge. On-device AI is what makes AI a reality for consumers. And now devices of every size, even with lower battery capacity, are able to handle powerful, power-efficient on-device neural networks. It’s the evolution of computing from the cloud, taking inferencing right to the source.
“We’ve spent almost a decade of research on how to make AI work best on the edge,” says Ziad Asghar, senior vice president of product management, Qualcomm Technologies, Inc. “From that, we’ve developed hardware that’s able to do more inferencing for any given amount of power, and AI software stack (Qualcomm AI Stack) and tools to bring the Connected Intelligent Edge to life.”

Leveling up AI use cases and unlocking new ones

AI use cases have made their way to devices already — AI enhanced pictures and videos, AI-based voice assistants, better sound and voice quality, real-time language translation, and more are significantly improved with connectivity and data processing, while numerous brand-new use cases are just starting to make themselves known across camera, gaming, sensors and connectivity, on all devices at the edge.
On the consumer-facing side, use cases embrace everything from smartphones, XR, compute and earbuds to connected intelligent vehicles and smart homes. On the business side, they support digital transformation in the industrial and manufacturing space, connected healthcare and a leap ahead for the AI software tools and platforms companies need to stay competitive in a rapidly changing environment.
Asghar describes the Connected Intelligent Edge itself as a network with multiple nodes, or different products, within it — and many of the new possibilities lie in these device clouds. In a smart home, for example, that might include security cameras, the cars in the garage, appliances, PCs, mobile devices and tablets, all with some amount of AI processing capability.
Those security cameras might recognize a family member in order to open up the smart lock at the front door and activate environmental controls. But the Connected Intelligent Edge also disseminates AI across the whole network, so that use cases are handled with the best accuracy with the best power consumption. If there’s not enough processing power on one product, it can be handed up the line to a more powerful device.
For instance, a security camera might shift a possible false alarm to a unit that can handle anomalies and more complex incidents. The data never leaves the device or local network, so privacy is assured. And handling latency-sensitive use cases on the device means real-time results and a better consumer experience.
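The hand-off behaviour described here can be sketched roughly as follows (a hypothetical illustration of the routing idea; `EdgeDevice`, `route` and the TOPS figures are invented names and numbers, not Qualcomm's API):

```python
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    name: str
    tops: float  # rough on-device AI throughput budget (TOPS), invented

def route(workload_tops, devices):
    """Pick the least powerful device that can still run the workload
    locally, keeping heavier nodes free and data off the cloud."""
    capable = [d for d in devices if d.tops >= workload_tops]
    if not capable:
        raise RuntimeError("no local device can handle this workload")
    return min(capable, key=lambda d: d.tops)

# A hypothetical smart-home "device cloud".
home = [EdgeDevice("camera", 0.5), EdgeDevice("tablet", 4.0), EdgeDevice("pc", 20.0)]

print(route(0.2, home).name)  # camera: simple motion detection stays local
print(route(3.0, home).name)  # tablet: a possible false alarm is handed up the line
```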

Purpose-built AI hardware and developer tools

“From an AI developer perspective, they want a product that excels in terms of performance and in terms of power,” Asghar says. “Which means you want the best-in-class underlying hardware capability.”
That means more processing for any given amount of power. It also means the ability to write software quickly and get to a product quicker, because time to market is key. At the same time, developers need the flexibility of using AI frameworks they’re familiar with, and tools to optimize and improve performance even further. On the hardware side, Qualcomm offers the Hexagon processor with three accelerators (scalar, vector and tensor) that allow a developer to go from mapping directly to how a neural network is set up all the way to the fully connected layer.
At the most recent Snapdragon Summit, Qualcomm had several significant AI announcements, among them considerable hardware improvements, including an upgrade of Snapdragon 8 Gen 2 AI performance by a factor of up to 4.35x. It’s the first commercial design on the edge that can do integer 4-bit (INT4) AI inferencing, which means the same calculations take far fewer bits and expend significantly less energy while maintaining accuracy.
The company also announced a new technology called micro tile inferencing, which breaks a neural network into many small pieces so that it can be processed all at once, rather than layer by layer, and very efficiently, saving a great deal of power.
And just before MWC, the company announced that it was able to run Stable Diffusion, a text-to-image AI model, on a reference device powered by Snapdragon 8 Gen 2. Typically, generating images with Stable Diffusion requires vast amounts of computing power in the cloud, but thanks to Qualcomm Technologies’ AI research and the power of on-device AI on the Snapdragon 8 Gen 2, they were able not only to optimize, quantize and deploy these large models onto the device using the Qualcomm AI Stack, but to run them in a matter of seconds. This is a significant leap from current capabilities and will allow massive AI models to run locally on devices, which means better convenience, power savings, security and more.
But, Asghar says, having great hardware for AI isn’t enough. As the company continues to optimize its hardware, it has focused on leveling up software and tools at the same time. For instance, a loss in accuracy has historically been the primary challenge in reducing a network running at 32-bit floating point down to four bits. Now the Qualcomm AI Studio provides the tools necessary to maintain precision even at 4-bit integer (INT4), reducing power by a factor of 64 and exponentially increasing the number of neural networks that can be run in the same power envelope. This is critical for doing AI processing at the edge.
“The huge benefit to the Qualcomm AI Stack is enabling people to use Qualcomm technology easily and effectively without having to do a lot of setup work,” he explains. “Now I can take a mobile model to a security camera application without having to do new work. Why? Because it’s the same AI stack across all of our products. It’s really the notion of create once, and then take it anywhere.”
The Qualcomm AI stack supports popular AI frameworks and runtimes and offers developer libraries and services. The company has built SDKs for its product lines on top of this foundation — for example, Snapdragon Ride for automotive, Intelligent Multimedia SDK for IoT processing, Spaces (part of the Snapdragon Spaces XR Developer Platform for AR glasses) and more, including SDKs focused on specific verticals.
In Snapdragon Spaces, for example, Qualcomm has built AR-specific functions directly into the platform for hand and eye tracking for foveated rendering, 3D reconstruction of spaces, plane detection and more. A developer creating a new use case using AR or VR can pick up those routines and other pre-built pieces and build the final product on top, and get to a finished product faster.
The recently announced Qualcomm AI Studio brings together all the AI stack tools into a new GUI, along with visualization tools to simplify the developer experience — and provide the ability to see the complete model workflow from model design to optimization to deployment and profiling in action.
“If you’re doing anything at the edge, in a constrained power envelope or form factor, then really the best technology for you to bring it into production is based on Qualcomm,” he says. “With Qualcomm AI stack we want to make it as easy as possible for developers to be able to deploy their projects and get to market fast.”

Very interesting, Bravo. Lots of detail. Sounds like a mega-different approach: 4-bit AI inferencing, then packing or unpacking? Pretty deep.

It’s the first commercial design on the edge that can do integer 4-bit (INT4) AI inferencing, which means the same calculations take far fewer bits, and expend significantly less energy while maintaining accuracy. The company also announced a new technology called micro tile inferencing, which breaks a neural network into many small pieces so that it can be processed all at once, rather than layer by layer, and very efficiently, saving a great deal of power.

For instance, a loss in accuracy has historically been the primary challenge in reducing a network running at 32-bit floating point down to four bits. Now the Qualcomm AI Studio provides the tools necessary to maintain precision even at 4-bit integer (INT4), reducing power by a factor of 64 and exponentially increasing the number of neural networks that can be run in the same power envelope.

Seems to reinforce Brainchip's white paper, 4-bits are Enough!
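To make the 4-bit point concrete, here is a minimal sketch of symmetric INT4 weight quantization (my own illustration of the general technique under stated assumptions, not Qualcomm's or BrainChip's actual pipeline):

```python
import numpy as np

def quantize_int4(w):
    # Symmetric per-tensor quantization: INT4 has 16 levels, so the
    # signed range is [-8, 7]; map the largest |weight| to +/-7.
    scale = np.max(np.abs(w)) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # values fit in 4 bits
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.2, size=1000).astype(np.float32)  # toy "weights"

q, scale = quantize_int4(w)
w_hat = dequantize(q, scale)

# 4 bits instead of 32 is an 8x storage reduction per weight, and the
# worst-case reconstruction error is half a quantization step.
err = float(np.max(np.abs(w - w_hat)))
assert err <= scale / 2 + 1e-6
```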

 
  • Like
  • Fire
  • Wow
Reactions: 13 users

Diogenese

Top 20
A reasonably balanced (well, a curate's egg, really) article from MF about BRN.

Unfortunately, it appears that the analyst's research does not involve reading about the shares he is reviewing.

https://www.msn.com/en-au/lifestyle...-low-to-surge-12percent-in-2-days/ar-AA199eS5

BrainChip share price bounces off 52-week low to surge 12% in 2 days​

Story by Sebastian Bowen • Tuesday
...

Why have Brainchip shares bounced 12% in two days?​

Well, it’s not exactly clear, unfortunately. There have been no developments or news out of Branchip for a few weeks now. So the stellar performance of the last few days has got nothing to do with the company itself.

All the reviewer has seen is that the price has gone up in the last 2 days.
 
  • Haha
  • Like
  • Love
Reactions: 9 users

Diogenese

Top 20
If BRN is not involved, then these are interesting developments in edge AI by Qualcomm. There appears to be no way of knowing whether Qualcomm has a form of edge AI comparable to BRN's without infringing on BRN's patents. Are they using Akida with Snapdragon now, or is an upgraded Akida linked to this announcement? I assume only the money trail via the 4Cs will give answers.

Then, if Qualcomm has some tech similar to BRN's, as they are saying, and it's not us, then we may be in some poo poo. But then Rob Telson likes a couple of posts on Twitter from companies linked to Qualcomm. Surely he isn't just saying kudos to direct competition unless BRN is involved in some way. Hmmm

Then, if it's kudos only, there is a concern, because Akida was meant to be three years ahead of its competitors. Surely they couldn't have underestimated the competition that badly.
The big jump in performance suggests they introduced something new. Hopefully it's Akida. 😀

The above might jump-start some healthy chat on Bravo's post, as it has many likes but no one (maybe one) appears to have provided posts supporting such an announcement from Qualcomm, assuming BRN is linked in some way.

FF, Dio, etc., do you have any comments to support BRN involvement in Qualcomm's recent news?
Your Holiness,

I have said all I'm going to say on Qualcomm, repeatedly.
 
  • Haha
  • Like
  • Thinking
Reactions: 10 users

stuart888

Regular
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
Micro tile inferencing... so this is how Qualcomm is reducing its power consumption?

"The company also announced a new technology called micro tile inferencing, which breaks a neural network into many small pieces so that it can be processed all at once, rather than layer by layer, and very efficiently, saving a great deal of power."

Unlocking unexpected new AI use cases with the Connected Intelligent Edge​


March 7, 2023 6:20 AM
AdobeStock_386600684.jpeg

Image Credit: Adobe Images
Presented by Qualcomm Technologies, Inc.


To data scientists, the raw potential of AI and complex neural networks was clear from the start, or close to it. But it’s only in the past five years that device hardware has become sophisticated enough to make good on the full promise, and bring AI all the way to the edge. On-device AI is what makes AI a reality for the consumers. And now devices of every size, even with lower battery capacity, are able to handle powerful, power-efficient on-device neural networks. It’s the evolution of computing from the cloud, taking inferencing right to the source.
“We’ve spent almost a decade of research on how to make AI work best on the edge,” says Ziad Asghar, senior vice president of product management, Qualcomm Technologies, Inc. “From that, we’ve developed hardware that’s able to do more inferencing for any given amount of power, and AI software stack (Qualcomm AI Stack) and tools to bring the Connected Intelligent Edge to life.”

Leveling up AI use cases and unlocking new ones

AI use cases have made their way to devices already — AI enhanced pictures and videos, AI-based voice assistants, better sound and voice quality, real-time language translation, and more are significantly improved with connectivity and data processing, while numerous brand-new use cases are just starting to make themselves known across camera, gaming, sensors and connectivity, on all devices at the edge.
On the consumer-facing side, use cases embrace everything from smartphones, XR, compute and earbuds to connected intelligent vehicles and smart homes. On the business side, they support digital transformation in the industrial and manufacturing space, connected healthcare and a leap ahead for the AI software tools and platforms companies need to stay competitive in a rapidly changing environment.
Asghar describes the Connected Intelligent Edge itself as a network with multiple nodes, or different products, within it — and many of the new possibilities lie in these device clouds. In a smart home, for example, that might include security cameras, the cars in the garage, appliances, PCs, mobile devices and tablets, all with some amount of AI processing capability.
Those security cameras might recognize a family member in order to open up the smart lock at the front door and activate environmental controls. But the Connected Intelligent Edge also disseminates AI across the whole network, so that use cases are handled with the best accuracy with the best power consumption. If there’s not enough processing power on one product, it can be handed up the line to a more powerful device.
For instance, a security camera shifting a possible false alarm to the unit that can handle anomalies and more complex incidents. The data never leaves the device or local network, so that privacy is assured. And handling latency-sensitive use cases on the device means real-time results, and a a better consumer experience.

Purpose-built AI hardware and developer tools

“From an AI developer perspective, they want a product that excels in terms of performance and in terms of power,” Asghar says. “Which means you want the best-in-class underlying hardware capability.”
That means more processing for any given amount of power. It also means the ability to write software quickly and get to a product quicker, because time to market is key. At the same time, developers need the flexibility of using AI frameworks they’re familiar with, and tools to optimize and improve performance even further. On the hardware side, Qualcomm offers the Hexagon processor with three accelerators (scalar, vector and tensor) that allow a developer to go from mapping directly to how a neural network is set up all the way to the fully connected layer.
At the most recent Snapdragon Summit, Qualcomm had several significant AI announcements, among them, considerable hardware improvements, including an upgrade of Snapdragon 8 Gen 2 AI performance by a factor up to 4.35x. It’s the first commercial design on the edge that can do integer 4-bit (INT4) AI inferencing, which means the same calculations take far fewer bits, and expend significantly less energy while maintaining accuracy.
The company also announced a new technology called micro tile inferencing, which breaks a neural network into many small pieces so that it can be processed all at once, rather than layer by layer, and very efficiently, saving a great deal of power.
And just before MWC, the company announced that it was able to run Stable Diffusion, a text-to-image AI model on a reference device powered by Snapdragon 8 Gen 2. Typically, generating images using Stable Diffusion requires vast amounts of computing power in the cloud but thanks to Qualcomm Technologies’ AI research and the power of on-device AI on the Snapdragon 8 Gen 2, they were able to not only optimize, quantize and deploy these large models using Qualcomm AI Stack onto the device but they were able to run these models in a matter of seconds. This is a significant leap from current capabilities and will allow massive AI models to run locally on devices which means better convenience, power savings, security and more.
But, Asghar says, having great hardware for AI isn’t enough. As the company continues to optimize its hardware, it has focused on leveling up software and tools at the same time. For instance, loss of accuracy has historically been the primary challenge in reducing a network running at 32-bit floating point down to four bits. Now the Qualcomm AI Studio provides the tools necessary to maintain precision even at INT4, reducing power by a factor of 64 and exponentially increasing the number of neural networks that can run in the same power envelope. This is critical for doing AI processing at the edge.
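To make the INT4 idea concrete, here's a minimal, generic symmetric-quantization sketch in plain NumPy. To be clear, this is nothing like Qualcomm's actual tooling and `quantize_int4` is my own toy helper: weights get mapped to the 16 integer levels of INT4 plus one float scale, so storage drops 8x versus FP32 while each dequantized value stays within half a quantization step of the original.

```python
import numpy as np

def quantize_int4(w):
    """Toy symmetric per-tensor quantization to the 16 INT4 levels [-8, 7]."""
    scale = np.abs(w).max() / 7.0              # one float32 scale for the whole tensor
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.5, 0.33, 0.9, -0.07], dtype=np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())       # bounded by scale / 2 for in-range values
```

Real toolchains go much further (per-channel scales, quantization-aware training, etc.) to keep accuracy at these bit widths, but the storage and bandwidth arithmetic is the same.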
“The huge benefit to the Qualcomm AI Stack is enabling people to use Qualcomm technology easily and effectively without having to do a lot of setup work,” he explains. “Now I can take a mobile model to a security camera application without having to do new work. Why? Because it’s the same AI stack across all of our products. It’s really the notion of create once, and then take it anywhere.”
The Qualcomm AI stack supports popular AI frameworks and runtimes and offers developer libraries and services. The company has built SDKs for its product lines on top of this foundation — for example, Snapdragon Ride for automotive, Intelligent Multimedia SDK for IoT processing, Spaces (part of the Snapdragon Spaces XR Developer Platform for AR glasses) and more, including SDKs focused on specific verticals.
In Snapdragon Spaces, for example, Qualcomm has built AR-specific functions directly into the platform for hand and eye tracking (enabling foveated rendering), 3D reconstruction of spaces, plane detection and more. A developer creating a new AR or VR use case can pick up those routines and other pre-built pieces, build the final product on top, and get to a finished product faster.
The recently announced Qualcomm AI Studio brings together all the AI stack tools into a new GUI, along with visualization tools to simplify the developer experience — and provide the ability to see the complete model workflow from model design to optimization to deployment and profiling in action.
“If you’re doing anything at the edge, in a constrained power envelope or form factor, then really the best technology for you to bring it into production is based on Qualcomm,” he says. “With Qualcomm AI stack we want to make it as easy as possible for developers to be able to deploy their projects and get to market fast.”


"The company also announced a new technology called micro tile inferencing, which breaks a neural network into many small pieces so that it can be processed all at once, rather than layer by layer, and very efficiently, saving a great deal of power."


All right - so now you have my attention:

Qualcomm:
WO2023049655A1 TRANSFORMER-BASED ARCHITECTURE FOR TRANSFORM CODING OF MEDIA 2021-09-27

[This is so hot off the press, the ink isn't dry ... 2023-03-30 ... and the specification is not available online via Espacenet, but you may find it at Google Patents]

Systems and techniques are described herein for processing media data using a neural network system. For instance, a process can include obtaining a latent representation of a frame of encoded image data and generating, by a plurality of decoder transformer layers of a decoder sub-network using the latent representation of the frame of encoded image data as input, a frame of decoded image data. At least one decoder transformer layer of the plurality of decoder transformer layers includes: one or more transformer blocks for generating one or more patches of features and determine self-attention locally within one or more window partitions and shifted window partitions applied over the one or more patches; and a patch un-merging engine for decreasing a respective size of each patch of the one or more patches.
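That "self-attention locally within one or more window partitions and shifted window partitions" language is essentially the Swin-Transformer recipe: attend within small windows, then cyclically shift the feature map so the next block's windows straddle the old boundaries. Here's a toy sketch of just the partitioning step (my own illustration, not code from the patent):

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping w x w windows of tokens."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w * w, C)  # (windows, tokens, C)

def cyclic_shift(x, w):
    """Shift the map by half a window so the new windows cross old window borders."""
    return np.roll(x, shift=(-(w // 2), -(w // 2)), axis=(0, 1))

x = np.arange(8 * 8, dtype=np.float32).reshape(8, 8, 1)
windows = window_partition(x, 4)                  # self-attention would run per window
shifted = window_partition(cyclic_shift(x, 4), 4)
```

Attention then runs independently inside each `(tokens, C)` window, which keeps the cost linear in image size instead of quadratic, and the shifted pass lets information flow between neighbouring windows.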






Does this not sound a bit like our ViT/TENNs of Gen2?


https://brainchip.com/brainchip-introduces-second-generation-akida-platform/

The second-generation of Akida now includes Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions that supercharge the processing of raw time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of MRI and CT scans for vital signs prediction, and time series analytics used in forecasting, and predictive maintenance. These capabilities are critically needed in industrial, automotive, digital health, smart home and smart city applications. The TENNs allow for radically simpler implementations by consuming raw data directly from sensors – drastically reduces model size and operations performed, while maintaining very high accuracy. This can shrink design cycles and dramatically lower the cost of development.
Another addition to the second generation of Akida is Vision Transformers (ViT) acceleration, a leading edge neural network that has been shown to perform extremely well on various computer vision tasks, such as image classification, object detection, and semantic segmentation. This powerful acceleration, combined with Akida’s ability to process multiple layers simultaneously and hardware support for skip connections, allows it to self-manage the execution of complex networks like RESNET-50 completely in the neural processor without CPU intervention and minimizes system load.
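BrainChip hasn't published TENN internals, so purely as a hand-wavy illustration of what "consuming raw data directly from sensors" can look like: a causal temporal filter can process a stream one raw sample at a time with only a tiny buffer of past samples, instead of batching up whole windows of data. This sketch (my own, nothing to do with Akida's actual implementation):

```python
import numpy as np

class StreamingCausalConv:
    """Toy causal 1-D convolution fed one raw sensor sample at a time."""
    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=np.float32)
        self.buf = np.zeros(len(self.kernel), dtype=np.float32)  # most recent samples

    def step(self, sample):
        # Shift in the newest sample; output depends only on present and past.
        self.buf = np.roll(self.buf, 1)
        self.buf[0] = sample
        return float(self.buf @ self.kernel)

conv = StreamingCausalConv([0.5, 0.3, 0.2])
outs = [conv.step(s) for s in [1.0, 2.0, 3.0]]
# e.g. the second output is 0.5*2.0 + 0.3*1.0 = 1.3
```

The point is the memory footprint: per output you touch one new sample and a kernel-length buffer, which is the kind of structure that suits always-on, low-power edge inference.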

... but remember what Ella says.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 45 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Yup. Ella said “It isn’t where you’ve been, it’s where you’re going to that counts”. Then she apparently yelled out “Akida Ballista!”
 
  • Like
  • Haha
  • Fire
Reactions: 29 users

Diogenese

Top 20
Yup. Ella said “It isn’t where you’ve been, it’s where you’re going to that counts”. Then she apparently yelled out “Akida Ballista!”
Caution: This response was generated by ChatGPT
 
  • Haha
  • Like
Reactions: 19 users

SERA2g

Founding Member
The number of jobs listed recruiting for neuromorphic research has been going gangbusters. No direct mention of BrainChip, but we know the USAF have at the very least used us in AFRL radar projects.

Announcements would be nice, but I feel confident we are going to have overwhelming success in the coming years. Neuromorphic AI is well known at this point by those customers who matter. It’s now just a matter of the research being done to see how a revolutionary emerging technology can solve specific problems. And BrainChip fortunately have the money in the bank, giving them the privilege of time.

RESEARCH SCIENTIST - SR. RESEARCH SCIENTIST - STRATEGIC AEROSPACE​


Who We Are:
The Strategic Aerospace Department in the Defense & Intelligence Solutions Division provides the USAF with engineering services that increase the warfighters' capabilities, specifically for the bomber, tanker and heavy platforms.
Objectives of this Role:
- Create physics-based algorithms/models.
- Support the Department's research in the areas of AI as applied to systems and/or UAS/avionics subsystems (hardware and software).
- Perform literature reviews.
- Write code (Python- and C++-based).
- Perform experimentation and data science.
- Write proposals and articles, and attend conferences.
Daily and Monthly Responsibilities:
- Develop solutions for AI systems and embedded avionics systems and subsystems.
- Develop solutions for neuromorphic systems and/or drone (UAS/UAV) AI/ML sensor fusion.
- Develop solutions for aircraft AI/ML sensor fusion.
- Review AI/ML algorithms to verify/validate design requirements.
- Integrate with a multidisciplinary engineering team to develop AI-related hardware.
- Perform data science duties.
Requirements:
- Requires a Bachelor's in mathematics, physics, chemistry, or a related field.
- MS or PhD in Artificial Intelligence (AI)/Machine Learning/Data Science/Chemistry/Physics/Neuromorphics/UAS is preferred.
- 3 years of:
  - AI graduate-type degrees and/or certificates preferred, focusing on UAS, avionics, AI and/or ML (from both a HW and SW perspective)
  - Machine learning as applied to data science and technology
  - Developing AI and other software in Python, TensorFlow, Keras, PyTorch
  - Developing software code in C++
- A valid/clear driver's license is required.
Estimated Salary: $20 to $28 per hour based on qualifications.
Position requires a PhD in artificial intelligence yet it pays $28 an hour.
The US is fucked.
 
  • Like
  • Wow
  • Fire
Reactions: 26 users

alwaysgreen

Top 20
Fingers crossed for a license agreement for our Gen 2 Akida from Qualcomm prior to the AGM.

Did someone say short squeeze??
 
  • Like
  • Fire
  • Thinking
Reactions: 20 users

alwaysgreen

Top 20
Position requires a PhD in artificial intelligence yet it pays $28 an hour.
The US is fucked.

My plumbers are charging me $90 an hour!

No offence to my awesome plumbers but they didn't have to do a PhD in AI!
 
  • Like
  • Haha
  • Fire
Reactions: 13 users

stuart888

Regular
Caution: This response was generated by ChatGPT
Or Bard??? Just saw this to get on their waitlist:



https://bard.google.com/

Meet Bard: your creative and helpful collaborator, here to supercharge your imagination, boost your productivity, and bring your ideas to life.

Bard is an experiment and may give inaccurate or inappropriate responses. You can help make Bard better by leaving feedback. Join the waitlist and try it for yourself.
 
  • Like
  • Sad
Reactions: 2 users

stuart888

Regular
 
  • Like
Reactions: 5 users

Townyj

Ermahgerd
  • Fire
  • Haha
Reactions: 2 users