BRN Discussion Ongoing

Slade

Top 20
Some of the comments are: "We're waiting expectantly for Akida gen 2, but ...
we get to hang on to SiFive's coattails:

“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E, efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”

Phil Dworsky, Global Head of Strategic Alliances, SiFive

That is the stuff of dreams - we go to the after party with SiFive!

Worth looking at Arteris and SiFive's recent partnership again.
The SiFive X280, with Akida tagging along, looks like it will be joining Arteris to feed some of their hungry customers.
Who are these customers again (Samsung is one):

 
  • Like
  • Fire
  • Love
Reactions: 32 users

Tothemoon24

Top 20

Qualcomm CEO

“For you to make that happen, you can’t run everything in a data center, you’re going to have to bring the AI to the devices.”







The popularity of ChatGPT is a ‘milestone’ in establishing Qualcomm as an A.I. company, CEO says​

PUBLISHED WED, MAR 1 2023 9:27 AM EST | UPDATED WED, MAR 1 2023 AT 10:51 EST

Jenni Reid
KEY POINTS
  • The chief executive of chipmaker Qualcomm sees the prominence of ChatGPT as a key opportunity to popularize artificial intelligence tools on smartphones.
  • “This is the milestone we’ve been waiting for to establish Qualcomm as an AI company,” Cristiano Amon told CNBC.
  • Amon said that his company had an edge in creating the required processing power to run large language models without compromising battery life.
[Video: ChatGPT a ‘milestone’ for Qualcomm as it showcases A.I. smartphone capability, CEO says (02:32)]

The explosive popularity of ChatGPT is an opportunity to show off the capabilities of artificial intelligence on smartphones, according to chip company Qualcomm’s chief executive.
“This is the milestone we’ve been waiting for to establish Qualcomm as an AI company,” Cristiano Amon told CNBC at the Mobile World Congress in Barcelona.

Developed by research company OpenAI, chatbot ChatGPT has been shared widely online, as users ask it to answer questions, generate text or provide detailed, responsive information.
Qualcomm recently released videos of text being used to generate AI images on an Android phone, which it also demonstrated at the conference.
“You want to generate any image that you want to share with somebody, you want to do it in real-time — think about what Microsoft is doing with search, and you want to chat with the search results,” Amon told CNBC’s Karen Tso and Arjun Kharpal. “For you to make that happen, you can’t run everything in a data center, you’re going to have to bring the AI to the devices.”
Large language models will be generated entirely within smartphones, he said, meaning that they will be able to work without being connected to the internet.
“The ability to create that much processing power in a smartphone and run that without compromising the battery life is something that only Qualcomm can do,” he claimed.

[Video: A.I. race expected to bring flurry of M&A: Trachet CEO (05:01)]

In a note this week, analysts at Bernstein said that the powering of AI queries could be a multi-billion-dollar annual market opportunity for chipmakers.
Qualcomm has also supplied chips for a variety of virtual reality devices, partnering with the likes of Meta, Samsung and Google.
Amon said that he believed smart glasses were the next frontier of computing and the “merging of physical and digital spaces.”
“I can see a scenario that you’re going to have your companion glasses to your phone, and eventually you’re just going to have the glasses. And the potential is incredible.”
He added, “It’s going to happen, it’s coming very soon.”
Amon also told CNBC that Qualcomm did not expect to produce modems for Apple’s new iPhone in 2024, suggesting that the tech giant’s highly-anticipated move into in-house products may be approaching.
 
  • Like
  • Love
  • Fire
Reactions: 13 users

HopalongPetrovski

I'm Spartacus!

Akida 2nd Generation Platform Brief​


The 2nd generation Akida builds on the existing technology foundation and supercharges the processing of raw time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of health monitoring data such as heart rate and respiratory rate for vital signs prediction, and time series analytics used in forecasting, and predictive production line maintenance. These capabilities are critically needed in industrial, automotive, digital health, smart home, and smart city applications.
Download Platform Brief

Availability:
Engaging with lead adopters now. General availability in Q3’2023


WOW! This truly is Science fiction indeed! Roll over Darling and back to sleep. I have reading to do!
 
  • Like
  • Haha
  • Fire
Reactions: 36 users

HopalongPetrovski

I'm Spartacus!

One IP Platform, Multiple Configurable Products​




The 2nd generation IP platform will support a very wide range of market verticals and will be delivered in three classes of product.​


Akida-E: Extremely energy-efficient, for always-on operation very close to, or at, sensors.
Akida-S: Integration into MCUs or other general-purpose platforms used in a broad variety of sensor-related applications.
Akida-P: Mid-range to higher-end configurations with optional vision transformers for groundbreaking yet efficient performance.
Contact Sales


And our synapse is back! All those with tattoos will be very happy! Love it!
 
  • Like
  • Haha
  • Fire
Reactions: 34 users

Beebo

Regular
Sounds like the next IP license may come from SiFive, and most likely before Q3 (?)
 
  • Like
  • Love
  • Fire
Reactions: 16 users

Tothemoon24

Top 20
E6C320A6-A4BC-48E8-AAC1-45839EBACDE4.png
 
  • Like
  • Fire
  • Love
Reactions: 86 users

HopalongPetrovski

I'm Spartacus!
However, traditional computer vision techniques like Convolutional Neural Networks (CNNs) have some limitations when analyzing images.

CNNs work by passing images through a series of convolutional and pooling layers to extract relevant features.
However, as images become larger and more complex, CNNs become less effective.
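For anyone who wants to see that conv + pool feature-extraction pattern concretely, here is a minimal PyTorch sketch. It is purely illustrative: the class name, layer sizes, and input shape are my own assumptions, not anything from BrainChip or the article.

Python:
# Tiny illustrative CNN: two conv + pool stages extract features,
# then a linear layer classifies the flattened feature map.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                                # x: (batch, 3, 32, 32)
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))            # -> (1, 10)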
This is where Vision Transformers come in.
Vision Transformers, or ViTs, are a type of deep learning model that uses self-attention mechanisms to process visual data.
They were introduced in a 2020 paper by Dosovitskiy et al. and have since gained popularity in computer vision research.
image.png

In a Vision Transformer, an image is first divided into patches, which are then flattened and fed into a multi-layer transformer network.
The self-attention mechanism allows the model to attend to different parts of the image at different scales, enabling it to simultaneously capture global and local features.
The transformer’s output is passed through a final classification layer to obtain the predicted class label.
The result is a Vision Transformer with self-attention mechanisms that allow the model to focus on different parts of the input image in the spatial dimension.
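To make the patch-embedding step above concrete, here is a minimal PyTorch sketch: cut the image into non-overlapping patches, flatten and project each patch to a token, run the tokens through transformer encoder layers, and classify from the pooled output. All dimensions and layer counts are illustrative assumptions, not Akida's or anyone's production configuration.

Python:
# Illustrative ViT-style pipeline: patchify -> embed -> self-attention -> classify.
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)          # (batch, channels, H, W)
patch_size, embed_dim = 16, 192

# Split into non-overlapping 16x16 patches and flatten each patch.
patches = image.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
patches = patches.permute(0, 2, 3, 1, 4, 5).flatten(1, 2).flatten(2)  # (1, 196, 768)

to_token = nn.Linear(3 * patch_size * patch_size, embed_dim)
tokens = to_token(patches)                   # (1, 196, 192) patch tokens

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
    num_layers=2,
)
encoded = encoder(tokens)                    # self-attention mixes all patches
logits = nn.Linear(embed_dim, 1000)(encoded.mean(dim=1))  # final classification layer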
image-2.png

ViTs show higher accuracy on object classification tasks compared to CNNs.
Because of the self-attention used in ViTs, they are able to learn features from spatially related regions, which makes them very suitable for object detection, tracking, and segmentation solutions. They also generalize better when trained on a distributed dataset.
image-3.png

This is because transformers utilize multiheaded attention, which is a novel way to extract meaning from sequences of words.
When considering vision transformers, the input to the network is no longer a sequence of words, but regions that carry meaning within an image, and across several images that arrive in the form of a video. Vision transformers use multiheaded attention to pay attention to specific things inside an image.
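A minimal sketch of that multi-head attention step, using PyTorch's built-in nn.MultiheadAttention. The shapes (196 patch tokens of dimension 192, 4 heads) are illustrative assumptions carried over from the sketch above; this is not Akida's implementation.

Python:
# Each head computes softmax(Q K^T / sqrt(d)) V over all patch tokens, so every
# patch can attend to every other patch; attn_weights shows where it "looks".
import torch
import torch.nn as nn

tokens = torch.randn(1, 196, 192)             # (batch, patches, embed_dim)
mha = nn.MultiheadAttention(embed_dim=192, num_heads=4, batch_first=True)

out, attn_weights = mha(tokens, tokens, tokens)
print(out.shape, attn_weights.shape)          # (1, 196, 192) (1, 196, 196)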

Vision Transformers Are Popular For Tasks Like​


Item-8.png

Enabled by Akida, Vision Transformers can be applied remotely at low power consumption to serve multiple industries.
  • In the field of autonomous vehicles, they can be used for driver assistance, driver monitoring and alertness, and lane assistance.
  • In agriculture, they can be used to monitor crop health, identify diseases or pests, and estimate yield.
  • In healthcare, they can help analyze medical images to aid diagnosis.
  • In manufacturing, they can automate quality control processes.
  • In smart devices, they can be used for user attention, pose estimation, scene understanding, etc.
Item-9.png
 
  • Like
  • Love
  • Fire
Reactions: 51 users

HopalongPetrovski

I'm Spartacus!
 
  • Like
  • Haha
Reactions: 2 users

HopalongPetrovski

I'm Spartacus!

Temporal Event-Based Neural Networks – A New Approach To 3D And Time Series Data​




As we step into a new era of AI-assisted human intelligence, we are confronted with several technological obstacles, particularly in the areas of:
item-1.png

These challenges must be surmounted to achieve further advancements in this field.
Item-2-image.png

This brings us to the question:​


What comes next for IoT devices that are optimized for performance on the Edge?
And how will we integrate AI into our everyday devices?
Section-Emerging-TENNs.png

With orders of magnitude less memory, TENNs can achieve excellent performance on tasks that use temporal and spatial information.

Currently, TENNs have already been demonstrated on spatial-temporal data such as:

Item-3.png

Item-4.png

Item-2.png

TENNs process many types of data including one-dimensional time series from
Item-8.png

…and many others.
What are the best uses of TENNs?
Akida’s high processing at ultra-low power consumption provides the perfect ground for TENN applications in IoT devices, leading to a new generation of Smart Hardware.
When applied to sensors that mimic the five human senses
Item-7.png


TENNs can become an extension of our analytical decision-making abilities for real-time processing.
Akida’s high processing at ultra-low power consumption provides the perfect landscape for TENN applications in AIoT.
 
  • Like
  • Love
  • Fire
Reactions: 34 users

HopalongPetrovski

I'm Spartacus!
Fembots anyone? 🤣

Seriously, have we just taken a huge step into the future?
Domination at the edge and a step back towards the servers who are now targets themselves? 🤣

Well played Brainchip.
It's seemed to be a long time coming, but here you are with the GOODS!
 
  • Like
  • Fire
  • Love
Reactions: 28 users
Even China's picked the story up..


Translation of text.

"Australian artificial intelligence start-up BrainChip launched the second-generation Akida chip earlier than expected, which spurred the stock price to soar on the 6th, the largest single-day increase this year. BrainChip shares opened higher and rose 16.67% to 0.6 Australian dollars. The company said in February that Akida was still in development and would not be available until later this year"
 
  • Like
  • Haha
  • Fire
Reactions: 25 users

HopalongPetrovski

I'm Spartacus!
Looks like a really well-crafted marketing release. Suitable and beautiful sci-fi images along with tempting tidbits of narrative.
And backed up with confirmation articles like the Forbes pieces and comments from our partners and ecosystem collaborators.
So glad to be hip deep in Brainchip already.
Keep bringing it baby. 🤣
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 39 users

HopalongPetrovski

I'm Spartacus!
  • Like
  • Fire
  • Love
Reactions: 21 users

Beebo

Regular
  • Like
  • Fire
  • Love
Reactions: 42 users

goodvibes

Regular
See what partners, customers and industry influencers are saying about the second generation of Akida - Watch the quotes as they scroll - https://lnkd.in/g3UK3nBp #AIoT #AI #Akida #neuromorphic

 
  • Like
  • Fire
Reactions: 8 users

SERA2g

Founding Member
Some reveals under the “see what they’re saying” section of the 2nd generation product listing.

Such a cool and well put together release.

Hopefully one day our new releases end up being a Steve Jobs style presentation to the masses. Peter up on the stage in a turtle neck sweater, “We are proud to introduce our 4th product, Akida General Intelligence. Let the 4th revolution begin.” 😂

Let’s hope there’s a bit of hype on the market this week. It’d be nice to see the base line shifted significantly higher.

Cheers
 
  • Like
  • Love
  • Fire
Reactions: 37 users

Sirod69

bavarian girl ;-)
BrainChip
BrainChip
1 Std. •

Edge AI and Vision Alliance - BrainChip Introduces Vision Transformers and Spatial-Temporal Convolution for radically fast, hyper-efficient and secure Edge AIoT products, untethered from the cloud - https://lnkd.in/gg6dRRdE

1678124118810.png

Introduces Vision Transformers and Spatial-Temporal Convolution for radically fast, hyper-efficient and secure Edge AIoT products, untethered from the cloud​

 
  • Like
  • Love
  • Fire
Reactions: 19 users
Top Bottom