BRN Discussion Ongoing

Justchilln

Regular
I don't get the point of that everlasting Qualcomm mumbo-jumbo.

Qualcomm's first neuromorphic processor, Zeroth, appeared in 2013:

In products available since 2015/16:

Qualcomm never did any marketing using the word neuromorphic; a look at their patent portfolio helps:

Since the beginning I have filed Qualcomm under the biggest competitors. I can hear it coming: but... but... Akida is better. Who cares? Qualcomm has had an in-house solution in use for many years. Why should they integrate costly IP from BrainChip when they can do it on their own?
In the end it's all about business and profits, not about an Olympics of counting neurons and synapses.

By the way, the IP business model is meant for the big players. These days are not comparable with, let's say, ARM in 1993. Today's big players usually run their own IP business. They are not famous for licensing IP from small companies. When there is a new technology they need, they usually fix it with an acquisition, as Renesas did last year with Reality AI.
A look at the Qualcomm history of acquisitions shows that's the usual way:

So for a small company, selling IP is a risky business with no guarantee of success. The way of partnerships seems to work better than just selling IP to the big ones, but it needs some more time to deliver financial results.
Pretty sure that Qualcomm Zeroth is no longer around, mate…..
 
  • Like
  • Haha
Reactions: 8 users

The Pope

Regular
Rob Telson (Batman) likes this one (night vision mobile activity) for the dot joiners wishing to investigate further


IMG_0116.jpeg
 

Attachments

  • IMG_0117.jpeg (636.3 KB)
  • Like
  • Fire
  • Love
Reactions: 19 users

cosors

👀
Qualcomm never did any marketing using the word neuromorphic; a look at their patent portfolio helps:

but:

"... Our unique approach to accelerating complex AI models is by breaking down neural networks into micro tiles to speed up the inferencing process. ..."

https://thestockexchange.com.au/threads/tlg-discussion-2022.7072/post-389769

but Dio: Probably more that it will affect Talga. (That's another "F" in multitasking for me.)
 
Last edited:
  • Like
  • Thinking
Reactions: 2 users

Gies

Regular
I don't understand why Markus is so excited about this year-old news.
What am I missing?
The alliance with BrainChip is pretty clear.
 
  • Like
Reactions: 3 users

Getupthere

Regular
  • Fire
Reactions: 4 users

charles2

Regular
BRCHF traded a grand total of 1699 shares today.

Volume can dry up before a big move. Realistically, though, the ASX is the tail that wags the dog.

That may have to change before a serious appreciation of BrainChip's value can be expected.
 
Last edited:
  • Like
  • Fire
Reactions: 7 users

charles2

Regular
Wonder of wonders. INTC revenue guidance SOARS. Must be a good thing.

 
Last edited:
  • Like
Reactions: 6 users
Wonder of wonders. INTC revenue guidance SOARS. Must be a good thing.

LinkedIn has a recent BrainChip post on AI at the edge and what it costs to get it up and running. Worth a read.
 
  • Like
Reactions: 5 users

Tothemoon24

Top 20

SiFive Rolls Out RISC-V Cores Aimed at Generative AI and ML​

October 2023 by Jake Hertz

SiFive has released two new processors, one to target machine learning applications, and one to target general-purpose HPC.​


The RISC-V movement is one of the hottest things in the computing industry at the moment, and arguably no company is more synonymous with RISC-V than SiFive. With a long history of producing industry-leading RISC-V processors, SiFive has shown no signs of slowing down.

SiFive has released new processors to target machine learning and general-purpose HPC.


Last week, SiFive continued its momentum with yet another set of launches: one new processor to target high-performance general-purpose computing, and another to target machine learning and artificial intelligence tasks. In this piece, we'll take a look at the two new offerings from SiFive to see what the company is bringing to the table.

P870/P870A: General-Purpose High Performance SoC​

The first of the SiFive releases last week was the Performance P870/P870A processors. Engineered for consumer applications and data centers, the P870 features a number of architectural innovations that make it a formidable processor.
First, the device is built around a 64-bit, six-wide, out-of-order core architecture that adheres to the RVA23 profile, while also featuring a shared cluster cache that enables configurations of up to a 32-core cluster.
This is further complemented by augmented Arithmetic Logic Units (ALUs) and additional branch units as compared to previous offerings. It also incorporates advanced features like 128b VLEN RVV, vector crypto and hypervisor extensions, IOMMU, and a non-inclusive L3 cache.

SiFive's P870/P870A embeds more features than its predecessors.


Together, these architectural enhancements enable high throughput and low latency for the processor, making it ideal for applications that demand rapid data processing and real-time analytics.
The device is said to achieve a 50% peak single-thread performance uplift over its predecessor, as measured by the SPECint2006 benchmark, while also offering improvements in execution throughput, facilitated by an increased number of instructions per cycle.
The P870A variant, specifically designed for automotive applications, shares the core architecture with the P870 but is likely to include additional features tailored for the automotive industry, although detailed specifications are yet to be released.

Intelligence X390 Boosts Vector Computation​

The second major release from SiFive last week was the Intelligence X390 processor, designed to meet the escalating demands of AI and ML applications. The X390 builds upon the architectural foundation laid by its predecessor, the X280, but introduces several key enhancements that significantly boost its computational capabilities.
At the core of the X390 is a single-core configuration that offers a fourfold improvement in vector computation. This is achieved through a dual vector ALU setup and a doubled vector length, which collectively contribute to a quadruple increase in sustained data bandwidth.
The processor also incorporates SiFive's VCIX technology, allowing companies to add custom vector instructions or acceleration hardware, thereby offering unprecedented flexibility in performance optimization.
The processor's enhanced vector computation capabilities make it particularly well-suited for neural network training and inference tasks, which often require high computational power. Moreover, the X390's architecture is designed to facilitate a sustained data bandwidth increase, a critical factor for AI/ML applications that require rapid data ingestion and processing.
Performance metrics indicate that the X390 is a powerhouse in its category, with the quadrupled sustained data bandwidth coming from both architectural improvements and wider data paths.

Pushing RISC-V Forward​

SiFive is playing a pivotal role in propelling the RISC-V industry into new frontiers of performance and applicability. By unveiling processors like the Performance P870/P870A and Intelligence X390, the company is not merely iterating on existing technology but is introducing transformative architectural innovations.
Moreover, by setting new performance standards and expanding the scope of RISC-V's applicability, SiFive is accelerating the industry's move towards more efficient, flexible, and powerful computing architectures.
 
  • Like
  • Fire
  • Thinking
Reactions: 21 users

Labsy

Regular
Small power footprint is a necessity moving forward....

"Swart tells me that the Snapdragon AR2 Gen 1 chipset was designed for devices that will send only around 1 Watt of power or less to the CPU.

Compare that to today’s smartphones. The Snapdragon 8 Gen 2 chipset can draw almost 15 Watts of power. Mobile chipsets in laptops and tablets can draw 25 Watts or more. An Intel i9 processor draws 45 Watts. Qualcomm started with the idea that wearable computers will only consume 1 Watt of power for processing, and maybe everything on board."

 
  • Like
  • Fire
Reactions: 8 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers ,

Just listening to the AVA Risk Group AGM from yesterday....

This company specialises in high-security locks and perimeter detection for high-value assets.

At the 22:40 mark they mention neural networks.... interesting co and tech.... think Akida would work well in their products.

* Disclosure , i hold a few of these shares.

Pinched from elsewhere so hope it works

https://hotcrapper.com.au/posts/70564487/single

sorry chaps, am unable to transfer the video ..... useless .... damn it

From the orifice,
Company , AVA
Poster fcmaster26,
Posted yesterday at 22:10
AGM Video.


Regards,
Esq
 
Last edited:
  • Like
  • Love
Reactions: 12 users

Vladsblood

Regular
Afternoon Chippers ,

Ray of sunshine, the US Government has elected a new House Speaker, which should help alleviate some wobbles in the global markets a little.

American spending can resume.

Regards,
Esq
Morning Esq,
Dunno about any good coming from the socialist American government, Esq.

Socialist governments stick their noses into publicly listed companies; Biden's has just ordered Nvidia to stop exporting chips to China 🇨🇳. LOL… This is a democracy???
Nope, definitely not so!!!
Vlad
 
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Nice find @Getupthere,

NVIDIA wants to turn the Jetson family of devices into powerful edge computing devices capable of running state-of-the-art foundation models.

And the article mentions Edge Impulse working with NVIDIA's TAO toolkit!

Let's hope this expands the number of Edge Impulse partners needing an AI accelerator (i.e., BrainChip IP) to pair with their NVIDIA models for lower power consumption and higher efficiency.

Extract from article
Screen Shot 2023-10-27 at 9.54.30 am.png



Edge Impulse Slide
Screen Shot 2023-10-27 at 10.04.42 am.png
 
  • Like
  • Love
  • Fire
Reactions: 34 users

7für7

Top 20
Has BrainChip already responded to the email that was sent yesterday? Asking for a friend!
 
Last edited:
  • Like
  • Haha
Reactions: 4 users

Damo4

Regular
Not specifically BrainChip-related, but a good reminder of why neural networks are changing the world.
This example is more about replicating human thought than about rapid, low-power edge analysis of data sets.
2030 seems to be the goal for Artificial General Intelligence, so there is still a lot of room for NNs to improve.

Link to the Article:
In a 1st, AI neural network captures 'critical aspect of human intelligence'

Link to the study:
Supplementary Information: Human-like systematic generalization through a meta-learning neural network


Snippet from the link, describing the process:

Neural networks somewhat mimic the human brain's structure because their information-processing nodes are linked to one another, and their data processing flows in hierarchical layers. But historically the AI systems haven't behaved like the human mind because they lacked the ability to combine known concepts in new ways — a capacity called "systematic compositionality."

For example, Lake explained, if a standard neural network learns the words "hop," "twice" and "in a circle," it needs to be shown many examples of how those words can be combined into meaningful phrases, such as "hop twice" and "hop in a circle." But if the system is then fed a new word, such as "spin," it would again need to see a bunch of examples to learn how to use it similarly.

In the new study, Lake and study co-author Marco Baroni of Pompeu Fabra University in Barcelona tested both AI models and human volunteers using a made-up language with words like "dax" and "wif." These words either corresponded with colored dots, or with a function that somehow manipulated those dots' order in a sequence. Thus, the word sequences determined the order in which the colored dots appeared.

So given a nonsensical phrase, the AI and humans had to figure out the underlying "grammar rules" that determined which dots went with the words.

The human participants produced the correct dot sequences about 80% of the time. When they failed, they made consistent types of errors, such as thinking a word represented a single dot rather than a function that shuffled the whole dot sequence.

After testing seven AI models, Lake and Baroni landed on a method, called meta-learning for compositionality (MLC), that lets a neural network practice applying different sets of rules to the newly learned words, while also giving feedback on whether it applied the rules correctly.

The MLC-trained neural network matched or exceeded the humans' performance on these tests. And when the researchers added data on the humans' common mistakes, the AI model then made the same types of mistakes as people did.
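
To make the setup concrete, here is a minimal Python sketch of that kind of made-up language. The word-to-meaning mappings below are invented for illustration; the study's actual vocabulary and functions differ:

```python
# Toy version of the study's made-up language: some words are colored
# dots ("primitives"), others are functions that rearrange the sequence.
# All specific mappings here are hypothetical, for illustration only.

PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def twice(seq):      # hypothetical function word: repeat the sequence
    return seq + seq

def flip(seq):       # hypothetical function word: reverse the order
    return seq[::-1]

FUNCTIONS = {"blicket": twice, "kiki": flip}

def interpret(phrase):
    """Map a phrase to a dot sequence: emit primitives left to right,
    applying each function word to the sequence built so far."""
    seq = []
    for word in phrase.split():
        if word in PRIMITIVES:
            seq.append(PRIMITIVES[word])
        elif word in FUNCTIONS:
            seq = FUNCTIONS[word](seq)
        else:
            raise ValueError(f"unknown word: {word}")
    return seq

print(interpret("dax wif blicket"))  # ['RED', 'GREEN', 'RED', 'GREEN']
print(interpret("dax wif kiki"))     # ['GREEN', 'RED']
```

The test for systematic compositionality is then whether a learner that has only seen "blicket" used with familiar words can immediately apply it correctly to a brand-new primitive like "spin", without needing a pile of fresh examples.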
 
  • Like
  • Love
Reactions: 8 users

davidfitz

Regular
I heard an interesting story the other day which I thought I would share.

A guy named Sean dreamt of getting his bus license when he was younger and thought that driving a bus would be fun and relatively easy. :cautious:

He finally got his license, and his first job was to drive some people to an exciting destination run by artificial intelligence called Telsonville. Everyone on the bus was very excited about the journey they were about to embark on. At first the trip was going well, and Sean seemed confident in his driving abilities. However, out of the blue the tip of an iceberg appeared, and he dramatically crashed into it. Unfortunately, not everyone survived, and those that did are now suffering a long and painful recovery. Some may never recover :unsure:

Since the crash earlier this year, Sean has not been seen driving his bus and is rarely seen at all! Occasionally, say every 3 months or so, he ventures out to the local cocktail bar. He can be seen there drinking his favourite cocktail, an Akida. This is an interesting drink made up of unusual ingredients that no one can quite work out. He will often ask people to try an Akida, but most are not quite sure what to make of it. A couple of people have liked it, and over time they have added their own ingredients. However, the drink hasn't quite taken off, and sales at cocktail bars worldwide are slow. :(

During the periods that Sean cannot be seen he spends most of his time playing board games, his favourite being a game called Brainchip. The board is full of interesting characters, but they don't really do much. I guess it's time for the creators of the game to look at replacing them with more interactive characters that do a lot more? ;)
 
  • Haha
  • Like
  • Fire
Reactions: 15 users

Diogenese

Top 20
In the wee small hours, I mis-posted this on the Talga thread: #947 :


For Qualcomm advocates:

https://www.qualcomm.com/products/mobile/snapdragon/smartphones/mobile-ai

Our fastest and most advanced Qualcomm AI Engine has at its heart the powerful Hexagon processor.


The Qualcomm Hexagon processor is the most essential element of the Qualcomm AI engine. This year we added new architectural features to the heart of our AI Engine. Let’s dive into them.

With a dedicated power delivery system we can freely provide power to Hexagon adapted to its workload, driving performance all the way up for heavy workloads or down to extreme power savings.

We also added special hardware to improve group convolution and activation function acceleration, and doubled the performance of the Tensor accelerator.

Our unique approach to accelerating complex AI models is by breaking down neural networks into micro tiles to speed up the inferencing process. This allows the scalar, vector and tensor accelerators to work at the same time without having to engage the memory each time, saving power and time. [#### ViT? ####]

We are now enabling seamless multi-IP communication with Hexagon using a physical bridge. This link drives high-bandwidth, low-latency use cases like the Cognitive ISP or upscaling of low-resolution content in gaming scenarios.

We successfully enabled transformation of several DL models from FP32 to INT16 to INT8 while not compromising on accuracy, and gained the added advantage of higher performance at lower memory consumption. Now we are pushing the boundaries with INT4 for even higher power savings without compromising accuracy or performance.
.
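
For anyone wondering what that FP32 → INT16 → INT8 → INT4 progression actually involves, here is a minimal numpy sketch of generic symmetric post-training quantization. This is a toy illustration of the idea, not Qualcomm's pipeline:

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Toy symmetric post-training quantization of a weight tensor."""
    qmax = 2 ** (bits - 1) - 1           # 127 for INT8, 7 for INT4
    scale = np.abs(w).max() / qmax       # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
for bits in (16, 8, 4):
    q, scale = quantize_symmetric(w, bits)
    err = np.abs(w - dequantize(q, scale)).mean()
    print(f"INT{bits}: mean abs reconstruction error {err:.5f}")

# The reconstruction error grows as the bit width shrinks. The memory
# saving is mechanical (INT4 weights are 8x smaller than FP32); the hard
# part Qualcomm is claiming is holding model accuracy at 4 bits.
```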

I was sprung by @Proga.

I guess the main implication for BRN is that Qualcomm will not be adopting Akida any time soon. That said, their ViT patent seems very clunky:


WO2023049655A1 TRANSFORMER-BASED ARCHITECTURE FOR TRANSFORM CODING OF MEDIA 2021-09-27

Systems and techniques are described herein for processing media data using a neural network system. For instance, a process can include obtaining a latent representation of a frame of encoded image data and generating, by a plurality of decoder transformer layers of a decoder sub-network using the latent representation of the frame of encoded image data as input, a frame of decoded image data. At least one decoder transformer layer of the plurality of decoder transformer layers includes: one or more transformer blocks for generating one or more patches of features and determine self-attention locally within one or more window partitions and shifted window partitions applied over the one or more patches; and a patch un-merging engine for decreasing a respective size of each patch of the one or more patches.

1698365484759.png



[0112] As previously noted, systems and techniques are described herein for performing image and/or video coding (e.g., low latency encoding and decoding) using one or more transformer neural networks. The transformer neural networks can include transformer blocks and/or transformer layers that are organized according to, for example, the hyperprior architecture of FIG. 4 and/or the scale-space flow (SSF) architecture of FIG. 6B described below. For example, the four convolutional networks ga , gs, ha , and hs that are depicted in FIG. 4 can instead be provided as a corresponding four transformer neural networks, as will be explained in greater depth below.

[0113] In some examples, one or more transformer-based neural networks described herein can be trained using a loss function that is based at least in part on rate distortion. Distortion may be determined as the mean square error (MSE) between an original image (e.g., an image that would be provided as input to an encoder sub-network) and a decompressed/decoded image (e.g., the image that is reconstructed by a decoder sub-network). In some examples, a loss function used in training a transformer-based media coding neural network can be based on a trade-off between distortion and rate with a Lagrange multiplier. One example of such a rate-distortion loss function is L = D + λ·R, where D represents distortion, R represents rate, and different λ values represent models trained for different bitrates and/or peak-signal-to-noise ratios (PSNR).

[0115] … a backpropagation training process can be used to adjust weights (and in some cases other parameters, such as biases) of the nodes of the neural network, e.g., an encoder and/or decoder sub-network, such as those depicted in FIGS. 5A and 5B, respectively). Backpropagation includes a forward pass, a loss function, a backward pass, and a weight update. In some examples, the loss function can include the rate-distortion-based loss function described above. The forward pass, loss function, backward pass, and parameter update can be performed for one training iteration. The process is repeated for a certain number of iterations for each set of training data until the weights of the parameters of the encoder or decoder sub-network are accurately tuned
.
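
For the dot joiners: "self-attention locally within window partitions" just means chopping the patch grid into small windows and computing attention inside each one, Swin-style, instead of across the whole image. Below is a toy numpy sketch of the partitioning step alone, an illustration of the general technique rather than the patent's implementation:

```python
import numpy as np

def window_partition(patches, win):
    """Split an (H, W, C) grid of patch features into non-overlapping
    win x win windows; self-attention is then computed inside each
    window rather than across the whole grid. Illustration only."""
    H, W, C = patches.shape
    assert H % win == 0 and W % win == 0
    x = patches.reshape(H // win, win, W // win, win, C)
    x = x.transpose(0, 2, 1, 3, 4)            # gather each window's rows
    return x.reshape(-1, win * win, C)        # (num_windows, tokens, C)

patches = np.random.randn(8, 8, 32)           # 8x8 patch grid, 32-dim features
windows = window_partition(patches, win=4)
print(windows.shape)                          # (4, 16, 32): 4 windows, 16 tokens each

# A "shifted window" pass rolls the grid by half a window before
# partitioning, so information can cross window boundaries:
shifted = np.roll(patches, shift=(-2, -2), axis=(0, 1))
```

Since attention cost is quadratic in token count, four windows of 16 tokens cost far less than one block of 64, which is the whole point of attending locally.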


So, when Akida 2 with ViT and TeNNs hits the streets, Qualcomm may need to review their isolationist policy.
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Diogenese

Top 20
Small power footprint is a necessity moving forward....

"Swart tells me that the Snapdragon AR2 Gen 1 chipset was designed for devices that will send only around 1 Watt of power or less to the CPU.

Compare that to today’s smartphones. The Snapdragon 8 Gen 2 chipset can draw almost 15 Watts of power. Mobile chipsets in laptops and tablets can draw 25 Watts or more. An Intel i9 processor draws 45 Watts. Qualcomm started with the idea that wearable computers will only consume 1 Watt of power for processing, and maybe everything on board."

"These glasses will sense and interact with your environment. They will have six degrees of freedom in detecting movement, so they will be able to sense if you move side to side, lean over, or rotate your head. They will have a variety of sensors, cameras, speakers, and other ways of communicating with you and the world around you."

Sounds like a recipe for mal de mer.
 
  • Like
  • Haha
Reactions: 3 users