BRN Discussion Ongoing

Just browsing the BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough", but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?


Great find.

Regards
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Diogenese

Top 20
I know that, but if MERC stayed with the Akida chip rather than using Akida IP, I'm sure BRN would oblige.
I think any mass order for SoC would be referred to MegaChips.
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Murphy

Life is not a dress rehearsal!
I may have missed it earlier, Diogenese, but what exactly is 'taping-out'?

If you don't have dreams, you can't have dreams come true!
 
  • Like
Reactions: 3 users

jk6199

Regular
I'm just back from a couple of days in Melbourne and was sitting down trying to work out IP commissions.

Some have posted $1 here and 80 cents there for each unit sold as BRN's profit take. Me thinking, how do we make millions from this?

Sitting in a big shopping complex, I watched people lining up and ordering food from a couple of big multinational takeaway food businesses. Then it clicked a lot more.

Every Mercedes car where I ask, "Hey Mercedes, where's the nearest public toilet?" when I get caught short travelling. Multiply this by every Mercedes in the future, plus 4 to 5 more vehicle companies.

The use of Nanose, where it can identify & point out the secret farter in the elevator. Multiply this by every elevator?

That pesky drone that catches me in my own back yard trying to be discreet wearing my mankini. Multiply those drones?

Every Christmas party where, for some reason, my front door key can't fit in the lock to let me in? Luckily my smart doorbell camera recognises me and lets me in (unless the better half has programmed it to recognise me as less than fit for entry). Multiply that by every other Christmas party.

The list goes on. It's not always the big ticket items that make the money, but the accumulation of everyday life.

So, to all you out there that occasionally get caught short driving, with bad wind at the least opportune times, dressed in a mankini while walking home drunk from a party, welcome to my life. Not quite the secret sauce we talk about, but sauce anyway! ;)
 
  • Like
  • Haha
  • Thinking
Reactions: 27 users
I may have missed it earlier, Diogenese, but what exactly is 'taping-out'?

If you don't have dreams, you can't have dreams come true!
“Tape-out” = create

“Tapping-out” was never said
 
  • Like
Reactions: 1 users

Salad1

Emerged
We need to stop bashing everyone who has a different opinion though.
It's cringey and embarrassing.

Everyone is entitled to their opinion, as long as they are not bashing the company and management, which was happening at HC.

But genuine concerns shouldn't be discouraged, otherwise it'd be an echo chamber. People literally have their savings on the line; my motto is don't put in more than you can afford to lose. To say that shareholders shouldn't voice their opinions isn't something I agree with.

Downrampers need to be dealt with, no doubt.
But just because someone says he's concerned about recent market action isn't a solid basis to prove he's a downramper. We all go through such stages of the psychological roller coaster. Someone who has been investing for 20 years would take it a bit better than someone who started investing 2 years ago.

This is a forum and there'll be different opinions. The ignore function is handy, and I recommend it to anyone who doesn't want to hear a particular poster's opinion.
Spot on.
 
  • Like
  • Fire
Reactions: 7 users

HarryCool1

Regular
We need to stop bashing everyone who has a different opinion though.
It's cringey and embarrassing.

Everyone is entitled to their opinion, as long as they are not bashing the company and management, which was happening at HC.

But genuine concerns shouldn't be discouraged, otherwise it'd be an echo chamber. People literally have their savings on the line; my motto is don't put in more than you can afford to lose. To say that shareholders shouldn't voice their opinions isn't something I agree with.

Downrampers need to be dealt with, no doubt.
But just because someone says he's concerned about recent market action isn't a solid basis to prove he's a downramper. We all go through such stages of the psychological roller coaster. Someone who has been investing for 20 years would take it a bit better than someone who started investing 2 years ago.

This is a forum and there'll be different opinions. The ignore function is handy, and I recommend it to anyone who doesn't want to hear a particular poster's opinion.
 
  • Haha
  • Like
  • Love
Reactions: 9 users

Diogenese

Top 20
I may have missed it earlier, Diogenese, but what exactly is 'taping-out'?

If you don't have dreams, you can't have dreams come true!
In the olden days, ICs were made using photomasks to define the patterns of each layer of the silicon (doped to be semiconductive, positive, negative, or insulative). The wafer was coated with photoresist, light was shone through the mask to harden parts of the resist, the unhardened photoresist was removed, and then an acid etch removed the unwanted silicon, leaving the bits under the hardened photoresist.

Edit: Rinse and repeat.

The early masks were made by using black tape on glass slides.

Taping out was the process of forming the masks.

Nowadays, the patterns of the layers are encoded in digital files, and they use short wavelength UV or X-rays to cure the photoresist.
 
  • Like
  • Love
  • Fire
Reactions: 36 users

Murphy

Life is not a dress rehearsal!
In the olden days, ICs were made using photomasks to define the patterns of each layer of the silicon (doped to be semiconductive, positive, negative, or insulative). The wafer was coated with photoresist, light was shone through the mask to harden parts of the resist, the unhardened photoresist was removed, and then an acid etch removed the unwanted silicon, leaving the bits under the hardened photoresist.

The early masks were made by using black tape on glass slides.

Taping out was the process of forming the masks.

Nowadays, the patterns of the layers are encoded in digital files, and they use short wavelength UV or X-rays to cure the photoresist.
Thank you sir.

If you don't have dreams, you can't have dreams come true!
 
  • Like
  • Love
Reactions: 7 users

GrandRhino

Founding Member
Just browsing the BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough", but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?


Hey @TechGirl, good find!
It's not password protected for me; maybe they just unlocked it?
 
  • Love
  • Like
  • Fire
Reactions: 13 users

DK6161

Regular
Looking at the increasing Buy and Sell numbers, I can't help but think that something big is about to happen.
Definitely a lot of traders queueing up.
Either a huge announcement is coming (maybe with the next 4C) or another meme is coming from me 🤣 😅.
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

BaconLover

Founding Member
  • Like
  • Haha
  • Love
Reactions: 17 users

GrandRhino

Founding Member
Just browsing the BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough", but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?


4 Bits Are Enough


Peter AJ van der Made
Traditional convolutional neural networks (CNNs) use 32-bit floating point parameters and activations. They require extensive computing and memory resources. Early convolutional neural networks such as AlexNet had 62 million parameters.
Over time, CNNs have increased in size and capability. GPT-3 is a transformer network with 175 billion parameters. The compute used in the largest AI training runs has increased exponentially, with an average doubling period of 3.4 months. Millions or billions of Multiply and Accumulate (MAC) operations must be executed for each inference. These operations are performed in batches of data on large servers with stacks of Graphics Processing Units (GPUs) or on costly cloud services, and these requirements keep accelerating.
At the other end, the increasing popularity of Deep Learning networks in small electronic devices demands energy-efficient, low-latency implementations of, in some cases, similar models. Deep Learning networks are found in smartphones, industrial IoT, home appliances, and security devices. Many of these devices are subject to stringent power and security requirements. Security issues can be mitigated by eliminating the uploading of raw data to the internet and performing all or most of the processing on the device itself. However, given the constraints at the edge, models running on these devices must be much more compact in every dimension, without compromising accuracy.
A new architectural approach, such as event-based processing with at-memory compute, is fundamental to addressing the efficiency challenge. Such approaches draw inspiration from neuromorphic principles, mimicking the brain to minimize operations and hence energy consumption. However, energy efficiency is the cumulative effect not just of the architecture, but also of model size, including the bit-width of weights and activation parameters. In particular, support for 32-bit floating point requires complex, large-footprint hardware. Reducing the size of these parameters and weights can provide a substantial benefit in performance and in the amount of hardware needed to compute, but this must be done judiciously and innovatively to keep the outcomes and accuracy close to those of the larger models. Through the process of quantization, activation parameters and weights can be converted to low bit-width values. Several sources have reported that lower-precision computation can provide similar classification accuracy at lower power consumption and better latency. This enables smaller-footprint hardware implementations, which reduces development, silicon, and packaging costs, enabling on-device processing in handheld, portable, and edge devices.
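To put the footprint point in numbers: moving from 32-bit floating point to 4-bit parameters shrinks storage by a factor of eight. A minimal sketch of that arithmetic, using the AlexNet and GPT-3 parameter counts quoted above (the helper below is purely illustrative, not part of any BrainChip tooling):

```python
# Illustrative arithmetic only: parameter storage at different bit-widths.
def model_size_mb(num_params: float, bits_per_param: int) -> float:
    """Parameter storage in megabytes for a given bit-width."""
    return num_params * bits_per_param / 8 / 1e6

for name, params in [("AlexNet", 62e6), ("GPT-3", 175e9)]:
    fp32_mb = model_size_mb(params, 32)
    int4_mb = model_size_mb(params, 4)
    print(f"{name}: {fp32_mb:,.0f} MB at 32-bit vs {int4_mb:,.0f} MB at 4-bit")
```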

To make the development process easier, Brainchip has developed the MetaTF™ software, which integrates with TensorFlow™ (and other edge AI development flows) and includes APIs for 4-bit processing and quantization functionality to enable retraining and optimization.
Developers can therefore seamlessly build and optimize for the Akida Neural Processor and benefit from executing neural networks entirely on-chip, efficiently and with low latency.
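For a feel of what that flow looks like in code, here is a minimal sketch patterned on the quantize-and-convert examples in BrainChip's public MetaTF documentation (the cnn2snn package). The exact function names, arguments, and model compatibility rules are assumptions here and may differ between MetaTF versions:

```python
# Sketch of a MetaTF-style flow: quantize a Keras CNN to 4 bits and convert it
# for the Akida neural processor. Names follow BrainChip's public cnn2snn
# examples but should be checked against the MetaTF docs for your version.
from tensorflow import keras
from cnn2snn import quantize, convert  # BrainChip's MetaTF conversion package

model = keras.applications.MobileNet(weights=None)  # stand-in for a compatible CNN

# 8-bit weights for the first layer, 4-bit weights and activations elsewhere,
# mirroring the a/b/c quantization scheme described alongside Table 1 below.
model_q = quantize(model,
                   input_weight_quantization=8,
                   weight_quantization=4,
                   activ_quantization=4)

# (Optionally fine-tune model_q here to recover any accuracy lost to quantization.)

akida_model = convert(model_q)  # the network now executes entirely on Akida
```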
Quantization is the process of mapping continuous infinite values to discrete finite values, or, in the case of modern AI, mapping larger floating-point values onto a small discrete set of numbers. Quantization yields an efficient representation, manipulation, and communication of numeric values in Machine Learning (ML) applications. In 4-bit quantization, 32-bit floating-point numbers are mapped onto a discrete set of 16 levels (0 to 15), minimizing the number of bits required while maintaining classification accuracy. Remarkable performance is achieved in 4-bit quantized models for diverse tasks such as object classification, face recognition, segmentation, object detection, and keyword recognition.
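As a concrete illustration of that mapping, below is a minimal, self-contained sketch of uniform 4-bit quantization onto the 16 levels (0 to 15). This is the generic textbook scheme, not necessarily the exact method MetaTF applies:

```python
import numpy as np

def quantize_4bit(x: np.ndarray):
    """Uniformly map float values onto the 16 integer levels 0..15."""
    scale = (x.max() - x.min()) / 15.0   # step size between adjacent levels
    zero_point = x.min()
    q = np.round((x - zero_point) / scale).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    """Recover approximate float values from the 4-bit codes."""
    return q.astype(np.float32) * scale + zero_point

weights = np.random.randn(1000).astype(np.float32)   # stand-in for fp32 weights
q, scale, zp = quantize_4bit(weights)
max_err = np.abs(weights - dequantize(q, scale, zp)).max()  # bounded by scale / 2
print(f"levels used: {q.min()}..{q.max()}, worst-case error: {max_err:.4f}")
```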
The Brainchip Akida neural processor performs all the operations needed to execute a low bit-width Convolutional Neural Network, thereby offloading the entire task from the central processor or microcontroller. The design is optimized for high-performance Machine Learning applications, delivering low power consumption while performing thousands of operations simultaneously on each phase of the 300 MHz clock cycle. A unique feature of the Akida neural processor is the ability to learn in real time, allowing products to be conveniently configured in the field without cloud access. The technology is available as a chip or as a small IP block to integrate into an ASIC.
Table 1 provides the accuracy of several 4-bit CNN networks, comparable to floating-point accuracies. For example, AkidaNet is a version of MobileNet optimized for 4-bit classification, and many other example networks can be downloaded from the Brainchip website. In the quantization column below, entries of the form a/b/c indicate: a = weight bits for the first layer, b = weight bits for subsequent layers, and c = output activation map bits for every layer.

Table 1. Accuracy of inference.
AkidaNet is a feed-forward network optimized to work with 4-bit weights and activations. AkidaNet 0.5 has half the parameters of AkidaNet 1.0. The Akida hardware supports Yolo, DeviceNet, VGG, and other feed-forward networks. Recurrent networks and transformer networks are supported with minimal CPU participation. An example recurrent network implemented on the AKD1000 chip required just 3% CPU participation with 97% of the network running on Akida.
4-bit network resolution is not unique to Brainchip, but Brainchip pioneered this Machine Learning technology as early as 2015 and, through multiple silicon implementations, tested and delivered a commercial offering to the market. Others, such as IBM, Stanford University, and MIT, have recently published papers on its advantages.

Akida is based on a neuromorphic, event-based, fully digital design with additional convolutional features. The combination of spiking, event-based neurons and convolutional functions is unique. It offers many advantages, including on-chip learning, small size, sparsity, and power consumption in the microwatt/milliwatt range. The underlying technology is not the usual matrix multiplier, but up to a million digital neurons with either 1-, 2-, or 4-bit synapses. Akida's extremely efficient event-based neural processor IP is commercially available as a device (AKD1000) and as an IP offering that can be integrated into partner Systems on Chip (SoCs). The hardware can be configured through the MetaTF software, which integrates with TensorFlow layers and scales to as many as 5 million filters, simplifying model development, tuning, and optimization through popular development platforms like TensorFlow/Keras and Edge Impulse. A fast-growing number of models are available through the Akida model zoo and the Brainchip ecosystem.
To dive a little deeper into the value of 4-bit: in its 2020 NeurIPS paper, IBM described the various pieces that are already present and how they come together, demonstrating the readiness and the benefit through several experiments simulating 4-bit training for a variety of deep-learning models in computer vision, speech, and natural language processing. The results show a minimal loss of accuracy in the models' overall performance compared with 16-bit deep learning, while being more than seven times faster and seven times more energy efficient. Boris Murmann, a professor at Stanford who was not involved in the research, calls the results exciting. "This advancement opens the door for training in resource-constrained environments," he says. It would not necessarily make new applications possible, but it would make existing ones faster and less battery-draining "by a good margin."
With the focus on edge AI solutions that are extremely energy-sensitive and thermally constrained and require efficient real-time response, this advantage of 4-bit weights and activations is compelling and shows a strong trend in the coming years. Brainchip has pioneered this path since 2016 and invested in a simplified flow and ecosystem to enable developers. BrainChip’s MetaTF compilation and tooling are integrated into TensorFlow™ and Edge Impulse. TensorFlow/Keras is a familiar environment to most data scientists, while Edge Impulse is a strong emerging platform for Edge AI and TinyML. MetaTF, many application examples, and source code are available free from the Brainchip website: https://doc.brainchipinc.com/examples/index.html
Brainchip continues to invest in advanced machine-learning technologies to further its market leadership.
Source: IBM NeurIPS proceedings 2020: https://proceedings.neurips.cc/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf
Source: MIT Technology Review. https://www.technologyreview.com/2020/12/11/1014102/ai-trains-on-4-bit-computers/
 
  • Like
  • Fire
  • Love
Reactions: 53 users

Damo4

Regular
Just browsing the BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough", but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?


Not sure if it was only temporarily protected, but it's open now!

https://brainchip.com/4-bits-are-enough/
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Tezza

Regular
A question from the not too bright! If and when a product is put on the shelf with Akida in it, let's say a Samsung fridge, wouldn't it stand to reason that competitors would grab one, pull it down and see what makes it tick? If the answer to this is yes, then wouldn't the NDA become obsolete, and said company could shout from the rooftops, "We are using Akida!"
 
  • Like
  • Fire
Reactions: 4 users

equanimous

Norse clairvoyant shapeshifter goddess
  • Haha
  • Like
  • Fire
Reactions: 22 users

AARONASX

Holding onto what I've got
A question from the not too bright! If and when a product is put on the shelf with Akida in it, let's say a Samsung fridge, wouldn't it stand to reason that competitors would grab one, pull it down and see what makes it tick? If the answer to this is yes, then wouldn't the NDA become obsolete, and said company could shout from the rooftops, "We are using Akida!"
I think, from memory, someone has posted before that it's designed in such a way that it cannot be reverse engineered... however I'm not 100% sure.
 
  • Like
Reactions: 8 users

stuart888

Regular
Just browsing the BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough", but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?


Wow, @TechGirl has magic: once she spoke "Open", it took just 59 minutes.

My total guess is @Fact Finder shot off an email and that is all it took.
 
  • Like
  • Love
Reactions: 9 users

equanimous

Norse clairvoyant shapeshifter goddess
Looking at the increasing Buy and Sell numbers, I can't help but think that something big is about to happen.
Definitely a lot of traders queueing up.
Either a huge announcement is coming (maybe with the next 4C) or another meme is coming from me 🤣 😅.
I'm tempted to buy more.
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

SERA2g

Founding Member
Just browsing the BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough", but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?


This is no longer password protected and the date has been updated to the 10th.

 
  • Like
  • Fire
  • Love
Reactions: 19 users