BRN Discussion Ongoing

equanimous

Norse clairvoyant shapeshifter goddess
I think the issue here is how to benchmark new ways of doing AI. How do you benchmark one-shot learning? Or edge learning? Etc. I don't think we really know how to do that yet.
E.g.:
(image attachment)
 
Reactions: 28 users

Diogenese

Top 20
"A formal platform launch press release will take place on 7 March at 1:00am AEDT, 6 March at 6:00am US PST."

I wonder whether (hope that) this launch will include one or more of the partners involved in the design requests.........Mercedes perhaps????

Or Prophesee?

Reactions: 15 users

Baisyet

Regular
Has anyone seen the new BRN website? It looks stunning.
 
Reactions: 53 users
Well that was a nice day.

Just catching up.

Amazing what a sanctioned, acceptable, price-sensitive announcement can do 😂

Wondering about the 8-bit support and whether it ties in with the suggested industry standard some of the heavy hitters have put together via a white paper, or am I on the wrong track?

Would it not be easier & beneficial to have the lower-bit flexibility on a client-needs basis (still an Akida point of difference in the market) whilst also "toeing the line", so to speak, when it comes to what is being pushed as the standard to come?

Obviously, they will have their own interests in mind when submitting this sort of proposal as a standard; however, I guess if you want to be part of the club, you need to make it as compatible and interoperable as possible.

Apparently 8-bit precision benefits transformer networks.



NVIDIA, Arm, and Intel Publish FP8 Specification for Standardization as an Interchange Format for AI

By Shar Narasimhan

AI processing requires full-stack innovation across hardware and software platforms to address the growing computational demands of neural networks. A key area to drive efficiency is using lower precision number formats to improve computational efficiency, reduce memory usage, and optimize for interconnect bandwidth.

To realize these benefits, the industry has moved from 32-bit precisions to 16-bit, and now even 8-bit precision formats. Transformer networks, which are one of the most important innovations in AI, benefit from an 8-bit floating point precision in particular. We believe that having a common interchange format will enable rapid advancements and the interoperability of both hardware and software platforms to advance computing.
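As a rough back-of-the-envelope sketch (my own illustration, not from the article) of why fewer bits per value matters, here is the weight memory of a hypothetical 1-billion-parameter model at each precision:

```python
# Hypothetical illustration only: weight memory for a 1-billion-parameter model
# at different precisions (ignores activations, optimizer state and overheads).
params = 1_000_000_000

for fmt, bytes_per_value in [("FP32", 4), ("FP16", 2), ("FP8", 1)]:
    gigabytes = params * bytes_per_value / 1e9
    print(f"{fmt}: {gigabytes:.0f} GB of weights")

# FP32: 4 GB, FP16: 2 GB, FP8: 1 GB -- the same halving applies to the
# interconnect bandwidth needed to move those values between devices.
```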

NVIDIA, Arm, and Intel have jointly authored a whitepaper, FP8 Formats for Deep Learning, describing an 8-bit floating point (FP8) specification. It provides a common format that accelerates AI development by optimizing memory usage and works for both AI training and inference. This FP8 specification has two variants, E5M2 and E4M3.

This format is natively implemented in the NVIDIA Hopper architecture and has shown excellent results in initial testing. It will immediately benefit from the work being done by the broader ecosystem, including the AI frameworks, in implementing it for developers.

Compatibility and flexibility

FP8 minimizes deviations from existing IEEE 754 floating point formats with a good balance between hardware and software to leverage existing implementations, accelerate adoption, and improve developer productivity.

E5M2 uses five bits for the exponent and two bits for the mantissa and is a truncated IEEE FP16 format. In circumstances where more precision is required at the expense of some numerical range, the E4M3 format makes a few adjustments to extend the range representable with a four-bit exponent and a three-bit mantissa.
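To make those bit layouts concrete, here is a minimal sketch (my own illustration, assuming the standard exponent bias of 2^(e-1)-1, i.e. 15 for E5M2 and 7 for E4M3, and ignoring the NaN/Inf special-value conventions the whitepaper defines) that decodes an 8-bit pattern for either variant:

```python
def decode_fp8(bits, exp_bits, man_bits, bias):
    """Decode an 8-bit pattern from its sign / exponent / mantissa fields.

    Handles normal and subnormal values only; the NaN/Inf conventions
    defined in the FP8 whitepaper are deliberately ignored here.
    """
    sign = -1.0 if (bits >> 7) & 1 else 1.0
    exp_field = (bits >> man_bits) & ((1 << exp_bits) - 1)
    man_field = bits & ((1 << man_bits) - 1)
    if exp_field == 0:  # subnormal: no implicit leading 1
        return sign * (man_field / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + man_field / (1 << man_bits)) * 2.0 ** (exp_field - bias)

# E5M2: 5 exponent bits (bias 15), 2 mantissa bits -- a truncated FP16.
print(decode_fp8(0b0_10000_10, exp_bits=5, man_bits=2, bias=15))  # 3.0
# E4M3: 4 exponent bits (bias 7), 3 mantissa bits -- more precision, less range.
print(decode_fp8(0b0_1000_100, exp_bits=4, man_bits=3, bias=7))   # 3.0
```

The same eight-bit budget is simply split differently: E5M2 spends it on range, E4M3 on precision.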

The new format saves additional computational cycles since it uses just eight bits. It can be used for both AI training and inference without requiring any re-casting between precisions. Furthermore, by minimizing deviations from existing floating point formats, it enables the greatest latitude for future AI innovation while still adhering to current conventions.

High-accuracy training and inference

Testing the proposed FP8 format shows comparable accuracy to 16-bit precisions across a wide array of use cases, architectures, and networks. Results on transformers, computer vision, and GAN networks all show that FP8 training accuracy is similar to 16-bit precisions while delivering significant speedups. For more information about accuracy studies, see the FP8 Formats for Deep Learning whitepaper.

In MLPerf Inference v2.1, the AI industry’s leading benchmark, NVIDIA Hopper leveraged this new FP8 format to deliver a 4.5x speedup on the BERT high-accuracy model, gaining throughput without compromising on accuracy.

Moving towards standardization

NVIDIA, Arm, and Intel have published this specification in an open, license-free format to encourage broad industry adoption. They will also submit this proposal to IEEE.

By adopting an interchangeable format that maintains accuracy, AI models will operate consistently and performantly across all hardware platforms, and help advance the state of the art of AI.

Standards bodies and the industry as a whole are encouraged to build platforms that can efficiently adopt the new standard. This will help accelerate AI development and deployment by providing a universal, interchangeable precision.
 
Reactions: 33 users

Sirod69

bavarian girl ;-)
Has anyone seen the new BRN website? It looks stunning.
Yes, I looked at that today too! It looks great!
 
Reactions: 11 users

Boab

I wish I could paint like Vincent
Has anyone seen the new BRN website? It looks stunning.
The "learn more about Akida 2" section is password protected.
 
Reactions: 8 users
No.
THE AKIDA TECHNOLOGY FAMILY IS MADE UP OF:

A. AKD1000, rebranded from time to time as AKIDA 1.0

B. AKD1500, yet to be rebranded as AKIDA 1.5

C. The next-generation AKIDA, which was to be AKD2000, then was rebranded AKIDA 2.0; the company has since advised they are considering what it will officially be named.

In short, there are three chip designs.

On the original roadmap put out when Peter van der Made was Acting CEO there were AKD500, 1000 & 1500, after which came AKD2000, 2500, 3000, 3500, 4000, 4500 & 5000, and in the last podcast AKD10,000 was slated for 2030.

The AKD500 slated for 2023 has not yet appeared, but I suspect the Renesas chip that is built around two nodes may fill that position, even though it will likely carry a Renesas name. The AKD500 was talked about for use in white goods and other appliances.

My opinion only DYOR
FF

AKIDA BALLISTA
Akida 10 in 2030 may have to be called Rogue One!

SC
 
Reactions: 5 users

schuey

Regular
Why would they time this for US market open, when BRN is listed on the ASX?

Not sure I follow the significance?
More money in America
 
Reactions: 7 users

JB49

Regular
Has anyone seen new website of BRN it looks stunning.
Does it still say trusted by Valeo? I couldn't find it anymore
 
Reactions: 4 users

ndefries

Regular

The "learn more about Akida 2" section is password protected.
Has anyone tried to guess the password? It's not Akida2.
 
Reactions: 11 users
Does it still say trusted by Valeo? I couldn't find it anymore
Home page slider of partners.

Which could actually go a bit slower for mine.

Emotion3D, Ai Labs, MB, Prophesee, Valeo etc all there :)
 
Reactions: 17 users
Well, at least one thing will come out of this: we will not have to suffer the 'is this competition?' question where AKIDA 2000 is concerned, because with a five-year lead, and all that it can do about to be laid out in the press release, on the website and by Edge Impulse, there will be no grey areas for people to worry that someone has a similar product.

It will be like comparing a genuine Police Call Box with Dr. Who's Tardis.

The only similarity will be they are painted the same colour.

Like the Tardis, the minute you open the door to the AKIDA specs it will be glaringly obvious that you have walked into a science fiction future.

My opinion only DYOR
FF

AKIDA BALLISTA
Can't wait to meet Emily Pond at the AGM

SC
 
Reactions: 7 users

equanimous

Norse clairvoyant shapeshifter goddess
Yes, I saw that superficially some time ago, but it's still only a proposal from BrainChip's perspective. I can also design a test that shows that I'm better than you, and you can do the reverse 🙂
If you have been in this forum long enough, you would have seen that there has been a ton of independent benchmarking with Akida in it.

You're welcome to research this yourself.
 
Reactions: 3 users




Boab

I wish I could paint like Vincent
The "learn more about Akida 2" section is password protected.
I have gone back onto the site a couple of times and it's obvious they are still trying to smooth it out.
 
Reactions: 4 users