BRN Discussion Ongoing

Makeme 2020

Regular
MAYBE BRN SHOULD UPDATE THEIR LOYAL SHAREHOLDERS.
IS THAT TOO MUCH TO ASK????
 
  • Like
Reactions: 9 users
I work in an industry (automotive) in which we are really starting to see AI become the new buzzword. The last 6 months have delivered a very rapid uptake by manufacturers, particularly in the camera and vision segments, and I can foresee this continuing to penetrate all parts of our industry. So far much of the AI has been dumb AI, or very limited in its abilities. Regardless, the market is eating it up and is hungry for more.

In my opinion the rapid increase in momentum is beginning, and I believe that we are entering that phase of rapid acceleration in uptake that often accompanies new and revolutionary technologies. If my industry is anything to go by, and as market awareness continues to increase and more and more companies realise what BrainChip's AKIDA can deliver beyond traditional AI, I can only see this acceleration continuing. In my opinion, we are fast approaching a very exciting period of growth.
 
  • Like
  • Love
  • Fire
Reactions: 40 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
  • Love
Reactions: 15 users

gex

Regular
[image attachment: 1664440748096.png]
 
  • Like
  • Haha
  • Fire
Reactions: 11 users

Makeme 2020

Regular

equanimous

Norse clairvoyant shapeshifter goddess
Hello Eq.
To be honest, things are not that easy in Germany in these times.
The bright spots are the many ambitious companies that are developing good technology (the market has yet to realize that).
I saw an EQS the other day, standing at a traffic light; it looked like it wanted to communicate about/via Akida.
There are highways where you could test it without a speed limit - if there are no road works or traffic jams ;)
Regards
cassip
I send my regards to Germany during these times.

Regarding ambitious companies and technology do you know much about this I saw recently

 
  • Like
  • Fire
Reactions: 6 users

equanimous

Norse clairvoyant shapeshifter goddess
I send my regards to Germany during these times.

Regarding ambitious companies and technology do you know much about this I saw recently


@Diogenese Always good to hear your thoughts on engineering.

What are your thoughts on this?
 
  • Like
Reactions: 3 users

krugerrands

Regular
  • Like
Reactions: 1 users

Makeme 2020

Regular
SO WHY ISN'T BRN KEEPING SHAREHOLDERS UPDATED???
 

robsmark

Regular
I think all of those who are dissatisfied with how the company is managed should put their concerns to the company in writing.

Failing to receive a satisfactory response should then be a signal to each individual to reassess their plan regarding Brainchip.

It is clear we have different views here however arguing with each other is pointless as none of us have the capacity to change the present position.

My opinion only DYOR
FF

AKIDA BALLISTA
That’s a fair point FF, though everyone here has researched enough to know the wealth-creation potential of Brainchip and, presumably like myself, wouldn’t want to risk selling and missing out.

That doesn’t excuse the diabolical transparency of the company though, NDAs or not. At this stage, I’d be over the moon with a simple “we have XYZ NDAs in place; obviously we cannot discuss the details contained in these, but we are pleased with the progress”. At least that would signify some progress. LDN was able to tell us this information, so why has it suddenly become insider knowledge? Instead we get nothing. Don’t get me wrong, Tony is great and always makes himself available, but he doesn’t discuss anything of substance outside of the “we are doing everything we can” spiel.

It’s high time the company started to take the SP a little more seriously, as quite frankly they’re reckless for a company of this value. They aren’t a start-up anymore, and this “the SP will do what the SP will do” attitude just doesn’t fly anymore. To not release any information about anything at this stage of commercialisation is quite preposterous; we, the shareholders, would like to know how we’re travelling.

The podcasts aren’t what I expect anymore either. If I wanted general industry information I’d do my own reading. I want to know about the company of which I (however small) am effectively a part owner. These monthly podcasts might be better presented as an online question-and-answer session with the company. They know very well what they can and cannot discuss.

At the risk of speaking for the majority, we believe we know what Akida is; now we’d like to know what Akida has achieved, is achieving, or is likely to achieve. Surely after two years some of this information must be market-ready.

I agree that the arguing is pointless. We’re all in the same boat, and I get a great deal of pleasure from reading what you and the rest of the contributors post here. I just think that it’s time we had a substantial update and more regular news flow. I guess I’m trying to explain that I understand why some here (certainly including myself at times) get frustrated.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 34 users

equanimous

Norse clairvoyant shapeshifter goddess
  • Like
Reactions: 10 users

cassip

Regular
I send my regards to Germany during these times.

Regarding ambitious companies and technology do you know much about this I saw recently


Many greetings from Germany to you.
Yes, I read about it in connection with that start up company in Munich. Very interesting!
 
  • Like
Reactions: 4 users

rgupta

Regular
We were told 2 years ago by Louis DiNardo, the former CEO of BRN, that we had 100 NDAs.............
No one denies that fact. I assume you are getting confused between EAP partners and NDAs. Early access partners are far fewer than the NDAs; an NDA is an instrument to protect the identity of EAP partners. It may be that each EAP signed 8-10 NDAs, both to protect their own identity and to safeguard BRN's interests.
My opinion only.
 
  • Like
  • Love
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
BRN Are saying they need to get the WORD OUT ABOUT AKIDA But everything is NDA TOP SECRET................Please explain................

I predict that once one cat gets let out of the box, it will be impossible to hold back the tide of other cats trying to escape their boxes too. It'll be cataclysmic!

[animated GIF: jump-cat-infinity-cat.gif]
 
  • Like
  • Haha
  • Love
Reactions: 27 users

Makeme 2020

Regular
  • Like
Reactions: 2 users

krugerrands

Regular
Obviously there is potentially a "guilt by association" in being within the SiFive ecosystem, given the positive comments from SiFive re Akida and the inference in the original press release saying that:

"Employing Akida, BrainChip’s specialized, differentiated AI engine, with high-performance RISC-V processors such as the SiFive Intelligence Series is a natural choice for companies looking to seamlessly integrate an optimized processor to dedicated ML accelerators that are a must for the demanding requirements of edge AI computing," said Chris Jones, vice president, products at SiFive.

I agree with you that there is definitely something there to be interested in given the X280 is part of the "Intelligence Series" but unfortunately doesn't confirm that other companies using the X280 have to have Akida in it.

It would be nice to see something a little more concrete, though, rather than a by-default assumption.

Maybe SiFive / BRN being partners could confirm that Akida is in the X280?

A bit more background from a recent article covering SiFive's new coprocessor interface, Google and NASA.


SiFive Introduces A New Coprocessor Interface, Targets Custom Accelerators

September 20, 2022, by David Schor. Tags: AI, AI Hardware Summit, Linley Processor Conference, neural processors, SiFive, Vector Coprocessor Interface Extension (VCIX)
[image: sifive-x280-header.png]

Last year SiFive introduced the Intelligence X280 processor, part of a new category of RISC-V processors for SiFive that aims at assisting AI and machine learning workloads.
Launched under the new family of processors called SiFive Intelligence, the X280 is the first core to cater to AI acceleration. At a high level, the X280 builds on top of their silicon-proven U7-series high-performance (Linux-capable) core. SiFive's Intelligence X280 is somewhat of a unique processor for SiFive. Targeting ML workloads, its main features are the new RISC-V Vector (RVV) Extension as well as the SiFive Intelligence Extensions – the company's own RISC-V custom extensions for handling ML workloads, which include fixed-point data types from 8 to 64 bits as well as 16- to 64-bit floating point and the BFloat16 data type. On the RVV extension side, the X280 supports 512-bit vector register lengths, allowing variable-length operations up to 512 bits.
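The BFloat16 format mentioned above is essentially a float32 with the low 16 mantissa bits dropped, which is why it is cheap for vector hardware to support. A minimal portable-C sketch of the conversion (this is an illustration of the format, not SiFive's implementation):

```c
#include <stdint.h>
#include <string.h>

/* Convert an IEEE-754 float32 to bfloat16 by keeping the top 16 bits
 * (sign, 8-bit exponent, 7-bit mantissa), with round-to-nearest-even. */
static uint16_t f32_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u); /* ties to even */
    return (uint16_t)((bits + rounding) >> 16);
}

/* Widen bfloat16 back to float32: zero-fill the low 16 mantissa bits. */
static float bf16_to_f32(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

Because bf16 keeps the full 8-bit exponent, it preserves float32's dynamic range while halving storage and bandwidth, trading away mantissa precision instead; that trade-off is what makes it attractive for ML datapaths.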


As we mentioned earlier, the X280 builds on SiFive’s successful U7-series core. This is a 64-bit RISC-V core supporting the RV64GCV ISA and extensions. It is an 8-stage dual-issue in-order pipeline. Each core features 32-KiB private L1 data and instruction caches as well as a private L2 cache.



The RISC-V Vector extension is a variable length instruction set. For the X280, the core utilizes a 256b pipeline. In other words, both the vector ALU and load/store architecture data width is 256-bit, doing two operations per 512-bit register data. In addition to the vector extension, SiFive added the “Intelligence Extensions” part of the RISC-V custom extensions ISA support. SiFive didn’t go into any details as to what those extensions entail but did note that compared to the standard RISC-V Vector ISA, the Intelligence Extensions provide a 4-6x performance improvement in int8 (matmul) and bf16 operations.
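To make the "variable length" idea concrete, here is a strip-mined loop in the style the RVV extension encourages, sketched in portable C. The `VLMAX` constant and the inner loop stand in for the `vsetvli` instruction and a single vector operation; this illustrates the programming model only and is not actual RVV code:

```c
#include <stddef.h>

/* Stand-in for the hardware's maximum vector length in elements,
 * e.g. a 512-bit register holds 16 x 32-bit floats. */
enum { VLMAX = 16 };

/* y = a*x + y, processed a "vector" at a time. Each trip through the
 * outer loop models one vsetvli: request up to VLMAX elements, get back
 * the legal vector length vl, and do that much work. */
static void saxpy(size_t n, float a, const float *x, float *y) {
    while (n > 0) {
        size_t vl = n < VLMAX ? n : VLMAX;  /* vl = vsetvli(n) */
        for (size_t i = 0; i < vl; i++)     /* one vector instruction's work */
            y[i] = a * x[i] + y[i];
        x += vl; y += vl; n -= vl;
    }
}
```

Because the loop asks the "hardware" for a legal vector length on every trip, the same code runs correctly whatever the actual datapath width is (the X280's 256-bit pipeline simply takes two beats per 512-bit register), and the tail iteration just receives a shorter vl.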

One of the interesting things that SiFive has done is add the capability for automatic translation of Arm Neon vector code into RISC-V Vector code directly in their compiler. And while it may not produce the most optimal code, it's a way to quickly and accurately move Arm Neon code directly over to SiFive's RISC-V code. At last year's Linley Processor Conference, Chris Lattner, SiFive's then President of the Engineering & Product group, noted that SiFive itself has been using this feature to port a large number of software packages.



Each of the X280 cores goes into an X280 Core Complex which supports up to a quad-core coherent multi-core cluster configuration. The core cluster can be fully scaled up in a configuration that consists of up to 4 clusters for a total of 16 cores. A system-level L3 cache made of 1 MiB banks (up to 8 MiB) is also supported. The system supports a rich number of ports for I/O and communication with other important sub-system components via the system matrix.

Vector Coprocessor Interface Extension (VCIX)

At the 2022 AI Hardware Summit, Krste Asanovic SiFive Co-Founder and Chief Architect introduced a new Vector Coprocessor Interface Extension (VCIX).



As customer evaluation of the X280 got underway, SiFive says it started noticing new potential usage trends for the core. One such usage is not as the primary ML accelerator, but rather as a snappy side coprocessor/control processor with ML acceleration functionality. In other words, SiFive says it has noticed that companies were considering the X280 as a replacement coprocessor and control processor for their main SoC. Instead of rolling out their own sequencers and other controllers, the X280 proved a good potential replacement.

To assist customers with such applications, SiFive developed the new Vector Coprocessor Interface Extension (VCIX, pronounced “Vee-Six”). VCIX allows for tight coupling between the customer’s SoC/accelerator and the X280. For example, consider a hardware AI startup with a novel way of processing neural networks or one that has designed a very large computational engine. Instead of designing a custom sequencer or control unit, they can simply use the X280 as a drop-in replacement. With VCIX, they are given direct connections to the X280. The interface includes direct access into the vector unit and memory units as well as the instruction stream, allowing an external circuit to utilize the vector pipeline as well as directly access the caches and vector register file.

The capabilities of essentially modifying the X280 core are far beyond anything you can get from someone like Arm. In theory, you could have an accelerator processing its own custom instructions by doing operations on its own side and sending various tasks to the X280 (as a standard RISC-V operation) or directly execute various operations on the X280 vector unit by going directly to that unit. Alternatively, the VCIX interface can work backward by allowing for custom execution engines to be connected to X280 for various custom applications (e.g., FFTs, image signal processing, Matrix operations). That engine would then operate as if they are part of the X280, operating in and out of the X280’s own vector register file. In other words, VCIX essentially allows you to much better customize the X280 core with custom instructions and custom operations on top of a fully working RISC-V core capable of booting full Linux and supporting virtualization.



The VCIX is a high-performance direct-coupling interface to the X280 and its instruction stream. To that end, Asanovic noted that on the X280 with the new VCIX interface, the X280 is capable of sending 1,024 bits over onto the accelerator/external component each cycle and retrieving 512 bits per cycle, every cycle sustained over the VCIX interface.

SiFive says that utilizing their Vector Coprocessor Interface Extension, various accesses and operations from outside can now be done in as low as single-digit cycles or 10s of cycles, instead of 100s of cycles from the normal cluster bus interfaces or memory mapped interfaces. Extremely low-cycle latency is important for developing computational circuits that are highly integrated with the X280.



Google Accelerators

Cliff Young, Google TPU Architect and MLPerf Co-Founder, was also part of the SiFive announcement. As we've seen from other Google accelerators, their hardware team always looks to eliminate redundant work by utilizing off-the-shelf solutions where designing them in-house wouldn't add any real value.



For their own TPU accelerators, beyond the inter-chip interconnect and their highly refined Matrix Multiply Unit (MXU), which utilizes a systolic array, much of everything else is rather generic and not particularly unique to their chip. Young noted that when they started 9 years ago, they essentially built much of this from scratch, saying, "scalar and vector technologies are relatively well-understood. Krste is one of the pioneers in the vector computing area and has built beautiful machines that way. But should Google duplicate what Krste has already been doing? Should we be reinventing the wheel along with the Matrix Multiply and the interconnect we already have? We'd be much happier if the answer was 'no', if we can focus on the stuff that we do great and can also reuse a general-purpose processor with a general-purpose software stack and integrate that into our future accelerators." Young added, "the promise of VCIX is to get our accelerators and our general-purpose cores closer together; not far apart across something like a PCIe interface with 1000s of cycles of delay, but right next to each other with just a few 100s of cycles through the on-chip path and down to 10s of cycles through direct vector register access."

The SiFive-Google partnership announcement is one of several public announcements that took place over the past year. Last year SiFive announced that AI chip startup Tenstorrent will also make use of the X280 processor in its next-generation AI training and inference processors. Earlier this month, NASA announced that it has selected SiFive’s X280 cores for its next-generation High-Performance Spaceflight Computing (HPSC) processor. HPSC will utilize an 8-core X280 cluster along with 4 additional SiFive RISC-V cores to “deliver 100x the computational capability of today’s space computers”.

At last count, it would have to be a custom chip integrating with the Akida neuron fabric through the AXI bus interface, as per the Akida 1000 reference chip.

This can be done using something like the X280 processor, but none of its embedded capabilities use Akida IP, AFAIK.
 
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 28 users

equanimous

Norse clairvoyant shapeshifter goddess
9 months on and still nothing.........
If you're not happy then consider options better suited to your situation.

I'm personally happy with BRN and what it has achieved in the last 12 months, and in this current world situation.

Are you overextended in BRN?

Reposting an article for reflection post June this year

Swelling losses haven't held back gains for BrainChip Holdings (ASX:BRN) shareholders since they're up 931% over 3 years​


 
Last edited:
  • Like
Reactions: 9 users

Diogenese

Top 20
I send my regards to Germany during these times.

Regarding ambitious companies and technology do you know much about this I saw recently


Particle physics is above my pay grade, but fascinating. I remember the original Pons and Fleischmann flash in the pan for cold fusion which turned out to be a fizzer.

One thing* which caught my attention was the graphite tile lining (6 minute mark), because I have TLG shares - not that they'll have much to spare after servicing the EV market.

The stellarator is like a 3D magnetic mobius strip.

Certainly fusion would be preferable to fission, if they can get it to work.

* Phew! That was lucky. I originally wrote "One think which caught my attention ..." and wouldn't @Bravo have given me whacko-the-diddle-oh then!
 
  • Like
  • Haha
Reactions: 10 users
True, but I checked the meaning of 'bean counter' (I even have friends who are accountants) and confirmed that it does not mean accountants, but has its derivation in those who are overly bureaucratic and penny-pinching.

An accountant can be a bean counter but so can anyone else even a florist.

Anyway despite my efforts I actually have upset an accountant.

My opinion only DYOR
FF

AKIDA BALLISTA
I'm pretty upset you are having a go at florists.

No. I'm really only joking 😆

SC
 
  • Haha
  • Like
Reactions: 8 users