BRN Discussion Ongoing

JB49

Regular
All looking extremely positive moving forward :)


“The Akida Edge AI Box is ideally suited to provide the low latency and high throughput processing with ultra-low power consumption – a necessity for the next generation of smart Edge devices,” said Sean Hehir, CEO of BrainChip.
I guess it wasn't sold out as advertised on the website last week. It just wasn't for sale yet.
 
Reactions: 5 users
Pre-market is looking like the sellers are going to try to flood the market again

Need more buyers
Look behind your couch for some change and buy more shares
 
Reactions: 11 users

Esq.111

Fascinatingly Intuitive.
Also look under the coffee table ..... I often find a little cash there.





Esq.
 
Reactions: 31 users

buena suerte :-)

BOB Bank of Brainchip
I guess it wasn't sold out as advertised on the website last week. It just wasn't for sale yet.
Looks that way.. I'm thinking TEAM BRN knew they were going to be available very soon, and "Sold out" looked much more positive than
"Not available!" But now they most certainly are....... 🎈🎇 FOR SALE!! 🎇 🎈 :) :) :)

Let's go BRN
 
Reactions: 21 users

7für7

Top 20
Pre-market is looking like the sellers are going to try to flood the market again

Need more buyers
Look behind your couch for some change and buy more shares
I said recently……. COCKROACHES!!!!!!
 
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers ,

Now I don't profess to be a charting whizz.....

The three-day chart on the 1, 3, 5 and 15 min durations, and particularly the one-hour, looks very promising on volume flow for a jolly good surge around midday today.

:whistle:




Not Financial Advice.

Regards,
Esq.
 
Reactions: 25 users

White Horse

Regular
Obviously I'm not Diogenese 🙄..

But my recollection is that Peter said AKIDA could go down to a 7nm process node.

But I'm guessing that's because, that was the smallest viable process node, at that time?

I'm also guessing, that the "neuron fabric" whatever that actually is, would be the limiting factor..

But since it's a "digital" design, there should be no limits? 🤔...

The new smaller process nodes, are completely different "processes" too..
Hi DB,
Aren't Socionext already using our IP at 7nm?
I think there was a lot of press about it around CES 2023.
 
Reactions: 6 users

Learning

Learning to the Top 🕵‍♂️
Thanks for posting this @Learning. Why would BrainChip bother to create a post for another company's PMU on their LinkedIn account? A performance monitoring unit, of all things. Is that something that Akida would be used for? Or is there more to it?
Has BrainChip been helping Tachyum with the soon-to-be-released beta version of its Prodigy processor? I haven't the foggiest, but a look at the article below shows that Tachyum have been busy advancing their Prodigy offering.


Tachyum Upgrades Software Package in Advance of Beta Release of the Prodigy Universal Processor​


LAS VEGAS, February 14, 2024 – Tachyum® today announced that it has upgraded the software stack for the Prodigy® Universal Processor before the anticipated launch of its beta version around the end of quarter. Quality completion of Prodigy’s software stack is a key component as the company continues to advance towards chip production and distribution.
Tachyum software engineers have worked hard to enable the full potential of Prodigy with the development of an ecosystem of applications, system software, frameworks and libraries that are ported to run natively on Prodigy hardware. Once the software package completes its testing and runs cleanly on the FPGA, the company can fully transition to advancing the Universal Processor into production.
The Prodigy software distribution is a completely integrated software stack and package that is ready for deployment “as is.” It is available as a single pre-installed image for Tachyum’s early adopters and customers. Applications have been tested to work out of the box so that customers can immediately start using the reference design. If users encounter any issues during deployment, the software can be quickly and easily restored to its original image.
Included in the software distribution package as part of alpha testing are:
  • Latest versions of the QEMU emulator 8.2
  • GCC 13.2 (GNU Compiler Collection) and glibc 2.39 (GNU C Library)
  • Linux 6.6 LTS (Long Term Support), which contains a large number of changes, updates and improvements
The company also announced plans to switch to the LLVM 18 release once it is available to download. LLVM plays a significant role in every major AI framework. Additionally, it is in the process of adding RAS (Reliability Accessibility Serviceability) in the form of an EDAC (Error Detection and Correction) driver in the next few weeks. Based on customer requests for server applications Tachyum agreed to add bootable SSD RAID next quarter to its UEFI.
As a Universal Processor offering industry-leading performance for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains (such as AI/ML, HPC, and cloud) with a single homogeneous architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores, to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest performing GPU for HPC, and 6x for AI applications.
“Having a robust software stack tested and ready to go upon the launch of the Prodigy Universal Processor chip is key to rapid adoption by data centers around the world looking to leverage their existing applications while achieving industry-leading performance for hyperscale, high-performance computing and artificial intelligence workloads,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “With each new enrichment we are able to incorporate into Prodigy’s software stack, we magnify the ability of a Prodigy platform ready to revolutionize the world.”
Hi Slade,

It's interesting that two independent companies are promoting each other, isn't it?

Thanks to @IloveLamp for a reminder.

Let's add a bit of speculation to the mix.

This is from their recent white paper. Maybe @Dio can have a look.

"Introduction
Tachyum Prodigy, the world’s first Universal Processor, was designed from the ground up to provide leading-edge AI features to address the emerging demand for AI across a wide range of applications and workloads. Prodigy’s revolutionary new architecture unifies the functionality of CPU, GPGPU, and TPU to address a wide range of workloads, including today’s ever-increasing AI demands without costly and power-hungry accelerators.

In addition to its unified architecture, Prodigy’s AI subsystem incorporates groundbreaking features that deliver high performance and efficiency for AI applications, including the 4-bit TAI exponential data type and multiple levels of sparse matrix processing which enables Prodigy to process large language models (LLMs) with 2-bit effective weights, providing never-before-seen efficiency.
In addition, Prodigy integrates up to 16 DDR5 memory controllers to provide unprecedented memory bandwidth and capacity. Prodigy’s powerful AI capabilities enable LLMs to run much easier and cost-effectively than existing CPU + GPGPU based systems. A single 96-core Prodigy with 1 TB of memory can run a ChatGPT4 model with 1.7 trillion parameters, whereas it requires 52 Nvidia H100 GPUs to run the same thing at significantly higher cost and power consumption.

This paper presents the Prodigy ATX Platform, focusing on the hardware architecture, target applications, and how it will democratize AI for those who wouldn’t normally have access to sophisticated AI models. The Prodigy ATX Platform allows everyone to develop and run cutting edge AI models for as low as $5,000 in an entry-level platform SKU configuration featuring a 48-core Prodigy and 256 GB of DDR5 memory."
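As a sanity check on the quoted memory claim, here's some back-of-envelope arithmetic in Python. The 1.7 trillion parameter count and the 2-bit effective weights come straight from the white paper quote above; the fp16 comparison line is my own, standard-baseline assumption, not something Tachyum states:

```python
params = 1.7e12          # ChatGPT4-class model, per the white paper quote
bits_per_weight = 2      # "2-bit effective weights" claimed for Prodigy

# Weight storage at 2 bits per parameter
weight_bytes = params * bits_per_weight / 8
print(f"{weight_bytes / 1e12:.3f} TB")   # 0.425 TB

# The same model held in 16-bit floats, the usual GPU baseline
fp16_bytes = params * 16 / 8
print(f"{fp16_bytes / 1e12:.1f} TB")     # 3.4 TB
```

So at 2-bit effective weights the model itself would occupy well under half of the 1 TB quoted, leaving headroom for activations and caches, which is at least arithmetically consistent with the claim.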



Learning 🪴
 
Reactions: 23 users
Hi DB,
Aren't Socionext already using our IP at 7nm?
I think there was a lot of press about it around CES 2023.
Hi WH
I missed that, but it's entirely possible, because the former CEO Mr. Dinardo actually said AKD1000 could scale to 14nm and 7nm back when 7nm was the smallest process at a major foundry. Since then Peter van der Made has mentioned 5nm and Anil Mankar has spoken about 4nm. Given that Brainchip sold only two nodes to Renesas, and AKD1000 is 80 nodes, I as a technophobe would think that, since Anil Mankar has stated you can run keyword spotting on 1 node (it just increases latency), you could easily put 1 node on 2nm at TSMC, and more likely many more. Where is @Diogenese?

My opinion only DYOR
Fact Finder
 
Reactions: 12 users

Diogenese

Top 20
@Diogenese - Jem Davis has turned up as lead NED at Literal Labs AI - https://www.literal-labs.ai - they are making some pretty aggressive claims on power saving. If you have time, would you care to comment on this ....

"Literal Labs applies a streamlined and efficient approach to AI that is faster, explainable, and up to 10,000X more energy efficient than today’s neural networks. Similar to NNs in that customers train a dataset, Literal Labs trains Tsetlin machine models specific to customer datasets. Our approach results in an optimised machine model that is then deployed onto the target hardware. The Tsetlin machine model can be deployed as software only or can be accelerated using Literal Labs accelerators. Our benchmarking shows we can achieve 250X faster inferencing than XG Boost using software only, and up to 1,000X faster and up to 10,000X less energy consumption when using hardware acceleration. The company was spun out of Newcastle University by world leaders in Tsetlin machine Dr. Alex Yakovlev and Dr. Rishad Shafik, and led by former Arm CPU division VP and semiconductor startup founder Noel Hurley. "
Hi alby,

There isn't much technical detail available other than Wiki's discussion on Tsetlin machines.

https://en.wikipedia.org/wiki/Tsetlin_machine



A Tsetlin machine is a form of learning automaton collective for learning patterns using propositional logic. Ole-Christoffer Granmo created[1] and gave the method its name after Michael Lvovitch Tsetlin, who invented the Tsetlin automaton[2] and worked on Tsetlin automata collectives and games.[3] Collectives of Tsetlin automata were originally constructed, implemented, and studied theoretically by Vadim Stefanuk in 1962.

The Tsetlin machine uses computationally simpler and more efficient primitives compared to more ordinary artificial neural networks.[4]

As of April 2018 it has shown promising results on a number of test sets.[5][6]
...
The Tsetlin automaton is the fundamental learning unit of the Tsetlin machine. It tackles the multi-armed bandit problem, learning the optimal action in an environment from penalties and rewards. Computationally, it can be seen as a finite-state machine (FSM) that changes its states based on the inputs. The FSM will generate its outputs based on the current states.

... which is above my pay grade.
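For anyone who wants to poke at the idea, here's a minimal Python sketch of a single two-action Tsetlin automaton, the learning unit described in the quote. This is my own toy illustration of the textbook mechanism, not Literal Labs' code:

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin automaton with n states per action.

    States 1..n select action 0; states n+1..2n select action 1.
    A reward pushes the state deeper into the current action's half
    (reinforcing it); a penalty pushes it toward the boundary and,
    eventually, flips the chosen action.
    """

    def __init__(self, n=3):
        self.n = n
        self.state = n  # start at the boundary, on the action-0 side

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalty(self):
        # Drift toward (and eventually across) the action boundary
        self.state += 1 if self.action() == 0 else -1

# Two-armed bandit environment: action 1 is always rewarded,
# action 0 only 10% of the time.
random.seed(42)
ta = TsetlinAutomaton()
for _ in range(1000):
    p_reward = 1.0 if ta.action() == 1 else 0.1
    if random.random() < p_reward:
        ta.reward()
    else:
        ta.penalty()
print(ta.action())  # the automaton settles on the better arm: 1
```

A full Tsetlin machine composes thousands of these automata into propositional clauses over binarised inputs, but this reward/penalty random walk is the whole learning primitive.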


There are no published patents for Literal Labs.

The LL people did publish a paper in Norway in 2022. The abstract is public:

Resultat #2068852 - A Convolutional Tsetlin Machine-based Field Programmable Gate Array Accelerator for Image Classification - Cristin




Vitenskapelig Kapittel/Artikkel/Konferanseartikkel

2022

A Convolutional Tsetlin Machine-based Field Programmable Gate Array Accelerator for Image Classification

  • Svein Anders Tunheim
  • Jiao Lei
  • Rishad Ahmed Shafik
  • Alexandre Yakovlev og
  • Ole-Christoffer Granmo

TITTEL

A Convolutional Tsetlin Machine-based Field Programmable Gate Array Accelerator for Image Classification

SAMMENDRAG

This paper presents a Field Programmable Gate Array (FPGA) implementation of an image classification accelerator based on the Convolutional Tsetlin Machine (CTM). The work is a concept design, and the solution demonstrates recognition of two classes in 4 × 4 images with a 2 × 2 convolution window. More specifically, there are two sub-Tsetlin Machines (TMs), one per class. A single sub-TM employs 40 clauses, each controlled by 20 Tsetlin Automata. The accelerator features random patch selection, in parallel for all clauses, based on reservoir sampling. The design is implemented in a Xilinx Zynq XC7Z020 FPGA. With an operating clock speed of 30 MHz, the accelerator is capable of inferring at the rate of 3.3 million images per second with an additional power consumption of 20 mW from idle mode. The average test accuracy is 96.7% when trained on data with 10% noise. A training session with 100 epochs and 8192 examples takes 1.5 seconds. Due to the limited hardware resources required, the CTM accelerator represents a promising concept for online learning in energy-frugal systems. The solution can be scaled to multi-class systems and larger images.
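The abstract's headline numbers hang together: at a 30 MHz clock, 3.3 million inferences per second works out to roughly nine clock cycles per image, which is what massive clause-level parallelism buys you. A quick check:

```python
clock_hz = 30e6          # operating clock quoted in the abstract
images_per_s = 3.3e6     # inference rate quoted in the abstract

cycles_per_image = clock_hz / images_per_s
print(f"{cycles_per_image:.1f} cycles per image")  # 9.1 cycles per image
```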




From their website:


Literal Labs - Cambridge Future Tech (camfuturetech.com)


Artificial Intelligence Redefined

Similar to NNs in that customers train a dataset, Literal Labs trains Tsetlin machine models specific to customer datasets. This approach results in an optimised machine model that is then deployed onto the target hardware.

The Tsetlin machine model can be deployed as software only or can be accelerated using Literal Labs accelerators. Literal Labs benchmarking shows it can achieve 250X faster inferencing than XG Boost using software only, and up to 1,000X faster and up to 10,000X less energy consumption when using hardware acceleration.

Value Proposition

One of the major challenges with traditional neural network-based models is their resource-intensive nature. As models become more complex, so too does the resource requirement. Literal Labs’ architecture, based on propositional logic, requires significantly fewer resources to solve AI problems, meaning that Literal Labs can deliver intelligent compute on devices with minimal energy usage and with little or no internet coverage.

The company was spun out of Newcastle University by world leaders in Tsetlin machines Dr. Alex Yakovlev and Dr. Rishad Shafik, and led by former Arm CPU division VP and semiconductor startup founder Noel Hurley.

There is a video which shows that the processor is involved in the calculations:






This would add to latency, so it is probably slower than Akida.

No wonder Jem was a bit cryptic on the podcast - buyer's remorse. I think he's Tsetlin for second best.
 
Reactions: 28 users
Can someone please advise me when the next financials are due? Is it this week?
 
Reactions: 1 users

Diogenese

Top 20
Wait a minute @Dio. ...any ARM CPU ... architecture. Am I supposed to take ARM's statement seriously, that it doesn't depend on the 22nm (GF) process or any nm at all, and that's just the qualification process?

IFS 🤔
Hi cosors,

Akida is process-agnostic as well as processor-agnostic.

It can be built in 60 nm or 7 nm or any size between.

It can work with ARM, Intel, Nvidia.

It can work with (almost) any processor because, with Akida 1000, the processor is only required for configuring the NN (layers, NPEs per layer, weights, ...). In Akida 1000, the processor plays no part in the classification function, which makes Akida very fast.

Akida 2 TeNNs does have some minor processor involvement.
 
Reactions: 38 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
UMC will also be at Intel’s IFS Direct Connect Feb 22 where the wondrous and soon to be ubiquitous Akida will be presented to all and sundry.

In January Intel and UMC announced a deal whereby UMC would provide IFS with new capabilities such as RF system solutions and WiFi production technologies.

This would be a fantastic fit for us IMO!!! Remember Intellisense chose us to integrate with their RF system solutions. Here's an excerpt from the Intellisense/BrainChip announcement.

Intellisense’s intelligent radio frequency (RF) system solutions enable wireless devices and platforms to sense and learn the characteristics of the communications environment in real time, providing enhanced communication quality, reliability and security. By integrating BrainChip’s Akida™ neuromorphic processor, Intellisense can deliver even more advanced, yet energy efficient, cognitive capabilities to its RF system.
Oh, yes and here’s the other thing!!! UMC recently spent a huge amount of money building a new fab. Take a look at what it says in this article from Feb 2022.




EXTRACT ONLY

"The new fab is backed by clients who have signed multiyear supply agreements to secure capacity from 2024, which points to a robust demand outlook for UMC's 22-nanometer and 28-nanometer technologies for years to come," the statement said.

UMC, whose clients include Samsung, MediaTek and Qualcomm, said that the clients are to pay a deposit to secure capacity, a new model implemented last year to cope with supply constraints amid the chip crisis.

Specialty technologies to be manufactured in the new facility — such as embedded high-voltage technology, embedded non-volatile memory, radio frequency silicon-on-insulator technology and mixed signal CMOS — are critical for a broad range of applications, including smartphones, smart home devices and electric vehicle applications, UMC said.
 
Reactions: 20 users

Diogenese

Top 20
Obviously I'm not Diogenese 🙄..

But my recollection is that Peter said AKIDA could go down to a 7nm process node.

But I'm guessing that's because, that was the smallest viable process node, at that time?

I'm also guessing, that the "neuron fabric" whatever that actually is, would be the limiting factor..

But since it's a "digital" design, there should be no limits? 🤔...

The new smaller process nodes, are completely different "processes" too..
Yes. Akida can go to 7 nm and probably smaller.

The neuron fabric is the interconnected nodes (4 NPEs per node) and the "switchable" communication network which connects them. The network can be configured to arrange the required number of nodes in each layer. This is all "built" in standard CMOS, so it can be implemented in whatever size CMOS can accommodate without a major redesign of the tape-out. There is nothing stopping it being implemented in FINFET (edge-wise transistors), other than a complete new start-from-scratch tape-out.
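To make that configurability concrete, here's a toy sketch of a host processor carving a fixed node pool into layers. The 4-NPEs-per-node figure comes from this post and the 80-node total from earlier in the thread; the allocation scheme and the demand numbers are purely illustrative, not BrainChip's actual configuration flow:

```python
NPES_PER_NODE = 4  # per the post: 4 NPEs per node in the neuron fabric

def configure_fabric(total_nodes, layer_npe_demand):
    """Greedily assign whole nodes to each layer's NPE demand.

    Returns the node count per layer, or raises ValueError if the
    fabric is too small for the requested network.
    """
    allocation, remaining = [], total_nodes
    for demand in layer_npe_demand:
        nodes = -(-demand // NPES_PER_NODE)  # ceiling division
        if nodes > remaining:
            raise ValueError("fabric too small for this network")
        allocation.append(nodes)
        remaining -= nodes
    return allocation

# An AKD1000-sized fabric (80 nodes) configured for a made-up
# three-layer network needing 64, 128 and 32 NPEs respectively.
print(configure_fabric(80, [64, 128, 32]))  # [16, 32, 8]
```

The point of the sketch is only that the same fabric can host different layer arrangements by rewiring the node-to-layer mapping, which is what makes the design flexible at the logical level regardless of process node.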
 
Reactions: 18 users

McHale

Regular
Another issue with the phalanx is the heat it generates in use; 4500 rounds a minute. Once you’ve used it you have to cool it down which doesn’t bode well with multiple targets inbound.

And also the obvious issue of carrying enough onboard and loading the rounds quick enough during conflict situations.

I thought someone was using cameras such as Prophesee's with Akida doing the TOF algorithms, plotting trajectories and using a laser to bring down drones. Maybe when I read it, it was still a research project, but whoever can create it will make a motza; and even better, save lives.

:)
If the US doesn't yet have that capability, they will have it soon; they are all out working on laser weapons and sophisticated guidance systems to deploy the lasers.
 
Reactions: 13 users

buena suerte :-)

BOB Bank of Brainchip
Can someone please advise me when the next financials are due? Is it this week?
This was the release date last year...So by next Thursday I'm thinking!

 
Reactions: 8 users

Esq.111

Fascinatingly Intuitive.
Morning McHale ,

Only a week or so ago, I read an article about America successfully shooting down a drone with a high-energy weapon from over one kilometre away.

The age of laser beams is certainly upon us.

Regards,
Esq.
 
Reactions: 18 users

Xray1

Regular
This was the release date last year...So by next Thursday I'm thinking!


I note that Sean H will do his Investor Presentation next Tuesday 27/2/24 .... so I wonder when the Annual Report and 4E will be released on the ASX.... Will it be before or after his Presentation .......... Personally, I think that if the Annual Report contains some positive indicators, then it will most likely be released a few days before his presentation, so as to give him some positive investor sentiment beforehand.
 
Reactions: 27 users

Getupthere

Regular

EdgeCortix flagship SAKURA-I Chip showcased for Edge AI applications​

1:45 pm February 19, 2024 By Julian Horsey

EdgeCortix flagship SAKURA-I Chip showcased

At the prestigious Singapore Airshow 2024, EdgeCortix, a leading Japanese semiconductor company, is set to unveil a new chip that is expected to transform the way artificial intelligence (AI) is processed in the defense and aerospace sectors. The SAKURA-I chip, a specialized co-processor, is designed to handle AI tasks right at the edge of the network, where data is generated and collected. This innovation is particularly important for applications that require immediate decision-making, such as those found in aviation and military operations.
The airshow, scheduled for February 20-25, will serve as the platform for EdgeCortix to introduce this new technology to the world. Visitors to the event will be able to see the SAKURA-I chip in action at the booth of the Acquisitions, Technology and Logistics Agency (ATLA), which represents Japan’s commitment to advancing its air and defense technologies.

EdgeCortix SAKURA-I​

The SAKURA-I chip stands out for its ability to process complex AI tasks with high efficiency and low latency. This is crucial in environments where quick responses are essential and power resources are often limited. The chip’s debut is timely, as there is a growing need for technologies that can operate effectively under these constraints.
“EdgeCortix’s SAKURA-I chip, with its small form factor and high efficiency, is proving a crucial tool in edge computing in defense and aerospace sectors, which EdgeCortix predicts will increasingly rely on software-driven hardware solutions to achieve their tasks going forward,” says Sakyasingha Dasgupta, CEO and Founder of EdgeCortix. “We are honored to be featured in ATLA’s booth representing Japanese innovation at the Singapore Airshow and are proud to stand shoulder-to-shoulder with such elite companies at the show.”
AI is becoming increasingly important in enhancing safety and efficiency across various industries, including transportation and defense. The SAKURA-I chip is designed to meet the demanding needs of these sectors, providing performance improvements that can help prevent accidents, optimize logistics, and ensure secure operations in sensitive areas.
Edge computing, which involves processing data close to where it is generated, is changing the landscape of defense and aerospace. Innovations like the SAKURA-I chip are at the forefront of this shift, enabling faster and more reliable decision-making in situations where time is of the essence.
The Singapore Airshow 2024 is more than just an exhibition; it is a demonstration of Japan’s technological advancements in defense equipment and technology. The partnership between ATLA and EdgeCortix highlights the role of collaboration between the public and private sectors in achieving technological breakthroughs. The SAKURA-I chip exemplifies such progress and is expected to establish new standards in the industry.
EdgeCortix’s participation in the Singapore Airshow 2024 emphasizes the company’s position as a leader in semiconductor technology and its significant contributions to the evolution of AI processing. The SAKURA-I chip marks a step towards more energy-efficient, software-driven hardware solutions in the fields of defense and aerospace. Attendees are encouraged to witness this state-of-the-art technology at the ATLA booth, where Japan’s expertise in air and defense technology will be on full display.
 
Reactions: 10 users