BRN Discussion Ongoing

Frangipani

Top 20
Earlier today at the RISC-V Summit Europe 2024 in Munich: Preparatory meeting for next year’s second RISC-V Space Workshop in Gothenburg, Sweden (April 2-3, 2025), which will be organised by Frontgrade Gaisler.


 
  • Like
  • Fire
  • Love
Reactions: 15 users

CWP

Regular
Pitt Street's latest on BrainChip, an extract from their email (from a friend, I'm not on their mailing list). Someone on their mailing list can provide the whole article, I'm sure.

Because of its specific advantages, i.e. near-zero latency, extremely low energy consumption and autonomous on-chip learning, Akida has very substantial potential across a range of applications, especially in Edge AI. These include autonomous vehicles, drones, robotics, medical diagnostics, i.e. sensors that reside at the Edge of the Internet of Things.


The world is Akida’s oyster


Although investors have become increasingly aware of the massive opportunity that AI presents, e.g. through the rapid rise of ChatGPT and NVIDIA’s stellar growth, AI is still very much in its early stages. We believe the commercial opportunity for BrainChip is very substantial indeed, specifically because the technology underlying Akida is radically different compared to today’s AI solutions and addresses the very large Edge AI market.

We expect Akida to be able to provide AI capabilities to countless types of devices where previously this wasn’t possible due to the restrictions of Cloud-based AI (cost, latency, energy consumption etc).


Valuation of A$1.59 per share


We have valued BRN at A$1.59 per fully diluted share, based on industry M&A activity (please see page 19 for more detail). Investors have been awaiting additional commercial deals in the last few years and, to be fair, their patience has been tested. But we believe these types of deals will be the future catalysts for BrainChip’s share price.

And because of the very broad spectrum of potential applications for Akida and BrainChip’s many ongoing commercial discussions with prospects, we are confident investors’ patience will be rewarded.

Article- https://static1.squarespace.com/sta...BrainChip+intiation+report+2024+25+6+2024.pdf
 
  • Like
  • Love
  • Fire
Reactions: 87 users

MegaportX

Regular
  • Like
  • Fire
  • Love
Reactions: 42 users

buena suerte :-)

BOB Bank of Brainchip
Thanks @MegaportX and @CWP :)



Hoping for some MUCH NEEDED positive news Sooooon!!!!! 🙏🙏🙏
🙏🙏🙏


 
  • Like
  • Fire
  • Love
Reactions: 54 users

SiDEvans

Regular
  • Like
  • Fire
  • Haha
Reactions: 20 users

Esq.111

Fascinatingly Intuitive.

  • Haha
  • Like
  • Love
Reactions: 37 users

DK6161

Regular
  • Like
  • Thinking
Reactions: 3 users

7für7

Top 20
  • Like
  • Fire
Reactions: 6 users

7für7

Top 20
  • Like
  • Haha
  • Fire
Reactions: 3 users

 
  • Like
  • Love
  • Fire
Reactions: 18 users

Esq.111

Fascinatingly Intuitive.

Good morning, Supersonic001,

Cheers for reposting this.

After closer examination of the above seedling 🌱, the black seed pod shows the AKIDA layout with ...M Class CPU (ARM).

I know this is old news to most, but I think this is the first time I've visually seen it.

Is this some cryptic message .....

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 32 users

davidfitz

Regular
Very sad :(

 
  • Sad
  • Like
Reactions: 7 users

Earlyrelease

Regular
Damn, about time Perth Akidaholics had another meeting to counsel each other. I couldn't help myself and breached my own rule about not buying any more. At these prices it's worse than going past, well, actually not going past, the 50% off Magnum ice creams at Coles and not stocking up. Damn shorters, but I will thank you one day in the future.
 
  • Like
  • Love
  • Fire
Reactions: 27 users

Cardpro

Regular
Meh... below 20c again...
 
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Tenstorrent licenses IP from Baya Systems for AI/RISC-V chiplets​

Business news | June 24, 2024
By Jean-Pierre Joosting


Tenstorrent has licensed Baya Systems’ customizable WeaveIP™ fabric to scale its AI and RISC-V chiplet products.​

Baya's IP and software flow enable Tenstorrent and its partners to analyze, customize and deploy its intelligent compute platform for current and future workloads and deliver highly scalable chiplet products to meet the emerging demand.


“Baya makes great, comprehensive fabric tools. Their tools start with top level architecture then allow us to plan at a detail level including performance modeling, transport, quality of service and cache coherency,” said Tenstorrent CEO Jim Keller. “This, coupled with their visualization tools, enables designers to build next generation chips, chiplets and IP. This data-driven, correct by construction fabric IP delivers the performance and scale needed for Tenstorrent’s chiplet-based solutions.”

Baya Systems’ WeaveIP portfolio optimizes standard protocols, distributed caching, advanced coherent and non-coherent fabric while allowing customizable protocols for AI and other applications over a unique transport architecture. The WeaverPro™ software provides a data-driven platform that enables designers to architect cache and memory architecture followed by algorithmically optimized unified fabric design from concept to post-silicon tuning, accelerating the development and deployment of a chiplet-ready system architecture that is globally and locally optimized.

"Tenstorrent is reputed for highly customized, high-performance AI and RISC-V solutions tailored to specific workloads and applications, which need to be future-proof," said Sailesh Kumar, CEO of Baya Systems. "We believe Baya's high-performance, reliable chiplet-ready fabric, advanced analysis capability, and design-time and post-silicon runtime tuning will be an essential component of Tenstorrent's ability to deliver high-performance, cost-effective multi-chip designs that deliver next-level energy efficiency and are future-proofed for fast-evolving applications."

 
  • Like
  • Fire
  • Love
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 39 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Competing for Supremacy: Strategies to Dominate the AI Accelerator Market​

June 24, 2024 Lauro Rizzatti
The AI accelerator market is expected to continue its rapid growth in the coming years, propelled by more complex AI applications that require even more processing power.


Mastering AI is becoming increasingly vital in shaping economic, social, energy, military and geopolitical landscapes. Enabling the extensive implementation of advanced AI technologies across businesses, government entities and individual bodies is not only strategic but also imperative.
Despite seven decades of mostly unsuccessful investigation, AI has experienced significant growth over the last 10 years, expanding at an exponential rate. This escalating adoption has been propelled by a shift toward highly parallel computing architectures, a departure from conventional CPU-based systems. Traditional CPUs, with their sequential processing nature that handles one instruction at a time, are increasingly unable to meet the demands of advanced, highly parallel AI algorithms—case in point, large language models (LLMs). This challenge has driven the widespread development of AI accelerators, specialized hardware engineered to dramatically enhance the performance of AI applications.
AI applications involve complex algorithms that include billions to trillions of parameters and require integer and floating-point multidimensional matrix mathematics at mixed precision ranging from 4 bits to 64 bits. Although the underlying mathematics consists of simple multipliers and adders, they are replicated millions of times in AI applications, posing a sizable challenge for computing engines.
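The mixed-precision multiply-accumulate core described above can be sketched in a few lines. This is an illustrative NumPy toy, not any accelerator's actual datapath: values are quantized to 8-bit integers, products are accumulated in 32-bit integers (as hardware accumulators typically do), and the result is rescaled to float, tracking the full-precision answer within quantization error.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(-1, 1, (4, 8)).astype(np.float32)
b = rng.uniform(-1, 1, (8, 4)).astype(np.float32)

# full-precision (float32) reference matmul
ref = a @ b

# naive symmetric int8 quantization: x ~= scale * q, with q in [-127, 127]
scale_a = np.abs(a).max() / 127.0
scale_b = np.abs(b).max() / 127.0
qa = np.round(a / scale_a).astype(np.int8)
qb = np.round(b / scale_b).astype(np.int8)

# multiply int8 operands, accumulate in int32, then rescale back to float
acc = qa.astype(np.int32) @ qb.astype(np.int32)
approx = acc * (scale_a * scale_b)

# the low-precision result stays close to the float32 reference;
# the gap is pure quantization error, well under 0.1 here
print(float(np.max(np.abs(approx - ref))))
```

The same pattern scales from this 4x8 toy to the billions of parameters the article mentions; only the precision of the operands and the accumulator changes.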
AI accelerators come in various forms, including GPUs, FPGAs and custom-designed ASICs. They offer dramatic performance enhancements over CPUs, resulting in faster execution times, more efficient model deployment and scalability to handle increasingly complex AI applications.
The booming AI accelerator market is fueled by the widespread adoption of AI across a variety of industries. From facial/image recognition and natural language processing all the way to self-driving vehicles and generative AI elaboration, AI is transforming how we live and work. This revolution has spurred a massive demand for faster, more efficient AI processing, making AI accelerators a crucial component of the AI infrastructure.
Notwithstanding the tremendous market growth, all existing commercial AI processing products have limitations, some more significant than others.
AI accelerator (Source: Vsora)

Current limitations and needs​

AI processing can occur in two primary locations: in the cloud (data centers) or at the edge, each with distinct requirements and challenges.

AI processing in the cloud

The AI accelerator market within data center applications is highly polarized, with one dominant player controlling approximately 95% of the market. To foster greater diversification, a few key issues must be addressed:
  • Massive processing power: The processing power must achieve multiple petaFLOPs delivered consistently under real-world workloads.
  • High cost of AI hardware: The steep price of AI hardware restricts access for smaller enterprises, limiting adoption to the largest corporations.
  • Massive power consumption: AI accelerators consume significant power, necessitating expensive installation facilities. These facilities contribute to substantial operational costs, making scalability difficult.
  • Market monopoly: By controlling the market, the dominant player stifles competition and prevents innovation. More energy-efficient and cost-effective solutions than existing offerings are needed.
It’s worth mentioning that there has been a recent shift in data center focus, from training to inference. This shift amplifies the need to reduce the cost per query and to lower acquisition and operational expenditures.
All the above improvements would not only make advanced AI capabilities more accessible to everyone but also promote more sustainable technological growth, enabling broader adoption across various industries.

AI processing at the edge

In contrast to the AI processing market in data centers, the market for AI processing at the edge is highly fragmented. Numerous commercial products from many startups target niche applications across various industries. From a competitive perspective, this scenario is healthy and encouraging. However, there remains a need for a more comprehensive solution.
Edge AI processing faces a different set of challenges, where low power consumption and cost are key criteria, while compute power is less critical.

Processing efficiency and latency: the Cinderellas of AI attributes

While state-of-the-art AI processors are advertised with impressive processing power, sometimes reaching multiple petaFLOPS, their real-world performance frequently falls short. These specifications typically highlight theoretical maximums and overlook the critical factor of processing efficiency—the percentage of the theoretical power achievable in practical applications. When executing leading-edge LLMs, most AI accelerators experience significant drops in efficiency, often to as low as 1% to 5%.
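The efficiency figure above is simply sustained throughput divided by the datasheet peak. A minimal sketch, with hypothetical numbers in the range the article describes:

```python
def efficiency(achieved_tflops: float, peak_tflops: float) -> float:
    """Fraction of the advertised peak actually sustained on a workload."""
    return achieved_tflops / peak_tflops

# a part advertised at 1000 TFLOPS that sustains 30 TFLOPS on an LLM
# lands at 3% efficiency, squarely in the 1%-5% band quoted above
print(f"{efficiency(30, 1000):.0%}")
```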
Latency, another crucial metric, is typically missing from AI processor specifications.
This omission arises not only because latency is highly algorithm-dependent but also due to the generally low efficiency of most processors.
Consider two real-world demands:
  • Autonomous vehicles: These systems require response times under 20 ms to interpret environmental data collected from a diverse set of sensors. Subsequently, they must decide on a course of action and execute it within 30 ms. These are challenging targets to reach.
  • Generative AI: To maintain user engagement, generative AI must produce the first response within a few seconds. To date, this can be achieved by expanding the number of processor accelerators working in parallel. This approach results in significant acquisition costs and operational expenses, with energy consumption becoming a dominant factor.
These scenarios underscore the limitations of commercial processors primarily due to the memory bottleneck that prevents data from being fed to the processing elements fast enough to keep them busy all the time.
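The memory bottleneck in that last paragraph is commonly captured by the roofline model: attainable throughput is capped by the lesser of peak compute and memory bandwidth times arithmetic intensity (FLOPs performed per byte moved). A sketch with hypothetical accelerator figures (not any vendor's real numbers):

```python
def attainable_tflops(peak_tflops: float, bandwidth_tbps: float,
                      flops_per_byte: float) -> float:
    """Roofline bound: min of the compute ceiling and the memory ceiling."""
    return min(peak_tflops, bandwidth_tbps * flops_per_byte)

# hypothetical accelerator: 500 TFLOPS peak, 2 TB/s memory bandwidth
peak, bw = 500.0, 2.0

# low-intensity work (e.g. small-batch LLM inference, ~2 FLOPs per weight
# byte) is memory-bound: only 4 TFLOPS attainable, under 1% of peak
low_intensity = attainable_tflops(peak, bw, 2)

# a high-reuse, compute-dense kernel hits the compute ceiling instead
high_intensity = attainable_tflops(peak, bw, 500)

print(low_intensity / peak, high_intensity / peak)
```

This is why the proposed architectural fixes below target the memory system rather than adding more multipliers: below the ridge point, extra peak FLOPS sit idle.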

A workable solution​

To address these challenges and secure a leading position in the market, companies should develop next-generation AI accelerators with a focus on three primary areas:
  • Technology innovation: A viable solution should be based on a novel AI-specific architecture that defeats the memory bottleneck, which arises when memory cannot deliver data to the multipliers/adders quickly enough. The benefits of higher usable throughput, lower latency, reduced power consumption and cost would be dramatic, leading to leaps in efficiency and broad expansion of their appeal.
  • Scalability and flexibility: Developing a scalable, modular and programmable AI accelerator that can handle diverse AI workloads, not just specific tasks, and easily integrate with a variety of platforms and systems could widen the market. This would open the vast area of edge applications from small startups to large enterprises.
  • Ease of deployment: A supporting software stack would allow algorithmic developers to seamlessly map their algorithms under development onto the AI accelerator without requiring them to understand the complexities of the hardware accelerator—specifically, RTL design and debugging. This would encourage them to fully embrace the solution.
A winning strategy would also establish strategic alliances with software developers, educational institutions and other hardware manufacturers to lead to better integration and adoption rates.

The future of the AI accelerator market​

The AI accelerator market is expected to continue its rapid growth in the coming years, propelled by more complex AI applications that require even more processing power. In this scenario, the demand for high performance with high-efficiency accelerators will only intensify.
Expect to see innovation in AI acceleration architectures, with vendors focused on creating more flexible and energy-efficient solutions. As the race to dominate the AI accelerator market heats up, the ultimate winners will be those who can innovate in efficiency and scalability but also excel in making their technologies accessible and sustainable.
Ultimately, we can anticipate a solution that can perform the expected task optimally—energy-efficient, cost-efficient and with high implementation efficiency. This is not necessarily the same as the lowest power, lowest cost and the highest efficiency.

 
  • Like
  • Love
  • Fire
Reactions: 16 users

GazDix

Regular
Latest Pitt Street Report on Brainchip.

Link here:




BrainChip

@BrainChip_inc
·
5m

Pitt Street Research re-initiates coverage on BrainChip which is commercializing a revolutionary neuromorphic technology. Read Full Report: https://bit.ly/4eBNmBI
 
  • Like
  • Love
Reactions: 18 users

GazDix

Regular
Latest Pitt Street Report on Brainchip.

Link here:

BrainChip
@BrainChip_inc
·
5m

Pitt Street Research re-initiates coverage on BrainChip which is commercializing a revolutionary neuromorphic technology. Read Full Report: https://bit.ly/4eBNmBI
Really good report summarising lots of challenges and opportunities.
But:

TLDR for most of us:

The report thinks BrainChip is undervalued, with a fair value of A$1.59 per share.
 
  • Like
  • Love
Reactions: 9 users