BRN Discussion Ongoing

Dozzaman1977

Regular
It sounds good, but has he heard of Retail Food Group and Domino's? Then there are all those mining companies with brand new shovels and tin sheds.

He is probably too busy consulting with those listed corporations to do any real research.😂🤣😂🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁

My opinion only DYOR
FF

AKIDA BALLISTA
It would have been nice to buy into the Domino's IPO in 2005 at $2.20.
I was thinking about investing in the IPO, but I decided to go and buy a Domino's pizza and garlic bread first to see what the quality was like.
I could only eat one slice of the pizza (it tasted terrible) and one piece of garlic bread, which also tasted bad, and threw the rest in the bin ... So based on that experience I thought, who the hell would buy these pizzas over good-quality ones, and decided not to invest.
History shows I was completely wrong; people love cheap, terrible-tasting pizzas.

A $10,000 investment would be worth $210,000 at today's share price (and it was worth a lot more in past years).

I'm happy to stick to my local pub pizza!!!!
 
  • Like
  • Haha
  • Love
Reactions: 23 users

Diogenese

Top 20
$0.525 - the better the news, the lower the price.

I'll wait till we start earning real revenue and get some really cheap.
 
  • Haha
  • Like
  • Sad
Reactions: 51 users

BonezDiez

Emerged
What a wonderful read, although my comprehension of all the new capabilities is somewhat lacking.
However, I think most of us understand real world comparisons and benchmarking.
Hence, I think this extract from the article is particularly powerful:
......

And so we come to the old proverb that states, “The proof of the pudding is in the eating.” Just how well does the Akida perform with industry-standard, real-world benchmarks?

Well, the lads and lasses at Prophesee.ai are working on some of the world’s most advanced neuromorphic vision systems. From their website we read: “Inspired by human vision, Prophesee’s technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what was invisible until now using standard frame-based technology.”

According to the paper Learning to Detect Objects with a 1 Megapixel Event Camera, Gray.Retinanet is the latest state-of-the-art in event-camera based object detection. When working with the Prophesee Event Camera Road Scene Object Detection Dataset at a resolution of 1280×720, the Akida achieved 30% better precision while using 50X fewer parameters (0.576M compared to 32.8M with Gray.Retinanet) and 30X fewer operations (94B MACs/sec versus 2432B MACs/sec with Gray.Retinanet). The result was improved performance (including better learning and object detection) with a substantially smaller model (requiring less memory and less load on the system) and much greater efficiency (a lot less time and energy to compute).

As another example, if we move to a frame-based camera with a resolution of 1352×512 using the KITTI 2D Dataset, then ResNet-50 is kind of a standard benchmark today. In this case, Akida returns equivalent precision using 50X fewer parameters (0.57M vs. 26M) and 5X fewer operations (18B MACs/sec vs. 82B MACs/sec) while providing much greater efficiency (75mW at 30 frames per second in a 16nm device). This is the sort of efficiency and performance that could be supported by untethered or battery-operated cameras.

Another very interesting application area involves networks that are targeted at 1D data. One example would be processing raw audio data without the need for all the traditional signal conditioning and hardware filtering.

Consider today’s generic solution as depicted on the left-hand side of the image below. This solution is based on the combination of Mel-frequency cepstral coefficients (MFCCs) and a depth-wise separable CNN (DSCNN). In addition to hardware filtering, transforms, and encoding, this memory-intensive solution involves a heavy software load.

[Image: Raw audio processing, traditional solution (left) vs. Akida solution (right). Source: BrainChip]


By comparison, as we see on the right-hand side of the image, the raw audio signal can be fed directly into an Akida TENN with no additional filtering or DSP hardware. The result is to increase the accuracy from 92% to 97%, lower the memory (26kB vs. 93kB), and use 16X fewer operations (19M MACs/sec vs. 320M MACs/sec). All of this basically returns a single inference while consuming two microjoules of energy. Looking at this another way, assuming 15 inferences per second, we're talking less than 100µW for always-on keyword detection.
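As a quick back-of-envelope check of that last figure (this is not from the article, just a sketch using the numbers quoted above): average power is simply energy per inference multiplied by the inference rate.

```python
# Back-of-envelope check of the always-on keyword-spotting power figure.
# Both inputs are the values quoted in the article above.
energy_per_inference_uj = 2.0   # microjoules per inference (quoted)
inferences_per_second = 15      # assumed always-on keyword-spotting rate (quoted)

average_power_uw = energy_per_inference_uj * inferences_per_second
print(f"Average power: {average_power_uw:.0f} µW")  # 30 µW, comfortably under 100 µW
```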

Similar 1D data is found in the medical arena for tasks like vital signs prediction based on a patient’s heart rate or respiratory rate. Preprocessing techniques don’t work well with this kind of data, which means we must work with raw signals. Akida’s TENNs do really well with raw data of this type.

In this case, comparisons are made between Akida and the state-of-the-art S4 (SOTA) algorithm (where S4 stands for structured state space sequence model) with respect to vital signs prediction based on heart rate or respiratory rate using the Beth Israel Deaconess Medical Center Dataset. In the case of respiration, Akida achieves ~SOTA accuracy with 2.5X fewer parameters (128k vs. 300k) and 80X fewer operations (0.142B MACs/sec vs. 11.2B MACs/sec). Meanwhile, in the case of heart rate, Akida achieves ~SOTA accuracy with 5X fewer parameters (63k vs. 600k) and 500X fewer operations (0.02B MACs/sec vs. 11.2B MACs/sec).

It’s impossible to list all the applications for which Akida could be used. In the case of industrial, obvious apps are robotics, predictive maintenance, and manufacturing management. When it comes to automotive, there’s real-time sensing and the in-cabin experience. In the case of health and wellness, we have vital signs monitoring and prediction; also, sensory augmentation. There are also smart home and smart city applications like security, surveillance, personalization, and proactive maintenance. And all of these are just scratching the surface of what is possible.
Any idea where the author gets those Akida benchmarks from? There's no mention of Akida in the referenced Prophesee paper…
 
  • Like
Reactions: 1 users

jk6199

Regular
Have you seen the volume of sells/buys this morning?

Latest news with more to come? I wonder if any rats are jumping ship?
 
  • Fire
Reactions: 2 users

Xray1

Regular
$0.525 - the better the news, the lower the price.

I'll wait till we start earning real revenue and get some really cheap.
We have only some three weeks to go until the end of this quarterly period, so I hope we see another improvement to the financial bottom line.
I am also expecting some further positive ASX announcements to be released between now and just before the AGM in May 2023, so that management and the CEO will have something to boast about and can provide shareholders with positive endorsement of their investment in BRN.
 
  • Like
Reactions: 14 users
Have you seen the volume of sells/buys this morning?

Latest news with more to come? I wonder if any rats are jumping ship?
What do you mean by that?
 
It would have been nice to buy into the Domino's IPO in 2005 at $2.20.
I was thinking about investing in the IPO, but I decided to go and buy a Domino's pizza and garlic bread first to see what the quality was like.
I could only eat one slice of the pizza (it tasted terrible) and one piece of garlic bread, which also tasted bad, and threw the rest in the bin ... So based on that experience I thought, who the hell would buy these pizzas over good-quality ones, and decided not to invest.
History shows I was completely wrong; people love cheap, terrible-tasting pizzas.

A $10,000 investment would be worth $210,000 at today's share price (and it was worth a lot more in past years).

I'm happy to stick to my local pub pizza!!!!
People who got in and out of Retail Food Group made money as well, but many franchisees were sent to the wall and ended up bankrupt as a result of the various business practices the Group engaged in. Domino's also made shareholders money, but had it been required to pay employees the correct wages, the pizzas would have had to sell for more and decent pizzas would likely have been competitive.

When you look in depth at their business models, someone had to be ripped off for them to return profits.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Sad
  • Wow
Reactions: 13 users

Steve10

Regular
Number 1 emerging technology is neuromorphic computing. The BRN shorters are playing with a volcano about to erupt.

What’s on the 2023 Gartner Emerging Technologies and Trends Impact Radar?​

These trends surfaced in our 2023 Gartner Emerging Technologies and Trends Impact Radar, which highlights 26 emerging trends and technologies to which vendors must respond, whether they are a new or established player in that space.

The Impact Radar portrays the maturity, market momentum and influence of technologies, making it a handy tool for product leaders to identify and track the technologies and trends that will help them improve and differentiate their products, remain competitive and capitalize on market opportunities.

[Image: 2023 Gartner Emerging Technologies and Trends Impact Radar]



Four Emerging Technologies Disrupting the Next Three to Eight Years​

Most of this year's emerging technologies and trends are three to eight years away from reaching widespread adoption but represent significant innovation in the years ahead.
Let’s look at four we think will prove especially interesting.

No. 1: Neuromorphic computing​

  • A critical enabler, neuromorphic computing provides a mechanism to more accurately model the operation of a biological brain using digital or analog processing techniques.
  • It will take three to six years to cross over from early-adopter status to early majority adoption.
  • Neuromorphic computing will have a substantial impact on existing products and markets.
Neuromorphic computing systems simplify product development, enabling product leaders to develop AI systems that can better respond to the unpredictability of the real world. Their autonomous capabilities quickly react to real-time events and information, and will form the basis of a wide range of future AI-based products. Early use cases include event detection, pattern recognition and small dataset training.

We expect breakthrough neuromorphic devices by the end of 2023, but it will likely take five years for these devices to reach early majority adoption.

The impact is likely to be significant, though, as neuromorphic computing is expected to disrupt many of the current AI technology developments, delivering power savings and performance benefits not achievable with current generations of AI chips.

No. 2: Self-supervised learning​

  • Self-supervised learning accelerates productivity by using an automated approach to annotating and labeling data.
  • It will take six to eight years to cross over from early-adopter status to early majority adoption.
  • Self-supervised learning will have a significant impact on existing products and markets.
Self-supervised models learn how information relates to other information; for example, which situations typically precede or follow another, and which words often go together.

Self-supervised learning has only recently emerged from academia and is currently practiced by a limited number of AI companies. A few companies focused on computer vision and NLP products have recently added self-supervised learning to their product roadmaps, however.

The potential impact and benefits of self-supervised learning are extensive, as it will extend the applicability of machine learning to organizations with limited access to large datasets. Its relevance is most prominent in AI applications that typically rely on labeled data, primarily computer vision and NLP.

No. 3: Metaverse​

  • The metaverse fuels the smart world by providing an immersive digital environment.
  • It will take eight-plus years to cross over from early-adopter status to early majority adoption.
  • The metaverse will have a very substantial impact on existing products and markets.
The metaverse enables persistent, decentralized, collaborative, interoperable digital content that intersects with the physical world’s real-time, spatially organized and indexed content.

It is an example of a combinatorial trend in which a number of individually important, discrete and independently evolving trends and technologies interact with one another to give rise to another trend. The emerging, supporting technologies and trends include (but are not limited to) spatial computing and the spatial web; digital persistence; multientity environments; decentralization tech; high-speed, low-latency networking; sensing technologies; and AI applications.

The features and functionality these ETT bring to the metaverse will need to reach an early majority in order for the metaverse to cross the chasm. We consider all current examples to be precursors or premetaverse offerings because they are potentially capable and compatible but do not yet meet the definition of the metaverse.

While the benefits and opportunities from the metaverse are not immediately viable, emerging metaverse solutions give an indicator of potential use cases. We expect the transition toward the metaverse to be as significant as the one from analog to digital.

No. 4: Human-centered AI​

  • Human-centered AI (HCAI) is a common AI design principle calling for AI to benefit people and society, which could improve transparency and privacy.
  • It will take three to six years to reach early majority adoption.
  • HCAI will have a substantial impact on existing products and markets.
HCAI assumes a partnership model of people and AI working together to enhance cognitive performance, including learning, decision making and new experiences. HCAI is sometimes referred to as “augmented intelligence,” “centaur intelligence” or “human in the loop,” but in a wider sense, even a fully automated system must have human benefits as a goal.

HCAI enables vendors to manage AI risks, and to be ethical, responsible and more efficient with automation, while complementing AI with a human touch and with common sense. Many AI vendors have already shifted their positions to the more impactful and responsible HCAI approach. The technology-centric approach of developing AI products has led to numerous negative impacts, urging vendors to rethink their AI product strategies.

The potential impact of HCAI is high because it leverages human abilities to make humans more productive and remove avoidable limitations, biases and blind spots.

In short:
  • The Gartner Emerging Tech Impact Radar highlights the technologies and trends that have the most potential to disrupt a broad cross section of markets.
  • The trends are organized around four key themes, which are critical for product leaders to evaluate as part of their competitive strategy.
  • Product leaders must explore these technologies now to capitalize on market opportunities.
Tuong Nguyen is a Director Analyst within the Emerging Technologies and Trends team in Gartner Research. He undertakes analysis on immersive technologies, metaverse, computer vision, SLAM and human-machine interfaces. He advises tech provider product leaders how to factor emerging tech and trends into creating and evolving highly successful product offerings.

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 44 users

I think someone published something about Brainchip AKIDA and SiFive X280 Intelligence Series being closely entwined:​

“2023 a Breakout Year for RISC-V​

Article By : SiFive​


No longer just a tool for embedded, RISC-V has arrived, driven by innovation and adoption in the world's largest semiconductor manufacturers, global hyperscale data centers, and consumer device companies.
At SiFive, where we sit at the forefront of RISC-V, we see several interesting trends continuing to drive strong growth across the industry. Artificial intelligence (AI) and machine learning are becoming virtually ubiquitous, from the datacenter to consumer devices and the enterprise, requiring new architectures, data offload and management and low latency, all while carefully managing power consumption with the massive workloads.

In automotive, the fast growth of autonomous and electric vehicles (EVs) and the proliferation of chips in the vehicle, from sensors to safety to drivetrain and entertainment, is also calling for new designs, flexibility, and a support ecosystem that will be around for decades. In mobile and wearables, the capabilities of the devices are surpassing the quality of traditional cameras, for instance, as AI grows. And here, like elsewhere, footprint, power consumption, and performance have never been more critical.

All of these trends favor RISC-V, with its superior compute density, flexibility, high performance, and growing ecosystem. RISC-V is now inevitable and, over the next few years, you will see exciting advances as it moves from mostly embedded applications to critical functions in the highest performance chips, and as start-ups grow into established companies with performance that in some cases already exceeds the incumbents'. We believe the landscape will look very different over the next decade as RISC-V continues to grow. Already taught in the leading universities around the world, it is also being encouraged by governments in the United States, China, India, and Japan, among others, eager to help their domestic industries but also clearly seeing the benefits of the open architecture for the future of their products.

While there is considerable uncertainty in the macroeconomic environment, the chip shortages of 2021–2022 showed how the semiconductor industry is a part of every industry on the planet and companies from auto manufacturers to consumer companies to governments are investing to ensure their supply chains are able to withstand future impacts. Government investments like the Chips Act in the United States will serve to increase manufacturing capabilities and with more manufacturing and prototype capabilities, new innovations will flourish.

Take the AI Dataflow processor, for example, a RISC-V enabled chip that is being used in hyperscale datacenters to offload and manage critical AI data. This rearchitecting of the datacenter dramatically lowers latency and power consumption.

2023’s mantra might also be “power matters”. Across the board, we believe the industry is going to be talking a lot more about compute density, and the need for high performance processors that can also be built on a very small footprint and run at low power.

And this will be a breakout year for RISC-V. Companies are leveraging the RISC-V architecture and its proven high-performance and low-power usage across consumer devices like wearables, automotive, aerospace, and beyond. Last year, we saw announcements about the High Performance Space Compute project adopting RISC-V, and there is much more ahead. No longer just a tool for embedded, RISC-V has arrived driven by innovation and adoption in the world’s largest semiconductor manufacturers, global hyperscale data centers, and consumer device companies.”

It must be getting close to the point where I can say something positive without being called a fanboy or upramper.

My opinion only DYOR
FF

AKIDA BALLISTA
Not significant: it was only SiFive talking about AKIDA 2, so they are biased and likely just trying to sell the X280 with AKIDA to their customers. Apologies, nothing to see here, you can move on now 😂🤣😂

“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E, efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”

Phil Dworsky, Global Head of Strategic Alliances, SiFive

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 24 users
Number 1 emerging technology is neuromorphic computing. The BRN shorters are playing with a volcano about to erupt.

[Steve10's full 2023 Gartner Emerging Technologies and Trends Impact Radar post, quoted in full above]
Great article, but am I missing something? It seems obvious that AKIDA technology covers all three categories.

This would be explosive if true. Don't pull that $3.00 order; I might have been off the mark there. 😂🤣😂🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Fire
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Have I got this right?


Gartner.png
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 29 users

Steve10

Regular
Great article, but am I missing something? It seems obvious that AKIDA technology covers all three categories.

This would be explosive if true. Don't pull that $3.00 order; I might have been off the mark there. 😂🤣😂🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁🪁

My opinion only DYOR
FF

AKIDA BALLISTA

5 years for early majority adoption of BRN tech = 2028, starting from 2023.

Is it too early to forecast dividends?

The going dividend yield for tech companies is about 1-2%.

With $500M revenue x 60% NPAT margin = $300M, x 50% payout ratio = $150M, / 1.8B SOI = an 8.33c per share dividend, which implies a $4.165 to $8.33 SP at a 1-2% dividend yield. That's roughly a $7.5B MC at PE 25 or a $15B MC at PE 50, with the latter more likely. So a 1% dividend yield and an $8.33 SP are more likely. Timeframe will be within 5 years.
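For what it's worth, here is that arithmetic laid out as a minimal Python sketch; every input (revenue, margin, payout ratio, shares on issue, yields, PE multiples) is the post's own assumption, not guidance:

```python
# Sketch of the dividend/valuation arithmetic above; all inputs are assumptions.
revenue = 500e6          # assumed annual revenue (AUD)
npat_margin = 0.60       # assumed net-profit-after-tax margin
payout_ratio = 0.50      # assumed dividend payout ratio
shares_on_issue = 1.8e9  # assumed shares on issue

npat = revenue * npat_margin                                 # $300M
dividend_per_share = npat * payout_ratio / shares_on_issue   # ~$0.0833

for dividend_yield in (0.01, 0.02):
    print(f"At {dividend_yield:.0%} yield: implied SP ${dividend_per_share / dividend_yield:.2f}")

eps = npat / shares_on_issue                                 # ~$0.167
for pe in (25, 50):
    price = eps * pe
    print(f"At PE {pe}: SP ${price:.2f}, market cap ${price * shares_on_issue / 1e9:.1f}B")
```

Running it reproduces the figures in the post: roughly $8.33 at a 1% yield (about a $15B market cap at PE 50) and about $4.17 at a 2% yield (about $7.5B at PE 25).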
 
  • Like
  • Fire
  • Wow
Reactions: 22 users

Steve10

Regular
$53.2B AUD edge AI hardware processor market by 2030.

Every 1% of market share = $532M revenue x 60% NPAT margin = $319.2M x 50% payout ratio = $159.6M / 1.8B SOI = an 8.87c per share dividend and an $8.87 SP at PE 50.

BRN should have at least 1% market share by 2028.
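The same back-of-envelope formula, parameterised by market share; a sketch only, with the market size, margin, payout ratio and share count taken from the post, and the 2% and 5% rows added purely as hypothetical scenarios:

```python
# Per-market-share version of the same arithmetic; all inputs are assumptions.
market_size = 53.2e9     # assumed 2030 edge-AI hardware processor market (AUD)
npat_margin, payout_ratio, shares_on_issue = 0.60, 0.50, 1.8e9

for market_share in (0.01, 0.02, 0.05):   # 1% from the post; 2% and 5% hypothetical
    revenue = market_size * market_share
    dps = revenue * npat_margin * payout_ratio / shares_on_issue  # dividend per share
    print(f"{market_share:.0%} share: revenue ${revenue/1e6:.0f}M, "
          f"dividend {dps*100:.2f}c, SP at 1% yield ${dps/0.01:.2f}")
```

The 1% row reproduces the 8.87c dividend and $8.87 share price quoted above.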
 
  • Like
  • Fire
  • Thinking
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Elon Musk is literally reaching out for our help! Can someone please let Rob know so he can get on the blower to him ASAP?


Remember I posted something a little while ago about ChatGPT, how it costs millions of dollars a day to run, and how neuromorphic computing could one day be the solution to driving down this staggering expense? Now, if Elon wants to compete with OpenAI's ChatGPT, what better way than to have a mind-boggling, science-fiction, neuromorphic-based solution that massively reduces power consumption, saving billions of dollars over time, while also being generally betterer in every single way?


[Attached screenshots, 2023-03-10]
 
  • Like
  • Fire
  • Haha
Reactions: 18 users

Cardpro

Regular
  • Like
Reactions: 4 users

Slade

Top 20
In the world of tech, a powerhouse team,
Brainchip and partners, a force to be seen,
Renesas, Arm, Intel, SiFive, Prophesee, Megachips, Nviso, alive.
With Edge Impulse and NASA in tow,
Their progress, nothing can overthrow,
A partnership that leaves shorters behind,
And Elon Musk's plans, unrefined.

Brainchip's shareholders will reap the rewards,
As their profits soar with mighty roars,
Their visions of wealth finally fulfilled,
Their pockets with gains, marvelously filled.

In the world of AI, they're breaking new ground,
And Brainchip's partners are tightly bound,
Their collaboration, a force to reckon,
Their shares, a prize to beckon.

And when the dust finally clears, Brainchip and partners will be the pioneers,
The shorters and Elon Musk, to their dismay,
Their losses mounting with every passing day.

For Brainchip's triumph is here to stay,
Their partnership leading the way, To a future where AI reigns supreme,
And Brainchip shareholders live the dream.
 
  • Like
  • Love
  • Fire
Reactions: 38 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 6 users
Well, someone just bought up 598,675 shares at $0.53 in one on-market order, wiping the line out as soon as the price went to $0.52. Basically, everything for sale at $0.525 and $0.53, and some at $0.535, was bought within a minute.

The instos are sucking up the cheap shares... stay strong, team!

I bought some more today at $0.525, a giveaway price.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 8 users