BRN Discussion Ongoing

Tezza

Regular
"Watch us go now" I think I misinterpreted this comment from Sean.🤣
 
Reactions: 1 users

gilti

Regular
Good to see our new substantial holder is continuing their suppression of the sp
so they can accumulate more shares at our expense.
Pricks.
 
Reactions: 8 users

Doz

Regular
[screenshot attachment]

Have a look at this Joe and tell me what you think …

[six further screenshot attachments]

If you ask me, Joe, I think we have another internal systems issue at JPMorgan with their lending and borrowing legal requirements. Looks like they may be using a yellow post-it note system internally.

Not sure who made the decision to go above 4.99%, but thank you.


All in my opinion …….
 
Reactions: 25 users

Rach2512

Regular
 
Reactions: 15 users

7fĂźr7

Top 20
(quoting Doz's post above)

So… no 24 cents? 😞
 
Reactions: 1 users

7fĂźr7

Top 20
I missed this funny post

 
Reactions: 9 users
(quoting Doz's post above)

Be careful what you post, as you might be held accountable for the next 38 years.
 
Reactions: 2 users
Go BrainChip
 
Reactions: 1 users
WBT talking Neuromorphic:


No mention of BrainChip :(
 
Reactions: 5 users

7fĂźr7

Top 20
Come on mate, you've got your point over 86669 times, can't you just drop it now

Yes, I can… I would’ve done that anyway.

But I'll remind you … and everyone else … next time you're in an argument because someone insulted you.

It's always "the others" who bother you when they defend themselves, but when it hits you personally, you think you can claim every right for yourself. Anyway.
 
Reactions: 1 users

Doz

Regular
So… no 24 cents? 😞

Did today's announcement help you work out part of the answer? Maybe it will help if you can attempt to decipher today's messaging without any additional clues.

[screenshot attachment]
 
Last edited:
Reactions: 1 users

Labsy

Regular
What goes down, must come up. Especially if you own over 5% and are in the business of "many multiples" of return...
Hold patiently people and enjoy the ride. We are sitting in the boot now and JP Morgan are driving... Shhhhh 🤫
 
Reactions: 10 users
Has anyone participated in the CR through Australian Super? I'm having difficulty using the app. Any assistance would be appreciated.
 

keyeat

Regular
Has anyone participated in the CR through Australian Super? I'm having difficulty using the app. Any assistance would be appreciated.
Why participate in the CR when it's cheaper to purchase at 17 cents?
 

Gazzafish

Regular
Why participate in the CR when it's cheaper to purchase at 17 cents?
In my opinion, the only reason you would do this is to help BRN raise capital. By buying on market, BRN doesn't get your money; the person selling obviously does. If you buy through the share offer and BRN sells all $2m worth, then they have an extra $2m in the bank, potentially delaying the next capital raise in the future…. That's all….
 
Reactions: 18 users


7fĂźr7

Top 20
Did today's announcement help you work out part of the answer? Maybe it will help if you can attempt to decipher today's messaging without any additional clues.

View attachment 93200

Whatever bro.. I don’t care about your wrong statements anymore.. have fun and good luck
 
Last edited:
Reactions: 1 users

Frangipani

Top 20





Right Sizing AI for Embedded Applications


By Anand Rangarajan
Director, End Markets, GlobalFoundries

By Todd Vierra
Vice President, Customer Engagement, BrainChip

We all know the AI revolution train is heading straight for the Embedded Station. Some of us are already in the driver’s seat, while others are waiting for the first movers to pave the way so we can become fast adopters. No matter where you are on this journey, one thing becomes clear: AI must adapt to the embedded application sandbox—not the other way around.

Embedded applications typically operate within a power envelope ranging from milliwatts to around 10 watts. For AI to be effective in many embedded markets, it must respect the power-performance boundaries of the application. Imagine your favorite device that you charge once a day. If adding embedded AI to a product means you now need to charge it every four hours, you are likely to stop using the product altogether.
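The charging example above can be made concrete with some back-of-the-envelope arithmetic. The sketch below is purely illustrative; the battery capacity and power-draw figures are assumptions, not measurements of any real device:

```python
# Back-of-the-envelope battery-life sketch (all numbers are illustrative
# assumptions, not measurements of any particular product).

def runtime_hours(battery_wh: float, draw_w: float) -> float:
    """Hours of runtime for a given battery capacity and average draw."""
    return battery_wh / draw_w

BATTERY_WH = 1.5      # small wearable-class battery (assumed)
BASE_DRAW_W = 0.06    # baseline average draw of ~60 mW (assumed)

# Baseline: roughly a once-a-day charge.
print(runtime_hours(BATTERY_WH, BASE_DRAW_W))

# Add an AI feature that averages 300 mW (a cloud-sized model
# squeezed on-device): runtime collapses to a few hours.
print(runtime_hours(BATTERY_WH, BASE_DRAW_W + 0.300))

# Add the same feature at ~30 mW (a right-sized edge model):
# runtime stays in all-day territory.
print(runtime_hours(BATTERY_WH, BASE_DRAW_W + 0.030))
```

The point of the exercise is that the AI budget, not the model's accuracy ceiling, is what decides whether the feature survives in the product.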

This is where embedded AI fundamentally differs from cloud AI. In the cloud, adding more computations is often the default solution. But in embedded systems, the level of AI compute must be dictated by what the overall power and performance constraints allow. You can’t just throw more compute silicon at the problem.

There are two key approaches to scaling AI effectively for embedded applications:

1. Process Technology


At the foundational level, advanced process technologies like GlobalFoundries’ 22FDX+ with Adaptive Body Biasing offer a compelling solution. These transistors can deliver high performance during compute-intensive tasks while maintaining low leakage during idle or always-on modes. This dynamic adaptability ensures that the overall power-performance integrity of the application is preserved.

2. Alternative Compute Architectures


Emerging architectures like neuromorphic computing are gaining attention for their ability to run inference at a fraction of the power—and with lower latency—compared to traditional models. These ultra-low-power solutions are particularly promising for applications where energy efficiency is paramount and real-time response is also important.

BrainChip’s AKD1500 Edge AI co-processor, built on the GlobalFoundries 22FDX platform, demonstrates how neuromorphic design can make AI practical for the smallest and most power-sensitive devices. Powered by the company’s Akida™ technology, the chip uses an event-based approach, processing only when there is information, thereby avoiding the constant compute cycles that waste energy by reading and writing to on-chip SRAM or off-chip DRAM, as in traditional AI systems. The co-processor performs event-based convolutions that leverage sparsity throughout the whole network, in both activation maps and kernels, significantly reducing computation power and latency by running as many layers as possible on the Akida™ fabric. The diagram below shows all the interfaces, as well as the 8-node Akida IP as the centerpiece of the AI co-processor.

[Diagram: AKD1500 interfaces with the 8-node Akida IP at the center]
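The saving from event-based, sparsity-aware convolution can be sketched with a toy operation count. This is a conceptual illustration only, not BrainChip's actual Akida implementation; the activation map and kernel below are made-up values:

```python
# Toy comparison of dense vs. event-based convolution cost (1-D case).
# Conceptual sketch only; not how any particular chip implements it.

def dense_conv_macs(activations: list, kernel: list) -> int:
    """A dense convolution does a multiply-accumulate (MAC) for every
    weight at every output position, regardless of the data."""
    n_outputs = max(0, len(activations) - len(kernel) + 1)
    return len(kernel) * n_outputs

def event_based_macs(activations: list, kernel: list) -> int:
    """An event-based convolution only does work when a nonzero
    activation ("event") meets a nonzero weight."""
    events = sum(1 for a in activations if a != 0)
    nonzero_weights = sum(1 for w in kernel if w != 0)
    return events * nonzero_weights

# Sparse activations, as ReLU-style layers tend to produce (illustrative):
acts = [0, 0, 3, 0, 0, 0, 1, 0, 0, 2, 0, 0]
kern = [1, 0, -1]

print(dense_conv_macs(acts, kern))   # 30 MACs in the dense case
print(event_based_macs(acts, kern))  # 6 MACs when only events are processed
```

With only three events in twelve activations and one zero weight in the kernel, the event-driven count is a fraction of the dense one; the sparser the data, the less work (and energy) is spent.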


The design further improves efficiency by handling data locally and using operations that cut power consumption dramatically. The result is a chip that delivers real-time intelligence while operating within just a few hundred milliwatts, making it possible to add AI features to wearables, sensors, and other AIoT devices that previously relied on the cloud for such capability.

The Akida low-cost, low-power AI co-processor solution offers a silicon-proven design that has already demonstrated critical performance metrics, substantially reducing risk for developers. With fully functional interfaces tested at operational speeds and proven interoperability across multiple MCU and MPU boards, the platform ensures seamless integration. The AKD1500 co-processor supports both power-conscious MCUs via SPI4 and high-performance MPUs through M.2 and PCIe interfaces, providing flexibility across many configurations. Enabling software development early with silicon prototypes accelerates time to market. Several customers have already advanced to prototype stages, validating the design’s maturity and readiness for deployment. As an example, Onsor Technologies’ Nexa smart glasses use the AKD1500 for low-power inference to predict epileptic seizures, providing quality-of-life benefits for those suffering from epilepsy.



The best part of this is that the AKD1500 can be used with any low-cost existing MCU with an SPI interface, or with an applications processor where a PCIe connection is available for higher performance. Adding the AKD1500 AI co-processor makes time to market very short with the MCUs available today.

Final Thoughts


As AI starts to sweep across the length and breadth of the embedded space, right-sizing becomes not just a technical necessity but a strategic imperative. The goal isn’t to fit the biggest model into the smallest device; it’s to fit the right model into the right device, with the right balance of performance, power, and user experience.
 
Reactions: 14 users