BRN Discussion Ongoing

wilzy123

Founding Member
Still 2024/2025? When the hell did management ever say that? Who says there'll be an explosion of sales? Watch the financials (the past 3+ years', not the coming ones..!)
What happened to your optimism buddy?

Screenshot_20231024_214023_Chrome.jpg

smirk-funny.gif
 
  • Haha
Reactions: 6 users
I think we really don't know shit.
Who knows what deals have been done? Do you? I know I don't know what they have in store for 2024.

My personal belief is that maybe, just maybe, they don't want anything released until 2024 because of the amount of infrastructure that needs to be put in place: building a new market for one, building an ecosystem, creating partnerships. It all takes time, not to mention the workload for the companies that are interested in the tech.

Anyway, it will happen, and we are way out in front of the crowd.
You're right, we don't know shit. The only things we know are the numbers, and they tell us failure on every front.
Your personal belief is irrelevant. 2023 was the year given for recurring revenue, not 2024 in lumps.
These are the facts, and everything else is nothing but wishful thinking and speculation.
 
  • Like
  • Love
  • Fire
Reactions: 8 users

7für7

Top 20
Didn't anyone find the early release date of today's 4C strange? Going by past releases it should've been out either later this week or next week.
Why? Is there something going to drop next week?
 
  • Like
  • Thinking
  • Haha
Reactions: 34 users

Galaxycar

Regular
Yes, I found the early 4C release strange, but I suppose it's not hard to count 27k. AGM: goodbye, management.
 

Galaxycar

Regular
It will be very hard to justify the 500k Christmas bonuses management will probably pay themselves for hitting imaginary targets.
 

DK6161

Regular
Didn't anyone find the early release date of today's 4C strange? Going by past releases it should've been out either later this week or next week.
Why? Is there something going to drop next week?
Yeah the share price is dropping next week
 
  • Haha
  • Like
  • Fire
Reactions: 15 users

Diogenese

Top 20

Edge boxes already seem to be proliferating in the marketplace.

“This portable and compact Edge box is a game-changer that enables customers to deploy AI applications cost-effectively with unprecedented speed and efficiency to proliferate the benefits of intelligent compute.”

You can just imagine a customer or investor looking at that and saying: fuck it, more verbal diarrhoea that doesn't sell anything or give a differentiator for why we should be buying this.

Let's just go with Qualcomm. A known known, and we know what we're going to get.
That's what all the people who bought the Edsel said.
 
  • Haha
Reactions: 1 user
Seems some gamer tech heads think (wishfully or reasonably :unsure: ) that neuromorphic might start to permeate their boards.


CPU technology in 2024​

Leave a Comment / Resource / By Team Wegamegear.com

Both AMD and Intel are expected to release new CPUs in 2024, so a natural question for many gaming enthusiasts is: "What will CPU technology look like in 2024?" The question may seem straightforward, which is why we have decided to provide a research-oriented perspective on it.

Here are some of the new things we can expect to see in CPU technology in 2024:

New process nodes: AMD and Intel are both expected to release new CPUs based on TSMC’s 3nm process node in 2024. This will allow for further increases in performance and efficiency.

New architectures: AMD is expected to release its Zen 5 architecture in 2024, while Intel is expected to release its 14th-generation Core processors based on the Meteor Lake architecture. Both of these new architectures are expected to offer significant performance improvements over the current generation of CPUs.

More cores and threads: CPUs with higher core and thread counts are becoming more common, and this trend is expected to continue in 2024. We can expect to see more mainstream CPUs with 16 or more cores in 2024.

Integrated AI and machine learning: AI and machine learning are becoming increasingly important in a wide range of applications, and CPUs are becoming better and better at supporting these technologies. We can expect to see more CPUs with integrated AI and machine learning accelerators in 2024.

New technologies

One new technology that we may see in CPUs in 2024 is chiplet design. Chiplet design allows for the creation of CPUs with more cores and threads than would be possible using a traditional monolithic design. This technology is already being used by AMD in its Ryzen Threadripper processors, and it is possible that we will see it used in more mainstream CPUs in 2024.

Another new technology that we may see in CPUs in 2024 is neuromorphic computing. Neuromorphic computing is a type of computing that is inspired by the human brain and mimics the way our brains work. Neuromorphic processors are able to learn and adapt in a way that is similar to how the human brain does. This type of computing could be used to improve the performance of AI and machine learning applications.
 
  • Like
  • Fire
  • Thinking
Reactions: 33 users
Not long to wait for the outcome of Phase II with Intellisense and NECR.

Slated to end May '24.

I like the additional applications it can flow into beyond NASA once it's proven up, like automotive and telecoms.


IMG_20231024_222755.jpg
 
  • Like
  • Love
  • Fire
Reactions: 49 users

Diogenese

Top 20
Not long to wait for the outcome of Phase II with Intellisense and NECR.

Slated to end May '24.

I like the additional applications it can flow into beyond NASA once it's proven up, like automotive and telecoms.


View attachment 48000


Here's a press release from March about a Brainchip/Intellisense commercial partnership which I'd missed or forgotten.


Intellisense selects BrainChip’s neuromorphic technology to improve cognitive radio solutions | IoT Now News & Reports (iot-now.com)

March 30, 2023

Intellisense selects BrainChip’s neuromorphic technology to improve cognitive radio solutions​

Laguna Hills, United States – BrainChip Holdings Ltd, a commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, has announced that Intellisense Systems has selected its neuromorphic technology to improve the cognitive communication capabilities on size, weight and power (SWaP) constrained platforms (such as spacecraft and robotics) for commercial and government markets.

...

“We are excited to partner with BrainChip and leverage their state-of-the-art neuromorphic technology,” says Frank T. Willis, president and CEO of Intellisense. “By integrating BrainChip’s Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability.”

...

“Intellisense provides advanced sensing and display solutions and we are thrilled to be partnering with them to deliver the next generation of cognitive radio capabilities,” says Sean Hehir, CEO of BrainChip. “Our Akida processor is uniquely suited to address the demanding requirements of cognitive radio applications, and we look forward to continue partnering with Intellisense to deliver cutting-edge embedded processing with AI on-chip to their customers.”
 
  • Like
  • Love
  • Fire
Reactions: 48 users

cosors

👀
From the SiFive X280 datasheet it is clear that their AI/ML does not include Akida. It would have been on the drawing board before we met.



View attachment 42513


View attachment 42511


On the other hand, SiFive have recognized Akida's capabilities.

https://brainchip.com/sifive-and-brainchip-partner-to-demo-ip-compatibility/#:~:text=SiFive and BrainChip have partnered to show their,IP working alongside SiFive’s RISC–V host processor IP.
April 20, 2022

SiFive and BrainChip have partnered to show their IP is compatible in SoC designs for embedded artificial intelligence (AI). The companies have demonstrated BrainChip's neuromorphic processing unit (NPU) IP working alongside SiFive's RISC-V host processor IP.

BrainChip's NPU processor IP, the basis for its Akida chip, is a neuromorphic processor designed to accelerate spiking neural networks. This IP can be used to analyze inputs from most sensor types, including cameras, to provide ultra-low power analysis in real-time applications. A recent BrainChip demo showed its Akida chip in a vehicle, detecting the driver, recognizing the driver's face, and identifying their voice simultaneously. Keyword spotting required 600 µW, facial recognition needed 22 mW, and the visual wake-word inference used to detect the driver was 6-8 mW.


https://brainchip.com/akida2-0/

Phil Dworsky, Global Head of Strategic Alliances, SiFive


"more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors"

... so it's there for the future.
SiFive X280
Because I just stumbled across the topic: doesn't the datasheet, with the two passages in combination, mean that Akida could sit both inside and beside the X280? Or is it rather a subsystem within the chip that is meant here, so again inside?

1698157922284.png

1698158063248.png

https://www.sifive.com/cores/intelligence-x280
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 18 users

Diogenese

Top 20
SiFive X280
Because I just stumbled across the topic: doesn't the datasheet, with the two passages in combination, mean that Akida could sit both inside and beside the X280? Or is it rather a subsystem within the chip that is meant here, so again inside?

View attachment 48001
View attachment 48002
https://www.sifive.com/cores/intelligence-x280
Hi cosors,

X280 has been around since April 2021.

The "multiple NN models" are the libraries of images, phonemes, etc that are used in configuring the NN weights and biases. The "models" are data, not silicon.

The NN accelerators can be hardware, but are separate from the processor and are connected to the processor via the "ports" which are connector plugs with the various standard pin arrangements such as USB 3, I2C, etc. Of course, Akida IP could be merged with the X280 IP to make a single SoC, and the "ports" would be replaced by copper conductor tracks, but that would be X280+.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

cosors

👀
As usual, here are the changes to the top 20 from the 2nd to 3rd quarter:

1. PVDM (founder) - same
2. Citi - 113 million, down from 156 million. A massive drop of 43 million shares, or about 28%.
3. BNP - up about 6 million shares (second biggest accumulators in last two quarters).
4. Merril - up around 100k shares (they have been around this amount all year give or take).
5. JP Morgan - up 11 million shares. They bought over 20 million shares last two 'terrible' quarters. Biggest accumulators on the dip.
6. HSBC Australia - down 15 million shares.
7. BNP (2) - up 4 million.
8. Certane - up 10 million.
9. LDA - same.
10. HSBC (2) - down 12 million.
11. BNP (3) - up 1 million.
12. National Nominees - down 7 million.
13. Osserian Fam - same
14. Crossfield - same
15. Certane - up one million
16. Finclear (new) - hold 6.8 million shares. No idea how many they bought.
17. Paul (retail) - same.
18. Warbont nominees - down 2.2 million.
19. Jeff (retail) - same
20. David (retail) - same

Lou Di Nardo (former CEO) and Superhero are out of the top 20.

Interesting changes nonetheless.
Best wishes all, onwards and upwards.
Thanks for your work, very clear!
 
  • Like
  • Fire
Reactions: 9 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 17 users

cosors

👀
PS: I believe Clark Kent is suing SiFive for trade mark infringement.
I only came up with the idea of a sub Akida-P cluster because it could somehow become too big to fit inside.
 
Last edited:
  • Like
Reactions: 4 users

Diogenese

Top 20
I only came up with the idea of a sub Akida-P cluster because it could somehow become too big to fit inside.
Yes. The "P" would be used in servers where a lot of input signals need to be processed, possibly in the VVDN Edge Box, but "P" is very powerful - not really for battery powered devices. It can be up to 1300 times faster than "E".

I like the new ESP display with selectable specifications for each version:
https://brainchip.com/akida-generations/

Interesting to note that the memory per NPE increases to 100 KB in the "P", compared to 25 KB in the "S".

"S" (1 TOPS) is up to 10 times faster than "E", and "P" (131 TOPS) at max is 131 times faster than "S".

So a "P" NPE would have a larger footprint than an "S" NPE.


"E" (2 nodes) functions:
  • vibration Detection
  • Anomaly-Detection.svg
    Anomaly Detection
  • Keyword-Spotting.svg
    Keyword Spotting
  • Sensor-Fusion.svg
    Sensor Fusion
  • Low-res-Presence-Detection.svg
    Low-Res Presence Detection
  • Gesture-Detection.svg
    Gesture Detection

"S" (8 nodes) functions:
  • Advanced-Keyword.svg
    Advanced Keyword Spotting
  • Sensor-Fusion.svg
    Sensor Fusion
  • Low-res-presence.svg
    Low-Res Presence Detection
  • Gesture-Detection.svg
    Gesture Detection & Recognition
  • Object-Classification.svg
    Object Classification
  • Biometric-recognition.svg
    Biometric Recognition
  • Advanced-Speech-rec.svg
    Advanced Speech Recognition
  • Object-Detection.svg
    Object Detection & Semantic Segmentation


"P" (256 nodes) functions:
  • Gesture-Detection.svg
    Gesture Detection*
  • Object-Classification.svg
    Object Classification
  • Advanced-Speech-rec.svg
    Advanced Speech Recognition
  • Object-Detection.svg
    Object Detection & Semantic Segmentation
  • Advanced-sequence-pred.svg
    Advanced Sequence Prediction
  • Video-object-detection.svg
    Video Object Detection & Tracking
  • ViT-networks.svg
    Vision Transformer Networks

* & recognition

A node has 4 NPEs (neuromorphic processing engines).
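For what it's worth, those ratios hang together arithmetically. A rough sketch (the ~0.1 TOPS figure for "E" is my inference from the stated 10x ratio, not a BrainChip number):

```python
# Relative throughput of the Akida 2.0 IP classes, per the figures above:
# "S" = 1 TOPS, "P" = 131 TOPS, and "S" is up to 10x faster than "E".
TOPS_S = 1.0
TOPS_P = 131.0
TOPS_E = TOPS_S / 10  # inferred from the 10x ratio, ~0.1 TOPS

# "P" vs "E": roughly the "up to 1300 times" quoted above
speedup_p_over_e = TOPS_P / TOPS_E  # 1310x

# Node/engine counts: a node has 4 NPEs
nodes = {"E": 2, "S": 8, "P": 256}
npes = {version: count * 4 for version, count in nodes.items()}

# Per-NPE memory: 100 KB ("P") vs 25 KB ("S"), i.e. a 4x increase
mem_ratio = 100 / 25
```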
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 34 users

cosors

👀
Yes. The "P" would be used in servers where a lot of input signals need to be processed, possibly in the VVDN Edge Box, but "P" is very powerful - not really for battery powered devices. It can be up to 1300 times faster than "E".

I like the new ESP display with selectable specifications for each version:
https://brainchip.com/akida-generations/

Interesting to note that the memory per NPE increases to 100 KB in the "P", compared to 25 KB in the "S".

"S" (1 TOPS) is up to 10 times faster than "E", and "P" (131 TOPS) at max is 131 times faster than "S".

So a "P" NPE would have a larger footprint than an "S" NPE.


"E" (2 nodes) functions:
  • vibration Detection
  • Anomaly-Detection.svg
    Anomaly Detection
  • Keyword-Spotting.svg
    Keyword Spotting
  • Sensor-Fusion.svg
    Sensor Fusion
  • Low-res-Presence-Detection.svg
    Low-Res Presence Detection
  • Gesture-Detection.svg
    Gesture Detection

"S" (8 nodes) functions:
  • Advanced-Keyword.svg
    Advanced Keyword Spotting
  • Sensor-Fusion.svg
    Sensor Fusion
  • Low-res-presence.svg
    Low-Res Presence Detection
  • Gesture-Detection.svg
    Gesture Detection & Recognition
  • Object-Classification.svg
    Object Classification
  • Biometric-recognition.svg
    Biometric Recognition
  • Advanced-Speech-rec.svg
    Advanced Speech Recognition
  • Object-Detection.svg
    Object Detection & Semantic Segmentation


"P" (256 nodes) functions:
  • Gesture-Detection.svg
    Gesture Detection*
  • Object-Classification.svg
    Object Classification
  • Advanced-Speech-rec.svg
    Advanced Speech Recognition
  • Object-Detection.svg
    Object Detection & Semantic Segmentation
  • Advanced-sequence-pred.svg
    Advanced Sequence Prediction
  • Video-object-detection.svg
    Video Object Detection & Tracking
  • ViT-networks.svg
    Vision Transformer Networks

* & recognition

A node has 4 NPEs. (neuromorphic processing engine)
And Akida can still be stacked 64 times if I remember right, which is why I said cluster. But that's certainly not what the X280 platform is for.
I'm just waiting for someone to try the maximum possible, even if it's only for research purposes. Maybe BrainChip could take this into their own hands, like ARM does with their own chips, to show what is possible.
It would be interesting to see a real-world comparison too, which, as far as I know, we only have from a single AKD1000.

_____
8,384 TOPS - could this be correct?
Screenshot_2023-10-24-18-44-48-04_40deb401b9ffe8e1df2f1cc5ba480b12.jpg

Even if there would be fewer TOPS in real terms, as I read today:
https://www.eetimes.com/tops-the-truth-behind-a-deep-learning-lie/
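The figure does check out if it is simply 64 stacked devices at the Akida-P maximum of 131 TOPS each (my assumption about where the screenshot's number comes from):

```python
# Hypothetical: 64-way stacking at the 131 TOPS Akida-P maximum
chips = 64
tops_per_chip = 131
total_tops = chips * tops_per_chip
print(total_tops)  # 8384
```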
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users