BRN Discussion Ongoing

cosors

👀
Yes. The "P" would be used in servers where a lot of input signals need to be processed, possibly in the VVDN Edge Box, but "P" is very powerful and not really meant for battery-powered devices. It can be up to 1,300 times faster than "E".

I like the new E/S/P display with selectable specifications for each version:
https://brainchip.com/akida-generations/

Interesting to note that the memory per NPE increases to 100 KB in the "P", compared to 25 KB in the "S".

"S" (1 TOPS) is up to 10 times faster than "E", and "P" (131 TOPS) at max is 131 times faster than "S".

So a "P" NPE would have a larger footprint than an "S" NPE.


"E" (2 nodes) functions:
  • vibration Detection
  • Anomaly-Detection.svg
    Anomaly Detection
  • Keyword-Spotting.svg
    Keyword Spotting
  • Sensor-Fusion.svg
    Sensor Fusion
  • Low-res-Presence-Detection.svg
    Low-Res Presence Detection
  • Gesture-Detection.svg
    Gesture Detection

"S" (8 nodes) functions:
  • Advanced-Keyword.svg
    Advanced Keyword Spotting
  • Sensor-Fusion.svg
    Sensor Fusion
  • Low-res-presence.svg
    Low-Res Presence Detection
  • Gesture-Detection.svg
    Gesture Detection & Recognition
  • Object-Classification.svg
    Object Classification
  • Biometric-recognition.svg
    Biometric Recognition
  • Advanced-Speech-rec.svg
    Advanced Speech Recognition
  • Object-Detection.svg
    Object Detection & Semantic Segmentation


"P" (256 nodes) functions:
  • Gesture-Detection.svg
    Gesture Detection*
  • Object-Classification.svg
    Object Classification
  • Advanced-Speech-rec.svg
    Advanced Speech Recognition
  • Object-Detection.svg
    Object Detection & Semantic Segmentation
  • Advanced-sequence-pred.svg
    Advanced Sequence Prediction
  • Video-object-detection.svg
    Video Object Detection & Tracking
  • ViT-networks.svg
    Vision Transformer Networks

* & recognition

A node has 4 NPEs (neuromorphic processing engines).
And Akida can still be stacked 64 times if I remember right, so I meant a cluster. But that's certainly not what the 280 platform is for.
I'm just waiting for someone to try the maximum possible, even if it's only for research purposes. Maybe BrainChip could take this into their own hands, like ARM does with their own chips, to show what is possible.
It would be interesting to see a real-world comparison too which, as far as I know, we only have for a single AKD1000.

_____
8,384 TOPS: could this be correct?
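The arithmetic behind that figure can be sanity-checked with a quick sketch; the per-version TOPS figures are from the BrainChip page linked above, while the 64-chip cluster size is my recollection and should be treated as an assumption:

```python
# Sanity-check the Akida "E"/"S"/"P" scaling figures quoted above.
# The 64-chip cluster size is a recollection, not a confirmed spec.

S_TOPS = 1            # "S" (8 nodes) rated at 1 TOPS
P_TOPS = 131          # "P" (256 nodes) rated at 131 TOPS
E_TO_S_RATIO = 10     # "S" quoted as up to 10x faster than "E"

p_vs_s = P_TOPS // S_TOPS          # 131x
p_vs_e = p_vs_s * E_TO_S_RATIO     # 1310x, roughly the "1300x" quoted
print(p_vs_s, p_vs_e)

# "P": 256 nodes x 4 NPEs per node, 100 KB per NPE
total_npe_kb = 256 * 4 * 100
print(total_npe_kb)                # 102400 KB, i.e. 100 MB of NPE memory

# Hypothetical 64-chip cluster: 131 TOPS x 64
print(P_TOPS * 64)                 # 8384 TOPS, matching the screenshot
```

If the 64x stacking figure is right, the headline TOPS simply scale linearly; real-world throughput would of course be lower, as the EE Times piece linked in this post argues.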

Even if there would be fewer TOPS in real terms, as I read today:
https://www.eetimes.com/tops-the-truth-behind-a-deep-learning-lie/
 

cosors

👀
I know it from my own sectors: 'everything' I sell, I should have as hardware in the assortment as an example, to be able to demonstrate it.
So I agree with some here.
The lab should be enlarged, in my view.
 



TECH

Regular
Seems some gamer tech heads think (wishful or reasonable :unsure: ) neuromorphic might start to permeate their boards.


CPU technology in 2024
By Team Wegamegear.com

Both AMD and Intel are expected to release new CPUs in 2024, hence the natural question that comes to many gaming enthusiasts: “What will CPU technology look like in 2024?” This type of question may appear straightforward, which is why we have decided to provide a research-oriented perspective on it.

Here are some of the new things we can expect to see in CPU technology in 2024:

New process nodes: AMD and Intel are both expected to release new CPUs based on TSMC’s 3nm process node in 2024. This will allow for further increases in performance and efficiency.

New architectures: AMD is expected to release its Zen 5 architecture in 2024, while Intel is expected to release its 14th-generation Core processors based on the Meteor Lake architecture. Both of these new architectures are expected to offer significant performance improvements over the current generation of CPUs.

More cores and threads: CPUs with higher core and thread counts are becoming more common, and this trend is expected to continue in 2024. We can expect to see more mainstream CPUs with 16 or more cores in 2024.

Integrated AI and machine learning: AI and machine learning are becoming increasingly important in a wide range of applications, and CPUs are becoming better and better at supporting these technologies. We can expect to see more CPUs with integrated AI and machine learning accelerators in 2024.

New technologies

One new technology that we may see in CPUs in 2024 is chiplet design. Chiplet design allows for the creation of CPUs with more cores and threads than would be possible using a traditional monolithic design. This technology is already being used by AMD in its Ryzen Threadripper processors, and it is possible that we will see it used in more mainstream CPUs in 2024.

Another new technology that we may see in CPUs in 2024 is neuromorphic computing. Neuromorphic computing is a type of computing that is inspired by the human brain and mimics the way our brains work. Neuromorphic processors are able to learn and adapt in a way that is similar to how the human brain does. This type of computing could be used to improve the performance of AI and machine learning applications.

Gidday FMF,

"This type of computing could be used to improve the performance of AI and machine learning applications"


The above comment is typical or common, with an emphasis on "could be".

We already know that Peter's genius architecture ALREADY DOES IMPROVE the performance of AI and machine learning applications by orders of magnitude, and this just amplifies the problem (headwind) that our company has been battling ever since I first came on board as a shareholder. Meaning, having the gap between us and the mob does have its advantages, but it's also been a bit of a ball and chain for us to bear out front. BUT the weight around our legs has definitely been lifted somewhat in the last 24 months, in my opinion.

Thanks for all your great posts and contributions to our forum over the last year or so.

Best regards....Tech
🤓
 

Frangipani

Regular

Edge Impulse Wins AI DevWorld’s 2023 AI TechAward for Machine Learning Platforms

  • Business Wire

  • 24-10-2023

  • Edge Impulse, a leading platform for building, refining and deploying machine learning models and algorithms to edge devices, has won the 2023 AI TechAward for the Machine Learning Platforms category ...
The company joins an industry-wide host of top-level AI movers and shakers such as Accrete, Brighterion, ELSA, Stack Overflow, and Vercel

SAN JOSE, Calif.: Edge Impulse, a leading platform for building, refining and deploying machine learning models and algorithms to edge devices, has won the 2023 AI TechAward for the Machine Learning Platforms category by the Advisory Board for the AI DevWorld conference.

“Edge Impulse is proud to be this year’s winner for AI TechAwards’ Machine Learning Platforms category as it recognizes the great work we as a company have been able to achieve over the last year,” said Zach Shelby, CEO of Edge Impulse. “We look forward to continuing the momentum and growing as the leading edge AI platform.”

Edge Impulse’s edge AI platform has helped Fortune 500 and other notable and varied companies like NASA, HP, Oura, and Know Labs. It has also built partnerships with Amazon Web Services, Lexmark, NVIDIA, Sony, TinyML, Brainchip, and Infineon Technologies, among others.


“Edge Impulse is a great example of the newest AI & Machine Learning technologies now allowing developers, engineers, and professionals to build upon the burgeoning AI/ML industry,” said Jonathan Pasky, executive producer & co-founder of DevNetwork, producer of the AI DevWorld conference & the 2023 AI TechAwards. “Today’s digital economy increasingly runs on systems needing increased data and intelligence. Edge Impulse's win here at the 2023 AI TechAwards is evidence of their leading role in the growth of the global AI ecosystem.”

Award winners were selected from the independent, expert-led DevNetwork AI Advisory Board, based on criteria including: technical innovation; attracting notable attention and awareness in the AI, machine learning & data science industries; and general regard and use by AI ecosystems and communities.
Edge Impulse will be presented with its AI TechAward during AI DevWorld 2023 (Oct 24-26, Santa Clara, CA & Oct 31-Nov 2, Live Online), the premier AI, Machine Learning & Data Science conference.

About Edge Impulse
Edge Impulse offers the latest in machine learning tooling, enabling all enterprises to build smarter edge products. Their technology empowers developers to bring more AI products to market faster, and helps enterprise teams rapidly develop industry-specific solutions in weeks instead of years. Edge Impulse provides powerful automations and low-code capabilities to make it easier to build valuable datasets and develop advanced AI for edge devices. Used by makers of health-wearable devices like Oura, Know Labs, and NOWATCH, industrial organizations like NASA, as well as top silicon vendors and over 80,000 developers, Edge Impulse has become the trusted platform for enterprises and developers alike. It provides a seamless integration experience to optimize and deploy with confidence across the largest hardware ecosystem.

To learn more, visit edgeimpulse.com.

Contacts
Marie Williams
Coderella
(415) 707-2793
press@edgeimpulse.com
 

Quiltman

Regular
I’m not trying to connect any dots at all, just reflecting on time to market and the veil of silence that surrounds product development in our segment. It makes investing tough: lots of trust in management, and trying to understand the tech in enough detail to feel confident in the technical moat that has been created and its commercial value.

This comment is by Qualcomm's VP of product management on a new release.

 


buena suerte :-)

BOB Bank of Brainchip
Morning Chippers,

Well, that's the 4C done and dusted... Glad that one is out of the way, and with no real impact on the SP! 🙏

My feeling is that this will be the last very poor 4C we will have to endure! Onwards and upwards from here 🙏

Will 2024 be our turning point??? Very much hope so!

A green day today would be very welcome!

I'm feeling an 'Announcement' is on the way very soon!!! 🙏

A positive Global market run ⬆️ overnight!


Cheers :)
 

Easytiger

Regular

Who owns BrainChip shares right now?

Here are the investors that have lost the most money with BrainChip this year.


Sebastian Bowen
@SebastianTBowen, published May 3, 1:41 pm AEST

It's fair to say that BrainChip Holdings Ltd (ASX: BRN) shares have had a truly terrible month. Around four weeks ago, the BrainChip share price was sitting at 47 cents. But today, it's going for just 40 cents, after touching a new 52-week low of 35 cents a share just last week.
Not much seems to have gone right for BrainChip of late. But the quarterly cash flow activities report that this ASX 200 artificial intelligence share released last week certainly didn't help. As we went through at the time, BrainChip reported that it received just US$40,000 in revenue over the three months to 31 March 2023.
BrainChip shares are now down 47.33% in 2023 alone, and by 58% over the past 12 months:
With losses like that under the belt, it might be a good time to check out who actually owns BrainChip shares or at least the largest owners of this company. Luckily, BrainChip has recently released a list of its top 20 shareholders. Let's take a look.

A look at the top holders of BrainChip shares

According to the company, BrainChip's top 20 shareholders are as follows:
  1. Citicorp, with 9.15% of all outstanding shares
  2. Mr Peter Adrien van der Made, with 8.87%
  3. Merrill Lynch, with 4.88%
  4. BNP Paribas, with 4.75%
  5. HSBC, with 4.44%
  6. JPMorgan, with 2.82%
  7. BNP Paribas (DRP), with 2.53%
  8. HSBC (customer accounts), with 1.17%
  9. National Nominees, with 0.67%
  10. LDA Capital, with 0.52%
  11. BNP Paribas (Retail Clients), with 0.47%
  12. Mrs Rebecca Ossieran-Moisson, with 0.45%
  13. Crossfield Intech (Liebskind Family), with 0.4%
  14. Certane CT Pty Ltd (BrainChip's unallocated long-term incentive plan), with 0.4%
  15. Mr Paul Glendon Hunter, with 0.35%
  16. Certane CT Pty Ltd (BrainChip's allocated long-term incentive plan), with 0.35%
  17. Mr Louis Dinardo, with 0.34%
  18. Mr Jeffrey Brian Wilton, with 0.31%
  19. Mr David James Evans, with 0.31%
  20. Superhero Securities (Client Accounts), with 0.3%
So an interesting list to be sure.
Merrill Lynch, HSBC, Citicorp and BNP Paribas are all institutional investors that probably hold BrainChip shares on behalf of their clients. But Peter van der Made, Rebecca Ossieran-Moisson, Paul Hunter, Louis Dinardo, Jeffrey Wilton and David Evans are certainly worth a deeper dive.
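Out of curiosity, the listed stakes can be totted up with a quick sketch; the percentages are as printed in the article, reading the European-style "4,75" for BNP Paribas as 4.75:

```python
# Tot up the top-20 stakes listed above (percent of shares outstanding).
# "4,75" for BNP Paribas is read as 4.75 (European decimal comma).
stakes = [9.15, 8.87, 4.88, 4.75, 4.44, 2.82, 2.53, 1.17, 0.67, 0.52,
          0.47, 0.45, 0.40, 0.40, 0.35, 0.35, 0.34, 0.31, 0.31, 0.30]

top20_total = round(sum(stakes), 2)
print(top20_total)                   # 43.48: under half the register
print(round(100 - top20_total, 2))   # ~56.52% held outside the top 20
```

So the top 20 between them hold well under half the company, which is why retail flow moves the share price as much as it does.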

Who's who of BrainChip

It's no surprise to see van der Made at the top of the shareholders' list. He is both the founder and current chief technology officer at BrainChip, so it makes sense that he is still one of the largest shareholders of the company.
But the other top individual shareholders are not members of BrainChip's management or board.
Rebecca Ossieran-Moisson appears to be a Western Australia-based academic.
Paul Hunter is an independent insurance business owner.
Louis Dinardo is actually a former CEO of BrainChip who was abruptly terminated from his role as CEO back in 2021.
Jeffrey Wilton is a former academic and current director of research and economics at the Australian Industry Group.
Finally, David Evans is a partner at an architecture firm, as well as an investor.
So there you have it, BrainChip's top 20 shareholders. Notably absent is current BrainChip CEO Sean Hehir.
These investors will certainly be feeling the pain from the recent BrainChip share price performance. No doubt all are hoping the worst is over.
 

Home101

Regular
Edge boxes seem to be proliferating already in the market place..

“This portable and compact Edge box is a game-changer that enables customers to deploy AI applications cost-effectively with unprecedented speed and efficiency to proliferate the benefits of intelligent compute.”

You can just imagine a customer or investor looking at that and saying, fuck it, more verbal diarrhoea that doesn’t sell anything or give a differentiator why we should be buying this..

Let’s just go with Qualcomm. A known known, and we know what we’re going to get..
I can imagine you saying it.
If it’s sold like that, the head of BRN sales obviously couldn’t sell it.

If it’s the first and best available edge AI commercial product, does that not suggest a sales deficiency?

How about marketing the thing?

We don’t know if it is an upgrade on the NVIDIA Jetson or an alternative solution. Why should I buy this one instead of sticking with Nvidia? What problem does the new solution solve? Where’s the crisis that compels market leaders to use this new offering?

A few basic selling points..

There’s a guy called Oren Klaff who would eat these guys’ marketing strategies alive and transform their results.
 

Damo4

Regular

Hey Schnitzel, I think we might be jumping the gun a little as far as marketing goes.
The box is not yet available, but if you look at other offerings (including the Nvidia integrated box), there's a lot more information on the website.

I assume it will come out in time (towards the end of this year, I think they said?) and then we'll be able to judge how well it's being marketed.
For all we know, they may run a side-by-side comparison with other boxes on offer!
I know marketing starts well before release, but I think it might still be a little early.
 

Flenton

Regular
I'm struggling to comprehend the 37k revenue.
If you're looking at $50 an hour for engineering fees (although I think $100 would be closer to the rate), that's 740 charged hours.
One employee working 37.5 hours a week works about 450 hours in 3 months. I see it as either there is a lot of free / goodwill work being done through the partnerships, with big rewards to be paid on product release, or we are seriously overstaffed.
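That back-of-envelope calculation can be written out as a quick sketch; the revenue figure is from the post, while the hourly rates and the 12-week quarter are assumptions:

```python
# Back-of-envelope check of the charged-hours estimate above.
revenue_usd = 37_000            # quarterly revenue as quoted in the post
rate_low, rate_high = 50, 100   # assumed engineering rates, USD/hour

hours_at_low = revenue_usd / rate_low    # 740.0 charged hours at $50/h
hours_at_high = revenue_usd / rate_high  # 370.0 charged hours at $100/h
print(hours_at_low, hours_at_high)

# One full-timer at 37.5 h/week over a ~12-week quarter
quarter_hours = 37.5 * 12
print(quarter_hours)            # 450.0 hours: under two person-quarters
```

At either rate, the quarter's revenue covers less than two full-time engineers, which is the mismatch the post is pointing at.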
 

toasty

Regular
Wouldn't it make sense to start the marketing now to try to stem the flow of orders to the competition?
 

Labsy

Regular
Let's go! brainchip!! let's go!!!
Just saying... no connection. Take it easy fellas...
 

Xray1

Regular
I don't know how we can go ahead with any contemplated marketing / sales / IP agreements without first having a new VP of Sales in place after Chris Stevens' recent departure from the Co ....
 

Master Thesis on Efficient Mapping of Spiking Neural Networks on Neuromorphic Computing System

Ericsson

About the job

About This Opportunity

With the rapid adoption of machine learning in telecommunication networks, the energy consumption associated with the training of cognitive algorithms and inference engines is of increased concern. Bio-inspired computing architectures such as neuromorphic systems could process cognitive tasks in an energy-efficient manner, thereby making the networks sustainable. A variety of tasks, such as deep learning inference, dynamic programming, and quadratic unconstrained binary optimization, can exploit neuromorphic hardware by reformulating the problem into a spiking neural network (SNN) architecture. The capabilities of the neuromorphic hardware, the compilation of the SNN architecture onto that hardware, and the problem formulation determine the scale of problem that can be efficiently solved on a neuromorphic system.

Project Goals

This master thesis aims at solving a telco-related problem on neuromorphic hardware by reformulating the problem into an SNN architecture, mapping the customized SNN onto the neuromorphic hardware in a way that exploits its compute capabilities under resource constraints, and demonstrating an energy-efficient solution.

What You Will Do

The thesis work consists of several items:

  • Understand a Radio Access Network workload that demands energy efficiency.
  • Reformulate the RAN workload into customized SNN architecture and validate the functionality using Neuromorphic Simulator.
  • Devise a mapping technique for the customized SNN architecture that exploits the on-chip resources of the neuromorphic system in an efficient manner.
  • Document and evaluate the solution.
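For readers unfamiliar with the reformulation step described above, the basic unit such SNN architectures are built from is the spiking neuron. A minimal leaky integrate-and-fire (LIF) neuron can be sketched as follows; the leak and threshold constants are illustrative, not tied to any particular neuromorphic chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the basic building block
# of the spiking neural networks the thesis would map onto hardware.
# Leak and threshold values are illustrative, not from any specific system.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate a sequence of input currents; emit 1 on a spike, else 0."""
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = v * leak + current    # leaky integration of the input current
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = 0.0               # reset potential after the spike
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.4, 0.4, 0.4, 0.0, 0.9, 0.3]))  # -> [0, 0, 1, 0, 0, 1]
```

Mapping a network of such neurons onto hardware then amounts to placing them across on-chip cores so that spike traffic and per-core memory stay within the chip's limits, which is the resource-constrained mapping problem the thesis targets.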
You will bring

  • MSc student in Physics/Computer Science/Mathematics or other related fields (must have).
  • Proficient in Probability Theory, Deep Neural Networks, and Spiking Neural Networks (must have).
  • Understanding of Neuromorphic Computing hardware and software stacks (must have).
  • Good programming skills are required: knowledge of C++, Python, Linux (must have).
Additional Details

The work is expected to start in Jan/Feb 2024. The work is proposed for 2 students for a duration of 6 months. Location is at Ericsson Research in Stockholm (Kista), Sweden.

Please submit your application in English as soon as possible - we are working continuously with candidate selection.
 

7für7

Regular
If you have a good team, you don't necessarily need a head of sales for a short period. Do you think the sales staff are all sitting with doughnuts and coffee or hot chocolate in the office waiting for a new manager? 😅
 