BRN Discussion Ongoing

"It's funny how they portray me as a basher here and in the German forum as a pusher"

Wow, basher posts must be brutal in Germany! 😳
You should have witnessed the comment section after the $2 drop
It's like nothing hotcrapper or TSEx have ever witnessed before
 
  • Like
  • Haha
Reactions: 6 users

wilzy123

Founding Member
Replace "High School" with "TSEx"

 
  • Like
Reactions: 2 users

wilzy123

Founding Member
*Some seed for the forum chooks*

Bodes well for BRN IMO

Arm unveils Total Design ecosystem​


Arm's Neoverse Compute Subsystems (CSS) have offered a faster, lower-risk path to custom silicon for infrastructure, delivering Arm technology in a new way: pre-integrated and pre-verified solutions that bring more value than discrete IP to the company's various partners.

https://www.newelectronics.co.uk/content/news/arm-unveils-total-design-ecosystem
 
  • Like
  • Fire
  • Love
Reactions: 20 users

davidfitz

Regular
Nice post... you are another solid contributor to our forum, so I thank you, as I'm sure many others do as well. Your post above is
right on point: many times, a number of former and current staff have indicated we are in a "sweet spot" (like lidar), and they expect solid traction from the smart health industry moving forward.

Keep digging, those diamonds are deep !

Love Akida 💘
Hmmm, I haven't got a lot of time to dig these days. However, if anyone thinks it would be worthwhile, maybe delve into what happened between Biotome and Cardea Bio. Cardea was recently taken over by Paragraf, but I cannot tell whether they are still working with Biotome. Their website doesn't get updated often enough to know!


 
  • Like
  • Fire
Reactions: 6 users

Diogenese

Top 20
Thank you for the detailed response. But once again, I'm not writing negatively, and I am definitely aware of how Brainchip's model works. I've done enough research even before Brainchip presented their packaging with the robot to us, etc. There was absolutely nothing at that time, and yet I invested. I can't help it if you can't filter out the positive aspects of my posts. So, let me make it clear. No matter what the current stock price is and even if the company is holding back, Brainchip will make us happy. You just have to read between the lines. It's funny how they portray me as a basher here and in the German forum as a pusher. 😂


 
  • Haha
  • Like
Reactions: 15 users
Good afternoon to all Chippers. I would like to thank all the positive contributors who keep posting about the progress of BrainChip's technology, especially the news about potential customers using it in their products. It tells me that this company is a long-term winner, so the sensible thing to do is keep investing and just wait for the reward.
What is a fair investment in BRN? I have 200k shares.
 
AI Hardware w/ Jim Keller:



In this video Jim Keller (CEO of Tenstorrent) talks about Tenstorrent's concept for their AI/ML hardware and compares it to GPU architectures etc.

At some point he starts to compare these concepts to the structure of the human brain and mentions "cortical columns" (which we know from some of the BrainChip podcasts is also something PvdM is interested in bringing to the table for Akida's future). This is probably another concept from the field of neuroscience that has also found its way into ML/AI, and isn't anything exclusive to BrainChip. But it's nonetheless nice to hear other important players thinking in the same direction and maybe defining one of the next buzzwords ;)

What I found even more interesting is another quote from Mr. Keller @32:27 regarding model training vs learning:

... and there's a whole bunch of research to do on this because I'm pretty sure our brain doesn't train.

Does everybody realize this? This is the funniest thing, you know: you start training with complicated things, you end training with complicated things, and it's all incremental changes.

No human has ever learned that way. We start with really simple things but as you learn you make quantitative leaps.
And the human data rate for training is really low. We do a maximum of 10 million inferences a year.

It's really interesting. So training, I promise you, training is going to change.
It doesn't make any sense training a computer for three months on the internet with infinitesimal edits to weights.

No human beings ever learned anything that way it's not a thing.

So however it will be implemented (hardware, software, spiking or otherwise), one-shot and few-shot learning seems set to become more important for the coming generations of ML/AI tech (at least according to Mr. van der Made and Mr. Keller).

What I'm still wondering about, or trying to understand, is if and how the self-learned knowledge inside one or multiple hardware chips can be returned to a more central instance/pool, to get insight into what and how the device actually learned, to check for plausibility etc.

But maybe I'm just trying to wrap my head around too many things way above my pay grade ...
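
For anyone curious what one-shot/few-shot learning can look like in practice, here is a minimal toy sketch of a prototype (nearest-class-mean) classifier, which is one common few-shot approach in the literature. It's purely illustrative, with made-up class names and embeddings, and says nothing about how Akida actually does it:

```python
import numpy as np

def fit_prototypes(support_embeddings, support_labels):
    """Compute one prototype (mean embedding) per class from a few examples."""
    classes = sorted(set(support_labels))
    return {c: np.mean([e for e, l in zip(support_embeddings, support_labels) if l == c], axis=0)
            for c in classes}

def classify(query_embedding, protos):
    """Assign the query to the class with the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda c: np.linalg.norm(query_embedding - protos[c]))

# Toy demo: two classes, one example each ("one-shot")
protos = fit_prototypes([np.array([0.0, 0.0]), np.array([1.0, 1.0])], ["cat", "dog"])
print(classify(np.array([0.9, 0.8]), protos))  # -> dog
```

The nice property for edge devices is that adding a new class is just computing one more mean vector, with no retraining of the whole network.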
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Terroni2105

Founding Member
Sally Ward-Foxton answered my question about the Prophesee GenX360 on Twitter. They don't use it specifically.

I hope this news is not old, haven't read for a while here.


Actually she doesn’t say they don’t use it as you infer.
She answers “The prophesee sensor is neuromorphic; this is designed for use with either BrainChip or Synsense or any type of regular processor.”
So it is possible that BrainChip is used.
 
  • Like
  • Fire
Reactions: 31 users

Ian

Founding Member
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Diogenese

Top 20
AI Hardware w/ Jim Keller:



In this video Jim Keller (CEO of Tenstorrent) talks about Tenstorrent's concept for their AI/ML hardware and compares it to GPU architectures etc.

At some point he starts to compare these concepts to the structure of the human brain and mentions "cortical columns" (which we know from some of the BrainChip podcasts is also something PvdM is interested in bringing to the table for Akida's future). This is probably another concept from the field of neuroscience that has also found its way into ML/AI, and isn't anything exclusive to BrainChip. But it's nonetheless nice to hear other important players thinking in the same direction and maybe defining one of the next buzzwords ;)

What I found even more interesting is another quote from Mr. Keller @32:27 regarding model training vs learning:



So however it will be implemented (hardware, software, spiking or otherwise), one-shot and few-shot learning seems set to become more important for the coming generations of ML/AI tech (at least according to Mr. van der Made and Mr. Keller).

What I'm still wondering about, or trying to understand, is if and how the self-learned knowledge inside one or multiple hardware chips can be returned to a more central instance/pool, to get insight into what and how the device actually learned, to check for plausibility etc.

But maybe I'm just trying to wrap my head around too many things way above my pay grade ...

Hi CMF,

Jim is famously on record in an interview with Sally Ward-Foxton as dismissing "that spiky thingy" out of hand, but maybe that was the reefer talking.

PvdM built federated learning into the first patent, and it was divided out as a separate patent:

US10410117B2 Method and a system for creating dynamic neural function libraries

A method for creating a dynamic neural function library that relates to Artificial Intelligence systems and devices is provided. Within a dynamic neural network (artificial intelligent device), a plurality of control values are autonomously generated during a learning process and thus stored in synaptic registers of the artificial intelligent device that represent a training model of a task or a function learned by the artificial intelligent device. Control Values include, but are not limited to, values that indicate the neurotransmitter level that is present in the synapse, the neurotransmitter type, the connectome, the neuromodulator sensitivity, and other synaptic, dendric delay and axonal delay parameters. These values form collectively a training model. Training models are stored in the dynamic neural function library of the artificial intelligent device. The artificial intelligent device copies the function library to an electronic data processing device memory that is reusable to train another artificial intelligent device.
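
A rough sketch of the idea in that abstract, with entirely hypothetical device and register names: control values learned on one device are exported as a "training model", stored in a function library, and copied into another device. This is just an illustration of the concept, not BrainChip's actual mechanism:

```python
import copy

class Device:
    """Toy stand-in for an artificial intelligent device with synaptic registers."""
    def __init__(self):
        self.synaptic_registers = {}  # control values, keyed by learned task

    def learn(self, task, values):
        # In the patent, these values are generated autonomously during learning;
        # here we just set them directly for illustration.
        self.synaptic_registers[task] = values

    def export_model(self, task):
        """Snapshot the learned control values as a reusable training model."""
        return copy.deepcopy(self.synaptic_registers[task])

    def import_model(self, task, model):
        """Load a training model from the library into this device."""
        self.synaptic_registers[task] = copy.deepcopy(model)

library = {}                      # the "dynamic neural function library"
teacher, student = Device(), Device()
teacher.learn("keyword_spotting", {"weights": [0.2, 0.7], "threshold": 0.5})
library["keyword_spotting"] = teacher.export_model("keyword_spotting")
student.import_model("keyword_spotting", library["keyword_spotting"])
print(student.synaptic_registers["keyword_spotting"]["threshold"])  # -> 0.5
```

Which would be one answer to CMF's question above: what the device learned is inspectable in the exported model before it's shared.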
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Taproot

Regular
 
  • Like
  • Love
Reactions: 8 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 16 users

Jannemann

Member
Actually she doesn’t say they don’t use it as you infer.
She answers “The prophesee sensor is neuromorphic; this is designed for use with either BrainChip or Synsense or any type of regular processor.”
So it is possible that BrainChip is used.
Yep. There is still hope...

It sounds like it's open to every neuromorphic company. If Prophesee had already decided on Akida, they could say so... but they haven't yet.
 
  • Like
Reactions: 4 users

Sam

Nothing changes if nothing changes
Good to see BRN and Honda back in talks again guys

 
  • Haha
  • Fire
  • Like
Reactions: 14 users

Jumpchooks

Regular

On the ABC tonight, worth watching
 
  • Like
  • Love
Reactions: 6 users

Perhaps

Regular
Yep. There is still hope...

It sounds like it's open to every neuromorphic company. If Prophesee had already decided on Akida, they could say so... but they haven't yet.
Again and again: Prophesee builds neuromorphic sensor chips of their own. To reach wide acceptance on the market, they make the sensor compatible with all common and future systems, be it Qualcomm, Nvidia, or neuromorphic systems like BrainChip or SynSense. There is a difference between the sensor and the system the sensor feeds into. Inside the sensor itself, the use of Akida just makes no sense.
 
  • Like
  • Thinking
Reactions: 6 users

7für7

Top 20
  • Like
  • Haha
Reactions: 2 users

GStocks123

Regular

Attachments

  • IMG_3935.jpeg
  • IMG_3934.jpeg
  • Like
  • Fire
  • Wow
Reactions: 23 users
Recent article from a French publication talking about our new partnership with EDGX, but also about our new capabilities.

I particularly liked the comments by Laurent Hili, an engineer at ESA.


Translated.

BrainChip teams up with Belgium's EDGX to take its Akida neuromorphic chip into space​

Published 09-10-2023 by Pierrick Arlot

BrainChip, the American company of Australian origin that developed the Akida neuromorphic processor for the network edge, has signed a technology agreement with the Belgian start-up EDGX, founded in 2023 to develop disruptive intelligent space systems, in order to develop data processing units for very demanding environments. According to the young Leuven-based company, EDGX is currently developing a new generation of space computers that combine classic artificial intelligence (AI) acceleration with neuromorphic processing to reduce the power dissipation of embedded AI, increase payload adaptability, and pave the way for autonomous learning capabilities embedded within next-generation satellites themselves.

BrainChip, for its part, has developed with Akida a 100% digital, event-driven AI processor that uses neuromorphic principles mimicking the human brain and analyzes only the essential data detected at the point of acquisition. The idea is to process data efficiently and accurately while limiting power consumption. Moreover, the Akida chip, which can also be implemented as an IP block in a system-on-chip or an ASIC, allows on-device learning independently of the cloud, which, BrainChip says, ensures reduced latency and improves data privacy and security.

« We have built the Akida technology roadmap with the aim of providing high-performance autonomous processing with very low energy consumption that does not require a permanent connection to the cloud; this is essential for constrained environments such as satellites », says Peter van der Made, founder and chief technology officer of BrainChip.

EDGX and BrainChip signed a non-binding memorandum of understanding at the European Space Agency (ESA) EDHPC conference, held from 2 to 6 October 2023 in Juan-les-Pins, France.

« What fascinates us most about the Akida technology is its ability to operate in two different modes », says Laurent Hili, microelectronics and data processing engineer at ESA. « On one side, the Akida processor can run conventional convolutional neural networks (CNNs), making it compatible with many proven CNN models. But on the other hand, in its event-driven mode of operation, it is able to reduce energy demand by several orders of magnitude. Given that power consumption is a primordial concern in the space environment, this type of operation can open the door to a new era of processing in space, achieving things that were not previously possible. »

At the beginning of October, BrainChip announced the availability of the second-generation Akida platform, which can be used in applications as diverse as the connected home, the smart city, industry, and automotive, and is designed for extremely power-efficient processing of complex neural network models on network edge devices.

According to BrainChip, support for 8-bit activation levels and weight values, as well as long-range skip connections, extends the range of models fully accelerated by the Akida platform's hardware resources. The second generation of the Akida platform now integrates spatio-temporal convolutions of the TENN (Temporal Event-based Neural Networks) type, a new class of neural networks distinguished by their lightness and resource efficiency, and particularly well suited to applications that must continuously process raw data containing important temporal information: video analysis, target tracking, audio classification, MRI and CT analysis for the prediction of vital signs, or the time-series analysis used in predictive maintenance.

This technology, combined with hardware acceleration of Vision Transformer (ViT) models, paves the way for edge devices capable of handling advanced vision and video applications while consuming, according to the company, only a few milliwatts at the sensor level (a few microwatts for audio or similar applications).

« Processing generative AI and large language models (LLMs) at the edge is essential to intelligent situational awareness in verticals ranging from manufacturing to healthcare to defense », said Jean-Luc Chatelain, managing director of Verax Capital Advisors. « Breakthrough innovations such as BrainChip's TENN technology supporting Vision Transformers based on neuromorphic principles can deliver compelling solutions in ultra-low-power, small-form-factor devices without compromising accuracy. »
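
To give a flavour of what "continuously processing raw data containing temporal information" means, here's a minimal sketch of a streaming causal 1-D convolution, the basic building block behind temporal convolutional networks generally. The class name, kernel, and input stream are all made up for illustration; this is not BrainChip's TENN implementation:

```python
from collections import deque

class StreamingCausalConv1D:
    """Causal 1-D convolution that consumes one sample at a time,
    keeping only a small buffer of past inputs (edge-friendly)."""
    def __init__(self, kernel):
        self.kernel = kernel                            # filter taps, newest-first
        self.buf = deque([0.0] * len(kernel), maxlen=len(kernel))

    def step(self, x):
        self.buf.appendleft(x)                          # newest sample first
        return sum(k * v for k, v in zip(self.kernel, self.buf))

# Moving-average filter over the last 4 samples of a stream
conv = StreamingCausalConv1D([0.25, 0.25, 0.25, 0.25])
out = [conv.step(x) for x in [4.0, 4.0, 4.0, 4.0]]
print(out[-1])  # -> 4.0 once the buffer is full
```

The point for low-power edge hardware is that each new sample costs one small dot product over a fixed-size buffer, rather than reprocessing the whole signal.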
 
  • Like
  • Fire
  • Love
Reactions: 59 users
Top Bottom