BRN Discussion Ongoing

Rangersman

Member
I have only been a holder of Brainchip for a few years. I don’t comment much; however, I am fairly happy with how the company is progressing. I think Brainchip is building an ecosystem of partners that will deliver revenue in the future. The company has stated time and again that we will see results in the financials, so I am not expecting a huge announcement at any trade event or in a company report. I don’t get too excited by dot joining either. It’s nice to see, but until there is revenue and proof of products by partners, we just don’t know. There was a tendency on hot crapper to build up unrealistic expectations around an upcoming event, partnership, or invented timeline, and then trash the company when what had come to be expected didn’t eventuate, even though the expectation was invented on the forum and not by the company. This is what seems to be happening now on this forum, along with the good old flaming and baiting. I think we are better than that; we have been until recently. I hope we can be again.
 
  • Like
  • Love
Reactions: 46 users

AusEire

Founding Member. It's ok to say No to Dot Joining
If you don't like what someone has to say, you have a few options:

1. Respectfully reply.
2. Leave it/ignore it and move on.
3. Reply rudely.

Just don't choose option 3 and the forum will run smoothly.
You can say whatever you want.

If you do say whatever you want, you should be open to criticism or challenge (something which most free speech advocates are not open to), especially if your opinions have absolutely no basis in fact and are completely imaginary.

For example, claims that the CEO and the management of the company are deliberately keeping shareholders in the dark or misleading them!

This opinion has no basis in fact and is a claim being used by freaks such as The Dean, Sharespam, Stickfuck etc. over in the crapper. All serial downrampers.

Take that whatever way you want.

As a free speech advocate I hope that isn't too rude
 
Last edited:
  • Like
  • Love
  • Haha
Reactions: 39 users

alwaysgreen

Top 20
You can say whatever you want.

If you do say whatever you want, you should be open to criticism or challenge (something which most free speech advocates are not open to), especially if your opinions have absolutely no basis in fact and are completely imaginary.

For example, claims that the CEO and the management of the company are deliberately keeping shareholders in the dark.

This opinion has no basis in fact and is a claim being used by freaks such as The Dean, Sharespam, Stickfuck etc. over in the crapper. All serial downrampers.

Take that whatever way you want.

As a free speech advocate I hope that isn't too rude
Who has ever said that on here? What do shareman and the dean have to do with anything?
 
  • Like
  • Fire
Reactions: 4 users

TopCat

Regular
Would it be too early to start seeing chips with Akida coming out of GlobalFoundries?


 
  • Like
  • Fire
Reactions: 7 users

Slymeat

Move on, nothing to see.
It's interesting that when Peter was asked which customers are best suited to use Akida, one of the first things he said was facial recognition for the doorbell market. This isn't the first time we've heard doorbells mentioned in a podcast. We must have a doorbell product out there somewhere!!
I’d love to see Akida and facial recognition incorporated into locking systems. I.e. facial recognition to unlock and lock. No need for keys. I assume identical twins trust each other enough—they most likely already share keys anyway.

Could extend this to those temporary parcel lockers. Instead of receiving a PIN code, which could be intercepted or forgotten, register your face and use facial recognition to unlock the locker until another face is registered.

The possibilities are immense. Facial recognition to access ANY locked ANYTHING. Cars, safes, doors, gates, shared access points, etc. With Akida being ultra low energy, and cheap (only a small percentage of the total cost), this should not be an issue.
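As a rough sketch of how such a lock could decide whether to open, the snippet below compares a live face embedding against a stored template using cosine similarity. It assumes some on-device embedding model already turns a camera frame into a fixed-length vector; the vectors and the 0.8 threshold here are made-up placeholders, not values from any real product.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=np.float32), np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_unlock(live_embedding, enrolled_embedding, threshold=0.8):
    """Unlock only if the live face is close enough to the enrolled template.
    The embeddings would come from whatever on-device model the lock runs
    (hypothetical here); only the enrolled template needs to be stored."""
    return cosine_similarity(live_embedding, enrolled_embedding) >= threshold

# Toy example with made-up 4-dimensional "embeddings":
enrolled = [0.1, 0.9, 0.3, 0.4]
live_ok  = [0.12, 0.88, 0.31, 0.38]   # same person, slightly different frame
live_bad = [0.9, 0.1, 0.7, 0.2]       # different person
print(should_unlock(live_ok, enrolled))   # True
print(should_unlock(live_bad, enrolled))  # False
```

The point of the sketch is that only the enrolled template ever needs to live on the lock; no image has to leave the device.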
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Learning

Learning to the Top 🕵‍♂️
It's not BRN related; however, I think it's well worth reading as a Brainchip investor.
[Two LinkedIn screenshots attached]


Learning 🏖
 
  • Like
  • Love
  • Fire
Reactions: 34 users

HopalongPetrovski

I'm Spartacus!
I’d love to see Akida and facial recognition incorporated into locking systems. I.e. facial recognition to unlock and lock. No need for keys. I assume identical twins trust each other enough—they most likely already share keys anyway.

Could extend this to those temporary parcel lockers. Instead of receiving a PIN code, which could be intercepted or forgotten, register your face and use facial recognition to unlock the locker until another face is registered.

The possibilities are immense. Facial recognition to access ANY locked ANYTHING. Cars, safes, doors, gates, shared access points, etc. With Akida being ultra low energy, and cheap (only a small percentage of the total cost), this should not be an issue.
I'm slowly getting used to it with my Apple products (and forgetting my passwords 🤣 to boot), and it may be my innate paranoia, but I worry just how easily hackable a visage is.
Deep fakes are already a worry if you happen to be a target, and the type of software needed to produce very professional results is getting cheaper and more easily available by the day.
If not now, then soon, I imagine, with the evolution of the likes of DaliAI and other AI products.
Hell, with the way the internet has been and is being harvested/scraped, I wouldn't be surprised if someone with a bit of knowhow and some small incriminating access to personal data (à la the recent Optus/Medibank breaches) could gain an entrée into bank accounts and God knows what else in our interconnected world. Scary times.
 
  • Like
  • Thinking
  • Fire
Reactions: 12 users

Slymeat

Move on, nothing to see.


Refresher below



Design of MRI structured spiking neural networks and learning algorithms for personalized modelling, analysis, and prediction of EEG signals

Scientific Reports volume 11, Article number: 12064 (2021) Cite this article

Abstract

This paper proposes a novel method and algorithms for the design of MRI structured personalized 3D spiking neural network models (MRI-SNN) for a better analysis, modeling, and prediction of EEG signals. It proposes a novel gradient-descent learning algorithm integrated with a spike-time-dependent-plasticity algorithm. The models capture informative personal patterns of interaction between EEG channels, contrary to single EEG signal modeling methods or to spike-based approaches which do not use personal MRI data to pre-structure a model. The proposed models can not only learn and model accurately measured EEG data, but they can also predict signals at 3D model locations that correspond to non-monitored brain areas, e.g. other EEG channels, from where data has not been collected. This is the first study in this respect. As an illustration of the method, personalized MRI-SNN models are created and tested on EEG data from two subjects. The models result in better prediction accuracy and a better understanding of the personalized EEG signals than traditional methods due to the MRI and EEG information integration. The models are interpretable and facilitate a better understanding of related brain processes. This approach can be applied for personalized modeling, analysis, and prediction of EEG signals across brain studies such as the study and prediction of epilepsy, peri-perceptual brain activities, brain-computer interfaces, and others.
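For anyone unfamiliar with the learning rule named in the abstract, here is a minimal, generic sketch of a pairwise spike-time-dependent-plasticity (STDP) weight update. It is not the paper's MRI-SNN algorithm; the exponential window and the parameters a_plus, a_minus and tau are standard textbook assumptions.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Generic pairwise STDP rule (illustrative only, not the paper's algorithm).

    w      : current synaptic weight
    t_pre  : time of the presynaptic spike (ms)
    t_post : time of the postsynaptic spike (ms)
    """
    dt = t_post - t_pre
    if dt >= 0:
        # pre fires before post -> potentiation
        dw = a_plus * np.exp(-dt / tau)
    else:
        # post fires before pre -> depression
        dw = -a_minus * np.exp(dt / tau)
    return np.clip(w + dw, 0.0, 1.0)   # keep the weight in a bounded range

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms strengthens the synapse
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
```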

I asked about using spiking neuromorphic hardware in the Strawman members-only webcast interview of EMVision’s co-founders (Scott Kirkland and Ron Weinberger) yesterday. EMVision has created a portable brain scanner to detect strokes. It uses a neural network and AI algorithms, applied to a self-generated dataset, to determine whether a stroke has occurred and, importantly, what kind of stroke it is: bleed or clot. It is important to distinguish between them, as they require different treatments, and the wrong treatment IS fatal. I hope to one day see their 2nd-generation device proudly sitting beside every defibrillator in the world, and hope it eventually contains Akida.

They laughed off my question as coming from “probably a Brainchip shareholder”. And they were right, but all I wanted was for them to start considering training their algorithms and gathered datasets on Akida. It would be to their advantage. They did actually elaborate a little, though, so Akida may yet come onto their radar.

The thousands of eyes are accompanied by thousands of mouths that can help spread the word, and I do at every opportunity.

DISC: I hold shares in both BRN and EMV. It’s not half obvious, is it!
 
  • Like
  • Fire
  • Love
Reactions: 14 users
This article is about Prophesee, and gives Brainchip a mention (edit).




Why you will be seeing much more from event cameras

14 February 2023

February/March 2023
Advances in sensors that capture images like real eyes, plus in the software and hardware to process them, are bringing a paradigm shift in imaging, finds Andrei Mihai
[Image credit: Vector Tradition/Shutterstock.com]
The field of neuromorphic vision, where electronic cameras mimic the biological eye, has been around for some 30 years. Neuromorphic cameras (also called event cameras) mimic the function of the retina, the part of the eye that contains light-sensitive cells. This is a fundamental change from conventional cameras – and why applications for event cameras for industry and research are also different.
Conventional cameras are built for capturing images and visually reproducing them. They capture the field of view by snapping frames at predefined intervals, regardless of how the image is changing. These frame-based cameras work excellently for their purpose, but they are not optimised for sensing or machine vision. They capture a great deal of information but, from a sensing perspective, much of that information is useless, because it is not changing.
Event cameras suppress this redundancy and have fundamental benefits in terms of efficiency, speed, and dynamic range. Event-based vision sensors can achieve a better speed-versus-power-consumption trade-off, by up to three orders of magnitude. By relying on a different way of acquiring information compared with a conventional camera, they also address applications in the field of machine vision and AI.
[Image: Event camera systems can quickly and efficiently monitor particle size and movement]
“Essentially, what we’re bringing to the table is a new approach to sensing information, very different to conventional cameras that have been around for many years,” says Luca Verre, CEO of Prophesee, a market leader in the field.
Whereas most commercial cameras are essentially optimised to produce attractive images, the needs of the automotive, industrial, Internet of Things (IoT) industries, and even some consumer products, often demand different performances. If you are monitoring change, for instance, as much as 90% of the scene is useless information because it does not change. Event cameras bypass that as they only monitor when light goes up or down in certain relative amounts, which produces a so-called “change event”.
In modern neuromorphic cameras, each pixel of the sensor works independently (asynchronously) and records continuously, so there is no downtime, even when you go down to microseconds. Also, since they only monitor changing data, they do not monitor redundant data. This is one of the key aspects driving the field forward.
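To make the "change event" idea concrete, here is a toy sketch that derives events from a stack of conventional frames by thresholding per-pixel changes in log intensity. Real event sensors do this asynchronously in hardware with microsecond timing; the threshold value and the use of log intensity here are simplifying assumptions, not Prophesee's implementation.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Toy event generator: emit (t, x, y, polarity) whenever a pixel's
    log intensity has changed by more than `threshold` since its last event.
    This only approximates what an event sensor does per pixel in hardware.
    """
    frames = np.asarray(frames, dtype=np.float32)
    log_frames = np.log1p(frames)          # log intensity, avoids log(0)
    reference = log_frames[0].copy()       # level at which each pixel last fired
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - reference
        fired = np.abs(diff) > threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
        reference[fired] = log_frames[t][fired]   # reset reference where events fired
    return events

# A bright object moving across an otherwise static scene produces events
# only along its edges; the unchanging background generates no data at all.
```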
Innovation in neuromorphic vision
Vision sensors typically gather a lot of data, but increasingly there is a drive to use edge processing for these sensors. For many machine vision applications, edge computation has become a bottleneck. But for event cameras, it is the opposite.
“More and more, sensor cameras are used for some local processing, some edge processing, and this is where we believe we have a technology and an approach that can bring value to this application,” says Verre.
“We are enabling fully fledged edge computing by the fact that our sensors produce very low data volumes. So, you can afford to have a cost-reasonable, low-power system on a chip at the edge, because you can simply generate a few event data that this processor can easily interface with and process locally.
“Instead of feeding this processor with tons of frames that overload them and hinder their capability to process data in real-time, our event camera can enable them to do real-time across a scene. We believe that event cameras are finally unlocking this edge processing.”
Making sensors smaller and cheaper is also a key innovation because it opens up a range of potential applications, such as in IoT sensing or smartphones. For this, Prophesee partnered with Sony, mixing its expertise in event cameras with Sony’s infrastructure and experience in vision sensors to develop a smaller, more efficient, and cheaper event camera evaluation kit. Verre thinks the pricing of event cameras is at a point where they can be realistically introduced into smartphones.
Another area companies are eyeing is fusion kits – the basic idea is to mix the capability of a neuromorphic camera with another vision sensor, such as lidar or a conventional camera, into a single system.
“From both the spatial information of a frame-based camera and from the information of an event-based camera, you can actually open the door to many other applications,” says Verre. “Definitely, there is potential in sensor fusion… by combining event-based sensors with some lidar technologies, for instance, in navigation, localisation, and mapping.”
Neuromorphic computing progress
However, while neuromorphic cameras mimic the human eye, the processing chips they work with are far from mimicking the human brain. Most neuromorphic computing, including work on event camera computing, is carried out using deep learning algorithms that perform processing on CPUs or GPUs, which are not optimised for neuromorphic processing. This is where new chips such as Intel’s Loihi 2 (a neuromorphic research chip) and Lava (an open-source software framework) come in.
“Our second-generation chip greatly improves the speed, programmability, and capacity of neuromorphic processing, broadening its usages in power and latency-constrained intelligent computing applications,” says Mike Davies, Director of Intel’s Neuromorphic Computing Lab.
BrainChip, a neuromorphic computing IP vendor, also partnered with Prophesee to deliver event-based vision systems with integrated low-power technology coupled with high AI performance.
It is not only industry accelerating the field of neuromorphic chips for vision – there is also an emerging but already active academic field. Neuromorphic systems have enormous potential, yet they are rarely used in a non-academic context; in particular, there are as yet no industrial deployments of these bio-inspired technologies. Nevertheless, event-based solutions are already far superior to conventional algorithms in terms of latency and energy efficiency.
Working with the first iteration of the Loihi chip in 2019, Alpha Renner et al (‘Event-based attention and tracking on neuromorphic hardware’) developed the first set-up that interfaces an event-based camera with the spiking neuromorphic system Loihi, creating a purely event-driven sensing and processing system. The system selects a single object out of a number of moving objects and tracks it in the visual field, even in cases when movement stops, and the event stream is interrupted.
In 2021, Viale et al demonstrated the first spiking neural network (SNN) on a chip used for a neuromorphic vision-based controller solving a high-speed UAV control task. Ongoing research is looking at ways to use neuromorphic neural networks to integrate chips and event cameras for autonomous cars. Since many of these applications use the Loihi chip, newer generations, such as Loihi 2, should speed development. Other neuromorphic chips are also emerging, allowing quick learning and training of the algorithm even with a small dataset. Specialised SNN algorithms operating on neuromorphic chips can further help edge processing and general computing in event vision.
“The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint – time. Due to their asynchronous course of operation, considering the precise occurrence of spikes, spiking neural networks take advantage of this constraint,” write Lea Steffen and colleagues (‘Neuromorphic Stereo Vision: A Survey of Bio-Inspired Sensors and Algorithms’).
Lighting is another aspect the field of neuromorphic vision is increasingly looking at. An advantage of event cameras compared with frame-based cameras is their ability to deal with a range of extreme light conditions – whether high or low. But event cameras can now use light itself in a different way.
Prophesee and CIS have started work on the industry’s first evaluation kit for implementing 3D sensing based on structured light. This uses event-based vision and point cloud generation to produce an accurate 3D Point Cloud.
“You can then use this principle to project the light pattern in the scene and, because you know the geometry of the setting, you can compute the disparity map and then estimate the 3D and depth information,” says Verre. “We can reach this 3D Point Cloud at a refresh rate of 1kHz or above. So, any 3D application, such as 3D measurement or 3D navigation, that requires high speed and time precision really benefits from this technology. There are no comparable 3D approaches available today that can reach this time resolution.”
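For readers wondering how a disparity map becomes depth, the snippet below applies the standard triangulation relation z = f × b / d used in structured-light and stereo systems. It shows the generic geometry only; the focal length and baseline figures are arbitrary placeholders, not parameters of the Prophesee/CIS kit.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Standard triangulation: z = f * b / d.
    disparity_px    : per-pixel disparity map (pixels)
    focal_length_px : focal length expressed in pixels
    baseline_m      : distance between projector and camera (metres)
    Returns a depth map in metres; zero-disparity pixels map to inf.
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float32)
    with np.errstate(divide="ignore"):
        depth = (focal_length_px * baseline_m) / disparity_px
    return depth

# Placeholder numbers purely for illustration:
disparity = np.array([[4.0, 8.0], [16.0, 32.0]])
print(disparity_to_depth(disparity, focal_length_px=700.0, baseline_m=0.08))
# Larger disparity means a closer object; each (x, y, z) point then forms the 3D point cloud.
```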
Industrial applications of event vision
Due to its inherent advantages, as well as progress in the field of peripherals (such as neuromorphic chips and lighting systems) and algorithms, we can expect the deployment of neuromorphic vision systems to continue – especially as systems become increasingly cost-effective.
[Image: Event vision can trace particles or monitor vibrations with low latency, low energy consumption, and relatively low amounts of data]
We have mentioned some of the applications of event cameras here at IMVE before, from helping restore people’s vision to tracking and managing space debris. But in the near future perhaps the biggest impact will be at an industrial level.
From tracing particles or quality control to monitoring vibrations, all with low latency, low energy consumption, and relatively low amounts of data that favour edge computing, event vision is promising to become a mainstay in many industrial processes. Lowering costs through scaling production and better sensor design is opening even more doors.
Smartphones are one field where event cameras may make an unexpected entrance, but Verre says this is just the tip of the iceberg. He is looking forward to a paradigm shift and is most excited about all the applications that will soon pop up for event cameras – some of which we probably cannot yet envision.
“I see these technologies and new tech sensing modalities as a new paradigm that will create a new standard in the market. And in serving many, many applications, so we will see more event-based cameras all around us. This is so exciting.”
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 33 users

Slymeat

Move on, nothing to see.
I'm slowly getting used to it with my apple products (and forgetting my passwords 🤣 to boot) and it maybe my innate paranoia but I worry just how easily hackable a visage is?
Deep fakes are already a worry if you happen to be a target, and the type of software needed to produce very professional results is getting cheaper and becoming more easily available by the day.
If not now, then soon, I imagine with the evolution of the likes of DaliAI and other AI products.
Hell, with the way the internet has/is being harvested/scraped I wouldn't be surprised if someone with a bit of knowhow and some small incriminating access to some personal data (ala recent Optus/ Medibank breach's) would be able to gain an entree into bank accounts and God knows what else in our interconnected world. Scary times.
A key only ever defeats an “honest thief”. So, working on the assumption that dishonest ones can already defeat ALL locks, a system that uses something I will never forget to carry (as in my face) would be just fine for me.

But I do take your valid point into consideration.

For bank accounts etc, two-factor authentication should always be used.

A Bluetooth connection to your phone could easily be incorporated into a smart house lock to provide secondary authentication without needing to be connected to the cloud.
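A minimal sketch of what that phone-based second factor could look like, assuming the lock and the phone share a secret key provisioned at pairing time: the lock issues a random challenge over the local link and only accepts a response proving the phone holds the key. This is a generic HMAC challenge-response pattern, not a description of any particular smart-lock product.

```python
import hmac
import hashlib
import os

SHARED_KEY = os.urandom(32)   # provisioned once when the phone is paired with the lock

def lock_issue_challenge():
    """Lock side: generate a fresh random challenge for each unlock attempt."""
    return os.urandom(16)

def phone_answer_challenge(challenge, key):
    """Phone side: prove possession of the shared key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def lock_verify(challenge, response, key):
    """Lock side: recompute the expected response and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = lock_issue_challenge()
response = phone_answer_challenge(challenge, SHARED_KEY)
print(lock_verify(challenge, response, SHARED_KEY))   # True: second factor satisfied
```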

The trouble with data hacks, as seen with Optus and Medibank, is that lazy IT teams store this data unencrypted. And that should be completely illegal!! Passwords should NEVER be stored as plain text. Only a hash of the password should be stored, and then only in a way that is not reversible. I know this from 30 years’ experience in computing and R&D. These bloody lazy organisations should know better and protect the data they store about us far better than they currently do.
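To illustrate the "store only a hash" point, here is a minimal sketch using Python's standard library to keep a salted, slow PBKDF2 hash instead of the password itself. It is one common approach rather than a prescription; the iteration count is indicative only.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    """Return (salt, hash) for storage; the plain-text password is never kept."""
    salt = os.urandom(16)                                   # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes, iterations: int = 600_000):
    """Recompute the hash from the supplied password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```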

An unconnected lock that stores metadata describing my face only within itself, and is Bluetooth-synced to my phone, is impossible to hack remotely. I’m happy with that.
 
  • Like
  • Love
Reactions: 10 users
A key only ever defeats an “honest thief”. So, working on the assumption that dishonest ones can already defeat ALL locks, a system that uses something I will never forget to carry (as in my face) would be just fine for me.

But I do take your valid point into consideration.

For bank accounts etc, two-factor authentication should always be used.

A Bluetooth connection to your phone could easily be incorporated into a smart house lock to provide secondary authentication without needing to be connected to the cloud.

The trouble with data hacks, as seen with Optus and Medibank, is that lazy IT teams store this data unencrypted. And that should be completely illegal!! Passwords should NEVER be stored as plain text. Only a hash of the password should be stored, and then only in a way that is not reversible. I know this from 30 years’ experience in computing and R&D. These bloody lazy organisations should know better and protect the data they store about us far better than they currently do.

An unconnected lock that stores metadata describing my face only within itself, and is Bluetooth-synced to my phone, is impossible to hack remotely. I’m happy with that.
Bluetooth is very unsafe mate.
 
  • Like
  • Fire
Reactions: 5 users

JB49

Regular
I’d love to see Akida and facial recognition incorporated into locking systems. I.e. facial recognition to unlock and lock. No need for keys. I assume identical twins trust each other enough—they most likely already share keys anyway.

Could extend this to those temporary parcel lockers. Instead of receiving a PIN code, which could be intercepted or forgotten, register your face and use facial recognition to unlock the locker until another face is registered.

The possibilities are immense. Facial recognition to access ANY locked ANYTHING. Cars, safes, doors, gates, shared access points, etc. With Akida being ultra low energy, and cheap (only a small percentage of the total cost), this should not be an issue.
One thing I've always wondered: would there be much benefit in using Akida in instances where the products are connected to mains power? I understand the benefit for battery-powered devices, but when connected to mains power, would there be much benefit in swapping to an SNN?
 
  • Like
  • Thinking
Reactions: 7 users

Slymeat

Move on, nothing to see.
Bluetooth is very unsafe mate.
Yeah - just tossing up ideas. I really don‘t like carrying keys! 🥹

Maybe I’ll just get one of those dermal implants one day. I personally know the professor at the local university who is researching it. I worked with her in a previous life. She has actually moved to the US now, but is still attached to my uni.
 
  • Like
  • Fire
Reactions: 7 users

Slymeat

Move on, nothing to see.
One thing I've always wondered: would there be much benefit in using Akida in instances where the products are connected to mains power? I understand the benefit for battery-powered devices, but when connected to mains power, would there be much benefit in swapping to an SNN?
My video doorbell is battery operated. I’d love its battery life to be extended.

WRT your question, though: power consumption wouldn’t be as critical for a device connected to the mains, but things like single-shot learning and storing minimal metadata on the device would certainly still be advantages!
 
  • Like
  • Fire
Reactions: 8 users

Learning

Learning to the Top 🕵‍♂️
One thing I've always wondered: would there be much benefit in using Akida in instances where the products are connected to mains power? I understand the benefit for battery-powered devices, but when connected to mains power, would there be much benefit in swapping to an SNN?
Hi JB49,

To simply answer your question, it's a big 'YES'.

If you listened to the recent podcasts: data centers' energy usage is already almost the size of England's. (Data centers are connected to mains power.)

Our partnership with SiFive is, in my opinion, where Akida will be saving energy in data centers.

JMHO/ DYOR
Learning 🏖
 
Last edited:
  • Like
  • Fire
Reactions: 18 users

HopalongPetrovski

I'm Spartacus!
A key only ever defeats an ”honest thief”. So thinking the dishonest ones will already be able to defeat ALL locks, a system that uses something I will never forget to carry (as in my face) would be just fine for me.

But I do take your valid point into consideration.

For bank accounts etc, two-factor authentication should always be used.

A Bluetooth connection to your phone could easily be incorporated into a smart house lock to provide a secondary authentication without need to be connected to the cloud.

The trouble with data hacks, as seen with Optus and Medibank, is that the lazy IT team store this data unencrypted. And that should be completely illegal!! Passwords should NEVER be stored as plain text. Only the HASH of it should be stored, and then only in a way that it is not reversible. I know this from 30 years experience in computing and R&D. These bloody lazy organisations should know better, and protect the data they store about us much better than they currently do.

An unconnected lock storing metadata describing my face only within itself, and Bluetooth synched to my phone, is impossible to remotely hack. I’m happy with that.
The problem these days is that we've all been conditioned through social media to just surrender our privacy, and if you actually want to do or participate in almost anything, you have to give it up. Almost everyone wants your primary ID docs to set up virtually anything, and in reality it's for the purpose of expanding their holy databases, because that's the new currency.
Then, as you say, they don't protect it properly and we wind up in the shit.
Akida-protected local ID devices can't come quickly enough.
 
  • Like
  • Fire
Reactions: 19 users

Tothemoon24

Top 20
Last edited:
  • Like
  • Fire
Reactions: 20 users

BigDonger101

Founding Member
Yuval Noah Harari speaks of a "Useless Class" of people who are not only unemployed, but unemployable.


The mind is our last frontier; it is what gives us an advantage over machines.

Humans are now using them to cheat on "intellectual" endeavors?

Not long before it will be a pointless exercise.

"The machine can produce work and has knowledge that you can never surpass.
Sorry, they got the job."
Great find Dingo.

Lately, I've only been looking at WEF/WHO releases, as they're the ones who will be in control IMO.

Have you seen the one which Harari does on hackable entities (humans being hackable)? And Klaus.

Very interesting to see what they talk about, but everyone will focus on mainstream bullsh*t.

I think we live in one of the most interesting times...
 
  • Like
  • Fire
Reactions: 8 users

Learning

Learning to the Top 🕵‍♂️
Interesting that RT did say that he sees Nvidia as a future partner rather than a competitor

Very interesting!
Thanks for sharing Tothemoon24.

"Jaguar Land Rover has a partnership with Californian artificial intelligence (AI) and computing giant Nvidia to develop active safety, driver assist and automated driving functions."

We know Brainchip has had a connection with TCS for a long time now.


Learning 🏖
 
  • Like
  • Fire
Reactions: 26 users

Diogenese

Top 20
One thing I've always wondered: would there be much benefit in using Akida in instances where the products are connected to mains power? I understand the benefit for battery-powered devices, but when connected to mains power, would there be much benefit in swapping to an SNN?
I've wondered the same thing, but a week or so ago someone posted an article about supermarket checkouts which said that the image-recognition data traffic (checkout to server) would overload the supermarket's communication network.
 
  • Like
Reactions: 7 users