BRN Discussion Ongoing

Xray1

Regular
IMO..... A good Co announcement came out this afternoon...!!!!
Seems like a couple of the employees at BRN know that they're onto something good and big ..... thus enticing them to take up a significant number of options.
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Diogenese

Top 20
Hi big D - I've corrected my post as I relistened and couldn't find it. Bummer.
Well there would have been some months of cooperation before the agreement was announced.

The SynSense/Prophesee partnership was announced a year ago, so I think it's unlikely that the Prophesee/BrainChip cooperation goes back much before that ... but I still hope you're right.
 
  • Like
  • Love
Reactions: 10 users

Diogenese

Top 20
IMO..... A good Co announcement came out this afternoon...!!!!
Seems like a couple of the employees at BRN know that they're onto something good and big ..... thus enticing them to take up a significant number of options.
I'm pretty sure that a lot of BRN employees know they're onto a good thing.
 
  • Like
  • Haha
  • Love
Reactions: 19 users

Cardpro

Regular
I'm pretty sure that a lot of BRN employees know they're onto a good thing.
Yes, I've read a few employee reviews and, if I recall correctly, although someone complained about the management, almost everyone (including the person who complained) said the work is very interesting and innovative, or something along those lines.
 

Xray1

Regular
I'm pretty sure that a lot of BRN employees know they're onto a good thing.
The only downside is that they will all end up very rich and will not need to work for the Co.....lol lol
 
  • Haha
  • Like
  • Love
Reactions: 9 users

Diogenese

Top 20
The only downside is that they will all end up very rich and will not need to work for the Co.....lol lol
But they'll want to see it through to the cortex.
 
Last edited:
  • Like
  • Love
Reactions: 10 users
Hi big D - I've corrected my post as I relistened and couldn't find it. Bummer.
Hi @Violin1
I just finished listening again, for the fourth time, trying to hear what you heard, and the only reference to two years related to their work on the Metavision platform.

However, this time I took the time to hear, through Luca Verre's heavy accent, what he was saying on one issue, and what he said really needs to be taken notice of: when he and his partner started out building their vision sensor, they knew they were building a "house of straw".

They knew that the human eye was only one part of the equation, as it took in the light and converted it, but it was the brain that took over and then processed and acted upon what the eye had seen. So too was this the case with their sensor: it was only half of the equation until they found BrainChip.

Until this point they had been using optimised old-school technology to process what they were collecting with their sensor, and in many cases were not achieving optimum performance, but AKIDA resolved that issue and completed their vision.

Now, harking back to SynSense: @Diogenese has pointed out probably half a dozen times here that, to be quite blunt, they are not fit to wipe AKIDA's feet (my interpretation of his science). But as a result of the podcast, if you take the list of existing Prophesee partners, we now know as a fact that both Sony and Renault have been pulling up short of being able to fully exploit Prophesee's technology without AKIDA.

If you add in Nviso and the claims it has made about what it can achieve using AKIDA to process its applications, it becomes blindingly obvious that long-term visionary investors have got hold of the comet's tail, and while it might currently be pretty stable, it will in the foreseeable future dramatically increase speed due to the gravitational forces as it rounds the Sun and catapults back across the universe. So hold on very tight.

Next year's AGM is getting closer with each passing day.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 63 users

Sirod69

bavarian girl ;-)
Luca wrote:

We are glad to serve BrainChip's mission to bring neuromorphic processors to the market, solving some of the major roadblocks of machine vision and AI. Thanks Rob Telson for this nice chat!😘
 
  • Like
  • Fire
  • Love
Reactions: 37 users
I am only posting this article because I know I can rely on everyone here to remain calm and not read too much into what Luca Verre says about the project with Sony back in October last year regarding putting processing into the sensor:

Image Sensors World

News and discussions about image sensors


Thursday, October 14, 2021

Prophesee CEO on Future Event-Driven Sensor Improvements​


IEEE Spectrum publishes an interview with Prophesee CEO Luca Verre. There is an interesting part about the company's next generation event-driven sensor:

"For the next generation, we are working along three axes. One axis is around the reduction of the pixel pitch. Together with Sony, we made great progress by shrinking the pixel pitch from the 15 micrometers of Generation 3 down to 4.86 micrometers with generation 4. But, of course, there is still some large room for improvement by using a more advanced technology node or by using the now-maturing stacking technology of double and triple stacks. [The sensor is a photodiode chip stacked onto a CMOS chip.] You have the photodiode process, which is 90 nanometers, and then the intelligent part, the CMOS part, was developed on 40 nanometers, which is not necessarily a very aggressive node. Going for more aggressive nodes like 28 or 22 nm, the pixel pitch will shrink very much.

The benefits are clear: It's a benefit in terms of cost; it's a benefit in terms of reducing the optical format for the camera module, which means also reduction of cost at the system level; plus it allows integration in devices that require tighter space constraints. And then of course, the other related benefit is the fact that with the equivalent silicon surface, you can put more pixels in, so the resolution increases. The event-based technology is not following necessarily the same race that we are still seeing in the conventional [color camera chips]; we are not shooting for tens of millions of pixels. It's not necessary for machine vision, unless you consider some very niche exotic applications.

The second axis is around the further integration of processing capability. There is an opportunity to embed more processing capabilities inside the sensor to make the sensor even smarter than it is today. Today it's a smart sensor in the sense that it's processing the changes [in a scene]. It's also formatting these changes to make them more compatible with the conventional [system-on-chip] platform. But you can even push this reasoning further and think of doing some of the local processing inside the sensor [that's now done in the SoC processor].

The third one is related to power consumption. The sensor, by design, is actually low-power, but if we want to reach an extreme level of low power, there's still a way of optimizing it. If you look at the IMX636 gen 4, power is not necessarily optimized. In fact, what is being optimized more is the throughput. It's the capability to actually react to many changes in the scene and be able to correctly timestamp them at extremely high time precision. So in extreme situations where the scenes change a lot, the sensor has a power consumption that is equivalent to a conventional image sensor, although the time precision is much higher. You can argue that in those situations you are running at the equivalent of 1000 frames per second or even beyond. So it's normal that you consume as much as a 10 or 100 frame-per-second sensor. [A lower power] sensor could be very appealing, especially for consumer devices or wearable devices where we know that there are functionalities related to eye tracking, attention monitoring, eye lock, that are becoming very relevant.
"

My opinion only so DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 53 users

equanimous

Norse clairvoyant shapeshifter goddess


Tiny ML​

(Image source: Medium)

About TinyML:​

TinyML is one of the fastest-growing areas of Deep Learning. In a nutshell, it’s an emerging field of study that explores the types of models you can run on small, low-power devices like microcontrollers.
TinyML sits at the intersection of embedded ML applications, algorithms, hardware and software. The goal is to enable low-latency inference on edge devices that typically consume only a few milliwatts of battery power. By comparison, a desktop CPU consumes about 100 watts (thousands of times more!). Such an extremely reduced power draw enables TinyML devices to operate unplugged on batteries for weeks, months and possibly even years, all while running always-on ML applications at the edge/endpoint.
Although most of us are new to TinyML, it may surprise you to learn that TinyML has served in production ML systems for years. You may have already experienced the benefits of TinyML when you say “OK Google” to wake up an Android device. That’s powered by an always-on, low-power keyword spotter.
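To put the power claim in perspective, here is an illustrative battery calculation (my own numbers, assuming a typical 2000 mAh, 3.7 V cell and a 1 mW always-on workload):

$$\frac{2000\ \text{mAh} \times 3.7\ \text{V}}{1\ \text{mW}} = \frac{7.4\ \text{Wh}}{0.001\ \text{W}} = 7400\ \text{h} \approx 10\ \text{months}$$

That is how an always-on keyword spotter can run for months on a coin-sized battery, whereas the same energy budget would power a 100 W desktop CPU for only a few minutes.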

Why TinyML?​

According to a forecast by ABI Research, around 2.5 billion devices shipped by 2030 are likely to incorporate TinyML techniques. The primary benefit is the creation of smart IoT devices and, beyond that, their wider adoption through a possible reduction in costs.
Most IoT devices perform a specific task. They receive input via a sensor, perform calculations, and send data or perform an action.
The usual IoT approach is to collect data and send it to a centralized server, where machine learning is then used to draw conclusions.
But why don’t we make these devices smart at the embedded system level? We can build solutions like smart traffic signs based on traffic density, send an alert when your refrigerator runs out of stock, or even predict rain based on weather data.
The challenge with embedded systems is that they are tiny, and most of them run on batteries. ML models consume a lot of processing power, and standard machine learning tools like TensorFlow are not suitable for creating models that run on IoT devices.

Building models in TinyML:​

In TinyML, the same ML architectures and approaches are used, but on smaller devices capable of performing different functions, from answering audio commands to executing actions through chemical interactions.
The most famous framework is TensorFlow Lite. With TensorFlow Lite, you can convert your TensorFlow models to run on embedded systems, as sketched below. TensorFlow Lite offers small binaries capable of running on low-power embedded systems.
[Image: TensorFlow Lite]
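As a concrete illustration of that conversion workflow, here is a minimal sketch of turning a Keras model into a quantized TensorFlow Lite flatbuffer. The model itself is a hypothetical stand-in; in practice you would convert your own trained network:

```python
import tensorflow as tf

# Hypothetical stand-in: a tiny Keras model (in practice, a trained one).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization shrinks the weights (e.g. float32 -> int8),
# producing the small binaries that fit on microcontrollers.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)   # flatbuffer ready to deploy to the device
```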
One example is the use of TinyML in environmental sensors. Imagine a device trained to identify temperature and gas quality in a forest; such a device can be essential for risk assessment and the early detection of incipient fires.
Connecting to the network is an energy-consuming operation. Using TensorFlow Lite, you can deploy machine learning models without needing to connect to the Internet, as in the sketch below. This also mitigates security issues, since embedded systems are relatively easy to exploit when exposed to the network.
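To make the "no Internet needed" point concrete, here is a minimal sketch of running that converted model entirely on-device with the TFLite interpreter (the file name and input shape match the hypothetical example above):

```python
import numpy as np
import tensorflow as tf

# Load the flatbuffer produced earlier; everything below runs offline.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 32).astype(np.float32)   # stand-in sensor reading
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()                            # inference, no network call
print(interpreter.get_tensor(out["index"]))     # class probabilities
```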

Advantages of TinyML:​

Data security: as there is no need to transfer information to external environments, data privacy is better guaranteed.
Energy savings: transferring information requires an extensive server infrastructure. When there is no data transmission, energy and resources are saved, and consequently costs.

No connection dependency: if the device depends on the Internet to work and the connection goes down, it becomes impossible to send data to the server. You try to use a voice assistant, and it does not respond because it is disconnected from the Internet.
Latency: data transfer takes time and often introduces a delay. When this process is not involved, the result is instantaneous.
“Where there is data smoke, there is business fire.” — Thomas Redman
 
  • Like
  • Fire
  • Love
Reactions: 22 users

equanimous

Norse clairvoyant shapeshifter goddess
I don't think this has been posted before


ABSTRACT​

Using deep reinforcement learning policies that are trained in simulation on real robotic platforms requires fine-tuning due to discrepancies between simulated and real environments. Multiple methods like domain randomization and system identification have been suggested to overcome this problem. However, sim-to-real transfer remains an open problem in robotics and deep reinforcement learning. In this paper, we present a spiking neural network (SNN) alternative for dealing with the sim-to-real problem. In particular, we train SNNs with backpropagation using surrogate gradients and the Deep Q-Network (DQN) algorithm to solve two classical control reinforcement learning tasks. The performance of the trained DQNs degrades when evaluated on randomized versions of the environments used during training. To compensate for the drop in performance, we apply the biologically plausible reward-modulated spike timing dependent plasticity (r-STDP) learning rule. Our results show that r-STDP can be successfully utilized to restore the network’s ability to solve the task. Furthermore, since r-STDP can be directly implemented on neuromorphic hardware, we believe it provides a promising neuromorphic solution to the sim-to-real problem.
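For anyone wondering what "reward-modulated STDP" looks like mechanically, here is a minimal toy sketch (my own illustrative code with made-up constants, not the paper's implementation): each synapse keeps an eligibility trace driven by spike-timing coincidences, and a delayed scalar reward converts the accumulated trace into an actual weight change.

```python
import numpy as np

TAU_E = 50.0                   # eligibility trace time constant (ms), assumed
A_PLUS, A_MINUS = 0.01, 0.012  # STDP amplitudes, assumed
LR = 0.1                       # learning rate

def stdp_window(dt):
    """Exponential STDP window; dt = t_post - t_pre in ms."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / 20.0)    # pre before post: potentiate
    return -A_MINUS * np.exp(dt / 20.0)       # post before pre: depress

def step(traces, spike_pairs, reward, dt_ms=1.0):
    """One step: decay traces, log spike pairings, gate updates by reward."""
    traces = traces * np.exp(-dt_ms / TAU_E)
    for syn, dt in spike_pairs:               # (synapse index, timing offset)
        traces[syn] += stdp_window(dt)
    dw = LR * reward * traces                 # reward turns traces into dw
    return traces, dw

traces = np.zeros(4)
traces, dw = step(traces, [(0, 5.0), (2, -3.0)], reward=1.0)
print(dw)   # synapse 0 potentiated, synapse 2 depressed
```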
 
  • Like
  • Thinking
  • Fire
Reactions: 12 users

equanimous

Norse clairvoyant shapeshifter goddess
Refers back to Loihi, but still worth posting to show what people are doing with SNNs.

@Fact Finder
Found a link to Pakistan in my previous post, and SNN search results matching one of the authors of this article.


Ali Muhammad​

Tampere, Pirkanmaa, Finland
1,604 followers, 500+ connections

EUROPEAN DYNAMICS
Tampere University of Technology

About​

I am passionate about modern robotics and the opportunities it can bring us. I am interested in understanding how digitalization and robotics together can be most beneficial for future societies. I formulate international industry and academia projects on these topics with multi-disciplinary and multi-cultural teams, to make robotics and technology part of the solution to the grand challenges faced by our generation.

 
  • Like
  • Thinking
Reactions: 5 users

Slade

Top 20
I am only posting this article because I know I can rely on everyone here to remain calm and not read too much into what Luca Verre says about the project with Sony back in October last year regarding putting processing into the sensor: …
[GIF: Happy Daniel Bryan, by WWE]
 
  • Haha
  • Like
Reactions: 19 users
D

Deleted member 118

Guest

[screenshot]


Posted only; haven't bothered reading or listening to it.
 
  • Like
  • Fire
Reactions: 11 users

equanimous

Norse clairvoyant shapeshifter goddess
OK, so I came across this and have filled in the survey; you can imagine what I have written. No harm in others filling it in as well ;)


Brain-Inspired / Neuromorphic Research in the UK: Next Steps
We are putting together a case for direct support of brain-inspired computing to be a UK Government priority, notably in terms of creating a national centre in this area. We are building on the report we commissioned in this area (2021, https://bit.ly/3RpDvSS), which recommended such a centre be established; and are engaging with research councils and government to that end.

We want anything we present to government to reflect the views and needs of the community. So, we would really value your thoughts on whether this is a good idea and, if so, how we might best achieve a more coordinated support framework for brain-inspired computing.

We would be grateful if you could take 5-7 minutes to answer the questions below. There is a free text box at the end of the questionnaire for you to add any further points that you think are relevant.

Thank you for your time.
 
  • Like
  • Love
  • Fire
Reactions: 13 users
I am only posting this article because I know I can rely on everyone here to remain calm and not read too much into what Luca Verre says about the project with Sony back in October last year regarding putting processing into the sensor: …
Those who have read the above might be led by the following extract:

“The second axis is around the further integration of processing capability. There is an opportunity to embed more processing capabilities inside the sensor to make the sensor even smarter than it is today.”

to think that this opportunity was/is AKIDA, and that what BrainChip was able to show Luca Verre with AKIDA and the Prophesee vision sensor (the demonstration Rob Telson says put a smile on his face) was part of their project with Sony to develop their next-generation vision solution.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 52 users

Cardpro

Regular
With all the information about neuromorphic and edge AI on TSE, it might be worthwhile to offer a Bachelor's/Master's degree on potential applications of edge AI and neuromorphics using BrainChip, offered by TSE and managed/led by @Fact Finder & @Diogenese
 
  • Like
  • Haha
  • Fire
Reactions: 20 users
D

Deleted member 118

Guest
  • Like
  • Fire
Reactions: 4 users

Sam

Nothing changes if nothing changes
G'day FF, how is your $2-ish share price looking by Christmas? My $4.25 is looking very unrealistic 😬
 
  • Like
  • Haha
  • Thinking
Reactions: 13 users

TopCat

Regular
Came across this while looking into Qualcomm’s Ride platform. Was trying to find a link between Qualcomm and Prophesee and autonomous vehicles. Not sure of the date though.
[screenshot]

 
  • Like
  • Fire
  • Love
Reactions: 19 users