BRN Discussion Ongoing

Deadpool

Did someone say KFC
So I noticed an after-market trade of over 100k shares at 1.07.
I wonder why it's not registered as the official closing price?

I'm feeling the warm and fuzzies for next week...
Was hoping to accumulate tomorrow afternoon but I fear I may not get the price I'm after...
AKIDA BALLISTA!
Hey there @Labsy, always thought your avatar looked vaguely familiar, so I finally decided to click on it and, lo and behold, Les Grossman appears. Not a fan of Tom Cruise in general, but he pulled this character off spectacularly. I can imagine that stock shorters have this same personality.
Anyway, Regards
I'm sure you're not that way inclined
Tom Cruise Dance GIF
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

Slymeat

Move on, nothing to see.
I'm not an IT tech head either, and a beginner on top of that. I quickly skimmed the report and came to the same conclusion. It's a CNN, and in addition the chip doesn't seem to be ready, or not all parts are integrated. I also read about the accuracy.

"As a result, when performing multi-core parallel inference on a deep CNN, ResNet-20, the measured accuracy on CIFAR-10 classification (83.67%) is still 3.36% lower than that of a 4-bit-weight software model (87.03%)."
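For a sense of what that "4-bit-weight software model" baseline means, here is a toy sketch (all numbers invented, nothing taken from the paper) of uniform 4-bit weight quantization. The 87.03% figure is already a software model restricted to 4-bit weights; the chip loses a further 3.36% to analog non-idealities on top of this.

```python
import numpy as np

# Toy sketch of uniform 4-bit weight quantization (hypothetical weights,
# not from the NeuRRAM paper).
rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, size=1000)      # hypothetical trained weights

levels = 2 ** 4                        # 16 representable levels for 4 bits
w_max = np.abs(w).max()
step = 2 * w_max / (levels - 1)        # symmetric uniform quantizer step
w_q = np.round(w / step) * step        # snap each weight to its nearest level

# Worst-case rounding error is half a quantizer step.
print("quantization step:", step)
print("max abs error:", np.abs(w - w_q).max())
```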

"The intermediate data buffers and partial-sum accumulators are implemented by a field-programmable gate array (FPGA) integrated on the same board as the NeuRRAM chip. Although these digital peripheral modules are not the focus of this study, they will eventually need to be integrated within the same chip in production-ready RRAM-CIM hardware."
Of course I don't know what that means in detail, but to me it doesn't seem ready. Maybe one of you who knows about this will comment.

"We use CNN models for the CIFAR-10 and MNIST image classification tasks. The CIFAR-10 dataset consists of 50,000 training images and 10,000 testing images belonging to 10 object classes."
https://www.nature.com/articles/s41586-022-04992-8

@Slymeat I was looking at our manufacturing technology (WBT) the other day, so these details jumped out at me. They don't do what Weebit deliberately chose to do. You mentioned that this is exactly where Weebit's advantage lies: far cheaper and easier to manufacture.

"The RRAM device stack consists of a titanium nitride (TiN) bottom-electrode layer, a hafnium oxide (HfOx) switching layer, a tantalum oxide (TaOx) thermal-enhancement layer and a TiN top-electrode layer."

"The current RRAM array density under a 1T1R configuration is limited not by the fabrication process but by the RRAM write current and voltage. The current NeuRRAM chip uses large thick-oxide I/O transistors as the ‘T’ to withstand >4-V RRAM forming voltage and provide enough write current. Only if we lower both the forming voltage and the write current can we obtain higher density and therefore lower parasitic capacitance for improved energy efficiency."

https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-119627
https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-119495
I fear people are over-thinking things.

As investors, even just as human beings, we all need to accept that we don't know everything, and we don't need to know everything. Some things we simply need to accept, especially if they come directly from the company or from reputable research institutions.

Weebit Nano state their product is cheaper to produce as it contains no exotic materials and can be mass produced by fabs with no need to re-tool. So let’s start by simply accepting that.

In a previous post, I think it was on the WBT forum, I summarised a detailed article that @cosors brought to my attention, which used a physical 16 kb Weebit ReRAM chip to perform limited neuromorphic processing. From the physically measured attributes of the 16 kb chip, they used industry-standard software (SPICE) to simulate a 20 Mb block of ReRAM to process trained images. They needed to simulate the 20 Mb chip as a physical device did not yet exist. From this we should accept that ReRAM CAN be used to perform some neuromorphic operations. And that, again, is all we need to accept.

In this article, weights are stored in memory cells and ReRAM is used to create logic paths that simulate synapses. ReRAM is also used to store results and even provide some degree of LSTM. The system didn't, however, have the ability to learn on the fly.
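The general trick these in-memory-compute papers lean on can be sketched in a few lines. This is my own toy illustration, not Weebit's or NeuRRAM's actual circuit: weights live in the array as cell conductances, input activations are applied as row voltages, and each column current is then a multiply-accumulate for free, by Ohm's and Kirchhoff's laws. Signed weights are handled here with a hypothetical differential pair of cells.

```python
import numpy as np

# Toy model of a ReRAM crossbar multiply-accumulate.
rng = np.random.default_rng(1)
weights = rng.uniform(-1, 1, size=(4, 3))   # hypothetical synaptic weights

# Map signed weights onto a differential pair of non-negative conductances.
g_pos = np.clip(weights, 0, None)           # "positive" cell conductances
g_neg = np.clip(-weights, 0, None)          # "negative" cell conductances

v_in = np.array([0.3, -0.1, 0.8, 0.2])      # input activations as row voltages

# Column currents: I = V·G_pos - V·G_neg, i.e. the weighted sums.
i_out = v_in @ g_pos - v_in @ g_neg
print(i_out)                                # same result as v_in @ weights
```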

I stated that I think such systems will have a place, they can be viewed as a poor-man’s/scaled-down version of Akida. And in a lot of situations, that is all that will be needed. They will have a place, and that place may simply be a feeder to future Akida development.

Once developers have something to play with, they may either realize the limitations of their system or decide they need more functionality: the very functions that Akida already provides. It's quite a natural development cycle to not fully understand what you need, or what is possible, until you first build a prototype.

I believe part of the problem with general uptake/acceptance of BrainChip's Akida technology is that it is so bloody powerful, it is a foreign concept, and so many people don't know what the hell to do with it. And the "it" extends to neuromorphic computing in general, let alone when further complicated by LSTM, cortical columns and the like.

There truly are a lot of WANCAs out there and not all of them are wankers!

Getting people to think of sparse neuromorphic spiking neural networks, as well as the concept of a power-restricted and connectionless edge, may be a bit too much all at once.

Sure, BrainChip supplies tools that effortlessly port standard trained CNNs to Akida, but we, on this forum, have heard evidence that developers do this porting and start looking at their designs in a different light. Some of them even decide to take a backward step when they realise the vast improvements that Akida opens up to them.

As investors we want products out and in the hands of consumers, but the companies developing them don't want to release something that has less than optimal potential, and in some cases looks quite bad compared to what they see Akida can achieve for them.

That’s why I applaud BrainChip’s initiative of taking their technologies to universities for the next generation of technical innovators to play with. Empowering these students with AI concepts that they WILL need in the decades to come. What a brilliant move.
 
  • Like
  • Love
  • Fire
Reactions: 66 users

Earlyrelease

Regular
I fear people are over-thinking things. […]
Sly.
And that's why I believe our founders have licensed a two-mode model and allowed the chip to be scalable. Finally, the price originally talked about was low to a) capture the market and b) keep others from spending millions on research and then not recouping it, as people won't pay a higher price if there is a better model for cheaper. So while this is interesting, I totally agree with you. To me (glass always half full) this just shows how good our product is, what hurdles the others must cross to be equal, and that's before they hit the patent hurdles. Panteen

Brainers stay strong, stay long.
 
  • Like
  • Fire
  • Love
Reactions: 41 users

Cgc516

Regular
  • Like
  • Fire
Reactions: 10 users

Zedjack33

Regular
What we can do the best is to hold our shares even tighter than ever.
Been riding it for a few years now.

I’m holding, but also have a timeline.

Fingers crossed.
 
  • Like
Reactions: 3 users

wilzy123

Founding Member
Normally, when the order happened after market like these big, it will be a SP drop followed next day. Finger cross

I love this fortune cookie logic
 
  • Haha
  • Like
Reactions: 16 users

alwaysgreen

Top 20
Been riding it for a few years now. […]

Same and same. As long as you have a clear entry/exit strategy and stick to it.

I think patience is extremely important here. Neuromorphic processing is new. So new, that my spellcheck doesn't even recognise the word and tries to change it to metamorphic 🤣.

It will take time for customers to uptake our tech. We know how good it is but customers will need a lot of convincing and likely time.
 
  • Like
  • Haha
Reactions: 14 users

yogi

Regular
Great work T,

I too think this is a big step for our Company.

It's impressive that CMU has already successfully completed a pilot session this year.

From our press release

"The Program successfully completed a pilot session at Carnegie Mellon University this past spring semester and will be officially launching with Arizona State University in September. There are five universities and institutes of technology expected to participate in the program during its inaugural academic year."


You inspired me last night to do a bit of digging, so I thought I would look into Professor John Paul Shen, who said this:

"We have incorporated experimentation with BrainChip’s Akida development boards in our new graduate-level course, “Neuromorphic Computer Architecture and Processor Design” at Carnegie Mellon University during the Spring 2022 semester,” said John Paul Shen, Professor, Electrical and Computer Engineering Department at Carnegie Mellon. “Our students had a great experience in using the Akida development environment and analyzing results from the Akida hardware. We look forward to running and expanding this program in 2023"

It didn't take much digging for my socks to be blown off, impressive man (y)

John Shen

Professor, Electrical and Computer Engineering

Bio

John Paul Shen was a Nokia Fellow and the founding director of Nokia Research Center - North America Lab. NRC-NAL had research teams pursuing a wide range of research projects in mobile Internet and mobile computing. In six years (2007-2012), NRC-NAL filed over 100 patents, published over 200 papers, hosted about 100 Ph.D. interns, and collaborated with a dozen universities. Prior to joining Nokia in late 2006, John was the Director of the Microarchitecture Research Lab at Intel. MRL had research teams in Santa Clara, Portland, and Austin, pursuing research on aggressive ILP and TLP microarchitectures for IA32 and IA64 processors. Prior to joining Intel in 2000, John was a tenured Full Professor in the ECE Department at CMU, where he supervised a total of 17 Ph.D. students and dozens of M.S. students, received multiple teaching awards, and published two books and more than 100 research papers. One of his books, “Modern Processor Design: Fundamentals of Superscalar Processors” was used in the EE382A Advanced Processor Architecture course at Stanford, where he co-taught the EE382A course. After spending 15 years in the industry, all in the Silicon Valley, he returned to CMU in the fall of 2015 as a tenured Full Professor in the ECE Department, and is based at the Carnegie Mellon Silicon Valley campus.

Education​

Ph.D.
Electrical Engineering
University of Southern California

M.S.
Electrical Engineering
University of Southern California

B.S.
Electrical Engineering
University of Michigan

Research​

Modern Processor Design and Evaluation​

With the emergence of superscalar processors, phenomenal performance increases are being achieved via the exploitation of instruction-level parallelism (ILP). Software tools for aiding the design and validation of complex superscalar processors are being developed. These tools, such as VMW (Visualization-Based Microarchitecture Workbench), facilitate the rigorous specification and validation of microarchitectures.

Architecture and Compilation for Instruction-Level Parallelism​

Microarchitecture and code transformation techniques for effective exploitation of ILP are being studied. Synergistic combinations of static (compile-time software) and dynamic (run-time hardware) mechanisms are being explored. Going beyond a single instruction stream is necessary to achieve effective use of wide superscalar machines, as well as tightly coupled small-scale multiprocessors.

Dependable and Fault-Tolerant Computing​

Techniques are being developed to exploit the idling machine resources of ILP machines for concurrent error checking. As ILP machines get wider, the utilization of the machine resources will decrease. The idling resources can potentially be used for enhancing system dependability via compile-time transformation techniques.

Keywords​

  • Wearable, mobile, and cloud computing
  • Ultra energy-efficient computing for sensor processing
  • Real-time data analytics
  • Mobile-user behavior modelling and deep learning



I also checked out his Linkedin page see screenshot, I especially like the sentence at the bottom

View attachment 14406



NCAL: Neuromorphic Computer Architecture Lab






"Energy-Efficient, Edge-Native, Sensory Processing Units​

with Online Continuous Learning Capability"​


The Neuromorphic Computer Architecture Lab (NCAL) is a new research group in the Electrical and Computer Engineering Department at Carnegie Mellon University, led by Prof. John Paul Shen and Prof. James E. Smith.

RESEARCH GOAL: New processor architecture and design that captures the capabilities and efficiencies of brain's neocortex for energy-efficient, edge-native, on-line, sensory processing in mobile and edge devices.
  • Capabilities: strong adherence to biological plausibility and Spike Timing Dependent Plasticity (STDP) in order to enable continuous, unsupervised, and emergent learning.
  • Efficiencies: can achieve several orders of magnitude improvements on system complexity and energy efficiency as compared to existing DNN computation infrastructures for edge-native sensory processing.

RESEARCH STRATEGY:
  1. Targeted Applications: Edge-Native Sensory Processing
  2. Computational Model: Space-Time Algebra (STA)
  3. Processor Architecture: Temporal Neural Networks (TNN)
  4. Processor Design Style: Space-Time Logic Design
  5. Hardware Implementation: Off-the-Shelf Digital CMOS

1. Targeted Applications: Edge-Native Sensory Processing
Targeted application domain: edge-native on-line sensory processing that mimics the human neocortex. The focus of this research is on temporal neural networks that can achieve brain-like capabilities with brain-like efficiency and can be implemented using standard CMOS technology. This effort can enable a whole new family of accelerators, or sensory processing units, that can be deployed in mobile and edge devices for performing edge-native, on-line, always-on, sensory processing with the capability for real-time inference and continuous learning, while consuming only a few mWatts.

2. Computational Model: Space-Time Algebra (STA)
A new Space-Time Computing (STC) Model has been developed for computing that communicates and processes information encoded as transient events in time -- action potentials or voltage spikes in the case of neurons. Consequently, the flow of time becomes a freely available, no-cost computational resource. The theoretical basis for the STC model is the "Space-Time Algebra" (STA) with primitives that model points in time and functional operations that are consistent with the flow of Newtonian time. [STC/STA was developed by Jim Smith]

3. Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks, for implementing a class of functions based on the space time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.

4. Processor Design Style: Space Time Logic Design
Conventional CMOS logic gates based on Boolean algebra can be re-purposed to implement STA based temporal operations and functions. Temporal values can be encoded using voltage edges or pulses. We have developed a TNN architecture based on two key building blocks: neurons and columns of neurons. We have implemented the excitatory neuron model with its input synaptic weights as well as a column of such neurons with winner-take-all (WTA) lateral inhibition, all using the space time logic design approach and standard digital CMOS design tools.

5. Hardware Implementation: Standard Digital CMOS Technology
Based on the STA theoretical foundation and the ST logic design approach, we can design a new type of special-purpose TNN-based "Neuromorphic Sensory Processing Units" (NSPU) for incorporation in mobile SoCs targeting mobile and edge devices. NSPUs can be a new core type for SoCs already with heterogeneous cores. Other than using off-the-shelf CMOS design and synthesis tools, there is the potential for creating a new custom standard cell library and design optimizations for supporting the design of TNN-based NSPUs for sensory processing.
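The neuron-and-column building blocks described above can be sketched abstractly. The following is my own toy simplification of temporal coding, not NCAL's actual design: values are encoded as spike times (earlier spike = stronger value), synapses add delays, a neuron fires once a threshold number of delayed spikes has arrived, and winner-take-all lateral inhibition keeps only the earliest-firing neuron in a column.

```python
import numpy as np

# Toy temporal-coding neuron and winner-take-all (WTA) column.
INF = np.inf

def neuron_fire_time(in_times, delays, threshold):
    """Fire when `threshold` delayed input spikes have arrived."""
    arrivals = np.sort(in_times + delays)
    return arrivals[threshold - 1] if threshold <= len(arrivals) else INF

def wta_column(in_times, delay_matrix, threshold):
    """Return the earliest-firing neuron's index and all output spike times."""
    times = np.array([neuron_fire_time(in_times, d, threshold)
                      for d in delay_matrix])
    winner = int(np.argmin(times))
    # Lateral inhibition: losing neurons are suppressed (never spike).
    out = np.full_like(times, INF)
    out[winner] = times[winner]
    return winner, out

spikes = np.array([1.0, 3.0, 2.0])       # input spike times (arbitrary units)
delays = np.array([[0.5, 0.5, 0.5],      # neuron 0: uniform synaptic delays
                   [2.0, 0.1, 0.1]])     # neuron 1: a different tuning
winner, out = wta_column(spikes, delays, threshold=2)
print(winner, out)
```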



And the Course Syllabus - Spring 2022 18-743: “Neuromorphic Computer Architecture & Processor Design”

BrainChip is listed on the course

View attachment 14407

View attachment 14408

It's great to be a share holder :)
Good find and TechG much appreciated .Cheers
 
  • Like
  • Love
  • Fire
Reactions: 7 users

Labsy

Regular
Hey there @Labsy, always thought your avatar looked vaguely familiar, so I finally decided to click on it and, lo and behold, Les Grossman appears. […]
Haha...no not at all...definitely long term holder.
I guess I like the confidence and strength of the personality. This dance, I will dance...come Xmas...
Cheers buddy.
 
  • Like
  • Haha
  • Fire
Reactions: 12 users

buena suerte :-)

BOB Bank of Brainchip
Last edited:
  • Like
  • Love
  • Sad
Reactions: 11 users

Zedjack33

Regular
What’s with many bailing out of TSE?
 
  • Like
Reactions: 1 users

Deadpool

Did someone say KFC
What’s with many bailing out of tse?
Dear @Zedjack33, all I can say on the matter:
Be not afraid of greatness. "Some are born great, some achieve greatness, and others have greatness thrust upon them." And then still others invest in BRN and slowly go insane.
Jack Nicholson Johnny GIF
 
  • Haha
  • Like
  • Love
Reactions: 23 users

uiux

Regular
  • Like
  • Fire
Reactions: 4 users

jtardif999

Regular
IMO members on our scientific advisory board should have signed a Stat Dec on the subject of no conflict of interest to ensure against sharing of our trade secrets with competition.
It seems very suss to me that Gunter goes off and releases a competitive chip after being privy to AKIDA IP.
How else could he be an advisor without knowing intimate details of our chip?
Pretty sure PVDM would not have allowed any of the SAB to be privy to trade secrets.
 
  • Like
  • Thinking
Reactions: 5 users

MDhere

Top 20
Dear @Zedjack33 All I can say on the matter. […]
Jack Nicholson Johnny GIF
Hated that movie, never watched a scary movie ever again after that one
 
  • Haha
  • Like
Reactions: 4 users

MDhere

Top 20
Please don’t put words in my mouth. I asked a question and never made a statement. If you don’t know the answer that’s fine. Today is the first we have heard of this chip that to many of us sounds very similar to Akida. You tell us that one of the developers was on BrainChip’s scientific board. The question that I asked is reasonable. The guy goes from our scientific board and later is part of a team that releases a neuromorphic chip. Unlike Akida, which we knew about for years before it was developed, this chip seems to have come out of the blue. With all your research you didn’t even know it was being developed.
Well, that convo escalated a little between the two of you lol. How about I sing a song? I know it's a tad late, but it still might chill the air even at this hour 🤣 Here goes: oh, I'd like to have a beer with Slade, I'd like to have a beer with uiux, 'cause Slade and uiux are my mates 🍻🍻
 
  • Like
  • Haha
  • Love
Reactions: 33 users

cosors

👀
Hey there @Labsy, always thought your avatar looked vaguely familiar, so I finally decided to click on it and, lo and behold, Les Grossman appears. […]
Believe it or not, my own sophisticated spiking network guided me to put him on my follow list within a split second because of his profile picture; it was automatic 🤣 Sometimes I have to rely on spiking intuition!
@Labsy that's just meant in a friendly way. I appreciate your contributions. Your picture simply drew me in, and my intuition was right!
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 6 users

cosors

👀
Hi cosors,

This is something which I have noted before - academics confine their research to peer-reviewed publications because anything that is not peer reviewed is not "proven" scientifically.

In fact, finding Akida in peer reviewed papers may be a benefit of the Carnegie Mellon University project, as the students and academics will be experimenting with Akida and producing peer reviewed papers.
I would like to respond again to your good explanation. I have now thought about it for longer, and you are right, of course 😶‍🌫️ I can't completely wipe away my thoughts, but that's not what matters; your argument carries more weight. In the short time I've been here I have only BrainChip in my head; it's like an automatism. You have a much more balanced way of looking at things. Actually, it's the same with me, but I don't yet have the composure that comes from experience, and fortunately I only drew their attention to the university offensive in a friendly way. Please forgive my impetuous manner. I am also influenced by Talga, where the thing is brilliant but we have had to defend it incessantly for years against those who are blindly against it, whether from outside or inside the group (HC). That has shaped me a bit, and I should rethink it. I did not want to annoy you with my first answer. You're right.
___
are you actually the daring barrel from the old days?
 
Last edited:
  • Like
  • Fire
Reactions: 13 users
 

chapman89

Founding Member
SPACE SITUATIONAL AWARENESS WITH EVENT-BASED VISION
Space Junk

WORLD FIRST NEUROMORPHIC INSPIRED MOBILE TELESCOPE OBSERVATORY FOR HIGH PERFORMANCE SPACE SITUATIONAL AWARENESS

The growing reliance on satellites has led to an increased risk in collisions between space objects. Accurate detection and tracking of satellites has become crucial.​

Astrosite, a world-first neuromorphic-inspired mobile telescope observatory developed by the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University, is using Event-Based sensing as a more efficient and low-power alternative for Space Situational Awareness.

  • Day & night high performance operations
  • High speed: µs temporal resolution
  • Continuous capture during device movement
  • No fixed exposure time
  • 10 to 1000x less data
  • Low power


Astrosite – Introduction to Space Situational Awareness and Event based Sensors​


WHAT IS SPACE JUNK AND WHY DOES IT MATTER ?
Gregory Cohen and his team at Western Sydney University have been working with the Australian Department of Defence to detect and track both working satellites and space junk, which in some ways can be like looking for the proverbial needle in a haystack.
According to their website there are currently about 4,850 satellites in space, but only about 40% of them are active. Dead satellites and debris in orbit leave potential for disruptions, and this has increased dramatically over the last 50 years, making it even more important to track and monitor objects in space. This is typically done through high-resolution cameras that collect mostly images of empty space, resulting in huge amounts of unnecessary data. These cameras are also not ideal for capturing images in daylight.

“With tens of thousands of man-made objects currently orbiting in space, the risk of collision between debris, satellites and spacecraft has become a serious concern for organisations with a commercial interest in space, as well as national and international defence agencies,” says Associate Professor Gregory Cohen.

Full Lunar Eclipse captured with Prophesee Metavision® sensor​


Low Earth Orbit in High Wind Situations captured with Prophesee Metavision® sensor​

ASTROSITE – EVENT-BASED VISION IN A TELESCOPE


One of the top benefits of Event-Based Vision is the ability to capture only the most essential information and ignore all the redundant noise. What better use of this than exploring and monitoring the vastness of space?
The Astrosite is a mobile observatory that uses Event-Based Vision in a new telescope system. This allows the system to only collect information when something changes in its field of view – i.e. when a satellite or other object is detected.
This results in far less data being processed than taking a series of snapshots. The high dynamic range that Prophesee’s Metavision Event-Based Vision sensor delivers allows for observation of space even during the daytime.
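As a rough sketch of where the "10 to 1000x less data" figure comes from, here is a toy frame-differencing model of an event sensor. The scene, threshold, and frame sizes are all invented, and real event sensors such as Prophesee's do this per pixel in analog, asynchronously, rather than by comparing stored frames.

```python
import numpy as np

def frame_to_events(prev, curr, threshold=0.15):
    """Compare two frames in log intensity; emit (x, y, polarity) events
    only where the brightness change exceeds the contrast threshold."""
    diff = np.log1p(curr) - np.log1p(prev)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)
    return list(zip(xs.tolist(), ys.tolist(), polarity.tolist()))

rng = np.random.default_rng(2)
frame0 = rng.uniform(0, 1, size=(64, 64))   # mostly static "sky"
frame1 = frame0.copy()
frame1[10, 20:24] += 0.8                    # a small bright object moves in

events = frame_to_events(frame0, frame1)
print(len(events), "events vs", frame1.size, "pixels per full frame")
```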

The Astrosites – ready to be deployed!​

Our world is becoming increasingly reliant on satellites, but we are doing very little to protect them or manage the end-of-life process. As a result, there is a critical need for accurate detection and tracking of satellites. These researchers are harnessing Event-Based Vision solutions to make this important job extremely efficient.

ABOUT INTERNATIONAL CENTRE FOR NEUROMORPHIC SYSTEMS


Western Sydney University has established the International Centre for Neuromorphic Systems (ICNS) – the only dedicated neuromorphic laboratory in Australia – as a home and global hub for leading researchers and students in this increasingly important field. The work of ICNS encompasses all three essential components of data-based decision-making systems, as their vision is to perform world-leading research to develop neuromorphic sensors, algorithms, and processors, and apply them to solve problems in modern society.

https://www.prophesee.ai/space-situational-awareness-event-based-vision/?utm_source=SM&utm_medium=SM&utm_campaign=IC+Launch
 
  • Like
  • Fire
  • Love
Reactions: 45 users