All roads lead to JAST

uiux

Regular
What is JAST?



https://www.researchgate.net/profil...tterns-using-a-novel-STDP-based-algorithm.pdf





https://www.asx.com.au/asxpdf/20170320/pdf/43gxg7g8c6xq25.pdf


BrainChip Advances its Position as a Leading Artificial Intelligence Provider with an Exclusive License for Next-Generation Neural Network Technology

Agreement with Toulouse Tech Transfer and the CERCO research center to license the exclusive rights to the JAST learning rules


The JAST Patents & Research

https://www.researchgate.net/public...r_Neuromorphic_Bio-Inspired_Vision_Processing

Digital Design For Neuromorphic Bio-Inspired Vision Processing

“I would like to thank Peter Van Der Made and Anil Mankar from BrainChip company for their support for commercialization of our ideas”

Patents spun out from this research:

1. Unsupervised detection of repeating patterns in a series of events, European Patent Office EP16306525, Nov 2016, Amirreza Yousefzadeh, Timothee Masquelier, Jacob Martin, Simon Thorpe. Licensed to the Californian start-up BrainChip.

2. Method, digital electronic circuit and system for unsupervised detection of repeating patterns in a series of events, European Patent Office EP17305186, Feb 2017, Amirreza Yousefzadeh, Bernabe Linares-Barranco, Timothee Masquelier, Jacob Martin, Simon Thorpe. Exclusively licensed to the Californian start-up BrainChip.

3. Method and Apparatus for Stochastic STDP with Binary Weights, U.S. Patent and Trademark Office 012055.0440P, Nov 2017, Evangelos Stromatias, Amirreza Yousefzadeh, Teresa Serrano-Gotarredona, Bernabe Linares-Barranco. Licensed to Samsung Advanced Institute of Technology.


https://jov.arvojournals.org/article.aspx?articleid=2651951

Unsupervised learning of repeating patterns using a novel STDP based algorithm

Authors

Abstract
Computational vision systems that are trained with deep learning have recently matched human performance (Hinton et al). However, while deep learning typically requires tens or hundreds of thousands of labelled examples, humans can learn a task or stimulus with only a few repetitions. For example, a 2015 study by Andrillon et al. showed that human listeners can learn complicated random auditory noises after only a few repetitions, with each repetition invoking a larger and larger EEG activity than the previous. In addition, a 2015 study by Martin et al. showed that only 10 minutes of visual experience of a novel object class was required to change early EEG potentials, improve saccadic reaction times, and increase saccade accuracies for the particular object trained. How might such ultra-rapid learning actually be accomplished by the cortex? Here, we propose a simple unsupervised neural model based on spike timing dependent plasticity, which learns spatiotemporal patterns in visual or auditory stimuli with only a few repetitions. The model is attractive for applications because it is simple enough to allow the simulation of very large numbers of cortical neurons in real time. Theoretically, the model provides a plausible example of how the brain may accomplish rapid learning of repeating visual or auditory patterns using only a few examples.
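To make the abstract's mechanism concrete, here is a minimal Python sketch, assuming a single leaky integrate-and-fire neuron and a simplified nearest-spike STDP rule. It is not the authors' actual model, and every parameter value below is invented for illustration: a fixed spatiotemporal motif repeats inside background noise, and the neuron's spikes should drift toward the repetitions after only a few presentations.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 input channels; a fixed 50 ms spatiotemporal motif repeats every 400 ms,
# embedded in background noise spikes.
n_in, t_total = 100, 20000  # ms
pattern = rng.random((n_in, 50)) < 0.05
spikes = rng.random((n_in, t_total)) < 0.02
starts = np.arange(0, t_total - 50, 400)
for s in starts:
    spikes[:, s:s + 50] = pattern

w = rng.random(n_in) * 0.5            # synaptic weights
v, v_thresh, tau = 0.0, 6.0, 10.0     # membrane state, threshold, decay (ms)
last_pre = np.full(n_in, -1e9)        # last spike time per input channel
a_plus, a_minus, tau_stdp = 0.03, 0.02, 20.0

fire_times = []
for t in range(t_total):
    pre = spikes[:, t]
    last_pre[pre] = t
    v = v * np.exp(-1.0 / tau) + w @ pre
    if v >= v_thresh:
        # Simplified STDP: strengthen inputs that spiked recently, weaken the rest.
        dw = np.where(t - last_pre < tau_stdp, a_plus, -a_minus)
        w = np.clip(w + dw, 0.0, 1.0)
        v = 0.0
        fire_times.append(t)

# After a few repetitions the neuron's output spikes should cluster on the motif.
recent = fire_times[-20:]
hits = sum(any(s <= f < s + 50 for s in starts) for f in recent)
print(f"{hits}/{len(recent)} of the last output spikes fell inside the pattern")
```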


https://patents.google.com/patent/EP3324343A1/en

Unsupervised detection of repeating patterns in a series of events

Inventor
Current Assignee
  • Consejo Superior de Investigaciones Cientificas CSIC
  • Centre National de la Recherche Scientifique CNRS

Abstract

A method of performing unsupervised detection of repeating patterns in a series (TS) of events (E21, E12, E5 ...), comprising the steps of:
a) Providing a plurality of neurons (NR1 - NRP), each neuron being representative of W event types;
b) Acquiring an input packet (IV) comprising N successive events of the series;
c) Attributing to at least some neurons a potential value (PT1 - PTP), representative of the number of common events between the input packet and the neuron;
d) Modifying the event types of neurons having a potential value exceeding a first threshold TL; and
e) Generating a first output signal (OS1 - OSP) for all neurons having a potential value exceeding a second threshold TF, and a second output signal, different from the first one, for all other neurons.

A digital integrated circuit configured for carrying out such a method.
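For readers who want to see steps a) to e) in code: a minimal Python sketch, assuming one simple reading of step d) (swap a single non-matching weight toward the observed packet). W, N, TL, TF and the neuron count are arbitrary toy values; in the patent family these are parameters and the thresholds can be adjusted.

```python
import random

random.seed(1)

M, W, N, P = 64, 8, 8, 16   # event types, weights per neuron, packet size, neurons
TL, TF = 2, 5               # learning and firing thresholds (toy values)

# a) a plurality of neurons, each representative of W event types
neurons = [set(random.sample(range(M), W)) for _ in range(P)]

def process_packet(events, learn=True):
    packet = set(events)                 # b) an input packet of N events
    outputs = []
    for nr in neurons:
        potential = len(packet & nr)     # c) potential = common events
        if learn and potential > TL:     # d) modify event types above TL
            stale = list(nr - packet)
            fresh = list(packet - nr)
            if stale and fresh:          # swap one weight toward the packet
                nr.remove(random.choice(stale))
                nr.add(random.choice(fresh))
        outputs.append(1 if potential > TF else 0)  # e) first/second output
    return outputs

# A fixed pattern packet recurs among otherwise random packets:
pattern = random.sample(range(M), N)
for step in range(400):
    process_packet(pattern if step % 5 == 0 else random.sample(range(M), N))

print("neurons firing on the pattern:", sum(process_packet(pattern, learn=False)))
```

Because the pattern recurs identically, swaps toward it accumulate faster than swaps toward one-off random packets, so most neurons end up firing on the pattern.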


https://patents.google.com/patent/US20190286944A1/en

Method, digital electronic circuit and system for unsupervised detection of repeating patterns in a series of events

Inventor
Current Assignee
  • Consejo Superior de Investigaciones Cientificas CSIC
  • Centre National de la Recherche Scientifique CNRS

Abstract
A method of performing unsupervised detection of repeating patterns in a series of events includes a) Providing a plurality of neurons, each neuron being representative of W event types; b) Acquiring an input packet comprising N successive events of the series; c) Attributing to at least some neurons a potential value, representative of the number of common events between the input packet and the neuron; d) Modifying the event types of neurons having a potential value exceeding a first threshold TL; and e) Generating a first output signal for all neurons having a potential value exceeding a second threshold TF, and a second output signal, different from the first one, for all other neurons. A digital electronic circuit and system configured for carrying out such a method are also provided.


https://patents.google.com/patent/US20190138900A1/en

Neuron circuit, system, and method with synapse weight learning

Inventor:
  • Bernabe LINARES-BARRANCO
  • Amirreza YOUSEFZADEH
  • Evangelos STROMATIAS
  • Teresa SERRANO-GOTARREDONA
Current Assignee
  • Consejo Superior de Investigaciones Cientificas CSIC
  • Samsung Electronics Co Ltd

Abstract
A neuron circuit performing synapse learning on weight values includes a first sub-circuit, a second sub-circuit, and a third sub-circuit. The first sub-circuit is configured to receive an input signal from a pre-synaptic neuron circuit and determine whether the received input signal is an active signal having an active synapse value. The second sub-circuit is configured to compare a first cumulative reception counter of active input signals with a learning threshold value based on results of the determination. The third sub-circuit is configured to perform a potentiating learning process based on a first probability value to set a synaptic weight value of at least one previously received input signal to an active value, upon the first cumulative reception counter reaching the learning threshold value, and perform a depressing learning process based on a second probability value to set each of the synaptic weight values to an inactive value.
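Read as pseudocode, the three sub-circuits might look like the toy Python below. The counter threshold and the two probabilities are invented values, and the real invention is an event-driven hardware circuit, not software; this just makes the stochastic binary-weight update rule easier to follow.

```python
import random

random.seed(2)

N_SYN = 32           # synapses from pre-synaptic neurons
LEARN_THRESHOLD = 8  # cumulative active-input count that triggers learning
P_POT = 0.5          # probability of potentiating a recently active synapse
P_DEP = 0.05         # probability of depressing any other synapse

weights = [random.randint(0, 1) for _ in range(N_SYN)]  # binary weights
counter = 0
recent = [False] * N_SYN  # inputs seen since the last learning event

def on_input_spike(syn_idx):
    """Handle one active input signal from pre-synaptic neuron syn_idx."""
    global counter, recent
    recent[syn_idx] = True          # first sub-circuit: detect active input
    counter += 1
    if counter >= LEARN_THRESHOLD:  # second sub-circuit: compare to threshold
        for i in range(N_SYN):      # third sub-circuit: stochastic update
            if recent[i] and random.random() < P_POT:
                weights[i] = 1      # potentiation to the active value
            elif random.random() < P_DEP:
                weights[i] = 0      # depression to the inactive value
        counter = 0
        recent = [False] * N_SYN

# Drive the neuron with a biased input stream: synapses 0-7 fire most often.
for _ in range(500):
    idx = random.randrange(8) if random.random() < 0.7 else random.randrange(N_SYN)
    on_input_spike(idx)

print("weights on frequently active synapses:", weights[:8])
print("mean weight elsewhere:", sum(weights[8:]) / (N_SYN - 8))
```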



BERNABE LINARES BARRANCO

http://www2.imse-cnm.csic.es/~bernabe/

He is co-founder of two start-ups, Prophesee SA (www.prophesee.ai) and GrAI-Matter-Labs SAS (www.graimatterlabs.ai), both on neuromorphic hardware.


https://patents.google.com/patent/ES2476115B1/en

Method and device for the detection of the temporal variation of light intensity in a photosensor matrix

Inventor
Current Assignee
  • Consejo Superior de Investigaciones Cientificas CSIC

Abstract
Method and device for detecting the temporal variation of the light intensity in an array of photosensors, comprising a pixel array, an automatic adjustment block for photocurrent amplification, and an arbiter and event-encoder block. Each pixel comprises a photosensor that generates a photocurrent, an adjustable-gain current mirror connected to the output of the photosensor, a transimpedance amplifier placed at the output of the current mirror, optionally at least one amplification circuit placed at the output of the transimpedance amplifier, and capacitors and threshold detectors that determine whether the output voltage exceeds an upper threshold or falls below a lower threshold, generating an event in the pixel.
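The thresholding behavior the abstract describes (emit an event each time the amplified photo-signal moves one step above or below the last reference level) can be sketched in a few lines of Python. The log-domain model and the contrast threshold value are my assumptions, not taken from the patent.

```python
import math

THRESHOLD = 0.15  # contrast step per event (hypothetical value)

def pixel_events(intensities):
    """Toy event pixel: ON/OFF events on upper/lower threshold crossings."""
    ref = math.log(intensities[0])  # last reference level, log domain
    events = []
    for t, i in enumerate(intensities[1:], start=1):
        v = math.log(i)
        while v - ref > THRESHOLD:   # exceeds the upper threshold -> ON event
            ref += THRESHOLD
            events.append((t, "ON"))
        while ref - v > THRESHOLD:   # falls below the lower threshold -> OFF event
            ref -= THRESHOLD
            events.append((t, "OFF"))
    return events

# A steadily brightening input produces a sparse stream of ON events:
ramp = [100 * (1.05 ** t) for t in range(30)]
print(pixel_events(ramp)[:5])
```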



https://www.prophesee.ai/2018/02/21/chronocam-becomes-prophesee/

CHRONOCAM BECOMES PROPHESEE, INTRODUCING A NEW ERA IN MACHINE VISION

Chronocam, the inventor of the most advanced neuromorphic vision system, is now Prophesee, a branding and identity transformation that reflects our expanded vision for revolutionizing how machines see.

Today, with 60+ employees and backed by more than $40 million in capital investment from world-class companies like Intel Capital, Renault and Robert Bosch, we are poised to take advantage of a huge opportunity to transform the machine vision landscape. We are addressing the most challenging pain points in enabling better and more efficient ways to allow cameras and sensors to improve our lives. Through higher performance, faster data processing and increased power efficiency, Prophesee will enhance safety, productivity and user experiences in many ways not possible with existing technology.


https://www.prophesee.ai/wp-content...EE-Metavision-sensor-packaged.PR_.Sept-25.pdf

Prophesee introduces the first Event-Based Vision sensor in an industry-standard, cost-efficient package

The sensor can be used by system developers to improve, and in some cases create, whole new industrial uses, including accelerating quality assessment on production lines; positioning, sensing and movement guidance for robots to enable better human collaboration; and equipment monitoring (e.g., for vibration or kinetic deviations), making the system an asset for predictive maintenance and reduced machine downtime.


https://www.prophesee.ai/2020/02/19/prophesee-sony-stacked-event-based-vision-sensor/

Prophesee and Sony Develop a Stacked Event-Based Vision Sensor with the Industry’s Smallest*1 Pixels and Highest*1 HDR Performance

The new sensor and its performance results were announced at the International Solid-State Circuits Conference (ISSCC) held in San Francisco in the United States, starting on February 16, 2020.

The new stacked Event-based vision sensor detects changes in the luminance of each pixel asynchronously and outputs data including coordinates and time only for the pixels where a change is detected, thereby enabling high efficiency, high speed, low latency data output. This vision sensor achieves high resolution, high speed, and high time resolution despite its small size and low power consumption.
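In other words, the output is a sparse stream of per-pixel change events rather than full frames. A toy illustration of that data format and why it saves bandwidth follows; the field names are mine, not Sony's or Prophesee's.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column where a luminance change was detected
    y: int         # pixel row
    t_us: int      # timestamp in microseconds
    polarity: int  # +1 brighter, -1 darker

def data_ratio(events, width, height, frames):
    """Event count vs. the pixel count of equivalent full frames."""
    return len(events) / (width * height * frames)

# Three events instead of two full frames' worth of pixels:
stream = [Event(10, 20, 1000, 1), Event(11, 20, 1010, 1), Event(10, 21, 1400, -1)]
print(f"fraction of full-frame data: {data_ratio(stream, 1280, 720, 2):.2e}")
```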


https://www.bosch-presse.de/presspo...-of-chronocam-led-by-intel-capital-74049.html

Robert Bosch Venture Capital participates in 15 million Dollar funding round of Chronocam led by Intel Capital

Stuttgart, Germany – Robert Bosch Venture Capital, the corporate venture capital company of the Bosch Group, participates in the $15 million Series B investment round of Chronocam SA. This investment comes about 18 months after Robert Bosch Venture Capital led the Series A round of Chronocam in 2015. The Paris-based start-up company is the innovation leader in biologically-inspired vision sensors and computer vision solutions. The new funding comes from lead investor Intel Capital, along with iBionext, 360 Capital, Renault-Nissan Group, as well as the investors of Chronocam’s previous Series A round, CEAi and Robert Bosch Venture Capital.


https://patents.google.com/patent/EP3543898A1/en

Fast detection of secondary objects that may intersect the trajectory of a moving primary object

Inventor
  • Michael Pfeiffer
  • Jochen Marx
  • Oliver Lange
  • Christoph Posch
  • Xavier LAGORCE
  • Spiros NIKOLAIDIS
Current Assignee
  • CHRONOCAM SA
  • Robert Bosch GmbH

Abstract
A system (1) for detecting dynamic secondary objects (55) that have a potential to intersect the trajectory (51) of a moving primary object (50), comprising a vision sensor (2) with a light-sensitive area (20) that comprises event-based pixels (21), so that a relative change in the light intensity impinging onto an event-based pixel (21) of the vision sensor (2) by at least a predetermined percentage causes the vision sensor (2) to emit an event (21a) associated with this event-based pixel (21), wherein the system (1) further comprises a discriminator module (3) that gets both the stream of events (21a) from the vision sensor (2) and information (52) about the heading and/or speed of the motion of the primary object (50) as inputs, and is configured to identify, from said stream of events (21a), based at least in part on said information (52), events (21b) that are likely to be caused by the motion of a secondary object (55), rather than by the motion of the primary object (50).

Vision sensors (2) for use in the system (1).

A corresponding computer program.
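One way to picture the discriminator module: under straight forward ego-motion, scene events flow radially outward from the image center, so an event whose measured flow disagrees with that prediction is a candidate secondary object. Below is a minimal Python sketch under that assumption; the radial-flow model, sensor geometry, and tolerance are mine, not the patent's.

```python
import math

CX, CY = 160, 120  # assumed optical center of a 320x240 sensor

def expected_flow(x, y, speed):
    # Straight forward ego-motion -> radial expansion away from the center.
    return (speed * (x - CX), speed * (y - CY))

def likely_secondary(event_flow, x, y, speed, angle_tol=0.5):
    """Flag events whose flow direction deviates from the ego-motion prediction."""
    ex, ey = expected_flow(x, y, speed)
    dot = ex * event_flow[0] + ey * event_flow[1]
    norm = math.hypot(ex, ey) * math.hypot(*event_flow)
    if norm == 0.0:
        return False
    return dot / norm < math.cos(angle_tol)

# An event moving against the expansion field is likely a secondary object:
print(likely_secondary((-3.0, 0.0), x=200, y=120, speed=1.0))  # True: against the flow
print(likely_secondary((3.0, 0.0), x=200, y=120, speed=1.0))   # False: consistent
```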



The ULPEC Project

http://ulpecproject.eu/wp-content/uploads/2017/11/Ulpec_FactSheet_NEW.pdf

https://ulpecproject.eu/

The long term goal of ULPEC is to develop advanced vision applications with ultra-low power requirements and ultra-low latency. The output of the ULPEC project is a demonstrator connecting a neuromorphic event-based camera to a high speed ultra-low power consumption asynchronous visual data processing system (Spiking Neural Network with memristive synapses).

The project consortium therefore includes an industrial end-user (Bosch), which will more particularly investigate autonomous and computer-assisted driving. Autonomous and computer-assisted driving are indeed a major disruption in the transport and car manufacturing sector. Vision and recognition of traffic events must be computed with very low latency (to improve security) and low power (to accommodate the power-limited environment in a car, such as power budget and heat dissipation).





The goal of ULPEC is to demonstrate a microsystem that is natively brain-inspired, connecting an event-based camera to a dedicated Spiking Neural Network enabled by memristive synapses. This high-speed, ultra-low power consumption asynchronous visual data processing system will then manipulate the sensor output end-to-end without changing its nature. ULPEC targets TRL4, as required by the ICT-03-2016 call, thanks to the functional realization of an embedded smart event-based camera. The system demonstrator aims to prove that the underlying technology is viable and can be used for traffic road events and other applications. Such a level of integration has never been demonstrated so far, and no commercial equivalent exists on the market. The target use case for ULPEC technologies is the vision and recognition of traffic events (signs, obstacles like other cars, persons, etc.), which is part of the major disruption of autonomous and computer-assisted driving in the transport and car manufacturing sector. Beyond transportation, our long-term vision encompasses all advanced vision applications with ultra-low power requirements and ultra-low latency, as well as data processing in hardware-native neural networks.



https://www.frontiersin.org/articles/10.3389/fnins.2018.00774/full

Deep Learning With Spiking Neurons: Opportunities and Challenges

Authors:
Michael Pfeiffer* and Thomas Pfeil
Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Renningen, Germany

Reviewed by:
Timothée Masquelier
Centre National de la Recherche Scientifique (CNRS), France


https://de.reuters.com/article/uk-bosch-factory-idUKKBN1961AN

Robert Bosch to invest one billion euros in Dresden semiconductor plant

FRANKFURT (Reuters) - Robert Bosch is investing 1 billion euros (872.92 million pounds) in a semiconductor plant in Germany, a company source told Reuters, highlighting the world’s largest car parts supplier’s ambitions in self-driving cars and the industrial Internet.

The factory will be built in the eastern German city of Dresden, with most of the investment coming from Bosch and the rest from government and European Union subsidies, the source said. It is set to start production in 2021 and will employ 700 people, the source said. Bosch declined to comment.

Bosch already has a chip factory in Reutlingen in southern Germany and is a leading producer of sensors, but demand is expected to increase with the development of autonomous cars and the advance of more “smart” machines.


https://europe.autonews.com/blogs/why-bosch-bet-big-breakeven-chips

Why Bosch bet big on breakeven chips

"The Dresden plant will focus on producing application-specific integrated circuits (ASICs), which function like the part of our nervous system responsible for reflexes. Decisions are made almost instantaneously before a signal ever reaches the brain."


https://www.reuters.com/article/us-tech-bosch-idUSKBN1WM0YD

Bosch to make silicon carbide chips in electric vehicle range-anxiety play

“Silicon carbide semiconductors bring more power to electric vehicles. For motorists, this means a 6% increase in range,” Bosch board member Harald Kroeger said on Monday.

Bosch also wants to strengthen its position in so-called Application-Specific Integrated Circuits (ASICs) that decide how to act on sensor inputs; and in power electronics that manage everything from a car’s electric windows to its drivetrain.

The average car contains chips worth $370, according to industry estimates, but that figure rises by $450 for emission-free electric vehicles. Another $1,000 will be packed into the future self-driving cars, making semiconductors a growth opportunity in a car industry struggling with stagnant sales.

Throw in $100 for infotainment features, and the typical car will pack more than $1,900 in semiconductors as technology advances and as features now seen only in luxury vehicles spread to mass-market models, Bosch reckons.
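For what it's worth, the quoted per-car figures do add up to the headline number:

```python
# Per-car semiconductor content, using the figures quoted above (USD):
average_car = 370    # today's average car
ev_extra = 450       # additional for emission-free electric vehicles
self_driving = 1000  # additional for future self-driving cars
infotainment = 100   # additional infotainment features

print(average_car + ev_extra + self_driving + infotainment)  # 1920, i.e. "more than $1,900"
```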
 
  • Like
  • Fire
  • Love
Reactions: 50 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
(quoting uiux's "What is JAST?" post in full)



PHOAR! Guess I won’t be sleeping tonight.
 
  • Like
  • Haha
Reactions: 12 users

SilentioAsx

Emerged
(quoting uiux's "What is JAST?" post in full)

Fabulous post Uiux, the video on Bio-inspired AI was wonderful to watch and will really help people understand where the inspiration for developing our technology came from and how incredibly useful it is going to be. It blows my mind to think of the possibilities.
Thank you for sharing.
 
  • Like
Reactions: 4 users

KMuzza

Mad Scientist
Brainchip owes a lot to the team leader and British gentleman, Monsieur Simon Thorpe, for generously allowing the exclusive licence of the JAST system.

https://www.asx.com.au/asxpdf/20170320/pdf/43gxg7g8c6xq25.pdf

I hope the Brainchip legal team has the last paragraph covered for when Brainchip has revenue in the billions of dollars/euros.

" The terms of the exclusive license have not been disclosed but costs related to the transaction in 2017 are expected to be immaterial relative to the Companys total expenses."

AKIDA-(now real) BALLISTA.
 
  • Like
Reactions: 7 users

SilentioAsx

Emerged
Yes I agree …..
It is truly amazing how much has changed for BRN’s prospects since 2017!!
 
  • Like
Reactions: 5 users

stuart888

Regular
Absolutely essential to understanding Brainchip's secret sauce, its ultra-efficient spiking architecture, is the video below. Using no floating-point math, just incrementing or decrementing, is so revealing. In the video, Simon Thorpe thanks Brainchip for commercializing his team's invention. In his words, "spikes allow much, much more efficient architectures", and he shows this off with some very nicely prepared examples.

Our quality member @TECH posted this video a while back, and it is a must-watch for all Brainchip investors who want to understand what is going on down deep. I have a computer science degree and worked as a software developer for a couple of decades, but I still needed to pause, rewind, and follow very closely to grasp it all. This is a fantastic presentation, and Brainchip bought the exclusive rights to use it.
Regards Stuart (Tampa Florida)

 
  • Like
  • Fire
  • Love
Reactions: 16 users

AusEire

Founding Member. It's ok to say No to Dot Joining
(quoting stuart888's post above)


Nice plug for Brainchip at the end 👏
Most of this was way over my head but was still interesting to listen to.

Thanks for sharing mate 👍

Akida Ballista 🔥
 
  • Like
Reactions: 4 users

stuart888

Regular
How great would it be if Brainchip's Spiking Neural Network architecture could be the "low power secret sauce" for computer vision in automobiles, trucks, farm equipment, etc.? That is exactly what Brainchip bought with the exclusive rights to Simon Thorpe's JAST invention.

He shows off how fast the brain is at facial detection, flashing slides at 10 frames per second. Your brain can pick the face out with hardly any energy used, after the first retinal neurons spike. A supercomputer might be able to pick it out too, maybe even at 10 fps, but it takes massive power to do it.

Then he shows off the math: how his neurons weight spikes by arrival time, and how only the most important spikes, the first four to arrive, are used; the rest are shut off in the chip via electrical engineering techniques. By using just the four earliest spikes, a low-power chip can do facial detection with results similar to a supercomputer in the cloud. Certain spike conditions are used for unsupervised learning too. I watched the video about five times, pausing to look things up and re-running it; my computer science degree and software jobs helped. This same logic can be applied to all sorts of pattern-recognition use cases. Low-power computer vision is a mighty wide use case for so many things.
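
A toy sketch of that "first four spikes" idea, with the latencies, stored pattern, and match threshold all made up for illustration:

```python
import numpy as np

def first_k_spike_ids(spike_latencies, k=4):
    """Indices of the k earliest-firing input neurons (lowest latency)."""
    return np.argsort(spike_latencies)[:k]

def match_score(observed_ids, stored_pattern):
    """How many of the earliest spikes fall inside a stored pattern."""
    return len(set(observed_ids.tolist()) & stored_pattern)

# Toy latencies (ms) for 10 inputs; lower = earlier spike.
latencies = np.array([5.2, 1.1, 9.8, 0.7, 3.3, 7.4, 2.0, 8.1, 6.6, 4.5])
stored = {1, 3, 4, 6}                 # a previously learned "face" pattern
ids = first_k_spike_ids(latencies)    # -> the 4 earliest: 3, 1, 6, 4
print(match_score(ids, stored) >= 3)  # fire if enough of them match
```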

 
  • Like
  • Love
Reactions: 10 users

stuart888

Regular
Sure hope 1000 Eyes investors understand how amazing Brainchip's Spiking Neural Network is for low-power Artificial Intelligence at the Edge. It kind of blows my mind that the YouTube video only has 278 views. Seems to me the Simon Thorpe logic that Brainchip bought is huge. I have watched it five times easily, stopping, looking things up, figuring it out. That is me.

Brainchip's software works perfectly alongside components like WiFi and 5G, and the ability to make computer vision decisions before wasting time sending the data to the cloud is massive. Brainchip is a bandwidth saver for the cloud. The video data comes in via TensorFlow, and the low-power logic does its thing. Seems like we need more detail on TensorFlow and the array data that flows out to Akida for our spiking neural network brains to make decisions on.

When you walk up to your Amazon Ring video camera, Akida's spiking neural network architecture can, at low power, make the decision that you are already authorized for the home, so there is no need to send zillions of pixels to the cloud.

These reduced-bandwidth use cases just keep churning in my head. Amazon would love to reduce the clutter sent up to AWS. Go Brainchip, the bandwidth saver.
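
That gating pattern, decide locally and only upload when unsure, can be sketched like this; `local_model`, `upload_fn`, and the confidence numbers are hypothetical placeholders, not BrainChip or AWS APIs:

```python
def handle_frame(frame, local_model, upload_fn, confidence_floor=0.9):
    """Edge-first gating: classify on-device and only ship the frame to
    the cloud when the local model is unsure. Everything here is an
    illustrative stand-in, not a real device API."""
    label, confidence = local_model(frame)
    if confidence < confidence_floor:
        upload_fn(frame)            # rare case: defer pixels to the cloud
        return "deferred"
    if label == "authorized_person":
        return "unlock"             # decided locally, nothing uploaded
    return "ignore"

# Toy demo with a stub model that is confident about an authorized person.
print(handle_frame("frame-bytes",
                   local_model=lambda f: ("authorized_person", 0.97),
                   upload_fn=lambda f: None))
```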


 
  • Like
  • Love
  • Fire
Reactions: 13 users

uiux

Regular

BrainChip Partners with Prophesee Optimizing Computer Vision AI Performance and Efficiency

"We've successfully ported the data from Prophesee's neuromorphic-based camera sensor to process inference on Akida with impressive performance," said Anil Mankar, Co-Founder and CDO of BrainChip. "This combination of intelligent vision sensors with Akida's ability to process data with unparalleled efficiency, precision and economy of energy at the point of acquisition truly advances state-of-the-art AI enablement and offers manufacturers a ready-to-implement solution."

"By combining our Metavision solution with Akida-based IP, we are better able to deliver a complete high-performance and ultra-low power solution to OEMs looking to leverage edge-based visual technologies as part of their product offerings, said Luca Verre, CEO and co-founder of Prophesee."
 
  • Like
  • Love
  • Fire
Reactions: 23 users
uiux said:

What is JAST?



https://www.researchgate.net/profil...tterns-using-a-novel-STDP-based-algorithm.pdf




https://www.asx.com.au/asxpdf/20170320/pdf/43gxg7g8c6xq25.pdf

BrainChip Advances its Position as a Leading Artificial Intelligence Provider with an Exclusive License for Next-Generation Neural Network Technology

Agreement with Toulouse Tech Transfer and the CERCO research center, to license the exclusive rights to the JAST learning rules


The JAST Patents & Research

https://www.researchgate.net/public...r_Neuromorphic_Bio-Inspired_Vision_Processing

Digital Design For Neuromorphic Bio-Inspired Vision Processing

“I would like to thank Peter Van Der Made and Anil Mankar from BrainChip company for their support for commercialization of our ideas”

Patents spun out from this research:

1. Unsupervised detection of repeating patterns in a series of events, European Patent Office EP16306525 Nov-2016, Amirreza Yousefzadeh, Timothee Masquelier, Jacob Martin, Simon Thorpe, Licensed to the Californian start-up BrainChip.

2. Method, digital electronic circuit and system for unsupervised detection of repeating patterns in a series of events, European Patent Office EP17305186 Feb-2017, Amirreza Yousefzadeh, Bernabe Linares-Barranco, Timothee Masquelier, Jacob Martin, Simon Thorpe, Exclusively licensed to the Californian start-up BrainChip.

3. Method and Apparatus for Stochastic STDP with Binary Weights, U.S. Patent and Trademark office 012055.0440P Nov-2017, Evangelos Stromatias, Amirreza Yousefzadeh, Teresa Serrano-Gotarredona, Bernabe Linares-Barranco, Licensed to Samsung Advanced Institute of Technology


https://jov.arvojournals.org/article.aspx?articleid=2651951

Unsupervised learning of repeating patterns using a novel STDP based algorithm


Abstract
Computational vision systems that are trained with deep learning have recently matched human performance (Hinton et al.). However, while deep learning typically requires tens or hundreds of thousands of labelled examples, humans can learn a task or stimulus with only a few repetitions. For example, a 2015 study by Andrillon et al. showed that human listeners can learn complicated random auditory noises after only a few repetitions, with each repetition evoking larger EEG activity than the previous one. In addition, a 2015 study by Martin et al. showed that only 10 minutes of visual experience of a novel object class was required to change early EEG potentials, improve saccadic reaction times, and increase saccade accuracies for the particular object trained. How might such ultra-rapid learning actually be accomplished by the cortex? Here, we propose a simple unsupervised neural model based on spike timing dependent plasticity, which learns spatiotemporal patterns in visual or auditory stimuli with only a few repetitions. The model is attractive for applications because it is simple enough to allow the simulation of very large numbers of cortical neurons in real time. Theoretically, the model provides a plausible example of how the brain may accomplish rapid learning of repeating visual or auditory patterns using only a few examples.


https://patents.google.com/patent/EP3324343A1/en

Unsupervised detection of repeating patterns in a series of events

Current Assignee
  • Consejo Superior de Investigaciones Cientificas CSIC
  • Centre National de la Recherche Scientifique CNRS

Abstract

A method of performing unsupervised detection of repeating patterns in a series (TS) of events (E21, E12, E5 ...), comprising the steps of:
a) Providing a plurality of neurons (NR1 - NRP), each neuron being representative of W event types;
b) Acquiring an input packet (IV) comprising N successive events of the series;
c) Attributing to at least some neurons a potential value (PT1 - PTP), representative of the number of common events between the input packet and the neuron;
d) Modifying the event types of neurons having a potential value exceeding a first threshold TL; and
e) Generating a first output signal (OS1 - OSP) for all neurons having a potential value exceeding a second threshold TF, and a second output signal, different from the first one, for all other neurons.

A digital integrated circuit configured for carrying out such a method.
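
A minimal Python sketch of those steps a) to e); the parameter values, the event-type alphabet, and the exact swap rule in step d) are all assumptions for illustration, not the patented implementation:

```python
import random

class PatternNeuron:
    """Toy neuron holding W event types, following steps a) to e) above."""
    def __init__(self, event_types, tl, tf):
        self.weights = set(event_types)  # the W event types this neuron represents
        self.tl, self.tf = tl, tf        # learning and firing thresholds

    def process_packet(self, packet):
        packet_set = set(packet)
        # c) potential = number of events common to the packet and the neuron
        potential = len(self.weights & packet_set)
        # d) above TL: adapt by swapping one unused weight for a packet event
        if potential >= self.tl:
            unused = sorted(self.weights - packet_set)
            missing = sorted(packet_set - self.weights)
            if unused and missing:
                self.weights.remove(random.choice(unused))
                self.weights.add(random.choice(missing))
        # e) first output signal (1) above TF, the second (0) otherwise
        return 1 if potential >= self.tf else 0

# a) a plurality of neurons; b) an input packet of N successive events
packet = [3, 7, 7, 1, 9, 4, 3, 2]
neurons = [PatternNeuron(random.sample(range(16), 6), tl=3, tf=5)
           for _ in range(8)]
print([n.process_packet(packet) for n in neurons])
```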


https://patents.google.com/patent/US20190286944A1/en

Method, digital electronic circuit and system for unsupervised detection of repeating patterns in a series of events

Current Assignee
  • Consejo Superior de Investigaciones Cientificas CSIC
  • Centre National de la Recherche Scientifique CNRS

Abstract
A method of performing unsupervised detection of repeating patterns in a series of events, includes a) Providing a plurality of neurons, each neuron being representative of W event types; b) Acquiring an input packet comprising N successive events of the series; c) Attributing to at least some neurons a potential value, representative of the number of common events between the input packet and the neuron; d) Modifying the event types of neurons having a potential value exceeding a first threshold TL; and e) Generating a first output signal for all neurons having a potential value exceeding a second threshold TF, and a second output signal, different from the first one, for all other neurons. A digital electronic circuit and system configured for carrying out such a method is also provided.


https://patents.google.com/patent/US20190138900A1/en

Neuron circuit, system, and method with synapse weight learning

Inventor:
  • Bernabe LINARES-BARRANCO
  • Amirreza YOUSEFZADEH
  • Evangelos STROMATIAS
  • Teresa SERRANO-GOTARREDONA
Current Assignee
  • Consejo Superior de Investigaciones Cientificas CSIC
  • Samsung Electronics Co Ltd

Abstract
A neuron circuit performing synapse learning on weight values includes a first sub-circuit, a second sub-circuit, and a third sub-circuit. The first sub-circuit is configured to receive an input signal from a pre-synaptic neuron circuit and determine whether the received input signal is an active signal having an active synapse value. The second sub-circuit is configured to compare a first cumulative reception counter of active input signals with a learning threshold value based on results of the determination. The third sub-circuit is configured to perform a potentiating learning process based on a first probability value to set a synaptic weight value of at least one previously received input signal to an active value, upon the first cumulative reception counter reaching the learning threshold value, and perform a depressing learning process based on a second probability value to set each of the synaptic weight values to an inactive value.
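
Read literally, the three sub-circuits amount to: count active inputs, compare the counter with a learning threshold, then potentiate recently active synapses with one probability and depress weights with another. A toy sketch under that reading, with every number assumed:

```python
import random

class BinarySynapseNeuron:
    """Toy stochastic-STDP neuron with binary weights, loosely following
    the three sub-circuits in the abstract; all values are assumptions."""
    def __init__(self, n_inputs, learn_threshold=4, p_pot=0.5, p_dep=0.05):
        self.w = [0] * n_inputs        # binary synaptic weights
        self.recent = []               # recently received active inputs
        self.counter = 0               # cumulative active-input counter
        self.learn_threshold = learn_threshold
        self.p_pot, self.p_dep = p_pot, p_dep

    def on_active_input(self, i):
        # first sub-circuit: an active signal from pre-synaptic neuron i
        self.recent.append(i)
        self.counter += 1
        # second sub-circuit: compare the counter with the learning threshold
        if self.counter >= self.learn_threshold:
            # third sub-circuit: potentiate recent inputs with probability p_pot...
            for j in self.recent:
                if random.random() < self.p_pot:
                    self.w[j] = 1
            # ...and depress each weight with probability p_dep
            for j in range(len(self.w)):
                if random.random() < self.p_dep:
                    self.w[j] = 0
            self.recent.clear()
            self.counter = 0

neuron = BinarySynapseNeuron(n_inputs=10)
for spike_source in [1, 3, 3, 7, 2, 1, 8, 3]:
    neuron.on_active_input(spike_source)
print(neuron.w)
```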



BERNABE LINARES BARRANCO

http://www2.imse-cnm.csic.es/~bernabe/

He is co-founder of two start-ups, Prophesee SA (www.prophesee.ai) and GrAI-Matter-Labs SAS (www.graimatterlabs.ai), both working on neuromorphic hardware.


https://patents.google.com/patent/ES2476115B1/en

Method and device for the detection of the temporary variation of light intensity in a photosensor matrix

Current Assignee
  • Consejo Superior de Investigaciones Cientificas CSIC

Abstract
Method and device for detecting the temporal variation of the light intensity in an array of photosensors, comprising a pixel array, an automatic adjustment block for photocurrent amplification, and an arbiter and event-encoder block. Each pixel comprises a photosensor that generates a photocurrent, an adjustable-gain current mirror connected to the output of the photosensor, a transimpedance amplifier placed at the output of the current mirror, optionally at least one amplification circuit placed at the output of the transimpedance amplifier, plus capacitors and threshold detectors to determine whether the output voltage rises above an upper threshold or falls below a lower threshold, in which case an event is generated in the pixel.
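
In code, the pixel's behaviour reduces to a thresholded log-intensity comparator. A purely illustrative model (the threshold value is assumed, and the real circuit does this asynchronously in analog hardware):

```python
import math

def dvs_pixel_events(intensities, threshold=0.15):
    """Toy model of one event-camera pixel: emit +1/-1 whenever the log
    intensity moves more than `threshold` away from the level at the
    last event. Purely illustrative, not the patented circuit."""
    events = []
    ref = math.log(intensities[0])
    for t, value in enumerate(intensities[1:], start=1):
        diff = math.log(value) - ref
        if abs(diff) > threshold:
            events.append((t, 1 if diff > 0 else -1))
            ref = math.log(value)   # re-reference after each event
    return events

# Brightening then a sharp darkening -> [(2, 1), (3, 1), (5, -1)]
print(dvs_pixel_events([100, 101, 120, 140, 135, 90]))
```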



https://www.prophesee.ai/2018/02/21/chronocam-becomes-prophesee/

CHRONOCAM BECOMES PROPHESEE, INTRODUCING A NEW ERA IN MACHINE VISION

Chronocam, the inventor of the most advanced neuromorphic vision system, is now Prophesee, a branding and identity transformation that reflects our expanded vision for revolutionizing how machines see.

Today, with 60+ employees and backed by more than $40 million in capital investment by world-class companies like Intel Capital, Renault and Robert Bosch, we are poised to take advantage of a huge opportunity to transform the machine vision landscape. We are addressing the most challenging pain points in enabling better and more efficient ways to allow cameras and sensors to improve our lives. Through higher performance, faster data processing and increased power efficiency, Prophesee will enhance safety, productivity and user experiences in many ways not possible with existing technology.


https://www.prophesee.ai/wp-content...EE-Metavision-sensor-packaged.PR_.Sept-25.pdf

Prophesee introduces the first Event-Based Vision sensor in an industry-standard, cost-efficient package

The sensor can be used by system developers to improve and in some cases create whole new industrial uses, including accelerating quality assessment on production lines; positioning, sensing and movement guidance for robots to enable better human collaboration; and equipment monitoring (e.g. for vibration or kinetic deviations), making the system an asset for predictive maintenance and reduced machine downtime.


https://www.prophesee.ai/2020/02/19/prophesee-sony-stacked-event-based-vision-sensor/

Prophesee and Sony Develop a Stacked Event-Based Vision Sensor with the Industry’s Smallest*1 Pixels and Highest*1 HDR Performance

The new sensor and its performance results were announced at the International Solid-State Circuits Conference (ISSCC) held in San Francisco in the United States, starting on February 16, 2020.

The new stacked Event-based vision sensor detects changes in the luminance of each pixel asynchronously and outputs data including coordinates and time only for the pixels where a change is detected, thereby enabling high efficiency, high speed, low latency data output. This vision sensor achieves high resolution, high speed, and high time resolution despite its small size and low power consumption.
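
The output format being described is essentially a sparse stream of (x, y, timestamp, polarity) tuples instead of full frames; a rough illustration, with the field names and resolution as my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One sensor output sample: coordinates and time only for a pixel
    where a luminance change was detected (field names are assumptions)."""
    x: int         # pixel column
    y: int         # pixel row
    t_us: int      # timestamp, microseconds assumed
    polarity: int  # +1 brighter, -1 darker

# Three events instead of a whole frame -- this sparsity is where the
# efficiency, speed, and low latency described above come from.
stream = [Event(120, 45, 1000, +1), Event(121, 45, 1040, +1),
          Event(300, 200, 1500, -1)]
print(len(stream), "events instead of", 1280 * 720, "pixels")
```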


https://www.bosch-presse.de/presspo...-of-chronocam-led-by-intel-capital-74049.html

Robert Bosch Venture Capital participates in 15 million Dollar funding round of Chronocam led by Intel Capital

Stuttgart, Germany – Robert Bosch Venture Capital, the corporate venture capital company of the Bosch Group, participates in the 15 million Dollar Series B investment round of Chronocam SA. This investment comes about 18 months after Robert Bosch Venture Capital led the Series A round of Chronocam in 2015. The Paris-based start-up company is the innovation leader of biologically-inspired vision sensors and computer vision solutions. The new funding comes from lead investor Intel Capital, along with iBionext, 360 Capital, Renault-Nissan Group, as well as the investors of Chronocam’s previous Series A round, CEAi and Robert Bosch Venture Capital.


https://patents.google.com/patent/EP3543898A1/en

Fast detection of secondary objects that may intersect the trajectory of a moving primary object

Inventor
  • Michael Pfeiffer
  • Jochen Marx
  • Oliver Lange
  • Christoph Posch
  • Xavier LAGORCE
  • Spiros NIKOLAIDIS
Current Assignee
  • CHRONOCAM SA
  • Robert Bosch GmbH

Abstract
A system (1) for detecting dynamic secondary objects (55) that have a potential to intersect the trajectory (51) of a moving primary object (50), comprising a vision sensor (2) with a light-sensitive area (20) that comprises event-based pixels (21), so that a relative change in the light intensity impinging onto an event-based pixel (21) of the vision sensor (2) by at least a predetermined percentage causes the vision sensor (2) to emit an event (21a) associated with this event-based pixel (21), wherein the system (1) further comprises a discriminator module (3) that gets both the stream of events (21a) from the vision sensor (2) and information (52) about the heading and/or speed of the motion of the primary object (50) as inputs, and is configured to identify, from said stream of events (21a), based at least in part on said information (52), events (21b) that are likely to be caused by the motion of a secondary object (55), rather than by the motion of the primary object (50).

Vision sensors (2) for use in the system (1).

A corresponding computer program.
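
One way to picture the discriminator module: subtract the optic flow predicted from the primary object's own heading/speed and keep the events that don't fit. The matching rule below is my own illustrative assumption, not the patented method:

```python
import numpy as np

def filter_secondary_events(events, ego_flow, tol=2.0):
    """Keep events whose apparent motion disagrees with the flow predicted
    from the primary object's own motion; those are candidate secondary
    objects. The comparison rule here is an illustrative assumption."""
    kept = []
    for (x, y, dx, dy) in events:          # event position + local motion
        pred_dx, pred_dy = ego_flow(x, y)  # flow predicted from ego-motion
        if np.hypot(dx - pred_dx, dy - pred_dy) > tol:
            kept.append((x, y, dx, dy))
    return kept

# Toy ego-flow: pure forward motion makes the scene expand from the center.
ego = lambda x, y: ((x - 160) * 0.01, (y - 120) * 0.01)
events = [(200, 120, 0.4, 0.0),   # consistent with ego-motion -> dropped
          (40, 80, -5.0, 1.0)]    # inconsistent -> a secondary object
print(filter_secondary_events(events, ego))
```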



The ULPEC Project

http://ulpecproject.eu/wp-content/uploads/2017/11/Ulpec_FactSheet_NEW.pdf

https://ulpecproject.eu/

The long term goal of ULPEC is to develop advanced vision applications with ultra-low power requirements and ultra-low latency. The output of the ULPEC project is a demonstrator connecting a neuromorphic event-based camera to a high speed ultra-low power consumption asynchronous visual data processing system (Spiking Neural Network with memristive synapses).

The project consortium therefore includes an industrial end-user (Bosch), which will more particularly investigate autonomous and computer-assisted driving. Autonomous and computer-assisted driving are indeed a major disruption in the transport and car manufacturing sector. Vision and recognition of traffic events must be computed with very low latency (to improve security) and low power (to accommodate the power-limited environment in a car, such as power budget and heat dissipation).





The goal of ULPEC is to demonstrate a microsystem that is natively brain-inspired, connecting an event-based camera to a dedicated Spiking Neural Network enabled by memristive synapses. This high-speed, ultra-low-power asynchronous visual data processing system will then manipulate the sensor output end-to-end without changing its nature. ULPEC targets TRL4, as required by the ICT-03-2016 call, through the functional realization of an embedded smart event-based camera. The system demonstrator aims to prove that the underlying technology is viable and can be used for traffic road events and other applications. Such a level of integration has never been demonstrated so far, and no commercial equivalent exists on the market.

The target use case for ULPEC technologies is the vision and recognition of traffic events (signs, obstacles like other cars, persons, etc.), which is part of the major disruption of autonomous and computer-assisted driving in the transport and car manufacturing sector. Beyond transportation, our long-term vision encompasses all advanced vision applications with ultra-low power requirements and ultra-low latency, as well as data processing in hardware-native neural networks.



https://www.frontiersin.org/articles/10.3389/fnins.2018.00774/full

Deep Learning With Spiking Neurons: Opportunities and Challenges

Authors:
Michael Pfeiffer* and Thomas Pfeil
Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Renningen, Germany

Reviewed by:
Timothée Masquelier
Centre National de la Recherche Scientifique (CNRS), France


https://de.reuters.com/article/uk-bosch-factory-idUKKBN1961AN

Robert Bosch to invest one billion euros in Dresden semiconductor plant

FRANKFURT (Reuters) - Robert Bosch is investing 1 billion euros (872.92 million pounds) in a semiconductor plant in Germany, a company source told Reuters, highlighting the world’s largest car parts supplier’s ambitions in self-drive cars and the industrial Internet.

The factory will be built in the eastern German city of Dresden, with most of the investment coming from Bosch and the rest from government and European Union subsidies, the source said. It is set to start production in 2021 and will employ 700 people, the source said. Bosch declined to comment.

Bosch already has a chip factory in Reutlingen in southern Germany and is a leading producer of sensors, but demand is expected to increase with the development of autonomous cars and the advance of more “smart” machines.


https://europe.autonews.com/blogs/why-bosch-bet-big-breakeven-chips

Why Bosch bet big on breakeven chips

"The Dresden plant will focus on producing application-specific integrated circuits (ASICs), which function like the part of our nervous system responsible for reflexes. Decisions are made almost instantaneously before a signal ever reaches the brain."


https://www.reuters.com/article/us-tech-bosch-idUSKBN1WM0YD

Bosch to make silicon carbide chips in electric vehicle range-anxiety play

“Silicon carbide semiconductors bring more power to electric vehicles. For motorists, this means a 6% increase in range,” Bosch board member Harald Kroeger said on Monday.

Bosch also wants to strengthen its position in so-called Application-Specific Integrated Circuits (ASICs) that decide how to act on sensor inputs; and in power electronics that manage everything from a car’s electric windows to its drivetrain.

The average car contains chips worth $370, according to industry estimates, but that figure rises by $450 for emission-free electric vehicles. Another $1,000 will be packed into the future self-driving cars, making semiconductors a growth opportunity in a car industry struggling with stagnant sales.

Throw in $100 for infotainment features, and the typical car will pack more than $1,900 in semiconductors as technology advances and as features now seen only in luxury vehicles spread to mass-market models, Bosch reckons.

With hindsight, worth another read. Love your work Uiux.
So much more to be revealed. Prophesee, tick!
 
  • Like
  • Love
  • Fire
Reactions: 17 users

stuart888

Regular
Way to highlight JAST, this all-important invention-solution-wizardry. The Simon Thorpe videos really gave me the extra boost to invest a larger percentage of my money in Brainchip a few years ago. They go deep into the math and logic behind these techniques. "Worth reviewing" is a great recommendation.

uiux said: (The JAST Patents & Research, quoted in full above)
 
  • Like
  • Fire
Reactions: 15 users

KMuzza

Mad Scientist
Everyone should read this thread

@uiux - can’t wait to see your other MTF code additions. The hours you put in to share with everyone are admirable.

AKIDA BALLISTA UBQTS
 
  • Like
  • Fire
Reactions: 6 users
F

Filobeddo

Guest
Saw somewhere (I think) that Simon Thorpe was previously on BRN’s scientific advisory board.

Does he still have an association with BRN?
 
  • Like
Reactions: 1 user
F

Filobeddo

Guest
Thanks @Rise from the ashes, that is from a year ago (unless I’m missing something), when I assume he was on the BRN scientific board.

I’m interested to know if he still has an association since then.
well he no longer appears on the webpage as a sab member. So I assume he has no official direct link to the company anymore.
 
  • Like
Reactions: 2 users
F

Filobeddo

Guest
well he no longer appears on the webpage as a sab member. So I assume he has no official direct link to the company anymore.

No idea on the usual length of tenure for a scientific advisory board member, and no idea why Simon Thorpe is no longer on the SAB after 12 months. Given his past background with BRN, I’m curious to understand why.
 
  • Like
Reactions: 2 users

stuart888

Regular
Simon Thorpe is better than ever at explaining spiking neural networks and the value of the Brainchip solution. While we don't know how many BRN shares he has, we do know he gets future money from Brainchip as the solutions get adopted. He gives fabulous presentations, deep in detail but understandable to most, I would assume. Yeah Simon Thorpe!

 
  • Like
  • Fire
Reactions: 4 users

stuart888

Regular
For those who cannot deal with these super-techie videos, he highlights the key elements of hardware-friendly AI algorithms nicely. Also, for the first time that I know of, he points out their new Terabrain project: N-of-M coding using blocks of data on a GPU. This would not be for the TinyML Edge that Brainchip is targeting; this goes after the power-expensive AI of GPT-3 and a whole lot of other use cases. Brainchip has everything for TinyML, including the JAST ML edge-learning part, and now he is after the big energy-wasting AI models.
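
N-of-M coding, as I understand it, represents a signal by which N of its M channels respond first (or most strongly) rather than by analog values; a toy sketch, with the GPU blocking left out and every detail assumed:

```python
import numpy as np

def n_of_m_code(values, n):
    """Encode a vector as the set of its n strongest channels out of m,
    one common reading of N-of-M coding (details are assumptions)."""
    order = np.argsort(values)[::-1]   # strongest channel first
    return set(order[:n].tolist())

def overlap(code_a, code_b):
    return len(code_a & code_b)        # similarity = shared channels

a = n_of_m_code(np.array([0.1, 0.9, 0.3, 0.8, 0.05, 0.7]), n=3)
b = n_of_m_code(np.array([0.2, 0.85, 0.25, 0.9, 0.0, 0.1]), n=3)
print(a, b, overlap(a, b))  # codes {1, 3, 5} and {1, 2, 3} share 2 channels
```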

 
  • Like
  • Fire
Reactions: 4 users