What is JAST?
https://www.researchgate.net/profil...tterns-using-a-novel-STDP-based-algorithm.pdf
https://www.asx.com.au/asxpdf/20170320/pdf/43gxg7g8c6xq25.pdf
BrainChip Advances its Position as a Leading Artificial Intelligence Provider with an Exclusive License for Next-Generation Neural Network Technology
Agreement with Toulouse Tech Transfer and the CERCO research center, to license the exclusive rights to the JAST learning rules
The JAST Patents & Research
https://www.researchgate.net/public...r_Neuromorphic_Bio-Inspired_Vision_Processing
Digital Design For Neuromorphic Bio-Inspired Vision Processing
“I would like to thank Peter Van Der Made and Anil Mankar from BrainChip company for their support for commercialization of our ideas”
Patents spun out from this research:
1. Unsupervised detection of repeating patterns in a series of events, European Patent Office EP16306525 Nov-2016, Amirreza Yousefzadeh, Timothee Masquelier, Jacob Martin, Simon Thorpe, Licensed to the Californian start-up BrainChip.
2. Method, digital electronic circuit and system for unsupervised detection of repeating patterns in a series of events, European Patent Office EP17305186 Feb-2017, Amirreza Yousefzadeh, Bernabe Linares-Barranco, Timothee Masquelier, Jacob Martin, Simon Thorpe, Exclusively licensed to the Californian start-up BrainChip.
3. Method and Apparatus for Stochastic STDP with Binary Weights, U.S. Patent and Trademark Office 012055.0440P Nov-2017, Evangelos Stromatias, Amirreza Yousefzadeh, Teresa Serrano-Gotarredona, Bernabe Linares-Barranco, Licensed to Samsung Advanced Institute of Technology.
https://jov.arvojournals.org/article.aspx?articleid=2651951
Unsupervised learning of repeating patterns using a novel STDP based algorithm
Abstract
Computational vision systems that are trained with deep learning have recently matched human performance (Hinton et al.). However, while deep learning typically requires tens or hundreds of thousands of labelled examples, humans can learn a task or stimulus with only a few repetitions. For example, a 2015 study by Andrillon et al. showed that human listeners can learn complicated random auditory noises after only a few repetitions, with each repetition evoking larger EEG activity than the previous. In addition, a 2015 study by Martin et al. showed that only 10 minutes of visual experience of a novel object class was required to change early EEG potentials, improve saccadic reaction times, and increase saccade accuracies for the particular object trained. How might such ultra-rapid learning actually be accomplished by the cortex? Here, we propose a simple unsupervised neural model based on spike timing dependent plasticity, which learns spatiotemporal patterns in visual or auditory stimuli with only a few repetitions. The model is attractive for applications because it is simple enough to allow the simulation of very large numbers of cortical neurons in real time. Theoretically, the model provides a plausible example of how the brain may accomplish rapid learning of repeating visual or auditory patterns using only a few examples.
https://patents.google.com/patent/EP3324343A1/en
Unsupervised detection of repeating patterns in a series of events
Current Assignee
- Consejo Superior de Investigaciones Cientificas CSIC
- Centre National de la Recherche Scientifique CNRS
Abstract
A method of performing unsupervised detection of repeating patterns in a series (TS) of events (E21, E12, E5 ...), comprising the steps of:
a) Providing a plurality of neurons (NR1 - NRP), each neuron being representative of W event types;
b) Acquiring an input packet (IV) comprising N successive events of the series;
c) Attributing to at least some neurons a potential value (PT1 - PTP), representative of the number of common events between the input packet and the neuron;
d) Modifying the event types of neurons having a potential value exceeding a first threshold TL; and
e) Generating a first output signal (OS1 - OSP) for all neurons having a potential value exceeding a second threshold TF, and a second output signal, different from the first one, for all other neurons.
A digital integrated circuit configured for carrying out such a method.
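Read as pseudocode, steps a)–e) of the claim map onto a few lines. Below is a minimal illustrative sketch in Python; the set representation, the function name, and the specific swap heuristic used for step d) are my assumptions, not the licensed JAST implementation:

```python
import random

def detect_step(neurons, packet, TL, TF, rng=random):
    """One pass of the claimed method (illustrative sketch only).

    neurons: list of sets, each holding W event types (step a)
    packet:  set of the N most recent event types (step b)
    """
    outputs = []
    for neuron in neurons:
        potential = len(neuron & packet)            # step c: count common events
        if potential > TL:                          # step d: learning
            # swap one event type the packet lacks for one the packet has
            unmatched = sorted(neuron - packet)
            candidates = sorted(packet - neuron)
            if unmatched and candidates:
                neuron.discard(rng.choice(unmatched))
                neuron.add(rng.choice(candidates))
        outputs.append(1 if potential > TF else 0)  # step e: two output signals
    return outputs
```

With TL below TF, a neuron that partially matches a recurring packet gradually absorbs its event types until it fires, which matches the abstract's notion of unsupervised detection; the swap rule above is only one plausible reading of step d).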
https://patents.google.com/patent/US20190286944A1/en
Method, digital electronic circuit and system for unsupervised detection of repeating patterns in a series of events
Current Assignee
- Consejo Superior de Investigaciones Cientificas CSIC
- Centre National de la Recherche Scientifique CNRS
Abstract
A method of performing unsupervised detection of repeating patterns in a series of events includes a) Providing a plurality of neurons, each neuron being representative of W event types; b) Acquiring an input packet comprising N successive events of the series; c) Attributing to at least some neurons a potential value, representative of the number of common events between the input packet and the neuron; d) Modifying the event types of neurons having a potential value exceeding a first threshold TL; and e) Generating a first output signal for all neurons having a potential value exceeding a second threshold TF, and a second output signal, different from the first one, for all other neurons. A digital electronic circuit and system configured for carrying out such a method are also provided.
https://patents.google.com/patent/US20190138900A1/en
Neuron circuit, system, and method with synapse weight learning
Inventor:
- Bernabe LINARES-BARRANCO
- Amirreza YOUSEFZADEH
- Evangelos STROMATIAS
- Teresa SERRANO-GOTARREDONA
Current Assignee
- Consejo Superior de Investigaciones Cientificas CSIC
- Samsung Electronics Co Ltd
Abstract
A neuron circuit performing synapse learning on weight values includes a first sub-circuit, a second sub-circuit, and a third sub-circuit. The first sub-circuit is configured to receive an input signal from a pre-synaptic neuron circuit and determine whether the received input signal is an active signal having an active synapse value. The second sub-circuit is configured to compare a first cumulative reception counter of active input signals with a learning threshold value based on results of the determination. The third sub-circuit is configured to perform a potentiating learning process based on a first probability value to set a synaptic weight value of at least one previously received input signal to an active value, upon the first cumulative reception counter reaching the learning threshold value, and perform a depressing learning process based on a second probability value to set each of the synaptic weight values to an inactive value.
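The counter-and-probability mechanism this abstract describes can be paraphrased in a few lines of Python. Everything below (class name, state layout, when the counter resets) is an assumed reading of the abstract, not Samsung's circuit:

```python
import random

class BinaryStochasticSTDP:
    """Sketch of the abstract's stochastic learning rule with binary weights."""

    def __init__(self, n_synapses, learn_threshold, p_pot, p_dep, rng=random):
        self.weights = [0] * n_synapses   # binary synaptic weights
        self.recent = set()               # synapses active since last learning
        self.count = 0                    # cumulative active-input counter
        self.learn_threshold = learn_threshold
        self.p_pot = p_pot                # potentiation probability
        self.p_dep = p_dep                # depression probability
        self.rng = rng

    def receive(self, syn_index, active):
        """First sub-circuit: classify the input; second: compare the counter."""
        if not active:
            return
        self.recent.add(syn_index)
        self.count += 1
        if self.count >= self.learn_threshold:   # third sub-circuit: learn
            self._learn()
            self.count = 0
            self.recent.clear()

    def _learn(self):
        for i in range(len(self.weights)):
            if i in self.recent:
                # potentiate recently active synapses with probability p_pot
                if self.rng.random() < self.p_pot:
                    self.weights[i] = 1
            elif self.rng.random() < self.p_dep:
                # depress the remaining synapses with probability p_dep
                self.weights[i] = 0
```

The appeal of the scheme is that weights stay one bit each and learning happens in rare batches, both of which suit compact digital hardware.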
BERNABE LINARES BARRANCO
http://www2.imse-cnm.csic.es/~bernabe/
He is co-founder of two start-ups, Prophesee SA (www.prophesee.ai) and GrAI-Matter-Labs SAS (www.graimatterlabs.ai), both on neuromorphic hardware.
https://patents.google.com/patent/ES2476115B1/en
Method and device for the detection of the temporal variation of light intensity in a photosensor matrix
Current Assignee
- Consejo Superior de Investigaciones Cientificas CSIC
Abstract
Method and device for detecting the temporal variation of the light intensity in an array of photosensors, comprising a pixel array, an automatic adjustment block for photocurrent amplification, and an arbiter and event encoder block. Each pixel comprises a photosensor that generates a photocurrent, an adjustable-gain current mirror connected to the output of the photosensor, a transimpedance amplifier placed at the output of the current mirror, optionally at least one amplification circuit placed at the output of the transimpedance amplifier, and capacitors and threshold detectors to determine whether the output voltage exceeds an upper threshold or falls below a lower threshold, in which case an event is generated in the pixel.
https://www.prophesee.ai/2018/02/21/chronocam-becomes-prophesee/
CHRONOCAM BECOMES PROPHESEE, INTRODUCING A NEW ERA IN MACHINE VISION
Chronocam, the inventor of the most advanced neuromorphic vision system, is now Prophesee, a branding and identity transformation that reflects our expanded vision for revolutionizing how machines see.
Today, with 60+ employees and backed by more than $40 million in capital investment by world-class companies like Intel Capital, Renault and Robert Bosch, we are poised to take advantage of a huge opportunity to transform the machine vision landscape. We are addressing the most challenging pain points in enabling better and more efficient ways to allow cameras and sensors to improve our lives. Through higher performance, faster data processing and increased power efficiency, Prophesee will enhance safety, productivity and user experiences in many ways not possible with existing technology.
https://www.prophesee.ai/wp-content...EE-Metavision-sensor-packaged.PR_.Sept-25.pdf
Prophesee introduces the first Event-Based Vision sensor in an industry-standard, cost-efficient package
The sensor can be used by system developers to improve existing industrial uses and in some cases create whole new ones, including accelerating quality assessment on production lines; positioning, sensing and movement guidance for robots to enable better human collaboration; and equipment monitoring (e.g. for vibration or kinetic deviations), making the system an asset for predictive maintenance and reduced machine downtime.
https://www.prophesee.ai/2020/02/19/prophesee-sony-stacked-event-based-vision-sensor/
Prophesee and Sony Develop a Stacked Event-Based Vision Sensor with the Industry’s Smallest*1 Pixels and Highest*1 HDR Performance
The new sensor and its performance results were announced at the International Solid-State Circuits Conference (ISSCC) held in San Francisco in the United States, starting on February 16, 2020.
The new stacked Event-based vision sensor detects changes in the luminance of each pixel asynchronously and outputs data including coordinates and time only for the pixels where a change is detected, thereby enabling high efficiency, high speed, low latency data output. This vision sensor achieves high resolution, high speed, and high time resolution despite its small size and low power consumption.
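The output format described here, coordinates plus a timestamp emitted only for pixels that changed, is easy to picture with a toy model. The frame-differencing below is purely illustrative (real event pixels detect relative luminance change asynchronously and in continuous time, not from frame pairs):

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t_us: int      # timestamp, microseconds
    polarity: int  # +1 luminance increase, -1 decrease

def events_from_frames(prev, curr, t_us, rel_threshold=0.2):
    """Emit events only where relative luminance change exceeds the threshold."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (lp, lc) in enumerate(zip(row_p, row_c)):
            if lp > 0 and abs(lc - lp) / lp >= rel_threshold:
                events.append(Event(x, y, t_us, 1 if lc > lp else -1))
    return events
```

A static scene produces no events at all, which is where the efficiency, speed, and latency advantages the press release cites come from.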
https://www.bosch-presse.de/presspo...-of-chronocam-led-by-intel-capital-74049.html
Robert Bosch Venture Capital participates in 15 million Dollar funding round of Chronocam led by Intel Capital
Stuttgart, Germany – Robert Bosch Venture Capital, the corporate venture capital company of the Bosch Group, participates in the $15 million Series B investment round of Chronocam SA. This investment comes about 18 months after Robert Bosch Venture Capital led the Series A round of Chronocam in 2015. The Paris-based start-up is the innovation leader in biologically inspired vision sensors and computer vision solutions. The new funding comes from lead investor Intel Capital, along with iBionext, 360 Capital, Renault-Nissan Group, as well as the investors of Chronocam’s previous Series A round, CEAi and Robert Bosch Venture Capital.
https://patents.google.com/patent/EP3543898A1/en
Fast detection of secondary objects that may intersect the trajectory of a moving primary object
Inventor
- Michael Pfeiffer
- Jochen Marx
- Oliver Lange
- Christoph Posch
- Xavier LAGORCE
- Spiros NIKOLAIDIS
Current Assignee
- CHRONOCAM SA
- Robert Bosch GmbH
Abstract
A system (1) for detecting dynamic secondary objects (55) that have a potential to intersect the trajectory (51) of a moving primary object (50), comprising a vision sensor (2) with a light-sensitive area (20) that comprises event-based pixels (21), so that a relative change in the light intensity impinging onto an event-based pixel (21) of the vision sensor (2) by at least a predetermined percentage causes the vision sensor (2) to emit an event (21a) associated with this event-based pixel (21), wherein the system (1) further comprises a discriminator module (3) that gets both the stream of events (21a) from the vision sensor (2) and information (52) about the heading and/or speed of the motion of the primary object (50) as inputs, and is configured to identify, from said stream of events (21a), based at least in part on said information (52), events (21b) that are likely to be caused by the motion of a secondary object (55), rather than by the motion of the primary object (50).
Vision sensors (2) for use in the system (1).
A corresponding computer program.
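The discriminator's job, splitting the event stream into ego-motion events and secondary-object events using the primary object's own heading and speed, can be sketched as an angle test. The per-event measured-flow input is an assumption of mine (it is not in the quoted abstract; in practice it might come from local plane fitting on the event stream):

```python
import math
from collections import namedtuple

Event = namedtuple("Event", ["x", "y", "t"])

def split_events(events, measured_flow, ego_flow, angle_tol_deg=30.0):
    """Attribute each event to ego motion or to a secondary object.

    ego_flow(x, y) -> (fx, fy): image flow predicted from the primary
    object's heading/speed. measured_flow[i]: flow observed near events[i].
    """
    primary, secondary = [], []
    for ev, (mx, my) in zip(events, measured_flow):
        ex, ey = ego_flow(ev.x, ev.y)
        ne, nm = math.hypot(ex, ey), math.hypot(mx, my)
        if ne == 0 or nm == 0:
            primary.append(ev)     # no motion evidence either way
            continue
        cos_a = max(-1.0, min(1.0, (ex * mx + ey * my) / (ne * nm)))
        if math.degrees(math.acos(cos_a)) > angle_tol_deg:
            secondary.append(ev)   # disagrees with the ego-motion prediction
        else:
            primary.append(ev)
    return primary, secondary
```

Because the test runs per event rather than per frame, a fast-approaching object can be flagged within the latency of a handful of events, which is the point of pairing an event sensor with this kind of discriminator.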
The ULPEC Project
http://ulpecproject.eu/wp-content/uploads/2017/11/Ulpec_FactSheet_NEW.pdf
https://ulpecproject.eu/
The long-term goal of ULPEC is to develop advanced vision applications with ultra-low power requirements and ultra-low latency. The output of the ULPEC project is a demonstrator connecting a neuromorphic event-based camera to a high-speed, ultra-low-power asynchronous visual data processing system (a Spiking Neural Network with memristive synapses).
The project consortium therefore includes an industrial end-user (Bosch), which will in particular investigate autonomous and computer-assisted driving. Autonomous and computer-assisted driving are indeed a major disruption in the transport and car manufacturing sector. Vision and recognition of traffic events must be computed with very low latency (to improve security) and low power (to accommodate the power-limited environment in a car, such as the power budget and heat dissipation).
The goal of ULPEC is to demonstrate a microsystem that is natively brain-inspired, connecting an event-based camera to a dedicated Spiking Neural Network enabled by memristive synapses. This high-speed, ultra-low-power asynchronous visual data processing system will then manipulate the sensor output end-to-end without changing its nature. ULPEC targets TRL4, as required by the ICT-03-2016 call, thanks to the functional realization of an embedded smart event-based camera. The system demonstrator aims to prove that the underlying technology is viable and can be used for traffic road events and other applications. Such a level of integration has never been demonstrated so far, and no commercial equivalent exists on the market. The target use case for ULPEC technologies is the vision and recognition of traffic events (signs, obstacles such as other cars, persons, etc.), which is part of the major disruption of autonomous and computer-assisted driving in the transport and car manufacturing sector. Beyond transportation, our long-term vision encompasses all advanced vision applications with ultra-low power requirements and ultra-low latency, as well as data processing in hardware-native neural networks.
https://www.frontiersin.org/articles/10.3389/fnins.2018.00774/full
Deep Learning With Spiking Neurons: Opportunities and Challenges
Authors:
Michael Pfeiffer* and Thomas Pfeil
Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Renningen, Germany
Reviewed by:
Timothée Masquelier
Centre National de la Recherche Scientifique (CNRS), France
https://de.reuters.com/article/uk-bosch-factory-idUKKBN1961AN
Robert Bosch to invest one billion euros in Dresden semiconductor plant
FRANKFURT (Reuters) - Robert Bosch is investing 1 billion euros (872.92 million pounds) in a semiconductor plant in Germany, a company source told Reuters, highlighting the world’s largest car parts supplier’s ambitions in self-drive cars and the industrial Internet.
The factory will be built in the eastern German city of Dresden, with most of the investment coming from Bosch and the rest from government and European Union subsidies, the source said. It is set to start production in 2021 and will employ 700 people, the source said. Bosch declined to comment.
Bosch already has a chip factory in Reutlingen in southern Germany and is a leading producer of sensors, but demand is expected to increase with the development of autonomous cars and the advance of more “smart” machines.
https://europe.autonews.com/blogs/why-bosch-bet-big-breakeven-chips
Why Bosch bet big on breakeven chips
"The Dresden plant will focus on producing application-specific integrated circuits (ASICs), which function like the part of our nervous system responsible for reflexes. Decisions are made almost instantaneously before a signal ever reaches the brain."
https://www.reuters.com/article/us-tech-bosch-idUSKBN1WM0YD
Bosch to make silicon carbide chips in electric vehicle range-anxiety play
“Silicon carbide semiconductors bring more power to electric vehicles. For motorists, this means a 6% increase in range,” Bosch board member Harald Kroeger said on Monday.
Bosch also wants to strengthen its position in so-called Application-Specific Integrated Circuits (ASICs) that decide how to act on sensor inputs; and in power electronics that manage everything from a car’s electric windows to its drivetrain.
The average car contains chips worth $370, according to industry estimates, but that figure rises by $450 for emission-free electric vehicles. Another $1,000 will be packed into the future self-driving cars, making semiconductors a growth opportunity in a car industry struggling with stagnant sales.
Throw in $100 for infotainment features, and the typical car will pack more than $1,900 in semiconductors as technology advances and as features now seen only in luxury vehicles spread to mass-market models, Bosch reckons.