BRN Discussion Ongoing

Dhm

Regular
I searched 'neuromorphic' on the Synaptics website and this popped up. I'm not sure if it hints at us or not. We aren't named but Synaptics founders recognised neural network computation as the future. Maybe in 2019 when this was written there was already contact with Brainchip.
Pure speculation on my part.

Screen Shot 2022-09-11 at 5.56.15 pm.png

Screen Shot 2022-09-11 at 5.56.30 pm.png
 
  • Like
  • Thinking
  • Love
Reactions: 7 users

Murphy

Life is not a dress rehearsal!
I am hoping Akida will be put to good use in medical diagnosis. As great as the multitude of applications are, I think its finding patterns in medical imaging and other tests so much faster than the human eye can will be life changing for many. Having my grandson in ICU at present in an induced coma, and all the doctors doing so many tests, and still can only come up with some sort of brain infection. If only Akida could scan all the CT and MRI scans accurately and quickly, a diagnosis might be made quicker than a dozen doctors arguing over their interpretation. One day hopefully.
Thoughts are with your family at this mega stressful time, Fred. You feel so hopeless. It does highlight the fact that family really is everything. Hope this situation has great outcomes, mate.

If you don't have dreams, you can't have dreams come true!
 
  • Like
  • Love
Reactions: 18 users
I am hoping Akida will be put to good use in medical diagnosis. As great as the multitude of applications are, I think its finding patterns in medical imaging and other tests so much faster than the human eye can will be life changing for many. Having my grandson in ICU at present in an induced coma, and all the doctors doing so many tests, and still can only come up with some sort of brain infection. If only Akida could scan all the CT and MRI scans accurately and quickly, a diagnosis might be made quicker than a dozen doctors arguing over their interpretation. One day hopefully.
A.I. is already doing these kinds of things better than humans.


"Overall, it made the correct diagnosis 92 percent of the time, as opposed to the physicians' 77.5 percent of the time, making the machines 19 percent more accurate than the humans".
 
Last edited:
  • Like
  • Fire
Reactions: 13 users

Diogenese

Top 20
  • Like
  • Fire
  • Love
Reactions: 8 users

equanimous

Norse clairvoyant shapeshifter goddess
A short article on Renesas; at the end of the article it states that Toyota and Tesla are key customers.....
Inside the Suppliers: Renesas
The semiconductor shortage is crunching supply of new cars and forcing manufacturers to sell models with a pared-back feature set. Renesas is one of the companies hit hardest by the ongoing crisis.
Vivek Shah
CONTRIBUTOR
PUBLISHED 11 September 2022, 7:00 am
With interiors featuring large displays, and increasingly advanced autonomous driving and active safety technologies, the cars of today (and the future) are not far from being supercomputers on wheels.
Modern cars use semiconductors to enable not only advanced technologies like those described above, but also everything from air-conditioning to scrolling indicators.
This reliance on semiconductors, combined with a global shortage borne of COVID and war in Ukraine, is why some cars are being sold with a stripped-back feature set, or face lengthy delays to arrive locally.
Japanese company Renesas is one of the largest semiconductor suppliers for the global automotive industry, and consequently one of the companies at the centre of the current semiconductor pinch.

Brief history​

While Renesas is a relatively young company that was established in April 2003, it can trace its heritage back to Mitsubishi and Hitachi.

In 2002, Mitsubishi Electric and Hitachi decided to consolidate their semiconductor businesses (excluding their DRAM, or dynamic random access memory business, which was merged separately) into a company called Renesas.
Hitachi would initially have a 55 per cent stake in the new firm, with Mitsubishi controlling the remaining 45 per cent of shares.
Renesas is a portmanteau of Renaissance Semiconductor for Advanced Solutions, which doesn't say much about the rationale for its existence: to generate economies of scale by combining similar operations, improving profitability in the process.
With an annual sales revenue of approximately $US7 billion in the 2003 fiscal year, Renesas immediately became one of the largest semiconductor companies in the world. In 2010, Renesas merged with the semiconductor operations of fellow Japanese company NEC Electronics to further expand its footprint, which brought about a minor name change to Renesas Electronics Corporation (from the former Renesas Technology Corporation).
As with many other Japanese carmakers and suppliers, the Japanese earthquakes in 2011 hit Renesas hard. Although the company’s factory in Naka (Ibaraki Prefecture) was designed to be earthquake resistant, several pieces of valuable equipment were damaged, and estimates suggested it would take six months before the factory could be restored to its pre-earthquake output.
However, with the assistance of Toyota engineers who devised a ‘big room method’ that made it clear how recovery efforts were progressing, and other outside help, the company was able to restart production ahead of schedule.
More importantly, Renesas claimed it developed an improved business continuity plan that allowed it to better weather natural disasters (such as earthquakes in 2016), and set it up to manage its business during the COVID-19 pandemic.
Renesas has been at the heart of the global semiconductor shortage. CEO Hidetoshi Shibata has claimed that while there is adequate production capacity to build enough semiconductors, one of the reasons for the pinch is shortages in the mining and supply of raw materials.
This has caused a flow-on effect in the production of certain types of semiconductors, especially those that use 40 nanometre transistors (processors in modern smartphones generally use 5nm) such as power management chips, and chips that process mixed digital and analogue signals.
Renesas is a publicly listed company on the Tokyo Stock Exchange, with a market capitalisation of approximately US$17.8 billion.

Type of products produced​

Renesas’ business extends beyond the automotive industry, reaching consumer electronics, along with the industrial and healthcare sectors. In the automotive space, however, Renesas primarily makes automotive microcontroller units (MCUs) and system-on-chip (SoC) products.
Renesas claims its ‘R-Car’ automotive SoC is designed for the ‘next generation of automotive computing for the age of autonomous vehicles’, and can power everything from advanced driver assistance systems to digital instrument clusters.
For example, Renesas claims its R-Car V4H SoC is suitable for cars equipped with Level 3 autonomous driving features, and its maximum performance of 34 trillion operations per second facilitates activities such as high-speed image recognition and processing of objects that have been identified by surround-view cameras, radar and LiDAR systems.
Renesas claims the V4H family of SoCs will also enable customers (car manufacturers) to develop cars that meet predicted Euro NCAP requirements until 2025.
Other applications for its MCUs and SoCs include controlling everything from electric power windows and HVAC (heating, ventilation and air-conditioning) systems, to LED headlight units, and electric power steering and tyre pressure monitoring systems.
Apart from MCUs and SoCs, Renesas produces electronic oscillators (devices that generate a clock signal) such as the VersaClock automotive clock generator, used to synchronise signals from various automotive systems, as well as other power management devices and automotive display processors.

Vehicles using Renesas products​

The prevalence of semiconductors in a modern car, combined with Renesas’ portfolio of automotive products, means countless cars sold today use at least one Renesas semiconductor, with many vehicles likely using multiple chips.
Renesas counts major OEMs including everyone from Toyota to Tesla as key customers, with the former also being a shareholder in the company.

  • SPIKING NEURAL NETWORK SYSTEM, LEARNING PROCESSING DEVICE, LEARNING METHOD, AND RECORDING MEDIUM

    Publication number: 20220253674
    Abstract: A spiking neural network system includes: a time-based spiking neural network; and a learning processing unit that causes learning of the spiking neural network to be performed by supervised learning using a cost function, the cost function using a regularization term relating to a firing time of a neuron in the spiking neural network.
    Type: Application
    Filed: May 18, 2020
    Publication date: August 11, 2022
    Applicants: NEC CORPORATION, THE UNIVERSITY OF TOKYO
    Inventors: Yusuke SAKEMI, Kai MORINO, Kazuyuki AIHARA
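For anyone wondering what that abstract might mean in practice, here's a rough numpy sketch of the idea as I read it: score output neurons by how early they fire, take a classification loss, and add a regularisation term on the firing times. The exact form of the cost, the quadratic penalty, and every name here are my own guesses for illustration, not what's actually in the NEC patent:

```python
import numpy as np

def snn_cost(first_spike_times, target_class, reg_lambda=0.01):
    """Toy cost for a time-coded SNN: earlier first spikes mean stronger
    evidence, so score each output neuron by the negative of its firing
    time, take a softmax cross-entropy against the target class, and add
    a regularisation term on the firing times (in the spirit of the
    abstract above)."""
    scores = -np.asarray(first_spike_times, dtype=float)  # earlier spike -> higher score
    scores -= scores.max()                                # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    cross_entropy = -np.log(probs[target_class] + 1e-12)
    firing_reg = np.sum(np.asarray(first_spike_times) ** 2)  # penalise late firing
    return cross_entropy + reg_lambda * firing_reg

# Output neuron 0 fires earliest, so class 0 is the cheap prediction:
loss_correct = snn_cost([1.0, 5.0, 6.0], target_class=0)
loss_wrong = snn_cost([1.0, 5.0, 6.0], target_class=2)
```

The point of the regulariser, as I understand the claim, is that supervised learning then pushes the network towards firing both correctly and early.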


 
  • Like
  • Fire
Reactions: 7 users

equanimous

Norse clairvoyant shapeshifter goddess
IBM


  • Correlative time coding method for spiking neural networks

    Patent number: 11403514
    Abstract: A computer-implemented method for classification of an input element to an output class in a spiking neural network may be provided. The method comprises receiving an input data set comprising a plurality of elements, identifying a set of features and corresponding feature values for each element of the input data set, and associating each feature to a subset of spiking neurons of a set of input spiking neurons of the spiking neural network. Furthermore, the method comprises also generating, by the input spiking neurons, spikes at pseudo-random time instants depending on a value of the feature for a given input element, and classifying an element into a class depending on a distance measure value between output spiking patterns at output spiking neurons of the spiking neural network and a predefined target pattern related to the class.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: August 2, 2022
    Assignee: International Business Machines Corporation
    Inventors: Giovanni Cherubini, Ana Stanojevic, Abu Sebastian
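As a thought experiment, the classification scheme in that abstract might look something like this in numpy: feature values seed pseudo-random spike times, and an element is assigned to the class whose target spike pattern is nearest. Everything below (the seeding scheme, the population size, Euclidean distance) is my invention for illustration, not IBM's actual method:

```python
import numpy as np

def encode(features, neurons_per_feature=4):
    """Each feature drives a small subset of input neurons, which fire at
    pseudo-random instants whose distribution depends on the feature value."""
    times = []
    for i, value in enumerate(features):
        rng = np.random.default_rng(hash((i, round(value, 3))) % (2**32))
        # spike times jittered around the feature value
        times.append(value + 0.1 * rng.standard_normal(neurons_per_feature))
    return np.concatenate(times)

def classify(features, class_templates):
    """Assign the element to the class whose predefined target spike
    pattern is closest to the produced output pattern."""
    pattern = encode(features)
    distances = {label: np.linalg.norm(pattern - template)
                 for label, template in class_templates.items()}
    return min(distances, key=distances.get)

# Two toy classes with well-separated feature values:
templates = {"A": encode([0.0, 0.0]), "B": encode([1.0, 1.0])}
```

So the "learning" lives in the target patterns, while the input encoding is stochastic but repeatable, which is roughly how I read "pseudo-random time instants depending on a value of the feature".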
 
  • Like
  • Fire
Reactions: 4 users

Boab

I wish I could paint like Vincent
IBM


  • Correlative time coding method for spiking neural networks

    Patent number: 11403514
    Abstract: A computer-implemented method for classification of an input element to an output class in a spiking neural network may be provided. The method comprises receiving an input data set comprising a plurality of elements, identifying a set of features and corresponding feature values for each element of the input data set, and associating each feature to a subset of spiking neurons of a set of input spiking neurons of the spiking neural network. Furthermore, the method comprises also generating, by the input spiking neurons, spikes at pseudo-random time instants depending on a value of the feature for a given input element, and classifying an element into a class depending on a distance measure value between output spiking patterns at output spiking neurons of the spiking neural network and a predefined target pattern related to the class.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: August 2, 2022
    Assignee: International Business Machines Corporation
    Inventors: Giovanni Cherubini, Ana Stanojevic, Abu Sebastian
The method comprises receiving an input data set comprising a plurality of elements, identifying a set of features and corresponding feature values for each element of the input data set,

Is this the opposite of on device learning?
 
  • Like
  • Thinking
Reactions: 3 users

Townyj

Ermahgerd
  • Like
  • Fire
Reactions: 8 users

Learning

Learning to the Top 🕵‍♂️
Apologies if this has been shared. But I haven't seen it on TSE. It's old from 17th September 2021. Good Sunday night listening anyhow.



It's different to this other one.
Screenshot_20220911_215502_YouTube.jpg


It's great to be a shareholder
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 21 users
Saw some updates to Prophesee site on their inventor ecosystem.

The one that stands out to me is the NeuTouch project.

This post isn't to say that Akida is in there (could be?) but it does highlight that the original inventors are totally aware, have referenced us, work with Prophesee as we do, and that we can see how the reach can develop to multiple entities through the EU and the world from this one group, originally in Singapore, who have shared their datasets and no doubt imo details of Akida in conjunction with Prophesee.

Most may recall NeuTouch being part of the Tacniq as below prev posts.

Post in thread 'BRN Discussion 2022' https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-103145

Post in thread 'BRN Discussion 2022' https://thestockexchange.com.au/threads/brn-discussion-2022.1/post-69917

Thread 'Asynchronously Coded Electronic Skin (ACES) platform' https://thestockexchange.com.au/threads/asynchronously-coded-electronic-skin-aces-platform.3385/

We also had the RT tactile video fortuitously come out in June this year :unsure:



When you look at Prophesee Inventor ecosystem.



Screenshot_2022-09-11-19-48-51-09_4641ebc0df1485bf6b47ebd018b5ee76.jpg



Back in 2020 there was an article on their early progress, which was on Loihi initially apparently, but we know from their more recent commentary on their website that Akida is also thrown around.



IMG_20220911_200054.jpg


Specifically, Harold Soh’s group have incorporated the NeuTouch finger tips with a Prophesee event-based vision sensor and used the data generated (both separately and together) to train a spike-based network that approximates back propagation.....

This is partly thanks to a quantity called weighted spike count that they use in their models to encourage early classification. Functionally this could improve a machine’s reaction time, giving it a better chance of minimizing the likelihood and consequences of dropping an object.
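I read "weighted spike count" as weighting early spikes more heavily in the readout so the classifier commits sooner. A toy version of that idea; the exponential decay, the constant, and the names are all my guesses, not the paper's actual formulation:

```python
import numpy as np

def weighted_spike_count(spike_trains, tau=20.0):
    """spike_trains: one array of spike times per output neuron.
    Early spikes contribute more via an exponential decay, so a neuron
    that fires early and often wins the readout quickly."""
    return np.array([np.sum(np.exp(-np.asarray(t) / tau)) for t in spike_trains])

# Neuron 0 spikes early; neuron 1 spikes just as often but late:
counts = weighted_spike_count([[1, 3, 5], [40, 50, 60]])
predicted = int(np.argmax(counts))
```

Under a scheme like this, a slip event that produces a burst of early spikes can be flagged before the full observation window ends, which is presumably where the reaction-time benefit comes from.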

The group have made their datasets available to other researchers who might want to work on improving the learning models used.

Power benefits
For these experiments, training was done using conventional technology, but the network was then run on the Intel Loihi chip. The published results show a 50x improvement in power efficiency, but that’s already been improved upon.

According to NUS’s Harold Soh, since the paper came out, “…We’ve been fine-tuning our neural models and analyses. Our most recent slip detection model uses 1900x less power when run on neuromorphic hardware compared to a GPU, while retaining inference speed and accuracy. Our focus now is on translating this low-level performance to better robot behaviors on higher-level tasks, such as object pick-and-place and human-robot handovers. More broadly, we believe event-driven multi-sensory intelligence to be an important step towards trustworthy robots that we feel comfortable working with.”

So that led me to see who they're releasing datasets with / collaborating with other than Prophesee, and I'd like to think maybe some more recent tests or enhancements with Akida?

Pal Robotics in EU is one.


Screenshot_2022-09-11-20-11-18-68_4641ebc0df1485bf6b47ebd018b5ee76.jpg


Which also led me to the Cordis grant and project group as part of Horizon 2020, with full details at the link below; they've collectively pumped in €4m so far.



Understanding neural coding of touch as enabling technology for prosthetics and robotics​


Screenshot_2022-09-11-20-15-02-50_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
  • Like
  • Fire
  • Love
Reactions: 31 users
Just cause I haven't posted a chart snip for a while.

Like it to & it's trying to...hold this poss supp & bottom trend line as a min.

She wants to turn but a little bit of work still to do....a solid little push through that smooth Heiken Ashi (yellow candle bodies) around 0.92, then break the pivot (green horizontal line) just above to the left from early Aug at about 1.00, will do nicely.


View attachment 16009
Just an update to last Tues chart.

Snuck through the 0.92...not totally convincing yet....then smacked that indicated 1.00 & a small rejection.

Hopefully a bit of a regroup this week, and a force & hold through the 1.00 would be good.

1662900228523.png


Just a diff look at a pretty simple visual of key areas.

1662901227749.png
 
  • Like
  • Love
  • Fire
Reactions: 31 users
S

Straw

Guest
View attachment 16285
I just lost my hearing
That's not my ears bleeding, it's the will to live pouring out of my soul.

That has got to be one of the most unbalanced, vague and obtuse opinion fests about a stock I've ever heard (re 'the call' discussion on BRN).
What's their investment horizon again? 300 years - because the revenue (conveniently for the purposes of their discussion) will remain as it is; interesting view. There seemed to be a suggestion that the share count would continue to rise indefinitely because it's increased in the past..... is that valid reasoning? And under what circumstances? They don't appear to give any view on the undeniably impressive and highly relevant list of publicly stated partners/licensees/patents that the company now has in its commercial phase. For example, do they even know who ARM is? Conveniently neglecting to mention their very public website...... PF: *feigned look of horror*...... they must be part of the cult, along with all the companies BRN have financial/marketing/development and partnering relationships with. I forgot to mention the 'Brainchip youth cults' being fostered at two universities in the US.

How much over 10 years has been spent on R&D for BRN's Akida project (R&D ongoing)? (Using their example, how many projects does Nvidia have going at one time, and how efficiently does it use that money?) Seeing as they think the amount of money spent is directly related to the results.

As for personal attacks, I don't appreciate being called a cult member on radio/podcast (or whatever it is) because I invested in something I have a positive view of, based on my research, personal financial situation and investment style/horizon (which is not short term). Thanks, but I'll make my decisions based on my 7 years of accumulated knowledge about what I'm invested in and why. I also know and accept that nothing is risk free, which is where my investment plan comes in.

PF (Purple Ferret): *Jigs about waving his green and blue flag-like hankies ironically... Morris dancer style*
 
Last edited by a moderator:
  • Like
  • Love
  • Fire
Reactions: 23 users
Not a "sexy" industry to most I suspect but absolutely a necessary one these days.

A slice of this would be pretty satisfying.

Positive couple of mentions re neuromorphic and SNN.




Business

Smart Lock Market size worth $ 7.13 Billion, Globally, by 2030 at 13.71% CAGR: Verified Market Research®​

24 August 2022 at 9:15 pm AWST

PR Newswire

JERSEY CITY, N.J., Aug. 24, 2022

The market for smart locks has been driven by the rising security and safety concerns relating to preventing invasion, unauthorized access, theft, burglary, and expanded functionality compared to standard locks.

JERSEY CITY, N.J., Aug. 24, 2022 Verified Market Research recently published a report, "Smart Lock Market", By Type (Deadbolts, Lever Handles, Padlocks, and Others), By Communication Protocol (Bluetooth, Wi-Fi, and Others), By Vertical (Commercial, Residential, Institution & Government, Industrial, and Others), and By Geography. According to Verified Market Research, the Smart Lock Market size was valued at USD 2.55 Billion in 2021 and is projected to reach USD 7.13 Billion by 2030, growing at a CAGR of 13.71% from 2022 to 2030.

Global Smart Lock Market Overview

The increasing need for high performance Integrated Circuits is a major factor in the growth of the global smart lock market. By processing and storing data on the same chip, neuromorphic devices can significantly cut down on the amount of time a typical CPU spends moving data around. The time a regular CPU would have needed to shuttle data between a block of memory and the processor handling these memories' processing tasks is significantly decreased by the ability to combine processing and storage. So, the demand for higher performing ICs is assisting the market's growth in the pursuit of efficient computing.

In order to increase productivity and product quality, a wide range of sectors needs to automate their processes with the use of artificial intelligence and machine learning. Numerous businesses, including those in the medical, media, telecom, auto, food, and beverage sectors, use AI extensively. Because SNN can make flexible and agile decisions while taking the situation's context into account as well as past experiences, it can effectively address the difficulties that these industries frequently face. Combining AI with ML can improve the efficiency of applications including fraud detection, credit scoring, speech recognition, self-driving cars, image classification, and language translation. The market is expanding as a result of an increase in demand for general-purpose cognitive and brain robots.

Key Developments

* February 2022 – Dormakaba finalised the acquisition of AtiQx Holding B.V. in the Netherlands, bolstering its core business and services activities. In the relevant market, AtiQx is one of the leading providers of electronic access control and labour management.
* January 2022 – Vivint and New American Funding, one of the largest independent mortgage providers in the country, have launched a strategic alliance to help homeowners and purchasers protect their dream homes. Customers of New American Funding will be eligible for special incentives from Vivint to safeguard and automate their homes as a result of the relationship.

Key Players

The major players in the market are Assa Abloy AB, Dorma+Kaba Holding AG, Spectrum Brands Holdings, Inc., Allegion Plc, and Onity, Inc.

Verified Market Research has segmented the Global Smart Lock Market on the basis of Type, Communication Protocol, Vertical, and Geography.

* Smart Lock Market, By Type

* Deadbolts
* Lever Handles
* Padlocks
* Others

* Smart Lock Market, By Communication Protocol

* Bluetooth
* Wi-Fi
* Others

* Smart Lock Market, By Vertical

* Commercial
* Residential
* Institution & Government
* Industrial
* Others

* Smart Lock Market, by Geography

* North America

* U.S
* Canada
* Mexico

* Europe

* Germany
* France
* U.K
* Rest of Europe

* Asia Pacific

* China
* Japan
* India
* Rest of Asia Pacific

* ROW

* Middle East & Africa
* Latin America
 
  • Like
  • Love
  • Fire
Reactions: 33 users
Few months old article and not sure if posted prev but worth a read.

One of the few I've read that summarises and captures simply how certain tech fits within the mkt space inc Akida.



Is Neuromorphic Computing Using Edge The Future Of AI?​

By Victor Dey
March 16, 2022
As artificial intelligence (AI) continues to evolve, it is expected that AI at the edge will become a more significant portion of the current tech market. Known as the AI of Things, or AIoT, this space has seen processor vendors like Intel and Nvidia launch AI chips for such lower-power environments, respectively with their Movidius and Jetson product lines.

Computing at the edge also delivers lower latency than sending information to the cloud. Ten years ago, there were questions about whether software and hardware could be made to work like a biological brain, including its incredible power efficiency.

Today, advances in technology have answered that question with a yes, but the challenge now is for the industry to capitalise on neuromorphic technology development and answer tomorrow's computing challenges.

The Crux Of Neuromorphic Computing​

Neuromorphic computing differs from a classical approach to AI, which is generally based on convolutional neural networks (CNNs), as this technology mimics the brain much more closely through spiking neural networks (SNNs).

Although neuromorphic chips are generally digital, they tend to work based on asynchronous circuits, meaning there is no global clock. Depending on the specific application, neuromorphic computing can be orders of magnitude faster and requires less power. Neuromorphic computing complements CPU, GPU, and FPGA technologies for particular tasks, such as learning, searching and sensing, with extremely low power and high efficiency.
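For anyone unfamiliar, the "spiking" part boils down to neurons that integrate input and only emit an event when a threshold is crossed, so nothing is computed between events. A bare-bones leaky integrate-and-fire neuron in Python (a textbook toy, not Akida's or Loihi's actual neuron model, and all parameter values are arbitrary):

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential decays by `leak`
    each step, accumulates input, and emits a spike (then resets) whenever
    it crosses `threshold`. Output is sparse: events, not dense activations."""
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v = v * leak + i
        if v >= threshold:
            spikes.append(t)   # event emitted
            v = 0.0            # reset after firing
    return spikes

# Weak input never fires; a stronger input stream produces a few events:
quiet = lif_neuron([0.05] * 10)
busy = lif_neuron([0.6, 0.6, 0.6, 0.0, 0.0, 0.6, 0.6])
```

The sparsity is the whole game: downstream hardware only does work when a spike arrives, which is where the power-efficiency claims come from.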

Researchers have lauded neuromorphic computing’s potential, but the most impactful advances to date have occurred in academic, government and private R&D laboratories. That appears to be ready to change.

A report by Sheer Analytics & Insights estimates that the worldwide market for neuromorphic computing will be growing at 50.3 per cent CAGR to $780 million over the next eight years. Mordor Intelligence, on the other hand, aimed lower with $111 million and a 12 per cent CAGR to reach $366 million by 2025.

Forecasts vary, but enormous growth seems likely. The current neuromorphic computing market is largely driven by increasing demand for AI and brain chips to be used in cognitive and brain robots, which can respond like a human brain.

Numerous advanced embedded system providers are developing these brain chips with the help of AI and machine learning (ML), so that they think and respond like the human brain.

This increased demand for neuromorphic chips and software for signal, data, and image processing in automotive, electronics, and robotics verticals is projected to further fuel the market.

The need for potential use cases such as video analysis through machine vision and voice identification has also been projected to aid market growth. Major players for the development include Intel, Samsung, IBM and Qualcomm.

Researchers are still trying to find out where practical neuromorphic computing should go first; vision and speech recognition are the most likely candidates. Autonomous vehicles could also benefit from such human-like learning without human-like distraction or cognitive errors.

BrainChip’s Akida architecture features event-based architecture. It supports on-chip training and inference and various sensor inputs such as vision, audio, olfactory, and innovative transducer applications.

Akida is already featured in a unique product: the Mercedes EQXX concept car, displayed at the CES this year, where it was used for voice control to reduce power consumption by up to 10x. Internet of Things (IoT) and opportunities for Edge range from the factory floor to the battlefield.


Neuromorphic computing will not be directly replacing the modern CPUs and GPUs. Instead, the two types of computing approaches will be complementary, each suited for its sorts of algorithms and applications.

The Potential Underneath​

Neuromorphic computing came into existence through the pursuit of using analogue circuits to mimic the synaptic structures found in brains.

Our brain excels at picking out patterns from noise and learning. A conventional CPU, by contrast, excels at processing discrete, transparent data. For the same reason, many believe neuromorphic computing can help unlock unknown applications and solve large-scale problems that have troubled conventional computing systems for decades. Neuromorphic processors aim to provide vastly more power-efficient operation by modelling the core workings of the brain.

In 2011, HRL announced that it had demonstrated its first "memristor" array, a form of non-volatile memory storage that could be actively applied to neuromorphic computing. Two years later, HRL's first neuromorphic chip, "Surfrider", was released.

As reported by the MIT Technology Review, Surfrider featured 576 neurons and ran on just 50 mW of power. Researchers tested the chip by installing it in a sub-100-gram drone aircraft loaded with several optical, infrared, and ultrasound sensors, and sent the drone into three rooms.

The drone was observed to have “learned” the entire layout and objects present in the first room through sensory input. Later, using this teaching, it could “learn on the fly”, even if it was in a new room or could recognise having been in the same room before.

Today, most neuromorphic computing work is done using deep learning algorithms that perform processing on CPUs, GPUs, and FPGAs. None of these is optimised for neuromorphic processing. However, next-gen chips such as Intel's Loihi were designed exactly for these tasks and can achieve similar results on a far smaller energy profile. This efficiency will prove critical for the coming generation of small devices needing AI capabilities.

Deep learning feed-forward neural networks (DNNs) underperform neuromorphic solutions like Loihi. DNNs are linear, with data moving straight from input to output. Recurrent neural networks (RNNs) are closer to the workings of a brain, using feedback loops and exhibiting more dynamic behaviour, and RNN workloads are where chips like Loihi shine.

Samsung also announced that it would expand its neuromorphic processing unit (NPU) division by 10x, growing from 200 employees to 2000 by 2030. Samsung said at the time that it expected the neuromorphic chip market to grow by 52 per cent annually through 2023.

One of the future challenges in the neuromorphic space will be defining standard workloads and methodologies for benchmarking and analysis. Benchmarking applications such as 3DMark and SPECint have played a critical role in understanding the technology, helping adopters match products to their needs.

Currently, neuromorphic computing remains deep in the R&D stage, with only a few substantial commercial offerings in the field. Still, it's becoming clear which specific applications are well-suited to neuromorphic computing. For those workloads, neuromorphic processors will be faster and more power-efficient than any modern, conventional alternative.

CPU and GPU computing, on the other hand, will not disappear because of such developments; neuromorphic computing will sit alongside them, handling the roles it suits better, faster, and more efficiently than anything we have seen before.
 
  • Like
  • Love
  • Fire
Reactions: 26 users
D

Deleted member 118

Guest

Machine Learning-Accelerated Grid Environment
Award Information
Agency:
National Aeronautics and Space Administration
Branch:
N/A
Contract:
80NSSC21C0182
Agency Tracking Number:
213026
Amount:
$124,788.00
Phase:
Phase I
Program:
SBIR
Solicitation Topic Code:
S5
Solicitation Number:
SBIR_21_P1
Timeline
Solicitation Year:
2021
Award Year:
2021
Award Start Date (Proposal Award Date):
2021-05-12
Award End Date (Contract End Date):
2021-11-19
Small Business Information
EMERGENT SPACE TECHNOLOGIES, INC.
7901 Sandy Spring Road, Suite 511
Laurel, MD 20707-3589
United States
DUNS:
101537046
HUBZone Owned:
No
Woman Owned:
No
Socially and Economically Disadvantaged:
No
Principal Investigator
Name: Brett Carver
Phone: (301) 345-1535
Email: brett.carver@emergentspace.com
Business Contact
Name: Everett Cary
Phone: (301) 345-1535
Email: everett.cary@emergentspace.com
Research Institution
N/A
Abstract
NASA satellites are generating over 4TB of data each day. Analyzing this data in-orbit is becoming increasingly important for the purposes of accelerating scientific discovery and enabling opportunistic science. State-of-the-art artificial intelligence (AI) and machine learning (ML) data science applications require significant resources to run computationally intensive algorithms and models. To facilitate intensive data analysis in a resource constrained environment such as space, we need to utilize resources efficiently and at scale. Current solutions to this problem require downlinking full datasets to perform ground-based processing or running low computational footprint algorithms that are less effective than state-of-the-art solutions. In this proposal, we explore the capabilities and benefits of developing MAGE (ML Accelerated Grid Environment). MAGE is a software framework and API that facilitates ML training and inference distributed across a networked constellation or swarm of satellites to enable resource intensive ML models to run at the extreme edge. This solution makes complex data processing at the edge possible by running on AI accelerated hardware and distributing ML processing and storage across a grid of compute and storage nodes. Collectively, these nodes comprise a grid computing environment that can be tasked by spacecraft to run resource intensive applications. MAGE reduces the need to downlink full data sets, allows prioritization of data downlinking, enables proliferation of complex autonomous space-based systems, and provides a mission agnostic environment for processing and storage. Utilizing a system such as MAGE would allow NASA to perform efficient, scalable, mission agnostic AI and ML processing at the edge for any scientific mission.
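The grid-computing idea at the heart of MAGE, tasking a pool of networked compute nodes instead of downlinking full datasets, can be sketched in a few lines of Python. This is purely illustrative: the node names, task costs, and the least-loaded scheduling policy below are my assumptions, not Emergent's actual MAGE API.

```python
import heapq

def distribute(tasks, nodes):
    """Assign each task to the currently least-loaded node.

    tasks: list of (name, cost) pairs; nodes: list of node names.
    Returns {node: [task names]}. Illustrative scheduler only.
    """
    heap = [(0.0, name) for name in nodes]  # (accumulated load, node)
    heapq.heapify(heap)
    assignment = {name: [] for name in nodes}
    for task, cost in tasks:
        load, node = heapq.heappop(heap)   # cheapest node takes the task
        assignment[node].append(task)
        heapq.heappush(heap, (load + cost, node))
    return assignment

# Hypothetical workload: image tiles of varying inference cost,
# spread across two satellite compute nodes.
tiles = [("tile-1", 4), ("tile-2", 1), ("tile-3", 1), ("tile-4", 2)]
print(distribute(tiles, ["sat-a", "sat-b"]))
# → {'sat-a': ['tile-1'], 'sat-b': ['tile-2', 'tile-3', 'tile-4']}
```

The expensive tile saturates one node while the cheap ones fill the other, which is the basic load-balancing behaviour a constellation-wide grid would need.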




80D08E13-33FD-4D04-8936-595853C32968.png
 
  • Like
  • Love
  • Fire
Reactions: 16 users
D

Deleted member 118

Guest
 
  • Like
  • Fire
  • Haha
Reactions: 12 users
Few months old article and not sure if posted prev but worth a read.

One of the few I've read that summarises and captures simply how certain tech fits within the mkt space inc Akida.



Is Neuromorphic Computing Using Edge The Future Of AI?​

By Victor Dey, March 16, 2022
As artificial intelligence (AI) continues to evolve, AI at the edge is expected to become a more significant portion of the tech market. Known as the AI of Things, or AIoT, this space has drawn processor vendors such as Intel and Nvidia, which have launched AI chips for low-power environments with their Movidius and Jetson product lines, respectively.

Computing at the edge also delivers lower latency than sending information to the cloud. Ten years ago, there were questions about whether software and hardware could be made to work like a biological brain, including its incredible power efficiency.

Today, advances in technology have answered that question with a yes; the challenge now is for the industry to capitalise on neuromorphic technology development and answer tomorrow's computing challenges.

The Crux Of Neuromorphic Computing​

Neuromorphic computing differs from the classical approach to AI, which is generally based on convolutional neural networks (CNNs): it mimics the brain much more closely through spiking neural networks (SNNs).

Although neuromorphic chips are generally digital, they tend to use asynchronous circuits, meaning there is no global clock. Depending on the specific application, neuromorphic chips can be orders of magnitude faster and require less power. Neuromorphic computing complements CPU, GPU, and FPGA technologies for particular tasks, such as learning, searching and sensing, with extremely low power and high efficiency.
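For readers who want a feel for how an SNN differs from a clocked network, here is a minimal leaky integrate-and-fire neuron in Python. It is a simplified textbook model, not BrainChip's or Intel's implementation: membrane potential accumulates incoming events, leaks over time, and the neuron spikes (i.e. does work) only when a threshold is crossed.

```python
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron (simplified textbook model).

    input_spikes: sequence of 0/1 events, one per timestep.
    Returns a list of output spikes (1 = the neuron fired).
    """
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = potential * leak + weight * spike  # integrate with leak
        if potential >= threshold:                     # fire on threshold...
            output.append(1)
            potential = 0.0                            # ...then reset
        else:
            output.append(0)
    return output

# Sparse, event-driven input: the neuron only fires when enough
# events arrive close together.
print(lif_neuron([1, 0, 1, 0, 0, 1, 1, 0]))  # → [0, 0, 1, 0, 0, 0, 1, 0]
```

The power story falls out of this behaviour: with no global clock, a quiet input produces no spikes and therefore almost no switching activity.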

Researchers have lauded neuromorphic computing’s potential, but the most impactful advances to date have occurred in academic, government and private R&D laboratories. That appears to be ready to change.

A report by Sheer Analytics & Insights estimates that the worldwide market for neuromorphic computing will grow at a 50.3 per cent CAGR to $780 million over the next eight years. Mordor Intelligence, on the other hand, aims lower: from $111 million at a 12 per cent CAGR to reach $366 million by 2025.
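For anyone wanting to sanity-check those forecasts, compound annual growth is simple arithmetic. A quick sketch (the dollar figures are from the article; the ten-year span is my assumption, since the base years are not stated):

```python
def project(value, cagr, years):
    """Future value after compounding at rate `cagr` for `years` years."""
    return value * (1 + cagr) ** years

def implied_cagr(start, end, years):
    """Annual rate needed to take `start` to `end` in `years` years."""
    return (end / start) ** (1 / years) - 1

# Mordor Intelligence figures from the article: $111M at 12 per cent CAGR.
print(round(project(111, 0.12, 10)))               # → 345 ($M after 10 years)
print(round(implied_cagr(111, 366, 10) * 100, 1))  # → 12.7 (% needed for $366M)
```

Under that assumed span, $111 million at 12 per cent lands a little short of $366 million, so the report's base year presumably differs; either way, the forecasts all point in the same direction.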

Forecasts vary, but enormous growth seems likely. The current neuromorphic computing market is driven mainly by increasing demand for AI and brain chips for use in cognitive robots, which can respond much as a human brain does.

Numerous advanced embedded-system providers are developing these brain chips with the help of AI and machine learning (ML), so that they think and respond like the human brain.

This increased demand for neuromorphic chips and software for signal, data, and image processing in automotive, electronics, and robotics verticals is projected to further fuel the market.

Demand for use cases such as video analysis through machine vision and voice identification is also projected to aid market growth. Major players in the field include Intel, Samsung, IBM and Qualcomm.

Researchers are still trying to find out where practical neuromorphic computing should go first; vision and speech recognition are the most likely candidates. Autonomous vehicles could also benefit from such human-like learning without human-like distraction or cognitive errors.

BrainChip’s Akida features an event-based architecture. It supports on-chip training and inference and a variety of sensor inputs, such as vision, audio, olfactory, and innovative transducer applications.

Akida is already featured in a unique product: the Mercedes EQXX concept car, displayed at CES this year, where it was used for voice control to reduce power consumption by up to 10x. Internet of Things (IoT) and edge opportunities range from the factory floor to the battlefield.


Neuromorphic computing will not directly replace modern CPUs and GPUs. Instead, the two computing approaches will be complementary, each suited to its own sorts of algorithms and applications.

The Potential Underneath​

Neuromorphic computing came into existence through the pursuit of analogue circuits that mimic the synaptic structures found in brains.

Our brain excels at picking patterns out of noise and at learning, while a conventional edge CPU excels at processing discrete, well-defined data. For the same reason, many believe neuromorphic computing can help unlock new applications and solve large-scale problems that have troubled conventional computing systems for decades. Neuromorphic processors aim to provide vastly more power-efficient operation by modelling the core workings of the brain.

In 2011, HRL announced that it had demonstrated its first "memristor" array, a form of non-volatile memory storage that can be applied to neuromorphic computing. Two years later, HRL released its first neuromorphic chip, "Surfrider".

As reported by the MIT Technology Review, Surfrider featured 576 neurons and ran on just 50 mW of power. Researchers tested the chip by fitting it to a sub-100-gram drone loaded with optical, infrared, and ultrasound sensors and sending the drone into three rooms.

Through sensory input alone, the drone "learned" the layout and objects of the first room. It could then "learn on the fly", recognising whether it was in a new room or had been in the same room before.

Today, most neuromorphic computing work is done with deep learning algorithms running on CPUs, GPUs, and FPGAs, none of which is optimised for neuromorphic processing. Next-generation chips such as Intel's Loihi, however, were designed for exactly these tasks and can achieve similar results with a far smaller energy profile. This efficiency will prove critical for the coming generation of small devices needing AI capabilities.

Feed-forward deep neural networks (DNNs) are not where neuromorphic chips like Loihi excel. DNNs are linear, with data moving straight from input to output. Recurrent neural networks (RNNs) are closer to the way a brain works, using feedback loops and exhibiting more dynamic behaviour, and RNN workloads are where chips like Loihi shine.
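To make the feed-forward versus recurrent distinction concrete, here is a toy sketch in plain Python (illustrative weights, not a real trained network): the feed-forward function maps each input independently, while the recurrent one carries a hidden state between timesteps, the feedback-loop behaviour described above.

```python
import math

def feed_forward(x, w_in=0.8, w_out=1.5):
    """Straight pipeline, input to hidden to output, with no memory."""
    hidden = math.tanh(w_in * x)
    return math.tanh(w_out * hidden)

def recurrent(xs, w_in=0.8, w_rec=0.5, w_out=1.5):
    """The hidden state feeds back into itself, so output depends on history."""
    h = 0.0
    outputs = []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)  # feedback loop
        outputs.append(math.tanh(w_out * h))
    return outputs

# The same input always gives the same feed-forward output, but the
# recurrent net's response to a repeated input changes with context:
outs = recurrent([1.0, 1.0, 1.0])
print(outs[0] != outs[1])  # → True
```

That history-dependence is what makes RNN-style workloads dynamic, and it maps naturally onto hardware whose neurons also carry state between events.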

Samsung also announced that it would expand its neural processing unit (NPU) division tenfold, growing from 200 employees to 2,000 by 2030. Samsung said at the time that it expected the neuromorphic chip market to grow by 52 per cent annually through 2023.

One of the future challenges in the neuromorphic space will be defining standard workloads and methodologies for benchmarking and analysis. Benchmarking applications such as 3DMark and SPECint have played a critical role in understanding other technologies, helping adopters match products to their needs.

Currently, neuromorphic computing remains deep in the R&D stage, with only a handful of substantial commercial offerings in the field. Still, it is becoming clear which applications are well suited to neuromorphic computing. For those workloads, neuromorphic processors will be faster and more power-efficient than any modern, conventional alternative.

CPU and GPU computing will not disappear because of these developments; neuromorphic computing will sit alongside them, handling challenging workloads better, faster, and more efficiently than anything we have seen before.
Great article FMF.

I particularly liked one word: YES.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 10 users
Well, on a first go through, I didn't find any applicable Synaptics patents, but I did find this useful device:

US4773024A Brain emulation circuit with reduced confusion
Hi @Diogenese It is an accessory for a computer but would it also assist AKIDA?

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 4 users
D

Deleted member 118

Guest
  • Like
  • Love
Reactions: 13 users
Top Bottom