Breaking News- new BRN articles/research

Jasonk

Regular
Some may remember reading a post I put up regarding LinkedIn followers liking BrainChip. One of them was high up in the Telstra Purple division.

I just came across this.
 
  • Like
Reactions: 8 users

Filobeddo

Guest
Some may remember reading a post I put up regarding LinkedIn followers liking BrainChip. One of them was high up in the Telstra Purple division.

I just came across this.
Telstra could definitely do with a boost in brainpower; they’ve been lacking in that area for a while
 
  • Haha
  • Like
Reactions: 10 users
Article today from ZDNet with Mike Demler, Senior Analyst at The Linley Group.

Highlighted the BRN comments :)



The AI edge chip market is on fire, kindled by 'staggering' VC funding​

Dozens of startups continue to get tens of millions in venture funding to make chips for AI in mobile and other embedded computing uses. The race shows no sign of slowing down.
Tiernan Ray
Written by Tiernan Ray, Contributing Writer
on February 11, 2022 | Topic: Artificial Intelligence

Chips that perform AI inference on edge devices such as smartphones are a red-hot market, even years into the field's emergence, attracting more and more startups and more and more venture funding, according to a prominent chip analyst firm covering the field.

"There are more new startups continuing to come out, and continuing to try to differentiate," says Mike Demler, Senior Analyst with The Linley Group, which publishes the widely read Microprocessor Report, in an interview with ZDNet via phone.

The Linley Group produces two conferences each year in Silicon Valley hosting numerous startups, the Spring and Fall Processor Forum, with an emphasis in recent years on AI startups.

At the most recent event, held in October, both virtually and in person in Santa Clara, California, the conference was packed with startups such as Flex Logix, Hailo Technologies, Roviero, BrainChip, Syntiant, Untether AI, Expedera, and Deep AI giving short talks about their chip designs.
Demler and team regularly assemble a research report titled the Guide to Processors for Deep Learning, the latest version of which is expected out this month. "I count more than 60 chip vendors in this latest edition," he told ZDNet.

Edge AI has become a blanket term that refers mostly to everything that is not in a data center, though it may include servers on the fringes of data centers. It ranges from smartphones to embedded devices that suck micro-watts of power using the TinyML framework for mobile AI from Google.

The middle part of that range, where chips consume from a few watts of power up to 75 watts, is an especially crowded part of the market, said Demler, usually in the form of a pluggable PCIe or M.2 card. (75 watts is the PCI-bus limit in devices.)

"PCIe cards are the hot segment of the market, for AI for industrial, for robotics, for traffic monitoring," he explained. "You've seen companies such as Blaize, FlexLogic -- lots of these companies are going after that segment."

But really low-power is also quite active. "I'd say the tinyML segment is just as hot. There we have chips running from a few milliwatts to even microwatts."

Most of the devices are dedicated to the "inference" stage of AI, where artificial intelligence makes predictions based on new data.
Inference happens after a neural network program has been trained, meaning that its tunable parameters have been developed fully enough to reliably form predictions and the program can be put into service.
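
To make that training-versus-inference split concrete, here's a minimal sketch (mine, not from the article): a toy model's parameters are fitted once on labelled data, then frozen, and the frozen model only makes predictions -- the inference stage these edge chips are built to accelerate.

```python
# Minimal sketch (not from the article): a toy model is first trained
# (its tunable parameters are fitted), then frozen and used only for
# inference -- the stage most edge AI chips are built to accelerate.
import numpy as np

rng = np.random.default_rng(0)

# --- Training phase: learn weights from labelled data ---
X = rng.normal(size=(200, 3))                            # training inputs
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # labels
w = np.zeros(3)
for _ in range(500):                                     # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                   # sigmoid predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)                  # logistic-loss gradient

# --- Inference phase: parameters are fixed, only predictions are made ---
def infer(x_new, weights=w):
    """Run the frozen model on new data (what an edge accelerator does)."""
    return 1.0 / (1.0 + np.exp(-(x_new @ weights))) > 0.5

print(infer(rng.normal(size=(5, 3))))
```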

The initial challenge for the startups, said Demler, is to actually get from a nice PowerPoint slide show to working silicon. Many start out with a simulation of their chip running on a field-programmable gate array, and then either move to selling a finished system-on-chip (SoC), or else licensing their design as synthesizable IP that can be incorporated into a customer's chip.

"We still see a lot of startups hedging their bets, or pursuing as many revenue models as they can," said Demler, "by first demo'ing on an FPGA and offering their core IP for licensing." Some startups also offer the FPGA-based version as a product."

With dozens of vendors in the market, even those that get to working silicon are challenged to show something that's meaningfully different.
"It's hard to come up with something that's truly different," said Demler. "I see these presentations, 'world's first,' or, 'world's best,' and I say, yeah, no, we've seen dozens."

Some companies began with such a different approach that they set themselves apart early, but have taken some time to bear fruit.

BrainChip Holdings, of Sydney, Australia, with offices in Laguna Hills, California, got a very early start in 2011 with a chip to handle spiking neural networks, the neuromorphic approach to AI that purports to more closely model how the human brain functions.

The company has over the years shown off how its technology can perform tasks such as using machine vision to identify poker chips on the casino floor.

"BrainChip has been doggedly pursuing this spiking architecture," said Demler. "It has a unique capability, it can truly learn on device," thus performing both training and inference.

BrainChip has in one sense come the farthest of any startup: it's publicly traded. Its stock is listed on the Australian Stock Exchange under the ticker "BRN," and last fall the company issued American Depository Shares to trade on the U.S. over-the-counter market, under the ticker "BCHPY." Those shares have since more than tripled in value.

BrainChip is just starting to produce revenue. The company in October came out with mini PCIe boards of its "Akida" processor, for x86 and Raspberry Pi, and last month announced new PCIe boards for $499. The company in the December quarter had revenue of U.S.$1.1 million, up from $100,000 in the prior quarter. Total revenue for the year was $2.5 million, with an operating loss of $14 million.


Some other exotic approaches have proved hard to deliver in practice. Chip startup Mythic, founded in 2012 and based in Austin, Texas, has been pursuing the novel route of making some of its circuitry use analog chip technology, where instead of processing ones and zeros, it computes via manipulation of a real-valued wave form of an electrical signal.

"Mythic has generated a few chips but no design wins," Demler observed."Everyone agrees, theoretically, analog should have a power efficiency advantage, but getting there in something commercially variable is going to be much more difficult."

Another startup presenting at the Processor Conference, Syntiant, started out with an analog approach but decided analog didn't provide sufficient power advantages and took longer to bring to market, noted Demler.

Syntiant of Irvine, California, founded in 2017, has focused on very simple object recognition that can operate with low power on nothing more than a feature phone or a hearable.

"On a feature phone, you don't want an apps processor, so the Syntiant solution is perfect," observed Demler.

Regardless of the success of any one startup, the utility of special circuitry means that AI acceleration will endure as a category of chip technology, said Demler.

"AI is becoming so ubiquitous in so many fields, including automotive, embedded processing, the IoT, mobile, PCs, cloud, etc., that including a special-purpose accelerator will become commonplace, just like GPUs are for graphics."

Nevertheless, some tasks will be more efficient to run on a general-purpose CPU, DSP, or GPU, said Demler. That is why Intel and Nvidia and others are amplifying their architectures with special instructions, such as for vector handling.

Different approaches will continue to be explored as long as a venture capital market awash in cash lets a thousand flowers bloom.
"There's still so much VC money out there, I'm astounded by the amount these companies continue to get," said Demler.

Demler notes giant funding rounds for Sima.ai of San Jose, California, founded in 2018, which is developing what it calls an "MLSoC" focused on reducing power consumption. The company received $80 million in its Series B funding round.

Another one is Hailo Technologies of Tel Aviv, founded in 2017, which has raised $320.5 million, according to FactSet, including $100 million in its most recent round, and is supposedly valued at a billion dollars.

"The figures coming out of China, if true, are even more staggering," said Demler. Funding looks set to continue for the time being, he said. "Until the VC community decides there's something else to invest in, you're going to see these companies popping up everywhere."

At some point, a shake-out will happen, but when that day may come is not clear.

"Some of them have to go away eventually," mused Demler. "Whether it's 3 years or 5 years from now, we'll see much fewer companies in this space."

The next conference event Demler and colleagues will host is the Spring Processor Forum in late April, at the Hyatt Regency Hotel in Santa Clara, with live-streaming for those who can't make it in person.

As per the previous ZDNet article and the list of startups that attended in Oct last year, I see Expedera has just started shipping with someone in consumer devices.

Be interesting to see who (not searched yet) and ok...I'd rather it was us, but the positive is that manufacturers are now crossing the line and starting to integrate neural accelerator IP.

C'mon Brainchip...show us who u sleeping with haha.



Expedera Announces First Production Shipments of Its Deep Learning Accelerator IP in a Consumer Device​

Santa Clara, California -- March 1, 2022 — Expedera Inc, a leading provider of scalable Deep Learning Accelerator (DLA) semiconductor intellectual property (IP), today announced that a global consumer device maker is now in production with its Origin™ DLA solution.
Many consumer devices include video capabilities. However, at resolutions of 4K and up, much of the image processing must now be handled on the device rather than in the cloud. Functions such as low light video denoising require that data must be processed in real time, but at higher image resolutions, it is no longer feasible to transfer the volume of data to and from the cloud fast enough. To meet the expanding need for advanced on-device image processing and other new deep learning applications, device manufacturers are adding highly efficient specialized accelerators such as Expedera’s.
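
Rough back-of-envelope numbers (my assumptions, not Expedera's) on why raw 4K video is impractical to round-trip to the cloud in real time:

```python
# Rough back-of-envelope (assumptions mine, not Expedera's): raw 4K video
# at 30 fps is far too much data to send to the cloud and back in real time.
width, height = 3840, 2160        # 4K UHD resolution
bytes_per_pixel = 3               # assume uncompressed 8-bit RGB
fps = 30

bytes_per_second = width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e6:.0f} MB/s "
      f"(~{bytes_per_second * 8 / 1e9:.1f} Gbit/s) of raw pixels")
# -> roughly 746 MB/s, or about 6 Gbit/s, before any denoising is even done
```
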
“I am delighted to announce the first shipping consumer product with Expedera IP,” said Da Chuang, founder and CEO of Expedera. “A key advantage of our DLA architecture is the capability to finely tune a solution to meet the unique design requirements of new and emerging customer applications. Our ability to adapt our IP to any device architecture and optimize for any design space enables customers to create extremely efficient solutions with industry-leading performance.”
In a recent Microprocessor Report, editor-in-chief Linley Gwennap noted, “Expedera’s Origin deep-learning accelerator provides industry-leading performance per watt for mobile, smart-home, and other camera-based devices. Its architecture is the most efficient at up to 18 TOPS per watt in 7nm, as measured on the test chip.”
Expedera takes a network-centric approach to AI acceleration, whereby the architecture segments the neural network into packets, which are essentially command streams. These packets are then efficiently scheduled and executed by the hardware in a very fast, efficient and deterministic manner. This enables designs that reduce total memory requirements to the theoretical minimum and eliminate memory bottlenecks that can limit application performance. Expedera’s co-design approach additionally enables a simpler software stack and provides a system-aware design and a more productive development experience. The platform supports popular AI frontends including TensorFlow, ONNX, Keras, Mxnet, Darknet, CoreML and Caffe2 through Apache TVM.
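
For context, here is a minimal sketch of the generic Apache TVM import-and-compile flow that such a frontend path typically involves; the model file, input name/shape and "llvm" target are placeholders of mine, not Expedera's actual toolchain.

```python
# Minimal sketch of a generic Apache TVM import/compile flow, the kind of
# frontend path the press release describes. The model file and "llvm"
# target are placeholders -- this is not Expedera's actual backend.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")               # hypothetical model file
shape_dict = {"input": (1, 3, 224, 224)}           # assumed input name/shape

# Convert the ONNX graph into TVM's Relay intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a target; a vendor backend would substitute its own target here.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```
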
For more information on the Expedera Origin family of deep learning accelerators, visit our website at https://www.expedera.com/products-overview/
About Expedera
Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI-inference applications. Third-party silicon validated, Expedera’s solutions produce superior performance and are scalable to a wide range of applications from edge nodes and smartphones to automotive and data centers. Expedera’s Origin deep learning accelerator products are easily integrated, readily scalable, and can be customized to application requirements. The company is headquartered in Santa Clara, California. Visit expedera.com
 
  • Like
Reactions: 3 users

Quatrojos

Regular

“...Renesas’ semiconductor devices optimized for automotive applications are extremely reliable, backed by a solid system of software and support, and provide superior power efficiency. The expansion of our collaborative efforts will speed up the development of Honda SENSING and help lead its widespread adoption...”
 
  • Like
Reactions: 8 users

overpup

Regular
  • Like
  • Wow
Reactions: 8 users
Great to have this finally formally announced, but I must give a big shout out to the 1,000 Eyes who late last year unearthed this partnership when they found that Eastronics and Saleslink had AKIDA products listed on their websites. A lot of work was put in then, and most certainly Eastronics is a great announcement and in fact may have been the original link between Nanose and Brainchip.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 13 users
Not sure if posted already here or other site but anyway...from Sept 2021 - algo design for DVS suited to Akida plus some other random chip we won't speak of haha.

Snipped a couple of relevant sections.



StereoSpike: Depth Learning with a Spiking Neural Network

Abstract​


Depth estimation is an important computer vision task, useful in particular for navigation in autonomous vehicles, or for object manipulation in robotics. Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture, that we named StereoSpike. More specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It provides a depth ground-truth, which was used to train StereoSpike in a supervised manner, using surrogate gradient descent. We propose a novel readout paradigm to obtain a dense analog prediction -- the depth of each pixel -- from the spikes of the decoder. We demonstrate that this architecture generalizes very well, even better than its non-spiking counterparts, leading to state-of-the-art test accuracy. To the best of our knowledge, it is the first time that such a large-scale regression problem is solved by a fully spiking network. Finally, we show that low firing rates (<10%) can be obtained via regularization, with a minimal cost in accuracy. This means that StereoSpike could be efficiently implemented on neuromorphic chips, opening the door for low power and real time embedded systems.
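
For anyone curious what the "surrogate gradient descent" mentioned in the abstract looks like in practice, here's a minimal PyTorch sketch (mine, not the StereoSpike code): the spike is a hard threshold in the forward pass, while the backward pass substitutes a smooth surrogate derivative so the network can be trained with ordinary backprop.

```python
# Minimal PyTorch sketch (not the StereoSpike code) of the surrogate-gradient
# trick: a hard threshold in the forward pass, a smooth approximation of its
# derivative in the backward pass so gradients can flow through the spikes.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 0).float()              # binary spike

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative (one common choice).
        surrogate = 1.0 / (1.0 + 10.0 * membrane.abs()) ** 2
        return grad_output * surrogate

def lif_step(inp, mem, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire step with a hard reset where a spike fired."""
    mem = beta * mem + inp
    spk = SurrogateSpike.apply(mem - threshold)
    mem = mem * (1.0 - spk)                        # reset the neurons that spiked
    return spk, mem

x = torch.randn(4, 8, requires_grad=True)
spk, mem = lif_step(x, torch.zeros(4, 8))
spk.sum().backward()                               # gradients flow via the surrogate
```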

Spiking Neural Networks (SNNs) are a good fit for DVSs, as they can leverage the sparsity of their output event streams. Implemented on dedicated chips such as Intel Loihi (Davies et al. 2018), IBM TrueNorth (Akopyan et al. 2015) or Brainchip Akida (Vanarse et al. 2019), these models could become a new paradigm for ultra-low power computation in the coming years. In addition, SNNs maintain the same level of biological plausibility as silicon retinae, making them new models of choice among computational neuroscientists.

Target Hardware.
Our model has resolutely been developed in the philosophy of spiking neural networks. As a result, it is essentially implementable on dedicated neuromorphic hardware, such as Intel Loihi (Davies et al. 2018), IBM TrueNorth (Akopyan et al. 2015) or Brainchip Akida. These chips can leverage the binarity and sparsity of spike tensors navigating through the network. In addition, we believe that our model being feedforward and requiring a reset on all of its neurons at each timestep is not a problem, because resetting membrane potentials is actually less costly than applying a leak. Therefore, statelessness can be seen as an advantage over recurrence in spiking models with similar performances.
 
  • Like
Reactions: 8 users
Published Jan 2022.

Might recognise Andres name.





 
  • Like
  • Fire
Reactions: 18 users

M_C

Founding Member
  • Like
Reactions: 6 users
Published Jan 2022.

Might recognise Andres name.






Hi FMF
I am going to steal this and take it to the main thread and ask the 1,000 Eyes if anyone knows who it is that has 'implemented biometric authentication in mobile devices, cars, computers and beyond' using Brainchip's AKIDA.
This is a great find and true to form you have generously shared it here. Many thanks.
My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 12 users
Hi FMF
I am going to steal this and take it to the main thread and ask the 1,000 Eyes if anyone knows who it is that has 'implemented biometric authentication in mobile devices, cars, computers and beyond' using Brainchip's AKIDA.
This is a great find and true to form you have generously shared it here. Many thanks.
My opinion only DYOR
FF

AKIDA BALLISTA
Hey FF

Always welcome and hope someone can add some more meat around the subject.

I look at it that we SHs are invested together in the common goal of a successful company and whilst we individually all have diff timeframes, ROI expectations etc...that goal still underscores a communal approach.
 
  • Like
  • Love
Reactions: 15 users

M_C

Founding Member
For anyone who thinks GOOGLE heavily involving itself (Founding Partner) in Australia's recently announced 'National Artificial Intelligence Centre' run by the CSIRO has something to do with BRN...................Personally I can't think of another aussie AI company more worthy of inclusion or involvement. Pure speculation though

https://www.innovationaus.com/googl...medium=D61socialmedia&utm_campaign=NAICGoogle
 
  • Like
  • Fire
Reactions: 10 users
For anyone who thinks GOOGLE heavily involving itself (Founding Partner) in Australia's recently announced 'National Artificial Intelligence Centre' run by the CSIRO has something to do with BRN...................Personally I can't think of another aussie AI company more worthy of inclusion or involvement. Pure speculation though

https://www.innovationaus.com/googl...medium=D61socialmedia&utm_campaign=NAICGoogle
I wonder if Larry is related to Barry?
FF
 
  • Haha
  • Like
Reactions: 2 users
Just a recent article on Edge where we get a mention :)



Is Neuromorphic Computing Using Edge The Future Of AI?​

By
Victor Dey
-
March 16, 2022

Neuromorphic processors aim to provide vastly more power-efficient operations by modelling the core workings of the brain​


As artificial intelligence (AI) continues to evolve, it is expected that AI at the edge will become a more significant portion of the current tech market. Known as the AI of Things, or AIoT, this space has seen processor vendors like Intel and Nvidia launch AI chips for lower-power environments with their Movidius and Jetson product lines, respectively.
Computing at the edge also offers lower latency than sending information to the cloud. Ten years ago, there were questions about whether software and hardware could be made to work similarly to a biological brain, including its incredible power efficiency.
Today, advances in technology have answered that question with a yes, but the challenge now is for the industry to capitalise on neuromorphic technology development and answer tomorrow's computing challenges.

The Crux Of Neuromorphic Computing​

Neuromorphic computing differs from the classical approach to AI, which is generally based on convolutional neural networks (CNNs); the technology mimics the brain much more closely through spiking neural networks (SNNs).
Although neuromorphic chips are generally digital, they tend to be built on asynchronous circuits, meaning there is no global clock. Depending on the specific application, neuromorphic chips can be orders of magnitude faster and require less power. Neuromorphic computing complements CPU, GPU, and FPGA technologies for particular tasks, such as learning, searching and sensing, with extremely low power and high efficiency.
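
A small illustrative sketch (mine, not from the article) of why event-driven computation saves work: only the synapses of neurons that actually spiked need to be touched, whereas a clocked, dense implementation multiplies every weight every timestep.

```python
# Illustrative sketch (my own, not from the article): an event-driven update
# only touches the synapses of neurons that actually spiked, whereas a
# clocked, dense implementation multiplies every weight every timestep.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(1000, 1000))       # synaptic weight matrix
spikes = rng.random(1000) < 0.02              # ~2% of inputs spike this step

# Dense (clock-driven): every weight participates, spiking or not.
dense_update = weights @ spikes.astype(float)

# Event-driven: accumulate only the columns of neurons that fired.
event_update = weights[:, spikes].sum(axis=1)

assert np.allclose(dense_update, event_update)
print(f"columns touched: {spikes.sum()} of {len(spikes)}")
```
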
Researchers have lauded neuromorphic computing’s potential, but the most impactful advances to date have occurred in academic, government and private R&D laboratories. That appears to be ready to change.
A report by Sheer Analytics & Insights estimates that the worldwide market for neuromorphic computing will grow at a 50.3 per cent CAGR to $780 million over the next eight years. Mordor Intelligence, on the other hand, aimed lower, forecasting a 12 per cent CAGR from $111 million to reach $366 million by 2025.
Forecasts vary, but enormous growth seems likely. The current neuromorphic computing market is driven mainly by increasing demand for AI and brain chips for use in cognitive and brain robots, which can respond like a human brain.
Numerous advanced embedded system providers are developing these brain chips with the help of AI and machine learning (ML), so that they think and respond like the human brain.
This increased demand for neuromorphic chips and software for signal, data, and image processing in automotive, electronics, and robotics verticals is projected to further fuel the market.
The need for potential use cases such as video analysis through machine vision and voice identification has also been projected to aid market growth. Major players for the development include Intel, Samsung, IBM and Qualcomm.
Researchers are still trying to find out where practical neuromorphic computing should go first; vision and speech recognition are the most likely candidates. Autonomous vehicles could also benefit from such human-like learning without human-like distraction or cognitive errors.
BrainChip’s Akida features an event-based architecture. It supports on-chip training and inference and various sensor inputs such as vision, audio, olfactory, and innovative transducer applications.
Akida is already featured in a unique product: the Mercedes EQXX concept car, displayed at CES this year, where it was used for voice control to reduce power consumption by up to 10x. Internet of Things (IoT) and edge opportunities range from the factory floor to the battlefield.

Neuromorphic computing will not be directly replacing the modern CPUs and GPUs. Instead, the two types of computing approaches will be complementary, each suited for its sorts of algorithms and applications.

The Potential Underneath​

Neuromorphic computing came into existence through the pursuit of using analogue circuits to mimic the synaptic structures found in brains.
Our brain excels at picking out patterns from noise and learning. A neuromorphic edge CPU excels at processing discrete, transparent data. For the same reason, many believe neuromorphic computing can help unlock unknown applications and solve large-scale problems that have put conventional computing systems in trouble for decades. Neuromorphic processors aim to provide vastly more power-efficient operations by modelling the core workings of the brain.
In 2011, HRL announced that it had demonstrated its first "memristor" array, a form of non-volatile memory storage that could be actively applied to neuromorphic computing. Two years later, HRL's first neuromorphic chip, "Surfrider", was released.
As reported by the MIT Technology Review, Surfrider featured 576 neurons and runs on just 50 mW of power. Researchers tested the chip by installing it in a sub-100-gram drone aircraft loaded with optical, infrared, and ultrasound sensors and sending the drone into three rooms.
The drone was observed to have "learned" the entire layout and the objects present in the first room through sensory input. Using this learning, it could then "learn on the fly" in a new room, or recognise when it had been in the same room before.
Image Source: MIT
Today, most neuromorphic computing work is done with deep learning algorithms that run on CPUs, GPUs, and FPGAs, none of which is optimised for neuromorphic processing. However, next-gen chips such as Intel's Loihi were designed exactly for these tasks and can achieve similar results on a far smaller energy budget. This efficiency will prove critical for the coming generation of small devices needing AI capabilities.
Feed-forward deep neural networks (DNNs) underperform on neuromorphic solutions like Loihi. DNNs are linear, with data moving straight from input to output. Recurrent neural networks (RNNs) are closer to the way a brain works, using feedback loops and exhibiting more dynamic behaviour, and RNN workloads are where chips like Loihi shine.
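
A minimal contrast of the two styles the article describes (sizes and weights arbitrary, my own sketch): a feed-forward layer maps input straight to output with no memory, while a recurrent cell feeds its previous state back in at every step.

```python
# Minimal contrast (sizes arbitrary): a feed-forward layer maps input straight
# to output, while a recurrent cell feeds its own previous state back in,
# giving the more dynamic, brain-like behaviour the article describes.
import numpy as np

rng = np.random.default_rng(2)
W_in = rng.normal(size=(16, 8))
W_rec = rng.normal(size=(16, 16))
W_ff = rng.normal(size=(4, 8))

def feedforward(x):
    return np.tanh(W_ff @ x)                     # one straight pass, no memory

def recurrent(sequence):
    h = np.zeros(16)                             # hidden state carried over time
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h)        # feedback loop
    return h

print(feedforward(rng.normal(size=8)).shape)     # (4,)
print(recurrent(rng.normal(size=(10, 8))).shape) # (16,)
```
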
Samsung also announced that it would expand its neural processing unit (NPU) division by 10x, growing from 200 employees to 2,000 by 2030. Samsung said at the time that it expected the neuromorphic chip market to grow by 52 per cent annually through 2023.
One of the future challenges in the neuromorphic space will be defining standard workloads and methodologies for benchmarking and analysis. Benchmarking applications such as 3DMark and SPECint have played a critical role in understanding other technologies, helping adopters match products to their needs.
Currently, neuromorphic computing remains deep in the R&D stage, with only a few substantial commercial offerings in the field. Still, it is becoming clear which applications are well suited to neuromorphic computing and which are not. Neuromorphic processors will be faster and more power-efficient for extensive workloads than any modern, conventional alternatives.
CPU and GPU computing, on the other hand, will not disappear because of these developments; neuromorphic computing will sit beside them, handling challenging roles better, faster, and more efficiently than anything we have seen before.
 
  • Like
  • Fire
Reactions: 31 users
Well...would appear the Director of Dell Tech in China is across us (with other Co's to be fair haha)....article / post from the Dell verified account.

The bold is all theirs, but we're down in the hardware commentary & I highlighted it in red ;)

Edit: Thought I would also add their comment at the end of the hardware section re accelerators and what they want to achieve.

You'll need to translate if you visit the page, but below is obviously already done.









The future of artificial intelligence​

Dell Technologies
Verified account

Author: Dr. Jia Zhen, Director of Dell Technologies China Research Institute​

Artificial Intelligence (AI) is already ubiquitous. In the era of digital transformation, as people turn to ever more frequent digital interactions, massive data from the digital virtual world merges seamlessly with the real physical world. As the volume, variety, and velocity of data generation increase, AI is a critical step in extracting insights from massive amounts of data and advancing other emerging technologies.
AI algorithms and hardware-accelerated systems are improving the efficiency of business decision-making, improving business processes, and delivering smarter, more real-time data analysis at scale. AI is fundamentally changing the way businesses operate, redefining the way people work, and transforming industries on a global scale. In the era of digital transformation, our society and enterprises need to make greater use of intelligent information system architectures, application software and algorithms, and data-first strategies to fully realise the business potential of enterprises.

Here I briefly list some key figures reflecting the booming development of artificial intelligence: 62% of global enterprises have invested in artificial intelligence to some extent [1]; 53% of global data and analytics decision makers say they are planning to implement some form of artificial intelligence [2]; and by 2022, 75% of enterprises will embed intelligent automation into technology and process development [3].
As mentioned above, artificial intelligence has made great progress in recent years, but many problems still urgently need to be solved. In this article, I will first analyse the core problems that remain in the current development of artificial intelligence, and then propose some ideas for our key development directions in the field.

Problems that need to be solved by artificial intelligence

  • Algorithmic complexity of artificial intelligence: Today's mainstream AI algorithms are based on machine learning with deep neural networks [13]. As AI technology develops, the structure of deep neural networks becomes more and more complex and the number of hyperparameters keeps growing. Sophisticated deep neural networks improve the accuracy of machine learning models, but configuring and debugging such complex networks can be prohibitive for ordinary users of artificial intelligence. Making deep neural network algorithms and applications easy to develop, debug, and deploy is therefore an increasingly urgent task.
  • Data scarcity of artificial intelligence: Efficient inference and recognition with today's deep neural networks depends mainly on large amounts of training data. Open databases such as ImageNet [9] provide thousands of images, videos and corresponding annotations. Trained on large amounts of data, a machine learning model can cover almost all the variations of inference and recognition scenarios. However, if the amount of data is insufficient or the types are not comprehensive enough, the performance of the model is bound to be limited. In industrial applications of artificial intelligence, the data shortage problem is particularly prominent. Unlike traditional inference and recognition applications for ordinary consumers, industrial AI applications are often unique, business-specific problems (such as intelligent manufacturing or remote system debugging and maintenance), and the corresponding data, especially negative samples, is very scarce. How to improve AI algorithm models so that they still work efficiently in specific scenarios with limited training data is another new and urgent task.
  • High computational consumption of artificial intelligence: As the previous two points suggest, the complexity of deep neural networks and the diversity of big data lead to high consumption of computing resources in current AI applications. Training the most advanced machine learning models, such as GPT-3, takes months on high-performance clusters [10]. Even ordinary machine learning models can take hours or days to train on traditional x86 high-performance servers when the data volume is large. When a trained model performs inference and recognition, its complex structure, many hyperparameters, and heavy computation also place higher demands on the computing resources of the devices that process the data. For example, lightweight IoT devices cannot run complex machine learning inference models at all, and on smart terminal devices such as smartphones, running complex models drains the battery. How to better optimise computing resources to support machine learning training and inference is another new and urgent task.
  • Interpretability of artificial intelligence: Because of the complexity of deep neural networks, AI systems built on them are often treated as a "black box". The user inputs data to be recognised, and the deep neural network produces an inference result through a series of complex and opaque mathematical operations. However, we cannot intuitively analyse why a given input produces a given output from such a complex network. In some key AI areas, such as autonomous driving, the interpretability of AI decisions is critical. Why does an automated driving system make a particular driving decision in a safety-critical scenario? Why is its recognition of road conditions sometimes wrong? Inference and recognition conclusions from the "black box" must be interpretable and traceable. Only when artificial intelligence can be explained can we find the basis for its decisions and the causes of its errors. Working "from effect to cause", we can then improve deep neural networks so that they deliver AI applications more efficiently, safely and reliably in different settings.
Of course, beyond the four major problems listed above, AI also has other limitations, such as privacy, generality, the scarcity of AI development talent, and the lack of legal constraints; I will not elaborate on them here. In this article I focus on the four main issues above and explore the way forward.

The future of artificial intelligence

In view of the four major problems listed above that artificial intelligence urgently needs to solve, I will briefly describe the main technical directions we need to pay attention to for future development:
  • First, we need to be facilitators of the "3rd Wave AI", preparing our companies and society for the coming AI revolution. These changes will drive our data management, artificial intelligence algorithms, and hardware accelerators to flourish. We need to actively develop new models of collaboration with clients and research entities driving the "third wave of AI." So, what is the "third wave of artificial intelligence"?
    • From an algorithmic point of view, we summarise it as the concept of Contextual Adaptation. Specifically, we need to pay more attention to the following algorithm development trends:
      • We need to establish reliable decision-making capabilities in AI systems, so that people can understand or analyse why a "black box" machine learning model makes a given inference or identification decision. Specifically, three problems need to be solved for safe and reliable artificial intelligence: the boundary problem, the backtracking problem and the verifiability problem. We call such a capability "AI explainability" [5].
      • We need AI systems that can train machine learning models with one example (One-Shot Learning [6]) or very few examples (Few-Shot Learning [7]). As mentioned above, data is relatively scarce in real industrial application scenarios. Effectively constructing and training machine learning models under extremely limited data is a hot research direction at present (see the short sketch after this list).
      • Compared with traditional, open-loop offline learning, online learning [20] is an emerging closed-loop direction: the machine learning model sends inference and recognition results to the user based on its current parameters and architecture, user feedback is collected and used to update the model, and the loop of continuously receiving information and iteratively updating is closed. In other words, machine learning models need to accept sequential data dynamically and update themselves to optimise performance.
      • Multi-Task Learning [21] is a learning method in which the training data contains samples from multiple different scenes, and the scene information is used during learning to improve the performance of the machine learning tasks. The scene adaptation methods in traditional transfer learning usually only realise bidirectional knowledge transfer between the original scene and the target scene, while multi-scene task learning encourages bidirectional knowledge transfer among many scenes.
      • Machine learning models can be trained on contextual information. As time passes and scenes shift, the AI system gradually learns to construct updated models autonomously [11]. Models derived from Contextual Learning [15] will be used to better perceive the world and help humans make inference decisions more intelligently.
      • With the rapid development of AI technology, knowledge representation and knowledge reasoning based on deep neural networks have received more and more attention, and scene knowledge graphs for different scenarios have appeared one after another [22]. As a semantic network, a scene knowledge graph depicts scene knowledge and provides the basis for inference and recognition tasks within the scene. As an application of knowledge reasoning, question answering systems based on knowledge graphs have made great progress.
      • Machine learning models derived from contextual learning can also help us better abstract our data and the world we need to perceive [16], thereby making our AI systems more generalisable and adaptable in solving all kinds of complex problems.
In conclusion, the advanced algorithms of the "third wave of artificial intelligence" can not only extract valuable information from the data in the environment (Perceive, Learn), but also create new meaning (Abstract), assist human planning and decision-making (Reason), and do so while meeting human needs (Integration) and concerns (Ethics, Security).
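
As flagged in the few-shot bullet above, here is a minimal sketch of one common few-shot approach: a nearest-class-centroid ("prototypical"-style) classifier. It illustrates the general idea only and is not Dell's method; the embeddings and sizes are arbitrary.

```python
# One common few-shot approach, sketched minimally (not Dell's method):
# with only a handful of labelled examples per class, classify a query by
# the nearest class centroid ("prototype") in some embedding space.
import numpy as np

def few_shot_classify(support_x, support_y, query_x):
    """support_x: (n, d) embeddings, support_y: (n,) labels, query_x: (d,)."""
    classes = np.unique(support_y)
    prototypes = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    distances = np.linalg.norm(prototypes - query_x, axis=1)
    return classes[np.argmin(distances)]

rng = np.random.default_rng(3)
support_x = rng.normal(size=(6, 32))             # 3 examples each for 2 classes
support_y = np.array([0, 0, 0, 1, 1, 1])
print(few_shot_classify(support_x, support_y, rng.normal(size=32)))
```
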
  • From a hardware perspective, accelerators built on Domain Specific Architectures (DSA) [12] enable third-wave AI algorithms to run anywhere in a hybrid ecosystem consisting of Edge, Core, and Cloud. Examples of such domain-specific accelerators include Nvidia's GPUs, Xilinx's FPGAs, Google's TPU, and artificial intelligence acceleration chips such as BrainChip's Akida Neural Processor, Graphcore's Intelligence Processing Unit (IPU), Cambricon's Machine Learning Unit (MLU) and more. These domain-specific accelerators will be integrated into more information devices, architectures, and ecosystems because they require less training data and can operate at lower power when needed. In response to this trend, the area we need to focus on is a unified heterogeneous architecture approach that lets information systems easily integrate and configure many different types of domain-specific hardware accelerators. For Dell Technologies, we can leverage Dell's vast global supply chain and sales network to attract domain-specific accelerator suppliers to adhere to the standard interfaces defined by Dell, achieving a unified heterogeneous architecture.
To sum up, the hardware of the "third wave of artificial intelligence" should not only be more powerful, but also smarter and more efficient.

In addition to the algorithm and hardware developments driving the "third wave of artificial intelligence" described above, another direction that requires more attention is artificial intelligence automation (AutoML) [12]. As mentioned above, the development of artificial intelligence is becoming more and more complex, and for ordinary users the skills threshold for using AI keeps rising. We urgently need to provide a complete set of information system architecture solutions that "make artificial intelligence simple".
  • We need to operate and manage AI workloads better, driving the simplification and optimisation of information system architectures. Within the entire software stack of AI applications, we need to define "Easy Buttons" for future AI workloads. Specifically, we have the following technical directions to focus on:
    • Develop simpler, easier-to-use common APIs (Application Programming Interfaces) for advanced artificial intelligence algorithm frameworks, so that information system architectures can integrate and use more advanced and complex algorithms.
    • For artificial intelligence algorithms, we need to provide adaptive selection and tuning strategies for machine learning model parameters: according to user needs, automatically select the most suitable algorithm and optimise its parameters for the best performance (see the short sketch after this list).
    • For the artificial intelligence data processing pipeline, we need to establish functions for process tracking, analysis and reuse, such as the MLOps practices described in [14]. Machine learning operations (MLOps) is the practice of creating new machine learning (ML) and deep learning (DL) models and deploying them into production through repeatable, automated workflows. When a new AI application problem arises, we can learn from an existing data processing pipeline and, after a little analysis and modification, reuse mature AI software and hardware solutions to meet the new need, reducing the waste of repeated development.
    • Once an AI system is deployed, the algorithm model still needs to evolve through self-updating, self-learning, and self-tuning. As inference scenarios and tasks change and algorithm accuracy decays, we use edge and cloud information system architectures to mobilise different computing resources to update, optimise and redeploy our models. In updating and deploying AI models, we also use the latest techniques such as model compression [17], data distillation [19], and knowledge distillation [18] to make full use of limited computing resources.
    • We need to consider integrating the AI automation services above into multi-cloud and hybrid cloud environments, in line with Data Management and Orchestration, to create a complete and intelligent AI service platform.
In conclusion, the automation of artificial intelligence should not only make AI easier to use, but also more adaptable and more capable of self-learning and growth.
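
And here is the sketch referenced in the parameter-tuning bullet above: a toy hyperparameter search, with scikit-learn's GridSearchCV standing in for the adaptive selection and tuning a real AutoML pipeline would automate. The estimator, grid and data are placeholders of mine, not Dell's pipeline.

```python
# Minimal sketch (assumptions mine) of the "automatically select and tune"
# idea above: try a small grid of model hyperparameters and keep the best
# by cross-validation. A real AutoML/MLOps pipeline would automate far more.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},    # regularisation strengths to try
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```
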

Technological innovation at Dell Technologies never stops. Our mission is to advance human progress, drive technological innovation, and become the most important technology company in the data era. Our AI solutions will help our clients free themselves from today's complex processes of large-scale data processing and analysis, and deliver insights. The Research Office of our Office of the CTO is also actively exploring the AI development directions described above. We are committed to helping our clients make better use of state-of-the-art information system architectures, understand their data efficiently and in a timely manner, and bring greater value to their commercial business innovations.
Acknowledgements: I would like to thank the artificial intelligence research team of the Dell Technologies China Research Institute (Li Sanping, Ni Jiacheng, Chen Qiang, Wang Zijia, Yang Wenbin, and others) for their excellent research in the field of artificial intelligence. Their work strongly supports the content of this article.
 
Last edited:
  • Like
  • Fire
Reactions: 31 users

Slade

Top 20
Well...would appear the Director of Dell Tech in China is across us (with other Co's to be fair haha)....article / post from the Dell verified account.

The bold is all theirs, but we're down in the hardware commentary & I highlighted it in red ;)

Edit: Thought I would also add their comment at the end of the hardware section re accelerators and what they want to achieve.

You'll need to translate if you visit the page, but below is obviously already done.






View attachment 2703


The future of artificial intelligence​

Dell Technologies
Dell Technologies
Verified account

3 people liked this article
content
put away
Problems that need to be solved by artificial intelligence
The future of artificial intelligence

Author: Dr. Jia Zhen, Director of Dell Technologies China Research Institute​

Artificial Intelligence (AI) is already ubiquitous. In the era of digital transformation, when people turn to more frequent digital interactions, the massive data from the digital virtual world seamlessly merges with the real physical world. As the amount, variety, and speed of data generation increases, AI represents an important critical step in extracting insights from massive amounts of data and advancing other emerging technologies .
AI algorithms and hardware-accelerated systems are improving business decision-making efficiency, improving business processes, and delivering smarter, more real-time data analysis results at scale. AI is fundamentally changing the way businesses operate, redefining the way people work, and transforming industries on a global scale. In the era of digital transformation, our society and enterprises need to make more use of intelligent information system architecture, application software and algorithms, and data-first strategies to fully realize the business potential of enterprises.
v2-ebeb82c6a2550dda7c378ba3d5f9c955_720w.jpg

Here I also briefly list some key figures reflecting the booming development of artificial intelligence: 62% of global enterprises have invested in artificial intelligence to some extent[1]; 53% of global data and analytics decision makers say they are planning to implement some artificial intelligence in the form of [2]; by 2022, 75% of enterprises will embed intelligent automation into technology and process development [3].
As mentioned above, artificial intelligence has made great progress in recent years, but we still have many problems that need to be solved urgently. In this article, I will first analyze the core problems that remain to be solved in the current development of artificial intelligence, and then propose some ideas for our key development directions in the field of artificial intelligence.
v2-b5503882e18cf0780509a3a01df84398_720w.jpg

Problems that need to be solved by artificial intelligence

  • Algorithmic complexity of artificial intelligence: The mainstream algorithms of artificial intelligence today are based on the Deep Neural Network [13] of Machine Learning. With the development of artificial intelligence technology, the structure of deep neural network is becoming more and more complex, and there are more and more hyperparameters. Sophisticated deep neural networks improve the accuracy of machine learning models, but configuring and debugging such complex networks can be prohibitive for ordinary users of artificial intelligence. The ease of development, debugging, and deployment of deep neural network algorithms and applications is also becoming more and more urgent .
  • Data scarcity of artificial intelligence: The efficient reasoning and recognition of deep neural networks nowadays mainly depends on the support of a large amount of training data. Open databases such as ImageNet [9] provide thousands of images, videos and corresponding annotation information. Through the training of a large amount of data, the machine learning model can almost cover the changes of various reasoning and recognition scenarios. However, if the amount of data is not enough or the type is not comprehensive enough, the performance of the machine learning model is bound to be limited. In the application of artificial intelligence in industry, the problem of data shortage is particularly prominent. Different from traditional reasoning and recognition applications for ordinary consumers, artificial intelligence applications in the industry are often unique business-related problems (such as: intelligent manufacturing, remote system debugging and maintenance, etc.), corresponding data (especially negative samples) Very few. In the case of shortage of training data, how to improve the algorithm model of artificial intelligence so that it can still work efficiently under specific scenarios and limited data is also a new and urgent task .
  • High computational consumption of artificial intelligence: As mentioned in the previous two aspects, the complexity of deep neural networks and the diversity of big data will lead to the high consumption of computing resources in current artificial intelligence applications. At present, the training of more advanced machine learning models, such as GPT-3, takes several months to utilize high-performance clusters [10]. Ordinary machine learning models can take hours or even days to train on traditional x86 high-performance servers if the amount of data is large. At the same time, when the trained model performs inference and recognition tasks, due to the complex model structure, many hyperparameters, and complex calculations, the requirements for computing resources of terminal devices that process data are also higher. For example, lightweight IoT devices cannot run complex machine learning inference and recognition models, or for smart terminal devices, such as smartphones, running complex machine learning models will lead to large battery consumption. How to better and fully optimize computing resources to support machine learning training and inference recognition is also another new urgent task.
  • Interpretability of artificial intelligence: Artificial intelligence technology using deep neural networks, due to the complexity of neural networks, many times people treat them as a "black box". The user inputs the data that needs to be recognized by reasoning, and the deep neural network obtains the result of reasoning and recognition through a series of "complex and unknown" mathematical processing. However, we cannot intuitively analyze why the input data will get the corresponding results through the complex neural network. In some key AI areas, such as autonomous driving, the interpretability of AI decisions is critical . Why does an automated driving system make such a driving decision in some critical safety-related scenarios? Why is the reasoning and recognition of road conditions sometimes wrong? These inference and identification conclusions from the "black box" must be interpretable and must be traceable. Only when artificial intelligence can be explained can we find the basis for decision-making and judgment and find out the reason for the error of reasoning and identification. "From effect to cause", we can improve the performance of deep neural network, so that it can provide artificial intelligence applications more efficiently, safely and reliably in different occasions .
Of course, in addition to the above-mentioned four major problems that AI needs to solve urgently, AI also has some other limitations, such as the privacy of AI, the generality of AI, the scarcity of talents for AI development, and the lack of AI. Legal constraints, etc., I will not repeat them here. In this article, I will focus on the four main issues listed above and explore the way forward .
v2-eb32811d4385194fecba3f58ee470757_720w.jpg

The future of artificial intelligence

In view of the four major problems that artificial intelligence needs to solve urgently listed above, I will briefly describe the main technical directions that we need to pay attention to for future development:
  • First, we need to be facilitators of the “3rd Wave AI”, preparing our corporate society for the coming AI revolution. These changes will drive our data management, artificial intelligence algorithms, and hardware accelerators to flourish. We need to actively develop new models of collaboration with clients and research entities driving the “third wave of AI.” So, what is the "third wave of artificial intelligence"?
    • From an algorithmic point of view, we summarize it as the concept of Contextual Adaptation. Specifically, we need to pay more attention to the following algorithm development trends:
      • We need to establish reliable decision-making capabilities in artificial intelligence systems, so that people can understand or analyze why the "black box" machine learning algorithm model makes inference and identification decisions. Specifically, there are three problems that need to be solved for safe and reliable artificial intelligence: boundary problem, backtracking problem and verifiable problem . We call such a capability “AI explainability” [5].
      • How to build AI systems that can train machine learning models with one (One-Shot Learning [6]) or very few (Few-Shot Learning [7]) examples. As mentioned above, in real industrial application scenarios, data is relatively scarce. Effectively constructing and training machine learning models under extremely limited data is a hot research direction at present .
      • Compared with the traditional and open-loop offline learning (Offline Learning), online learning (Online Learning) [20], as an emerging direction, is a closed-loop system: the machine learning model sends the inference and recognition results to the user based on the current parameters and architecture, User feedback is collected and used to update the optimization model, thus completing an optimization process that continuously receives information and updates iteratively. In other words, machine learning models need to dynamically accept sequential data and update themselves to optimize performance .
      • Multi-Task Learning [21] refers to a learning method in which the training data contains samples from multiple different scenes, and the scene information is used to improve the performance of machine learning tasks during the learning process. The scene adaptation method in traditional transfer learning usually only realizes the bidirectional knowledge transfer between the original scene and the target scene, while multi-scene task learning encourages the bidirectional knowledge transfer between multiple scenes .
      • The machine learning model is trained based on the contextual information of the context. With the passage of time and the migration of the scene, the artificial intelligence system will gradually learn the method of constructing the updated model autonomously [11]. Machine learning models derived from contextual learning (Contextual Learning [15]) will be used to better perceive the world and help humans make inference decisions more intelligently .
      • With the rapid development of artificial intelligence technology, knowledge representation and knowledge reasoning based on deep neural networks have received more and more attention, and scene knowledge graphs of different scenarios have appeared one after another [22]. As a semantic network, the scene knowledge graph depicts scene knowledge and provides the basis for inference and recognition tasks within the scene. As an application of knowledge reasoning, the question answering system based on knowledge graph has made great progress .
      • Machine learning models derived from contextual learning can also help us better abstract our data and the world we need to perceive [16], thereby making our artificial intelligence systems more generalized and adaptable Solve all kinds of complex problems.
In conclusion, the advanced algorithms of the "third wave of artificial intelligence" can not only perceive the environment (Perceive) and extract valuable information from data (Learn), but also create new meaning (Abstract) and assist human planning and decision-making (Reason), all while meeting human needs (Integration) and addressing human concerns (Ethics, Security).
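To make the "AI explainability" point a little more concrete, here is a minimal sketch (my own illustration, not from the Dell article) of one simple, model-agnostic technique, permutation feature importance, using scikit-learn; the dataset and model are placeholders.

```python
# Minimal sketch: model-agnostic explainability via permutation importance.
# Illustrative only -- the dataset and model are stand-ins, not Dell's.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop means the "black box" relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```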
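The one-shot/few-shot idea can likewise be illustrated with a tiny nearest-class-prototype classifier: each class is represented by the mean of its handful of examples, and a new sample is assigned to the closest prototype. This is a simplified cousin of prototypical networks, using raw feature vectors instead of a learned embedding, and the numbers below are made up.

```python
import numpy as np

def fit_prototypes(support_x: np.ndarray, support_y: np.ndarray) -> dict:
    """Compute one prototype (mean vector) per class from a few labelled examples."""
    return {c: support_x[support_y == c].mean(axis=0) for c in np.unique(support_y)}

def predict(prototypes: dict, queries: np.ndarray) -> np.ndarray:
    """Assign each query to the class whose prototype is nearest (Euclidean distance)."""
    classes = np.array(list(prototypes))
    protos = np.stack([prototypes[c] for c in classes])
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 3-shot episode with made-up 2-D feature vectors.
support_x = np.array([[0.9, 1.1], [1.0, 0.8], [1.2, 1.0],        # class 0
                      [-1.0, -0.9], [-0.8, -1.2], [-1.1, -1.0]])  # class 1
support_y = np.array([0, 0, 0, 1, 1, 1])
queries = np.array([[1.05, 0.95], [-0.95, -1.05]])

protos = fit_prototypes(support_x, support_y)
print(predict(protos, queries))  # expected: [0 1]
```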
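And the closed-loop online-learning idea maps naturally onto incremental training APIs. A minimal sketch with scikit-learn's SGDClassifier (my choice of library, not the article's): the model is updated batch by batch as new labelled feedback arrives, instead of being retrained offline.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # all possible labels must be declared for partial_fit

# Simulate a feedback stream: each round we receive a small batch of
# (features, user-confirmed label) pairs and fold it into the model.
for step in range(50):
    X_batch = rng.normal(size=(16, 4))
    y_batch = (X_batch[:, 0] + 0.1 * rng.normal(size=16) > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)

X_new = rng.normal(size=(3, 4))
print(model.predict(X_new))  # predictions from the continuously updated model
```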
  • From a hardware perspective, Domain-Specific Architecture (DSA) accelerators [12] enable third-wave AI algorithms to run anywhere in a hybrid ecosystem spanning Edge, Core, and Cloud. Examples of domain-specific accelerators include Nvidia's GPUs, Xilinx's FPGAs, Google's TPU, and AI acceleration chips such as BrainChip's Akida Neural Processor, Graphcore's Intelligence Processing Unit (IPU), Cambricon's Machine Learning Unit (MLU), and more. Because they require less training data and can operate at lower power when needed, these domain-specific accelerators will be integrated into ever more information devices, architectures, and ecosystems. In response to this trend, the area we need to focus on is a unified heterogeneous architecture approach that lets information systems easily integrate and configure many different types of domain-specific hardware accelerators (a sketch of such an interface appears after the summary below). For Dell Technologies, we can leverage Dell's vast global supply chain and sales network to encourage domain-specific accelerator suppliers to adhere to standard interfaces defined by Dell, achieving a unified heterogeneous architecture.
To sum up, the hardware of the "third wave of artificial intelligence" should not only be more powerful (Powerful), but also smarter (Strategic) and more efficient (Efficient).
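One way to read "unified heterogeneous architecture" is as a thin common interface that different DSA back ends (GPU, FPGA, TPU, an Akida-style neuromorphic processor, and so on) plug into. The sketch below is purely hypothetical: the class and method names are mine, not a Dell or vendor API, but they show the shape of the idea, where application code is written once against the interface and dispatched to whichever accelerator is available.

```python
from abc import ABC, abstractmethod

class Accelerator(ABC):
    """Hypothetical common interface a DSA back end would implement."""

    @abstractmethod
    def load_model(self, model_path: str) -> None: ...

    @abstractmethod
    def infer(self, batch: list) -> list: ...

class GpuBackend(Accelerator):
    def load_model(self, model_path: str) -> None:
        print(f"[gpu] compiling {model_path} to GPU kernels")

    def infer(self, batch: list) -> list:
        return [f"gpu:{x}" for x in batch]

class NeuromorphicBackend(Accelerator):
    def load_model(self, model_path: str) -> None:
        print(f"[snn] converting {model_path} to a spiking network")

    def infer(self, batch: list) -> list:
        return [f"snn:{x}" for x in batch]

def run_anywhere(backend: Accelerator, model_path: str, batch: list) -> list:
    """Application code stays identical regardless of which accelerator is plugged in."""
    backend.load_model(model_path)
    return backend.infer(batch)

print(run_anywhere(GpuBackend(), "resnet50.onnx", [1, 2]))
print(run_anywhere(NeuromorphicBackend(), "resnet50.onnx", [1, 2]))
```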

In addition to the algorithm and hardware developments driving the "third wave of artificial intelligence" described above, another direction that deserves more attention is AI automation (AutoML) [12]. As mentioned above, artificial intelligence development is becoming more and more complex, and for ordinary users the skills threshold for using AI keeps rising. We urgently need to provide a complete information system architecture solution that "makes artificial intelligence simple".
  • We need to better operate and manage AI workloads, driving the simplification and optimization of information system architectures. Within the entire software stack of AI applications, we need to define "Easy Buttons" for future AI workloads. Specifically, the following technical directions deserve attention:
    • Develop simpler, easier-to-use common APIs (Application Programming Interfaces) for advanced AI algorithm frameworks, so that the information system architecture can integrate and use more advanced and complex algorithms.
    • For AI algorithms, we need to provide adaptive selection and tuning strategies for machine learning models and their parameters: according to the user's needs, automatically select the most suitable algorithm and tune its parameters to achieve the best performance (a tuning sketch appears after this list).
    • For the AI data-processing pipeline, we need to establish capabilities for tracking, analyzing, and reusing workflows, such as the MLOps (Machine Learning Operations) practices described in [14]. MLOps is the practice of creating new machine learning (ML) and deep learning (DL) models and deploying them into production through repeatable, automated workflows. When a new AI application problem arises, we can learn from existing pipelines and, after a little analysis and modification, reuse mature AI software and hardware solutions to meet the new need, reducing the resources wasted on repeated development.
    • Once our artificial intelligence system is deployed, its models still need to be able to self-update, self-learn, and self-tune. As inference and recognition scenarios and tasks change and model accuracy decays, we use edge and cloud information system architectures to mobilize different computing resources to update, optimize, and redeploy our models. In the process of updating and deploying AI models, we also use the latest techniques such as model compression [17], data distillation [19], and knowledge distillation [18] to make full use of limited computing resources (a distillation sketch appears after this list).
    • We need to consider integrating the above AI automation services in multi-cloud and hybrid-cloud environments, together with data management and orchestration, to create a complete and intelligent AI service platform.
In conclusion, AI automation should not only be easier to use (Easy to Use), but also more flexible (Adapt) and more capable of self-learning and growth (Evolve).
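The "adaptive selection and tuning" bullet above is essentially automated model selection plus hyperparameter search. A minimal sketch using scikit-learn's RandomizedSearchCV, with a made-up search space and dataset, shows one common way to automate the tuning step:

```python
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

# Randomly sample candidate hyperparameters, cross-validate each candidate,
# and keep the configuration with the best score.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": [None, 8, 16, 32],
    },
    n_iter=10,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```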
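Knowledge distillation [18], mentioned in the self-updating bullet, trains a small "student" model to match the softened outputs of a larger "teacher" so the student can run on limited edge resources. The PyTorch sketch below, with made-up model sizes and random data, shows only the distillation loss and a single update step, which is the heart of the technique:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend the soft-target KL term (teacher knowledge) with ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy example: a larger teacher and a smaller student on 10-class fake data.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(64, 32)
labels = torch.randint(0, 10, (64,))
with torch.no_grad():
    t_logits = teacher(x)  # teacher is frozen; only the student is trained

loss = distillation_loss(student(x), t_logits, labels)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```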

Technological innovation at Dell Technologies never stops. Our mission is to drive the progress of human society through technological innovation and to be the most essential technology company in the data era. Our AI solutions will help our customers free themselves from today's complex processes of large-scale data processing, analysis, and insight. The Research Office within our Office of the CTO is also actively exploring the AI development directions described above. We are committed to helping our customers make better use of state-of-the-art information system architectures, understand their data efficiently and in a timely manner, and bring greater value to their business innovations.
Acknowledgments: I would like to thank the artificial intelligence research team of the Dell Technologies China Research Institute (Li Sanping, Ni Jiacheng, Chen Qiang, Wang Zijia, Yang Wenbin, and others) for their excellent research in the field of artificial intelligence; their work strongly supports the content of this article.
Full moon, Thank you for posting and especially for highlighting key points in all of the articles.
 
  • Like
Reactions: 7 users
Full moon, Thank you for posting and especially for highlighting key points in all of the articles.
Welcome Slade.

Figured it helps readers get to the important bit quicker :)
 
  • Like
  • Fire
Reactions: 13 users

Diogenese

Top 20
View attachment 2633
Hi FMF
I am going to steal this and take it to the main thread and ask the 1,000 Eyes if anyone knows who it is that has 'implemented biometric authentication in mobile devices, cars, computers and beyond' using Brainchip's AKIDA.
This is a great find and true to form you have generously shared it here. Many thanks.
My opinion only DYOR
FF

AKIDA BALLISTA
Well the NKVD has not missed us (Turchin):

https://www.bing.com/search?q=turch...57j69i64.4759j0j1&pglt=43&FORM=ANNTA1&PC=DCTE

Assessing the future plausibility of catastrophically dangerous AI

Alexey Turchin

Science for Life Extension Foundation, Moscow

Prospekt Mira 124-15, Moscow, Russia, 129164.

alexeiturchin@gmail.com


5.1 AI-related hardware progress
... (end of section 5.1)
Neuromorphic chips epoch: from 2019? Neuromorphic chips promise to solve the “von Neumann” bottleneck, that is, latency because of the need to download data from memory. The most computationally effective approach seems to be spiking neural net chips. The main problem is how to convert existing neural net programs into spiking neural net hardware. For example, the Akida chip “packs 1.2 million neurons and 10B synapses in an 11-layer SNN along with a RISC processor to work its magic – which should be up to 1,400 frames per second per watt. Those are impressive numbers – particularly for a $10 chip” (Morris, 2018) and they could be connected up to 1000 chips.

I haven't read the whole thing - no pictures!
 
  • Like
  • Fire
Reactions: 10 users