BRN Discussion Ongoing

Diogenese

Top 20
An interesting read.


It's great to be a shareholder 🏖

There's no possum up that back-propagation tree -

Hinton says he always knew the deep learning “revolution” was coming.

“A bunch of us were convinced this had to be the future [of artificial intelligence],” said Hinton, whose 1986 paper popularized the backpropagation algorithm for training multilayer neural networks.

“We managed to show that what we had believed all along was correct.”

LeCun, who pioneered the use of backpropagation and convolutional neural networks in 1989, agrees. “I had very little doubt that eventually, techniques similar to the ones we had developed in the 80s and 90s” would be adopted, he said.


https://brainchip.com/brainchip-announces-the-akida-architecture-a-neuromorphic-system-on-chip/
BrainChip Announces the Akida™ Architecture, a Neuromorphic System-on-Chip - BrainChip

The Akida NSoC uses a pure CMOS logic process, ensuring high yields and low cost. Spiking neural networks (SNNs) are inherently lower power than traditional convolutional neural networks (CNNs), as they replace the math-intensive convolutions and back-propagation training methods with biologically inspired neuron functions and feed-forward training methodologies. BrainChip’s research has determined the optimal neuron model and training methods, bringing unprecedented efficiency and accuracy. Each Akida NSoC has effectively 1.2 million neurons and 10 billion synapses, representing 100 times better efficiency than neuromorphic test chips from Intel and IBM. Comparisons to leading CNN accelerator devices show similar performance gains of an order of magnitude better images/second/watt running industry standard benchmarks such as CIFAR-10 with comparable accuracy.

@Fact Finder was talking about "them" using 8-bit, 16-bit, 32-bit maths earlier, pointing out that Akida only needs 1 to 4 bits to achieve comparable accuracy, with far less power and in much less time. Akida brings together the best of digital (precision) with an approximation of the best of analog (spikes - a single bit being a direct analog of a spike, 2 or 4 bits being a more accurate hybrid digital/analog spike).
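For anyone curious what dropping from 32-bit maths down to 1-4 bits looks like, here is a toy uniform-quantization sketch in plain Python. This is illustrative only, not Akida's actual quantization scheme:

```python
def quantize(values, bits):
    """Uniformly quantize values in [-1, 1] to the given bit width."""
    levels = 2 ** bits - 1                       # 15 codes span at 4 bits, 1 at 1 bit
    out = []
    for x in values:
        x = max(-1.0, min(1.0, x))               # clip to [-1, 1]
        code = round((x + 1.0) / 2.0 * levels)   # integer code in 0..levels
        out.append(code / levels * 2.0 - 1.0)    # map the code back to [-1, 1]
    return out

weights = [0.8, -0.3, 0.05, -0.9]
print(quantize(weights, 4))  # small rounding error per value
print(quantize(weights, 1))  # everything collapses to -1.0 or +1.0, spike-like
```

At 1 bit every value degenerates to a sign, which is exactly the "a single bit is a spike" picture; 2-4 bits buy back graded precision.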

It was the insight of Simon Thorpe's group in recognizing that the strongest spikes are detected first by the eye that led to N-of-M coding, which allows the discarding of the M-N spikes that arrive after the first N spikes, and consequently reduces the number of spikes to be processed without significant loss of accuracy. Serendipitously, or by dint of diligent research, PvdM was the only person on the planet who had devised a digital circuit which could utilize this phenomenon to advantage. N-of-M - the source of the sauce ... and let's not forget STDP.
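For the curious, the N-of-M idea can be sketched in a few lines of Python. This is a toy model that assumes spike strength stands in for arrival order, as in Thorpe's rank-order coding:

```python
def n_of_m(spikes, n):
    """Keep only the first n of m spikes; under rank-order coding the
    strongest spikes arrive first, so sorting by strength stands in
    for arrival order in this toy model."""
    ordered = sorted(spikes, key=lambda s: s[1], reverse=True)
    return ordered[:n]  # the remaining m - n spikes are simply discarded

spikes = [("a", 0.9), ("b", 0.2), ("c", 0.7), ("d", 0.1), ("e", 0.5)]
print(n_of_m(spikes, 2))  # → [('a', 0.9), ('c', 0.7)]
```

Three of the five spikes never get processed at all, which is where the power saving comes from.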
 
Reactions: 46 users

cosors

👀
Strange, I was just shown 1494 pages here, and then suddenly it was only 1492. I am not mistaken. How can two pages disappear?
 
Reactions: 4 users
Strange, I was just shown 1494 pages here, and then suddenly it was only 1492. I am not mistaken. How can two pages disappear?
Shrooms
 
Reactions: 19 users

Sirod69

bavarian girl ;-)

Amir Sherman​

Founder, Global Director Business Development, Country Manager - TinyML, Technology Director & Embedded-SoMs Specialist

Happy to share a special presentation I am working on to showcase the differences between a GP MCU based on Cortex-Mx or an MPU based on Cortex-Ax with/without an AI accelerator, neural engine IP, neuromorphic solutions and ML accelerators - what is best to use for your application, utilizing the Edge Impulse platform.
Next Month at
1663328493078.png
 
Reactions: 18 users

Learning

Learning to the Top 🕵‍♂️
Many thanks Dio,

When I read the article and posted it, I was intending to say "They forgot to add PvdM and AM to the article".
But I didn't, as I would have been commenting on something above my pay grade.

It's amazing, and a privilege, that we shareholders have such knowledgeable individuals sharing their knowledge for me to learn from.

Learning.
Learning everyday.
 
Reactions: 20 users

Learning

Learning to the Top 🕵‍♂️
Strange, I was just shown 1494 pages here, and then suddenly it was only 1492. I am not mistaken. How can two pages disappear?

Mine only shows 1493.

Screenshot_20220916_222817_Chrome.jpg


Learning.
 
Reactions: 3 users
Just an update to last Tues chart.

Snuck through the 0.92...not totally convincing yet....then smacked that indicated 1.00 & a small rejection.

Hopefully a bit of a regroup this week, and a force & hold through the 1.00 would be good.

View attachment 16408

Just a diff look at a pretty simple visual of key areas.

View attachment 16409
Same charts as prev post.

We tried the 1.00 again after my prev post but sadly got rejected again for the moment.

Has come back to that bottom trend line and needs hold and move up from here.

I am liking the tightening of the MAs in the 2nd chart, with the spread of the 13, 34, 50, 100 & 200 MAs at just 0.04.

Added a couple of circles to highlight where we had similar convergences this year.

Not only shorters at play, but I suspect some of the big end also, in the lead-up to the rebalance.

Added another chart (last pic) with auto fib based on each prev quarter's values, circling where the June 17 spike & today sit on the chart.

Always find it so amazing to see our "fair, transparent, open" :poop: mkt end up with a conveniently similar SP level for the high volume rebalance after 3 mths trading.....not :cautious::mad:

Rant over 😐


1663334403613.png



1663334537926.png



1663335098217.png
 
Reactions: 33 users

Deadpool

Did someone say KFC
While I am an investor and sometimes trader in Brainchip Inc, and maintain a positive overall view of the prospects for success, I do not always agree with decisions made by Brainchip Inc, and when that is the case I invariably take my concerns to the company privately.

Fortunately this company unlike some I have invested in will engage in robust debate with me and on my preferred terms which is in writing. Often I am not persuaded by the company but to date though not persuaded they have convinced me that the basis of the decision they have made is their genuine attempt to comply with the law and look after the best interests of all shareholders.

On the issue of the non-publication of the last granted patent I had the following discussion, and as this issue is still gnawing at some, I have sought permission to publish the following email discussion I had with Brainchip Inc on the subject:

Fact Finder: I am very unhappy with this news regarding the patent...

Brainchip: I understand your frustration. There are good reasons for the non-announcements. It has nothing to do with any attempt to distance ourselves from the retail shareholders at all. Let me begin by saying that the decision to announce or not to announce falls into a very grey area of the guidance which I can share with you. Even after reading my justification below, you may not agree but hopefully you can see this from our perspective.

In this case, applying the Continuous Disclosure standards by asking, would this information make an informed investor buy or sell the stock, we believe the answer is that it is unlikely that it would. This decision was not taken lightly by any means. We had lengthy conversations with our two patent attorneys based in Perth at the research institute who advised us on this position. One of the very key differentiators of this patent from the others in the past and the ones that are pending and will be granted in the next few weeks is that this is a patent on an application of Akida and not on the very critical and core technology that makes Akida what it is. Protecting one possible use of Akida is not nearly as critical as protecting the “secret sauce”. And while this is one potential way of encrypting data, according to our best and brightest, “encryption is a dime a dozen” and there are many other ways to do it. For someone to infringe upon this patent, first they have to invent their own neuromorphic chip and then do voice encryption using two chips and an SNN.

As you know, the applications of Akida are endless and if we take the position that we are going to do an announcement for each and every one of them, it would be dilutive to the more valuable and critical announcements and we will have reverted back to being the very “noisy” BRN that drew some very unwanted attention and criticism from the ASX over the past few years. We are on their closely watched list and everything we do is scrutinized to death to ensure we are not pumping the stock so we err on the side of caution.


Fact Finder: It seemed to myself and others that this patent allowed for the use of the Hey Mercedes to securely interact with home and office as proposed by Mercedes Benz. If this is only in part correct then it is clearly significant.

Brainchip: A Patent on an application of Akida which in of itself is not a unique application (encryption) although it may be unique to do so on a neuromorphic chip. This does not protect our core Akida technology and although we felt the research was valuable and worth patenting, we do not foresee any immediate commercial applications or demand for this.

Fact Finder: Secondly having regard to the five criteria for the grant of any patent, as well as the time and cost involved unless the company does not concern itself with the proper allocation of scarce resources all patents by their very nature are important. Thirdly for the company, to as Tony Dawe suggested to another shareholder, decide this patent can be consigned to a footnote in the October 4C is to raise concerns about the true motives of management.

Brainchip: The next few patents that are issued will definitely be announced. When you see them, the difference will be very clear.


I have previously mentioned that in my communications and discussions at the AGM I formed the view that Brainchip Inc is very, very, very concerned about having any adverse entries made against them by the ASX. While Brainchip Inc has never stated why unlike every other company I have invested in they have such deep rooted concerns I am of the opinion that as it is the end goal to list on the Nasdaq that this is the reason. The Nasdaq looks at the character of the company and its history on other exchanges in determining how and when any new company will be permitted to list. Brainchip Inc wants an unblemished good name and in my opinion goes overboard to comply beyond strictly with the ASX Rules.

My opinion only DYOR
FF

AKIDA BALLISTA
Once again you have shown how much of a bloody legend you are @Fact Finder, a living national treasure, I reckon.

Regardless of whether my next born is male or female, it will be known as @Factfinder.

Some say they may have written Richard Roxburgh's character "Rake" based on your courtroom antics? Legendary
 
Reactions: 29 users

cosors

👀
Ok, for this article I will need a long time, but it seems to be a good overview. (It comes from Russia, but Altai is not discussed.) For whoever likes to read along:

"Neuromorphic artificial intelligence systems​

14 September 2022
Modern artificial intelligence (AI) systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the mammalian brain. In this article we discuss these limitations and ways to mitigate them. Next, we present an overview of currently available neuromorphic AI projects in which these limitations are overcome by bringing some brain features into the functioning and organization of computing systems (TrueNorth, Loihi, Tianjic, SpiNNaker, BrainScaleS, NeuronFlow, DYNAP, Akida, Mythic). Also, we present the principle of classifying neuromorphic AI systems by the brain features they use: connectionism, parallelism, asynchrony, impulse nature of information transfer, on-device-learning, local learning, sparsity, analog, and in-memory computing. In addition to reviewing new architectural approaches used by neuromorphic devices based on existing silicon microelectronics technologies, we also discuss the prospects for using a new memristor element base. Examples of recent advances in the use of memristors in neuromorphic applications are also given."
https://www.frontiersin.org/articles/10.3389/fnins.2022.959626/full


Presumably there will already be a nice summary here once I have digested ¼-⅓ of it 😅


Thanks @Diogenese for your great explanation in the last post!
 
Reactions: 9 users

Sirod69

bavarian girl ;-)
I just found a message from Argo AI, which we've been keeping an eye on for a long time, through a like from Rob Telson
1663341536816.png


“And taken together it sends a singular message to companies: Argo is open for business.”
This week we launched our complete line of autonomous vehicle products and services. Read more in TechCrunch!

Argo AI kicks into high gear to build a business out of AV tech​

Ford, VW-backed startup launches product line for commercial robotaxi and autonomous delivery services

1663341466486.png

....
Argo’s central product, as one might expect, is the self-driving system that combines software and hardware including its own lidar sensor, high-definition maps and a back-end cloud-based tool called Hub that supports the AVs when out in field — and when they return.
....
The company’s public introduction of its product line shows larger ambitions to land a variety of customers beyond its two big-name backers Ford and VW.

 
Reactions: 27 users

Sirod69

bavarian girl ;-)
Well, and what do we have to read here again:

"I hear far too many people say "Tesla solved full self driving", "You have to try it, wow".

Let's get this straight. Tesla is an amazing company with amazing products that made huge leaps in many fields, but unfortunately no, it did not solve full self driving.
Want examples?
Just watch 15 seconds of each of these videos of the latest FSD version, I clipped them for you. There are dozens more online.

Example - tries to go on a red light, does not recognize the lane:
https://lnkd.in/dx4QTCXg

Example - does not recognize speed bumps:
https://lnkd.in/d7Yv2udc

Example - does not handle construction:
https://lnkd.in/d5mcZMGM

Example - stops because of a rider but for no reason:
https://lnkd.in/dcsE53bm

Example - well...:
https://lnkd.in/dN2gBXyP

"Full self driving" is when all of the above cases, and a thousand others, will be solved to very high degree of validation.

I wasn't surprised to read about recent lawsuits against Tesla for alleged false advertising:
https://lnkd.in/dHf6kFNT

Tesla's website clearly states: "The currently enabled Autopilot, Enhanced Autopilot and Full Self-Driving features require active driver supervision and do not make the vehicle autonomous".

But...
* How many people read the fine print?
* Isn't the name "Full Self Driving" misleading?
* Did you know that currently 100,000 consumers are paying $15K to "test" Tesla's FSD? Are they all certified safety drivers?
* How many times did Musk boast how superior FSD is vs. human drivers, and that "no intervention would even be needed"? Here's an example from 2019 (!): https://lnkd.in/dQ6uPN5K

Companies must advertise their products for what they actually are and capable of, not just in the fine print, but everywhere. Especially when it comes to safety. I'm curious to see how these lawsuits end up and whether Musk starts using a different, more truthful tone."

 
Reactions: 28 users

Sirod69

bavarian girl ;-)
Something is also happening at our partner Prophesee:

The gang’s all here! All the Propheseers from around the world met in Aix-en-Provence for 3 days of energizing, inspiring and fun in-person sessions.

🧍 100 Propheseers
🌎 25 nationalities
💭 Thoughtful sessions about the future
🏃 A Koh-Lanta style challenge
🍸 Cocktails and catching-up

Thanks to all these inspiring humans for your dedication to our VISION to bring HUMAN-INSPIRED technologies to solve the most critical challenges of #MachineVision and #AI, making machines safer, smarter and more efficient.

Special thanks to Adrian Johnson and Anaisa Seneda for the fun and stimulating workshops.

1663344418364.png
 
Reactions: 28 users

GazDix

Regular
Tweet from the AI Hardware Summit
 

Attachment: Screenshot_20220917_044706_com.twitter.android.jpg
Reactions: 28 users

GazDix

Regular
Tweet from the AI Hardware Summit
Also thank you Fact Finder. Just to echo everyone else's sentiments on this forum, you are a bloody legend.
 
Reactions: 12 users

stockduck

Regular
Sorry if posted before ... find it interesting to read ... and their customer base ... o_O


from 12.09.2022:



...like Socionext, Megachip, Renesas.... and many others
 
Reactions: 16 users
D

Deleted member 118

Guest
Very accidental...lol
Weekends were never the same without it when I was younger, and yes, the gif below is probably as similar as I can find to tripping off ya face, but without the eyes everywhere

 
Reactions: 11 users
Is this something Akida could be helping with?

 
Reactions: 23 users
D

Deleted member 118

Guest

Future Challenges and Trends
Trust and explainability

AI algorithms, and especially deep neural networks, are often considered as black boxes, and as a consequence are not easily understandable for humans. The drawbacks of such algorithms can include a) any bias within the training data is potentially transferred to the algorithm and remains undetected, b) users may not trust their predictions, and c) that they may lack robustness in operational environments. Explainable AI is an active field of research that aims to provide insights into the internal decision-making process of machine learning algorithms. Using these insights, algorithms can be developed whose predictions are not only correct but right for the right reasons[1].
Re-learning

Considering the importance of edge AI, there is a need for commitment to consider the impact of edge AI throughout its lifecycle. To that extent, the developed algorithms must be kept up to date and performant on new data, with the ability to integrate external sources through re-training. In addition, to meet the requirements and defined metrics that indicate the training state of the AI system, the re-training must also consider any consequence it may have on other components or the system itself. The implication is that rather than having to spend time and resources on re-training from scratch, to incorporate slightly different insights, the re-training should focus on creating more generic models. The aim is to permit improvements in performance through a quick re-training of an edge AI model that has already been trained using previous data sets. Equally, the re-training of one model should not compromise the performance of other components within the system (or other systems within a system of systems). In simple terms, the re-training must enable improved performance through exploitation of new data and in parallel it must not negatively impact its surroundings. Additionally, changes in calibration (e.g. of sensors or actuators) should be permitted without the need to retrain the edge AI.
Security and adversarial attacks

In distributed learning, a communication overhead is introduced in order for the edge platforms and the system aggregator to transfer data during training and inference. When compared to data processing in large central data centres, data produced on resource-constrained end devices in a decentralized and distributed setting is particularly vulnerable to security threats and the necessary level of protection against such risks should be considered carefully for specific applications. Further research is required to increase the security, privacy, and robustness of edge AI by reducing the overhead, or by adopting novel approaches such as clustered federated learning or federated distillations.
Learning at the Edge

Training artificial neural networks at the Edge remains a challenge. Work has been done to optimize inference at the Edge by optimizing algorithms and accelerators for low precision, low memory footprint and feed-forward computations. However, an additional re-training phase of an artificial neural network can undo part of those optimizations as higher precision is needed to enable the iterative approach typically used and more storage is needed to keep track of the intermediate data required. Also, the frequent weight updates during training can pose additional challenges regarding energy efficiency as well as reliability. As such, neuromorphic-based architectures hold potential, as they allow on-line learning to be built in by modelling plasticity. Plenty of challenges remain to achieve this goal as it is difficult to make a single synapse and neuron device that allows the capture of a very wide range of time constants.
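The plasticity-based on-line learning mentioned above is often modelled with spike-timing-dependent plasticity (STDP). A minimal pair-based STDP update, with illustrative constants rather than any particular chip's rule, might look like:

```python
import math

def stdp_update(w, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: strengthen the synapse when the presynaptic
    spike precedes the postsynaptic one (dt = t_post - t_pre > 0, in ms),
    weaken it otherwise; the effect decays with the timing gap."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)   # potentiation
    else:
        w -= a_minus * math.exp(dt / tau)   # depression
    return max(0.0, min(1.0, w))            # keep the weight in [0, 1]

print(stdp_update(0.5, 5.0))   # pre before post: weight goes up
print(stdp_update(0.5, -5.0))  # post before pre: weight goes down
```

Because each update only uses two local spike times, the rule needs no global backpropagation pass, which is what makes it attractive for on-device learning.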
Integrating AI into the smallest devices

Recently a number of tools have been developed with the goal of implementing AI models which could fit the memory available in edge platforms. As an example, tinyML is about processing sensor data at extremely low power and, in many cases, at the outermost edge of the network. Therefore, tinyML applications could be deployed on the microcontroller in a sensor node to reduce the amount of data that the node forwards to the rest of the system. These integrated “tiny” machine learning applications require “full-stack” solutions (hardware, system, software, and applications) plus the machine learning architectures, techniques, and tools performing on-device analytics. Furthermore, a variety of sensing modalities (vision, audio, motion, environmental, human health monitoring, etc.) are used with extreme energy efficiency (typically in the single milliwatt, or lower, power range) to enable machine intelligence at the boundary of the physical and digital worlds. With the increase in dedicated hardware for machine learning, an important direction for future work is the development of compilers, such as Glow, and other tools that optimize neural network graphs for heterogeneous hardware or train and handle specialized technologies and algorithms.
Data centric AI

Data is the fundamental piece behind ML/AI. However, one of the major problems when developing AI solutions can be the lack of sufficient data to achieve the required performance in a specific application. In recent years several techniques have been considered to deal with this problem in the context of cloud-based solutions; for example, by using semi-supervised learning (to take advantage of the large amounts of unlabelled data generated by edge devices), by using data augmentation (via Generative Adversarial Networks (GANs) or transformations), or by transfer learning. These have become cutting-edge methods deployed to improve the overall performance in AI models. However, the adoption of these techniques in edge computing still needs to be thoroughly investigated. Moreover, edge systems need to interact with various types of IoT sensors, which produce a diversity of data such as image, text, sound, and motion. Edge analytics should be able to deal with those heterogeneous environments and adapt to be multimodal allowing learning from features collected over multiple modalities.
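As a concrete (hypothetical) illustration of the data-augmentation idea for edge sensor data, extra training copies of a signal can be generated with label-preserving transforms:

```python
import random

def augment(sample, n_copies=3, noise=0.05, seed=0):
    """Generate extra training copies of a 1-D signal using
    label-preserving transforms: small additive jitter, plus a
    time reversal half of the time."""
    rng = random.Random(seed)
    copies = []
    for _ in range(n_copies):
        copy = [x + rng.uniform(-noise, noise) for x in sample]
        if rng.random() < 0.5:
            copy.reverse()  # assumes the label is invariant to reversal
        copies.append(copy)
    return copies

signal = [0.1, 0.4, 0.9, 0.4, 0.1]
for c in augment(signal):
    print([round(x, 3) for x in c])
```

The transforms have to be chosen per modality; reversing a motion trace may be label-preserving while reversing an audio clip usually is not.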
Neuromorphic technologies

Neuromorphic engineering is a ground-breaking approach to the design of computing technology that draws inspiration from powerful and efficient biological neural processing systems. Neuromorphic devices are able to carry out sensing, processing, and control strategies with ultra-low power performance. Today, the neuromorphic community in Europe is leading the State-of-the-Art in this domain. The community includes an increasing number of labs that work on the theory, modelling, and implementation of neuromorphic computing systems using conventional VLSI technologies, emerging memristive devices, photonics, spin-based, and other nano-technological solutions. Extensive work is needed in terms of neuromorphic algorithms, emerging technologies, hardware design and neuromorphic applications to enable the uptake of this technology, and to match the needs of real-world applications that solve real-world tasks in industry, health-care, assistive systems, and consumer devices. It is important to note that “neuromorphic” is most commonly defined as the group of brain-inspired hardware and algorithms.
Parallel to the advancement in neuromorphic computing, the underlying computation of such technology gets increasingly complex and requires more and more parameters. This triggers further development of efficient neuromorphic hardware designs, e.g. the development of neuromorphic hardware that can tackle the well-known memory wall issues and limited power budget in order to make such technology applicable on edge devices. The emerging memory technologies provide additional benefits for neuromorphic solutions, especially memory technology that can allow us to perform computation directly in the memory cells themselves instead of having to load and store the parameters, inputs, and outputs into computation cores.
Such technology, coupled with the properties of neuromorphic computing, delivers many benefits. Firstly, DL and spiking neural network (SNN) parameters are often fixed and/or modified very seldom. This matches the capability of emerging non-volatile memories where write accesses are typically one or two orders slower than read accesses, as the number of memory writes required is lower. Secondly, most computations are matrix addition and multiplication. This operation can be mapped efficiently in memory arrays. Thirdly, inference of such neuromorphic networks can be optimized for low-bit precision and coarse quantization without sacrificing the quality of the network outputs. Some tasks, such as classification, are proven to be good enough even when networks are optimized to binary and/or ternary representation. This provides an excellent opportunity as the underlying operation can be simply replaced by AND/XOR logic. Fourthly, neural networks are robust to error. Thus, process variations on the emerging memory technologies do not limit their capability to compute and/or load/store in the networks. These benefits can be achieved by in-memory compute technology using emerging memory technologies.
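The binary-network point above is quite concrete: with weights and activations restricted to {-1, +1}, a dot product reduces to an XNOR plus a popcount. A minimal sketch (plain Python, illustrative only):

```python
def bin_dot(a_bits, b_bits, n):
    """Dot product of two n-element {-1, +1} vectors packed as integers
    (bit set = +1, bit clear = -1). XNOR marks the agreeing positions,
    the popcount counts them, and dot = 2 * matches - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    matches = bin(xnor).count("1")
    return 2 * matches - n

a = 0b1101  # reading LSB first: +1, -1, +1, +1
b = 0b0111  # reading LSB first: +1, +1, +1, -1
print(bin_dot(a, b, 4))  # → 0, same as (1*1) + (-1*1) + (1*1) + (1*-1)
```

On in-memory compute hardware these bitwise operations map directly onto the memory array, which is where the claimed efficiency comes from.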
Meta-learning

In most of today’s industrial applications of deep learning, models and related learning algorithms are tailor-made for very specific tasks[2][3]. This procedure can lead to accurate solutions of complex and multidimensional problems but it also has visible weaknesses[4][5]. Normally, these models require an enormous amount of data to be able to learn how to correctly solve problems. Labelled data can be costly as it may require the intervention of experts or not be available in real-time applications due to the lack of generation events. A question can therefore arise: in addition to having the correct formulation and the descriptive data for the problem, is it possible not only to try to solve it but also to learn how to solve it in the best way? Therefore: “is it possible to learn how to learn?” Precisely on this question, the branch of machine learning, called meta-learning (Meta-L), is based[7][8]. In Meta-L the optimization is performed on multiple learning examples that consider different learning objectives in a series of training steps. In base learning, an inner learning algorithm, given a dataset and a target, solves a specific task such as image recognition. During meta learning, an outer algorithm updates the internal algorithm so that the model learned during base learning also optimizes an outer objective, which tries, for example, to increase the inner algorithm’s robustness or its generalization performance[9].
Intelligent extraction of information, by addressing the problem from a general point of view can also lead to the ability of the inner algorithm to handle new situations quickly and with little data available with a robust approach[10]. Looking at the advantages of Meta-Learning and the possibility of using it together with Edge computing to increase its benefits, provides a good outline of how this branch of ML can soon find concrete uses in the most varied application scenarios[11].
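The inner/outer loop structure described above can be illustrated with a toy Reptile-style example (hypothetical setup: 1-D linear regression tasks that differ only in slope, not drawn from the cited papers):

```python
import random

def reptile(task_slopes, meta_steps=300, inner_steps=5, inner_lr=0.1, meta_lr=0.3):
    """Toy Reptile-style meta-learning for y = w * x regression tasks.

    Outer loop: pick a task, run the inner learner on it, then nudge the
    meta-initialization toward the task-adapted weight. The meta-init
    drifts toward a point from which every task is quickly reachable
    (here, roughly the mean of the task slopes).
    """
    rng = random.Random(0)
    w_meta = 0.0
    for _ in range(meta_steps):
        slope = rng.choice(task_slopes)         # sample a learning task
        w = w_meta                              # inner learner starts at meta-init
        for _ in range(inner_steps):
            x = rng.uniform(-1.0, 1.0)
            grad = 2.0 * (w - slope) * x * x    # d/dw of (w*x - slope*x)^2
            w -= inner_lr * grad                # base (inner) learning
        w_meta += meta_lr * (w - w_meta)        # outer (meta) update
    return w_meta

print(reptile([1.0, 3.0]))  # settles between the task slopes, near their mean
```

The inner loop solves one task; the outer loop only ever touches the initialization, which is the "learning to learn" part.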
Hybrid modelling

Data-based and knowledge-based modelling can be combined into hybrid modelling approaches. Some solutions can take advantage of a-priori knowledge in the form of physical equations describing known causal relationships in the behaviour of the systems or by using well known simulation techniques. Whereas dependencies not known a priori can be represented by many kinds of machine learning methods using big data based on observing the behaviour of the systems. The former type of situation can be seen as white box modelling as the internal states possess a physical meaning, while the latter is referred to as black box modelling, using just the input-output-behaviour, but not maintaining information on the internal physical states of the system. However, in many cases, a model is not purely physics-based nor purely data-driven, giving rise to grey box modelling methods that can be formulated[12]. The assignment of models to the scale varies within the literature: For instance, a transfer function can be derived from physical considerations (white), identified from measurement data with a well-educated guess of the model order (grey) or without (black).
Approaches for combining machine learning and simulation, by simulation-assisted machine learning or by machine-learning-assisted simulation and combinations, are described by von Rueden et al. in “Combining Machine Learning and Simulation to a Hybrid Modelling approach: Current and Future Directions”[13] and in “Informed machine learning – towards a taxonomy of explicit integration of knowledge into machine learning”[14]. An advantage of hybrid modelling is avoiding the necessity of learning a-priori the behaviour of systems from huge amounts of data, if they can be described by simulation techniques. Also, in the case of missing data, hybrid modelling is a possible approach[15].
A practical example of combining physical white-box modelling and machine learning to improve a model for the highly non-linear dynamic behaviour of a ship, described by a set of analytical equations has been recently investigated by Mei et al.[16]. Another example is hybrid modelling in process industries[17].
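To make the grey-box idea concrete, here is a deliberately tiny sketch: a white-box physics model plus a data-driven residual correction. The residual here is just a learned constant bias; a real system would fit an ML model to the residuals instead.

```python
def grey_box(physics, xs, ys):
    """Fit a constant residual correction on top of a white-box physics
    model: prediction = physics(x) + bias, where bias is the mean error
    of the physics model on the training data."""
    bias = sum(y - physics(x) for x, y in zip(xs, ys)) / len(xs)
    return lambda x: physics(x) + bias

# Hypothetical example: observations follow the physics plus an
# unmodelled constant offset of 0.3.
physics = lambda v: 0.5 * v * v              # white-box part
xs = [1.0, 2.0, 3.0]
ys = [0.5 * v * v + 0.3 for v in xs]         # synthetic observations
model = grey_box(physics, xs, ys)
print(round(model(4.0), 3))  # → 8.3
```

The physics carries the model outside the training range; the data-driven part only has to learn what the physics misses.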
Energy efficiency

Reducing energy consumption is a general goal, not only, but especially for smart systems providers to address the challenges of global warming and enable a higher degree of miniaturization of intelligent devices. For a long time power reduction has been a challenge in micro and nano electronics and also a target for all AI applications, regardless of whether data is processed in the cloud or at the edge. But at the edge, this target is especially important as applications usually have only limited power resources available. They often have to be battery powered or even use energy harvesting.
Special energy-efficient neural network architectures have been investigated[56]. Not only is the hardware crucial for low-power AI applications, but also the implemented methods and models have great influence on the energy consumption. This has been examined for the example of computer vision[18].
Moving away from traditional von Neumann processing solutions and using dedicated hardware[19] allows for additional power reduction. Even more can be achieved with neuromorphic architectures[20].
The “ultimate benchmark” in power consumption for artificial intelligence would be the “natural intelligence” in the form of the human brain, which has 86 bn. neurons[21] and approximately 10¹⁴–10¹⁵ synapses[22] with an energy consumption of less than 20 W, based on glucose available to the brain, or only 0.2 W, when counting the ATP usage instead of glucose[23]. Current GPU-based solutions with that complexity are far from this energy efficiency. There is obviously plenty of headroom for further development.
 
Reactions: 16 users