Breaking News - new BRN articles/research

Esq.111

Fascinatingly Intuitive.
Well... it would appear the Director of Dell Tech in China is across us (along with other companies, to be fair haha)... an article / post from the Dell verified account.

All the bold is theirs, but we're down in the hardware commentary & I've highlighted it in red ;)

Edit: Thought I would also add their comment at the end of the hardware section re accelerators and what they want to achieve.

You'll need to translate if you visit the page, but the text below has obviously been done already.

The future of artificial intelligence

Dell Technologies (verified account)

Author: Dr. Jia Zhen, Director of Dell Technologies China Research Institute

Artificial Intelligence (AI) is already ubiquitous. In the era of digital transformation, as people turn to ever more frequent digital interactions, the massive data of the digital virtual world merges seamlessly with the real physical world. As the volume, variety, and velocity of data generation increase, AI becomes a critical step in extracting insights from massive amounts of data and advancing other emerging technologies.
AI algorithms and hardware-accelerated systems are improving the efficiency of business decision-making, streamlining business processes, and delivering smarter, more timely data analysis at scale. AI is fundamentally changing the way businesses operate, redefining the way people work, and transforming industries on a global scale. In the era of digital transformation, our society and our enterprises need to make greater use of intelligent information system architectures, application software and algorithms, and data-first strategies to fully realize their business potential.

Here I also briefly list some key figures reflecting the booming development of artificial intelligence: 62% of global enterprises have invested in artificial intelligence to some extent [1]; 53% of global data and analytics decision-makers say they are planning to implement artificial intelligence in some form [2]; and by 2022, 75% of enterprises will embed intelligent automation into technology and process development [3].
As mentioned above, artificial intelligence has made great progress in recent years, but many problems still urgently need to be solved. In this article, I will first analyze the core problems that remain in the current development of artificial intelligence, and then propose some ideas for our key development directions in the field.

Problems that need to be solved by artificial intelligence

  • Algorithmic complexity of artificial intelligence: Today's mainstream AI algorithms are based on the deep neural networks (DNNs) of machine learning [13]. As AI technology develops, the structure of deep neural networks becomes more and more complex, with more and more hyperparameters. Sophisticated deep neural networks improve the accuracy of machine learning models, but configuring and debugging such complex networks can be prohibitive for ordinary users of artificial intelligence. Making deep neural network algorithms and applications easy to develop, debug, and deploy is an increasingly urgent need.
  • Data scarcity of artificial intelligence: The efficient inference and recognition of today's deep neural networks depends mainly on large amounts of training data. Open databases such as ImageNet [9] provide thousands of images, videos, and corresponding annotations. Trained on large volumes of data, a machine learning model can cover almost all the variations of its inference and recognition scenarios; if the data is insufficient in quantity or variety, however, the model's performance is bound to be limited. In industrial applications of AI, the data-shortage problem is particularly acute: unlike conventional inference and recognition applications for ordinary consumers, industrial AI applications often address unique, business-specific problems (such as intelligent manufacturing, or remote system debugging and maintenance) for which the corresponding data, especially negative samples, is very scarce. How to improve AI algorithm models so that they still work efficiently in specific scenarios with limited data is a new and urgent task (a minimal few-shot sketch follows this list).
  • High computational consumption of artificial intelligence: As the previous two points suggest, the complexity of deep neural networks and the diversity of big data make current AI applications heavy consumers of computing resources. Training a state-of-the-art machine learning model such as GPT-3 takes months on a high-performance cluster [10], and even an ordinary machine learning model can take hours or days to train on traditional x86 high-performance servers when the dataset is large. Likewise, when a trained model performs inference and recognition tasks, its complex structure, many hyperparameters, and heavy computation place high demands on the computing resources of the terminal devices that process the data. Lightweight IoT devices cannot run complex machine learning inference models at all, and on smart terminal devices such as smartphones, running complex models drains the battery. How to better and more fully optimize computing resources to support machine learning training and inference is another new and urgent task.
  • Interpretability of artificial intelligence: Because of the complexity of their neural networks, AI systems built on deep neural networks are often treated as a "black box": the user feeds in the data to be recognized, and the deep neural network produces its inference result through a series of "complex and unknown" mathematical operations. We cannot intuitively analyze why a given input yields a given result. In some critical AI areas, such as autonomous driving, the interpretability of AI decisions is essential. Why does an automated driving system make a particular decision in a safety-critical scenario? Why is its reading of the road conditions sometimes wrong? Conclusions drawn from the "black box" must be interpretable and traceable. Only when artificial intelligence can be explained can we find the basis of its decisions and the causes of its errors; reasoning "from effect to cause", we can then improve deep neural networks so that they serve AI applications more efficiently, safely, and reliably in different settings.
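As a minimal illustration of the few-shot idea above, consider a nearest-prototype sketch in the spirit of Prototypical Networks [7] (an illustration only, not a method proposed in this article): average the handful of labelled support examples per class into a prototype, then label each query by its nearest prototype. The random "embeddings" below stand in for the output of a real feature encoder.

```python
import numpy as np

def prototypes(support_x, support_y):
    """Average each class's few labelled support embeddings into one prototype."""
    classes = np.unique(support_y)
    return classes, np.stack([support_x[support_y == c].mean(axis=0) for c in classes])

def classify(query_x, classes, protos):
    """Label each query embedding with the class of its nearest prototype."""
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 5-way, 3-shot episode; random vectors stand in for real encoder outputs.
rng = np.random.default_rng(0)
support_y = np.repeat(np.arange(5), 3)
support_x = rng.normal(size=(15, 8)) + support_y[:, None]   # class c centred near c
query_x = rng.normal(size=(5, 8)) + np.arange(5)[:, None]

classes, protos = prototypes(support_x, support_y)
print(classify(query_x, classes, protos))                   # ideally [0 1 2 3 4]
```

With only three examples per class, the averaging step is the entire "training" phase, which is why this family of methods suits data-scarce industrial settings.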
Of course, beyond the four urgent problems above, AI has other limitations, such as the privacy of AI, the generality of AI, the scarcity of AI development talent, and the lack of legal constraints; I will not elaborate on them here. In this article, I will focus on the four main issues listed above and explore the way forward.

The future of artificial intelligence

In view of the four urgent problems listed above, I will briefly describe the main technical directions that we need to pay attention to for future development:
  • First, we need to be facilitators of the "3rd Wave of AI", preparing our corporate society for the coming AI revolution. These changes will drive our data management, artificial intelligence algorithms, and hardware accelerators to flourish, and we need to actively develop new models of collaboration with the clients and research entities driving the "third wave of AI". So, what is the "third wave of artificial intelligence"?
    • From an algorithmic point of view, we summarize it as Contextual Adaptation. Specifically, we need to pay more attention to the following algorithm development trends:
      • We need to establish reliable decision-making in artificial intelligence systems, so that people can understand or analyze why a "black box" machine learning model makes its inference and recognition decisions. Specifically, safe and reliable artificial intelligence must solve three problems: the boundary problem, the backtracking problem, and the verifiability problem. We call this capability "AI explainability" [5] (a model-agnostic attribution sketch follows this list).
      • We must learn how to build AI systems that can train machine learning models from one example (One-Shot Learning [6]) or very few examples (Few-Shot Learning [7]). As mentioned above, data is relatively scarce in real industrial application scenarios; effectively constructing and training machine learning models on extremely limited data is currently a hot research direction.
      • Compared with traditional, open-loop offline learning (Offline Learning), online learning (Online Learning) [20] is an emerging, closed-loop direction: the machine learning model sends inference and recognition results to the user based on its current parameters and architecture, and user feedback is collected to update and optimize the model, completing a loop that continuously receives information and iterates. In other words, the machine learning model must dynamically accept sequential data and update itself to optimize its performance (see the online-learning sketch after this section's conclusion).
      • Multi-Task Learning [21] is a learning method in which the training data contains samples from multiple different scenes, and that scene information is used during learning to improve the performance of machine learning tasks. The scene-adaptation methods of traditional transfer learning usually realize knowledge transfer only between the original scene and the target scene, while multi-scene task learning encourages bidirectional knowledge transfer among multiple scenes.
      • Machine learning models can be trained on the surrounding contextual information. With the passage of time and the migration of scenes, the artificial intelligence system will gradually learn to construct updated models autonomously [11]. Models derived from contextual learning (Contextual Learning [15]) will perceive the world better and help humans make inference decisions more intelligently.
      • With the rapid development of artificial intelligence technology, knowledge representation and knowledge reasoning based on deep neural networks have received more and more attention, and scene knowledge graphs for different scenarios have appeared one after another [22]. As a semantic network, a scene knowledge graph depicts scene knowledge and provides the basis for inference and recognition tasks within the scene. As an application of knowledge reasoning, question-answering systems based on knowledge graphs have made great progress.
      • Machine learning models derived from contextual learning can also help us better abstract our data and the world we need to perceive [16], thereby making our artificial intelligence systems more generalized and better able to adapt to all kinds of complex problems.
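To make the explainability point above a little more tangible, here is a small model-agnostic sketch (an illustration of the general technique, not a method from this article): permutation importance shuffles one input feature at a time and measures how much a "black box" model's held-out accuracy drops, revealing which features its decisions actually lean on.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A "black box" classifier: the explanation procedure only ever calls predict().
X, y = make_classification(n_samples=600, n_features=6, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
baseline = (model.predict(X_te) == y_te).mean()

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    Xp = X_te.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])            # destroy feature j's information
    drop = baseline - (model.predict(Xp) == y_te).mean()
    print(f"feature {j}: accuracy drop {drop:+.3f}")  # bigger drop = more influential
```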
In conclusion, the advanced algorithms of the "third wave of artificial intelligence" can not only extract valuable information (Learn) from the data in the environment (Perceive), but also create new meaning (Abstract), assist human planning and decision-making (Reason), and do so while meeting human needs (Integration) and concerns (Ethics, Security).
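And to ground the closed-loop online-learning trend listed above, the following minimal sketch (again illustrative, not from the article) shows a logistic-regression learner that predicts, receives the true label as feedback, and updates itself one example at a time with a single SGD step.

```python
import numpy as np

class OnlineLogistic:
    """Closed-loop learner: predict -> receive feedback -> update, one sample at a time."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return 1.0 / (1.0 + np.exp(-self.w @ x))      # probability of class 1

    def update(self, x, y):
        # One SGD step on the logistic loss, using the feedback label y.
        self.w -= self.lr * (self.predict(x) - y) * x

rng = np.random.default_rng(1)
model = OnlineLogistic(dim=4)
true_w = np.array([1.0, -2.0, 0.5, 3.0])
for _ in range(500):                                  # simulated stream of sequential data
    x = rng.normal(size=4)
    y = float(true_w @ x > 0)                         # feedback from the user/environment
    model.update(x, y)
print(np.round(model.w, 2))                           # should roughly align with true_w's direction
```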
  • From a hardware perspective, the accelerators of Domain Specific Architectures (DSA) [12] enable third-wave AI algorithms to run anywhere in a hybrid ecosystem spanning Edge, Core, and Cloud. Examples of domain-specific architecture accelerators include Nvidia's GPU, Xilinx's FPGA, Google's TPU, and artificial intelligence acceleration chips such as BrainChip's Akida Neural Processor, Graphcore's Intelligence Processing Unit (IPU), Cambricon's Machine Learning Unit (MLU), and more. By requiring less training data and being able to operate at lower power when needed, these domain-specific architecture accelerators will be integrated into more and more information devices, architectures, and ecosystems. In response to this trend, the area where we need to focus development is a unified heterogeneous architecture approach that enables information systems to easily integrate and configure many different types of domain-specific hardware accelerators. Dell Technologies can leverage its vast global supply chain and sales network to attract domain-specific accelerator suppliers to adhere to standard interfaces defined by Dell, achieving a unified heterogeneous architecture (illustrated in the sketch after the summary below).
To sum up, the hardware of the "third wave of artificial intelligence" should be not only more powerful (Powerful), but also smarter (Strategic) and more efficient (Efficient).
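The "standard interfaces" idea can be sketched in a few lines. Everything below is hypothetical (the Accelerator contract, the backend names, and the workload strings are placeholders, not any real Dell or BrainChip API), but it shows how heterogeneous accelerators become interchangeable once each one implements the same interface.

```python
from abc import ABC, abstractmethod

class Accelerator(ABC):
    """Hypothetical standard interface that every accelerator backend implements,
    so a system can mix GPUs, FPGAs, TPUs, and neuromorphic NPUs uniformly."""
    name: str

    @abstractmethod
    def supports(self, workload: str) -> bool: ...
    @abstractmethod
    def run(self, workload: str, data: bytes) -> bytes: ...

class GpuBackend(Accelerator):
    name = "gpu"
    def supports(self, workload):  return workload in {"training", "inference"}
    def run(self, workload, data): return b"gpu:" + data

class NeuromorphicBackend(Accelerator):
    name = "npu"
    def supports(self, workload):  return workload in {"inference", "on-chip-learning"}
    def run(self, workload, data): return b"npu:" + data

def dispatch(workload, data, backends):
    """Route a workload to the first registered backend that supports it."""
    for b in backends:
        if b.supports(workload):
            return b.run(workload, data)
    raise RuntimeError(f"no accelerator supports {workload!r}")

# On-chip learning falls through the GPU and lands on the neuromorphic backend.
print(dispatch("on-chip-learning", b"spikes", [GpuBackend(), NeuromorphicBackend()]))
```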

In addition to the algorithm and hardware developments driving the "third wave of artificial intelligence" described above, another direction that deserves more attention is artificial intelligence automation (AutoML) [12]. As mentioned above, the development of artificial intelligence is becoming more and more complex, and for ordinary users the professional-skills threshold for using artificial intelligence keeps rising. We urgently need to provide a complete information system architecture solution that "makes artificial intelligence simple".
  • We need to better operate and manage AI workloads, driving the simplification and optimization of information system architectures. Within the entire software stack of AI applications, we need to define "Easy Buttons" for future AI workloads. Specifically, we have the following technical directions to focus on:
    • Develop simpler, easier-to-use common APIs (Application Programming Interfaces) for advanced artificial intelligence algorithm frameworks, so that information system architectures can integrate and use more advanced and complex algorithms.
    • For artificial intelligence algorithms, we need to provide adaptive selection and tuning strategies for machine learning model parameters: automatically select the most suitable algorithm for the user's needs and optimize its parameters to achieve the best performance (a toy sketch follows the summary below).
    • For the artificial intelligence data-processing pipeline, we need to establish functions for pipeline tracking, analysis, and reuse, such as the MLOps (Machine Learning Operations) described in [14]. MLOps is the practice of creating new machine learning (ML) and deep learning (DL) models and deploying them into production through repeatable, automated workflows. When a new artificial intelligence application problem arises, we can learn from existing pipelines and, after a little analysis and modification, reuse mature AI software and hardware solutions to meet the new need, thereby reducing the waste of repeated development.
    • Once our artificial intelligence system is deployed, its algorithm models still need the ability to evolve: to self-update, self-learn, and self-tune. As inference and recognition scenarios and tasks change and algorithm accuracy decays, we use edge and cloud information system architectures to fully mobilize different computing resources to update, optimize, and redeploy our algorithm models. In updating and deploying artificial intelligence models, we also use the latest algorithms, such as model compression [17], dataset distillation [19], and knowledge distillation [18], to make full use of limited computing resources.
    • We need to integrate the above AI automation services in multi-cloud and hybrid-cloud environments, in line with Data Management and Orchestration, to create a complete and intelligent AI service platform.
In conclusion, the automation of artificial intelligence should be not only easier to use (Easy to Use), but also more flexible (Adapt) and more capable of self-learning and growth (Evolve).
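As a toy version of the adaptive selection-and-tuning direction above (an illustration of the idea, not Dell's implementation), the sketch below cross-validates a small set of candidate algorithms and hyperparameter grids with scikit-learn and returns the best pair; an AutoML "easy button" automates roughly this loop at much larger scale.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Candidate algorithms and their hyperparameter grids; the "easy button" simply
# cross-validates everything and keeps the best (model, parameters) pair.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200], "max_depth": [3, None]}),
]

best = max(
    (GridSearchCV(est, grid, cv=5).fit(X, y) for est, grid in candidates),
    key=lambda search: search.best_score_,
)
print(type(best.best_estimator_).__name__, best.best_params_, round(best.best_score_, 3))
```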

Technological innovation at Dell Technologies never stops. Our mission is to drive human progress, promote technological innovation, and become the most important technology company of the data era. Our AI solutions will help our clients free themselves from today's complex processes of large-scale data processing, analysis, and insight. The Research Office of our Office of the CTO is actively exploring the AI development directions described above. We are committed to helping our clients make better use of state-of-the-art information system architectures, understand their data efficiently and promptly, and bring greater value to their commercial business innovations.
Acknowledgments: I would like to thank the artificial intelligence research team of Dell Technologies China Research Institute (Li Sanping, Ni Jiacheng, Chen Qiang, Wang Zijia, Yang Wenbin, and others) for their excellent research in the field of artificial intelligence. Their results strongly support the content of this article.
Afternoon Fullmoonfever,

Great find and cheers for sharing.

Regards,
Esq.
 

M_C

Founding Member

Wags

Regular
Great publicity for neuromorphic computing, but I don't think Brainchip is involved. The neuromorphic data was sent to Earth for processing using algorithms designed to interpret it. AKIDA would have taken the data/spikes from the camera, processed it, and sent metadata to Earth that would be immediately understood by a conventional computer here on Earth.

If you recall, this is the project that was already happening when Peter van der Made posted his cheeky comment that Brainchip was already working with NASA and could help, and the Professor asked him to contact him privately.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Twing

Akida’s gambit
New updated report from Pitt Street Research, 21 Mar 2022.
 

Attachments

  • BRN Pitt Street 2022 0321.pdf
    383.4 KB · Views: 198

SALT

Member
New Stockhouse article:

Media Alert: BrainChip Talks Disruptive Technologies with Foundries.io's Ian Drew as Part of its 'This is Our Mission' Podcast

Available on April 1st 2022

 

Perhaps

Regular
Just stumbled upon it today; don't know if this has been posted before. Anyway, love to have those names in one post.
An IBM podcast on Apple.com about Brainchip with guest Rob Telson:
 

Learning

Learning to the Top 🕵‍♂️
Perhaps said:
Just stumbled upon it today; don't know if this has been posted before. Anyway, love to have those names in one post.
An IBM podcast on Apple.com about Brainchip with guest Rob Telson:

Link pls
 

Deleted member 118

Guest

Learning

Learning to the Top 🕵‍♂️

Perhaps

Regular
I've just entered the link in the post; that's the way it appears here, and I don't know how to change it. Maybe the problem is down to your script-blocking settings.
 

Neuromorphia

fact collector
AI Everywhere, Even in Surprising Places with BrainChip
BrainChip's neuromorphic AI technology has long been the talk of the industry, and now the Akida processor is available for purchase. We invited Rob Telson, VP of Worldwide Sales for BrainChip, to return to the Utilizing AI podcast to give Chris Grundemann and Stephen Foskett an update on the Akida processor. As of today, Akida is available for use by developers and hobbyists to explore neuromorphic compute at the edge. BrainChip enables five sensor modalities: Vision, hearing, touch, olfactory, and taste. BrainChip's architecture allows incremental on-chip learning at extremely low power, potentially bringing this capability to some surprising places, from home appliances to the factory floor. Another differentiator of the BrainChip solution is its event-based architecture, which can trigger based on events rather than sending a continual stream of data. As of today, the BrainChip Akida AKD1000 PCIe development board is available for purchase so everyone can try out the technology.


 

Quatrojos

Regular

JB49

Regular
Perhaps said:
Just stumbled upon it today; don't know if this has been posted before. Anyway, love to have those names in one post.
An IBM podcast on Apple.com about Brainchip with guest Rob Telson:

"We have had one major automative manufacturer actually highlight that they are using our technology in one of their vehicles MOVING FORWARD"
 

Perhaps

Regular
"We have had one major automative manufacturer actually highlight that they are using our technology in one of their vehicles MOVING FORWARD"
Hopefully this vehicle is not the Merc concept car.
 

cosors

👀
I don't know if this is old or new for you. You are so busy posting I can't keep up:

"BrainChip selected by U.S. Air Force Research Laboratory to develop AI-based radar

2022-04-04 20:53 HKT

BrainChip, the world's first commercial producer of neuromorphic artificial intelligence chips and IP, today announced that Information Systems Laboratories (ISL) is developing AI-based radar research solutions for the U.S. Air Force Research Laboratory (AFRL) based on its Akida™ Neural Network Processor.
ISL is a specialist in expert research and complex analysis, software and systems engineering, advanced hardware design and development, and high-quality manufacturing for a variety of clients worldwide.
ISL focuses on areas such as advanced signal processing, space exploration, subsea technology, surveillance and tracking, cybersecurity, advanced radar systems, and energy independence. As a member of the BrainChip Early Access Program (EAP), ISL has access to evaluation boards for Akida devices, software and hardware support, and dedicated engineering resources.
"As part of BrainChip's EAP, we had the opportunity to directly assess the capabilities Akida offers to the AI ecosystem," said Jamie Bergin, Senior Vice President, Research, Development and Engineering Solutions Manager at ISL.

BrainChip brings AI to the edge in ways not possible with existing technologies. Akida processors feature ultra-low power consumption and high performance to support the development of edge AI technologies by using neuromorphic architecture, a type of artificial intelligence inspired by the biology of the human brain. BrainChip's EAP program provides partners with the ability to realize significant benefits of power consumption, design flexibility and true learning at the edge.
"ISL's decision to use Akida and Edge-based learning as a tool to incorporate into their research and engineering solutions portfolio is in large part due to the go-to-market advantages our innovation capabilities and production-ready status provide ” said Sean Hehir, CEO of BrainChip, “We are delighted to be a partner of AFRL and ISL on edge AI and machine learning. We believe the combination of technologies will help accelerate the deployment of AI in the field.”
Akida is currently licensed as IP and is also available to order for chip production. It focuses on low power consumption and high performance, supports sensory processing, and is suited to applications that benefit from artificial intelligence, such as smart healthcare, smart cities, smart transportation, and smart homes."
https://inf.news/en/science/791bf988657af0e134c475ca748067c6.html
 

Tezza

Regular
Nice pick-up. Should help the SP up over the $1 mark.
 

Perhaps

Regular