Maybe someone (@Dhm) would like to contact Eli and put him on the right track. It'd be pretty groovy to have an article on BrainChip published in Forbes!
AI On The Edge: The Path To Maturity For A 40-Year-Old Industry?
Eli David
Forbes Councils Member
Forbes Technology Council
COUNCIL POST | Membership (fee-based)
May 10, 2022, 09:15am EDT
(Extract Only)
The good news is that when you have that type of accuracy ratio, there is ample room for improvement. At this stage of technology development, the accuracy of the print drives the rest of the metrics: yield, waste and efficiency. So you can imagine the enormous profit incentive and competitive advantage for a solution that could raise accuracy in AM by 25% to 30%.
Deep neural networks have spurred revolutions in image, voice and text recognition. Traditional machine learning methods rely on features provided by human experts. Thus, instead of directly learning from raw data (e.g., pixels in images), they only process those specific patterns that humans can think of.
Deep learning, on the other hand, is the first and currently the only AI method that can directly learn from raw data. Deep neural networks take inspiration from how our own brains work, and, like our brains, they process all of the data they observe.
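To make the raw-data point concrete, here is a toy sketch (not from the article; the task and numbers are invented for illustration): a minimal one-layer network is trained directly on raw pixel values of small synthetic sensor images, with no hand-crafted features supplied by a human expert.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(side):
    # An 8x8 synthetic image: a bright vertical bar plus noise. The model
    # sees only these raw pixel values -- no hand-crafted features.
    img = rng.normal(0.0, 0.1, (8, 8))
    col = rng.integers(0, 4) if side == "left" else rng.integers(4, 8)
    img[:, col] += 1.0
    return img.ravel()

# 100 images with the bar on the left (label 1), 100 on the right (label 0)
X = np.array([make_image("left") for _ in range(100)] +
             [make_image("right") for _ in range(100)])
y = np.array([1] * 100 + [0] * 100)

# A minimal single-layer model trained directly on the 64 raw pixels
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient descent step
    b -= 0.5 * np.mean(p - y)

acc = float(np.mean(((X @ w + b) > 0) == y))
```

Real defect-detection networks are far deeper, but the principle is the same: the model discovers which pixel patterns matter, rather than being told.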
The advancements of the last few years in deep learning represent great leaps in the history of artificial intelligence. Suddenly, we are seeing improved accuracy in numerous computer vision, speech recognition and language understanding tasks.
There is enormous potential in the realm of digital manufacturing. If we could apply this deep learning inference engine to the 3D printing process, we could boost accuracy immensely. There would be very little waste, much lower materials costs and a giant leap in yield and efficiency—because we wouldn’t be making a lot of rejected parts anymore.
More technically speaking, if we use sensors and deep learning to detect the very early stages of a flaw, could we correct the course of a print job to avoid it developing further?
The short answer is yes. Deep learning can identify slight manufacturing flaws that the human eye would not even notice, something we will cover in my next column. And yes, many of these flaws can be compensated for during the printing of the object.
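The detect-and-compensate loop described above can be pictured roughly as follows. This is a hypothetical sketch, not the author's system: the flaw scorer is a stub standing in for a trained deep model, and the adjusted "power" parameter is purely illustrative.

```python
def flaw_score(layer_image):
    # Stand-in for a trained deep model that scores a per-layer sensor
    # image; here, just the mean deviation from a nominal intensity of 1.0.
    return abs(sum(layer_image) / len(layer_image) - 1.0)

def print_job(layers, threshold=0.2):
    power = 1.0       # illustrative process parameter (e.g., energy input)
    corrections = 0
    for image in layers:
        # If an early-stage flaw is detected, compensate on the fly
        # rather than stopping the job or scrapping the part.
        if flaw_score(image) > threshold:
            power *= 0.95
            corrections += 1
    return corrections, power

# Three simulated layers; the middle one drifts out of spec
layers = [[1.0] * 16, [1.3] * 16, [1.0] * 16]
corrections, power = print_job(layers)
```

The point of the loop is that the correction happens between layers, while the object is still being built, which is exactly where real-time inference speed matters.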
But there is a caveat, and it goes back to the tension between theory and practice. Using AI in a laboratory setting is extremely demanding in computing and memory, so you need the requisite high-performance hardware. In theory, we could attach that sophisticated AI hardware to every printer, but that would make the machines prohibitively expensive.
Many AI-driven solutions, Alexa or Google Home, for instance, work around this by deploying basic processors in their edge devices and connecting to AI that operates on servers in the cloud. This works well for some applications, but not for others.
The first problem is connectivity: if the edge device moves around, like a vehicle or a drone, it might lose its connection. The second problem is latency—the time it takes to send data to the cloud and retrieve an AI answer back. This latency does not lend itself to procedures requiring an immediate, real-time response, like discriminating between the shadows of trees and pedestrians on a roadway—or correcting a 3D printer without stopping it every time.
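The latency argument comes down to simple arithmetic. The figures below are assumptions chosen for illustration, not measurements: even when a cloud server computes the answer faster, the network round trip can dominate the total, while on-device inference avoids it entirely.

```python
# Illustrative latency budget (assumed figures, not measurements)
network_rtt_ms = 80.0   # assumed round trip over a mobile/Wi-Fi link
cloud_infer_ms = 5.0    # assumed inference time on a server accelerator
edge_infer_ms = 25.0    # assumed inference time on a modest edge processor

cloud_total = network_rtt_ms + cloud_infer_ms  # 85 ms, and only if connected
edge_total = edge_infer_ms                     # 25 ms, connection or not
```

For a print head laying down material continuously, a guaranteed 25 ms answer beats an 85 ms answer that may not arrive at all.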
The dream of integrating deep learning into AM is still very much alive—it is merely a practical design problem that stands in the way of a mature, perfected manufacturing method.
What is required is a two-tier software and hardware architecture: one for computationally heavy learning and another for local, autonomous and immediate decision-making. Future columns in this series will look at how these two systems can coordinate to bring the best insights of AI from the lab out to the edge.
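One way to picture that two-tier split (a sketch under assumed names, not any vendor's API): the computationally heavy tier trains on pooled data off-device, and only a compact, frozen model is shipped to the edge tier, which makes immediate local decisions with no round trip.

```python
class CloudTier:
    """Computationally heavy tier: trains on pooled data, exports a model."""
    def train(self, readings):
        # Stand-in for real training: derive a decision threshold from data
        threshold = sum(readings) / len(readings)
        return {"threshold": threshold}

class EdgeTier:
    """Local, autonomous tier: runs the exported model immediately."""
    def __init__(self, model):
        self.model = model
    def decide(self, reading):
        # Immediate local decision -- no cloud connection required
        return "correct" if reading > self.model["threshold"] else "ok"

model = CloudTier().train([0.9, 1.0, 1.1])  # trained once, centrally
edge = EdgeTier(model)                      # deployed to the device
decision = edge.decide(1.5)
```

The coordination problem the column points to is exactly this handoff: how often the cloud tier retrains, and how the edge tier gets updated without interrupting production.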
www.forbes.com