BRN Discussion Ongoing

Just posted yesterday apparently.






Vibration Classification with BrainChip's Akida​

With predictive maintenance, you can monitor your equipment while it’s running: there is less downtime for inspections and repairs because monitoring takes place during operation instead of waiting until something breaks or wears out.
The Edge Impulse platform and solutions engineering team enable companies to make more accurate predictions about when devices might fail, which lets them optimize fleet maintenance and deploy service crews most effectively. This saves companies money by lowering overall asset downtime and leaves customers more satisfied with their products and services.
In this article, we will explain some of the beneficial applications of predictive maintenance, and then show how to build a predictive maintenance solution that will detect abnormal vibrations using Edge Impulse’s platform, the BrainChip Akida hardware, and a computer cooling fan.
Business Case Examples for Edge Predictive Maintenance
Predictive maintenance provides a wide variety of business benefits, such as:
Predicting asset depreciation and maintenance timelines: The security and building-access industry has been under increasing pressure due to the global pandemic, and it’s imperative for customers to understand when a security door or component might fail. By anticipating maintenance, companies can reduce unplanned out-of-service intervals, allowing for minimal disruption in office buildings with heavy foot traffic.
Lowering cost and gaining more ROI: Global shipping companies are looking for ways to lower their costs and increase efficiency. Focusing on predictive maintenance allows them to proactively address issues before they become costly or cause unsafe conditions, avoiding downtime on ships.
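
To make the article’s idea concrete, here is a rough sketch of what a vibration-classification pipeline like this usually involves: FFT features over a short accelerometer window, then a small classifier. It is generic NumPy/scikit-learn pseudocode of the concept, not Edge Impulse’s or Akida’s actual API, and the sample rate, window length and labels are assumptions.

```python
# Minimal sketch: classify fan-vibration windows as "normal" vs "abnormal".
# Assumptions: 3-axis accelerometer at 100 Hz, 2-second windows, and a small
# labelled dataset collected beforehand. Not Edge Impulse's or Akida's API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 100          # sample rate in Hz (assumed)
WINDOW = 2 * FS   # 2-second analysis window

def spectral_features(window_xyz: np.ndarray) -> np.ndarray:
    """window_xyz: (WINDOW, 3) raw accelerometer samples -> flat feature vector."""
    feats = []
    for axis in range(window_xyz.shape[1]):
        spectrum = np.abs(np.fft.rfft(window_xyz[:, axis]))
        peak_bin = spectrum[1:].argmax() + 1          # skip the DC bin
        feats.extend([
            spectrum[peak_bin],                       # dominant peak magnitude
            peak_bin * FS / WINDOW,                   # dominant frequency in Hz
            window_xyz[:, axis].std(),                # time-domain energy proxy
        ])
    return np.asarray(feats)

def train(windows, labels):
    """windows: list of (WINDOW, 3) arrays; labels: 0 = normal, 1 = abnormal."""
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(np.stack([spectral_features(w) for w in windows]), labels)
    return clf

def predict(clf, window_xyz):
    return clf.predict(spectral_features(window_xyz).reshape(1, -1))[0]
```

On the actual platform, Edge Impulse Studio would generate the DSP block and model and handle deployment to the Akida hardware; the sketch above only illustrates the signal-processing idea.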
 
  • Like
  • Fire
  • Love
Reactions: 36 users

cosors

👀
yesterday

Qualcomm India Developer Conference 2024, Hyderabad
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 10 users
  • Like
  • Fire
Reactions: 11 users

Tothemoon24

Top 20


tinyML Summit – April 22-24, 2024 (San Francisco)​


BrainChip is proud to be a Silver Sponsor of the 2024 tinyML© Summit. This annual event, which is hosted by the tinyML Foundation, attracts global innovators and companies with a common goal of advancing and promoting tinyML.
BrainChip will be presenting 2nd-generation Akida’s enhanced features, including an expanded capacity to learn efficiently at extremely small form factors. Join us at our booth, whose location will be announced in the coming months. We look forward to talking shop with you there!
 
  • Like
  • Fire
  • Love
Reactions: 32 users

cosors

👀
Interestingly, BrainChip has a Software Development Centre in Hyderabad.

I wonder if any employees from our Company will be attending?
were
me too
"To strengthen collaborations with its partners and developer ecosystem, Qualcomm is hosting an in-person Developer Conference in Hyderabad on April 17th, 2024, which will bring together 150+ developers, engineers, and industry leaders to showcase their solutions and discuss the latest advancements in tools and technologies that will help accelerate this ecosystem."

I rather wonder why I can't find anything in Hyderabad with... I often drop by there, which is why I noticed it.
If Anil or one of his aides wasn't sitting there, I'd be disappointed.

...or do we have a confirmation?
1:150+
 
Last edited:
  • Like
Reactions: 4 users

rgupta

Regular

"Intel builds world’s largest neuromorphic system​

News Analysis
Apr 17, 2024

Code-named Hala Point, the brain-inspired system packs 1,152 Loihi 2 processors in a data center chassis the size of a microwave oven.
View attachment 61058
Quantum computing is billed as a transformative computer architecture that’s capable of tackling difficult optimization problems and making AI faster and more efficient. But quantum computers can’t be scaled yet to the point where they can outperform even classical computers, and a full ecosystem of platforms, programming languages and applications is even farther away.
Meanwhile, another new technology is poised to make a much more immediate difference: neuromorphic computing.
Neuromorphic computing looks to redesign how computer chips are built by looking at human brains for inspiration. For example, our neurons handle both processing and memory storage, whereas in traditional computers the two are kept separate. Sending data back and forth takes time and energy.

In addition, neurons only fire when needed, reducing energy consumption even further. As a result, neuromorphic computing offers massive parallel computing capabilities far beyond traditional GPU architecture, says Omdia analyst Lian Jye Su. “In addition, it is better at energy consumption and efficiency.”

According to Gartner, neuromorphic computing is one of the technologies with the most potential to disrupt a broad cross-section of markets as “a critical enabler”; however, it is still three to six years away from making an impact.
Intel has achieved a key milestone, however. Today, Intel announced the world’s largest neuromorphic computer yet, deployed at Sandia National Laboratories.

The computer, which uses Intel’s Loihi 2 processor, is code named Hala Point, and it supports up to 20 quadrillion operations per second with an efficiency exceeding 15 trillion 8-bit operations per second per watt – all in a package about the size of a microwave oven. It supports up to 1.15 billion neurons and 128 billion synapses, or about the level of an owl’s brain.
According to Intel, this is the first large-scale neuromorphic system that surpasses the efficiency and performance of CPU- and GPU-based architectures for real-time AI workloads. Loihi-based systems can perform AI inference and solve optimization problems 50 times faster than CPU and GPU architectures, the company said, while using 100 times less energy.
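
A quick back-of-the-envelope check on those two quoted figures (my arithmetic, not the article’s): if both peak numbers held simultaneously, the implied power draw would be in the low kilowatts.

```python
# Back-of-the-envelope check on the quoted Hala Point figures.
# Assumes both peak numbers apply at the same time, which real workloads may not.
peak_ops_per_s = 20e15         # "20 quadrillion operations per second"
efficiency_ops_per_w = 15e12   # ">15 trillion 8-bit operations per second per watt"

implied_power_w = peak_ops_per_s / efficiency_ops_per_w
print(f"Implied power at peak: ~{implied_power_w:.0f} W")   # ~1,333 W
```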

And the technology is available now, for free, to enterprises interested in researching its potential, says Mike Davies, director of Intel’s Neuromorphic Computing Lab.

To get started, companies should first join the Intel Neuromorphic Research Community, whose members include GE, Hitachi, Airbus, Accenture, Logitech, as well as many research organizations and universities – more than 200 participants as of this writing. There is a waiting list, Davies says. But participation doesn’t cost anything, he adds.
“The only requirement is that they agree to share their results and findings so that we can continue improving the hardware,” Davies says. Membership includes free access to cloud-based neuromorphic computing resources, and, if the project is interesting enough, free on-site hardware, as well.
“Right now, there’s only one Hala Point, and Sandia has it,” he says. “But we are building more. And there are other systems that are not as big. We give accounts on Intel’s virtual cloud, and they log in and access the systems remotely.”

Intel was able to build a practical, usable, neuromorphic computer by sticking with traditional manufacturing technology and digital circuits, he says. Some alternate approaches, such as analog circuits, are more difficult to build.

View attachment 61059

But the Loihi 2 processor does use many core neuromorphic computing principles, including combining memory and processing. “We do really embrace all the architectural features that we find in the brain,” Davies says.
The system can even continue to learn in real time, he says. “That’s something that we see brains doing all the time.”
Traditional AI systems train on a particular data set and then don’t change once they’ve been trained. In Loihi 2, however, the communications between the neurons are configurable, meaning that they can change over time.

The way that this works is that an AI model is trained – by traditional means – then loaded into the neuromorphic computer. Each chip contains just a part of the full model. Then, when the model is used to analyze, say, streaming video, the chip already has the model weights in memory so it processes things quickly – and only if it is needed. “If one pixel changes, or one region of the image changes from frame to frame, we don’t recompute the entire image,” says Davies.
The original training does happen elsewhere, he admits. And while the neuromorphic computer can update specific weights over time, it’s not retraining the entire network from scratch.
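
For readers wondering what “only if it is needed” means in practice, here is a purely illustrative sketch of the event-driven idea (spend compute only on pixels that changed between frames). It is plain NumPy pseudocode of the concept, not Loihi’s or Akida’s programming model, and the change threshold is an assumption.

```python
# Illustrative only: recompute features just for pixels that changed frame-to-frame,
# instead of reprocessing the whole image every frame (the "event-based" idea).
import numpy as np

THRESHOLD = 10  # assumed per-pixel change threshold (8-bit intensity units)

def changed_mask(prev_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels whose intensity changed by more than THRESHOLD."""
    return np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > THRESHOLD

def process_events(prev_frame, frame, per_pixel_op):
    """Apply per_pixel_op only where something changed; untouched pixels cost nothing."""
    events = np.argwhere(changed_mask(prev_frame, frame))   # (row, col) of "spikes"
    for r, c in events:
        per_pixel_op(r, c, frame[r, c])
    return len(events)   # work scales with activity, not with frame size
```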
This approach is particularly useful for edge computing, he says, and for processing streaming video, audio, or wireless signals. But it could also find a home in data centers and high-performance computing applications, he says.
“The best class of workloads that we found that work very well are solving optimization problems,” Davies says. “Things like finding the shortest path through a map or graph. Scheduling, logistics – these tend to run very well on the architecture.”

The fact that these use cases overlap with those of quantum computing was a surprise, he says. “But we have a billion-neuron system shipped today and running, instead of a couple of qubits.”
Intel isn’t the only player in this space. According to Omdia’s Su, a handful of vendors, including IBM, have developed neuromorphic chips for cloud AI compute, while companies like BrainChip and Prophesee are starting to offer neuromorphic chips for devices and edge applications.

However, there are several major hurdles to adoption, he adds. To start with, neuromorphic computing is based on event-based spikes, which requires a complete change in programming languages.
There are also very few event-driven AI models, Su adds. “At the moment, most of them are based on conventional neural networks that are designed for traditional computing architecture.”

Finally, these new programming languages and computing architectures aren’t compatible with existing technologies, he says. “The technology is too immature at the moment,” he says. “It is not backwardly compatible with legacy architecture. At the same time, the developer and software ecosystem are still very small with lack of tools and model choices.”
*"


*Question for the techies among us: is it the way their analyst describes it here?
View attachment 61060
That means approx. 9,216 Loihi chips, because Loihi 2 is a stack of 8 Loihi chips. Think: if we combine 9,216 Akida 1000, that will be a human brain: 11.2 billion neurons and 100 trillion synapses. But surprisingly BrainChip is not trying the same. Maybe the company is aware that bigger models may give extra revenue, but misuse of the technology could also be there.
Dyor
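
For anyone who wants to check the arithmetic, here is the chip-count scaling spelled out. The per-chip figures are the commonly quoted vendor numbers (Loihi ~128k neurons / 128M synapses, Loihi 2 up to ~1M neurons / ~120M synapses, AKD1000 ~1.2M neurons / 10B synapses); treat them as assumptions rather than verified specs.

```python
# Rough scaling arithmetic from commonly quoted per-chip figures (assumptions,
# not verified specs). Hala Point uses 1,152 Loihi 2 chips per the article.
chips = {
    # name: (neurons_per_chip, synapses_per_chip)
    "Loihi":   (128_000,     128_000_000),
    "Loihi 2": (1_000_000,   120_000_000),
    "AKD1000": (1_200_000,   10_000_000_000),
}

def scale(chip: str, count: int):
    neurons, synapses = chips[chip]
    return count * neurons, count * synapses

print("Hala Point (1,152 x Loihi 2):", scale("Loihi 2", 1152))   # ~1.15e9 neurons
print("Hypothetical 9,216 x AKD1000:", scale("AKD1000", 9216))   # ~1.1e10 neurons, ~9.2e13 synapses
```

By that arithmetic the “11.2 billion neurons” figure is roughly right, while the combined synapse count would land nearer ~92 trillion than 100 trillion, assuming the per-chip numbers above.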
 
  • Like
  • Fire
Reactions: 3 users

IloveLamp

Top 20
I guess this is why Rob's been liking GM posts....

 
  • Like
  • Fire
  • Love
Reactions: 20 users
Good article here outlining the issues arising from the surging need for more and more data centres:


"Research group Dgtl Infra has estimated that global data centre capital expenditure will surpass $225bn in 2024. Nvidia’s chief executive Jensen Huang said this year that $1tn worth of data centres would need to be built in the next several years to support generative AI, which is power intensive and involves the processing of enormous volumes of information."

Such growth would require huge amounts of electricity, even if systems become more efficient. According to the International Energy Agency, the electricity consumed by data centres globally will more than double by 2026 to more than 1,000 terawatt hours, an amount roughly equivalent to what Japan consumes annually.

“Updated regulations and technological improvements, including on efficiency, will be crucial to moderate the surge in energy consumption from data centres,” the IEA said this year.

US data centre electricity consumption is expected to grow from 4 per cent to 6 per cent of total demand by 2026, while the AI industry is forecast to expand “exponentially” and consume at least 10 times its 2023 demand by 2026, said the IEA."

"Finding appropriate sites can be challenging, with power just one factor to consider among others such as the availability of large volumes of water to cool data centres.

“For every 50 sites I look at, maybe two get to the point where they may be developed,” said Appleby Strategy’s Golding. “Folks are sifting through large numbers of properties.”
 
  • Like
  • Sad
  • Fire
Reactions: 11 users
  • Like
  • Fire
  • Love
Reactions: 24 users

Frangipani

Regular
At least because AI has to be everywhere at the moment, 'no' company can avoid the topic; it has to be worth a try, there is subsidy money, and conditions have to be created for investments in future tech, since the old one is out of the question. Just yesterday I saw a global analysis of AI and where it is taking place, in addition to research. I couldn't see Germany at a quick glance. But maybe I'll have another look to see if I can find it again. What interests me are products that can be bought from companies like those in Germany. Just because companies are running a research lab here and dropping slice after slice of white papers on projects, because they have obviously or perhaps lost touch, doesn't convince me that Germany plays a really important role in this topic. So far, I'm taking MB's hub seriously.
I'm one of the sceptics, but I'm happy to be proven wrong. But I don't want to get into debate 'loops', as I'm just an observer from the sidelines, don't have serious insight, and am not a politician either.
:)

Bear in mind, though, that these AI Labs were established long before ChatGPT became a household name, so they were not born as a result of some sort of “AI hype”.

Bosch: established in 2017

ZF: established in March 2019

Conti: started unofficially at the end of 2021; official inauguration of its new home on the AI Campus Berlin on February 1, 2023


These three Tier 1 suppliers employ a number of researchers with ties to neuromorphic computing/engineering. (I am, however, not aware of any project to date involving Akida).

Bosch alone employs a whopping 270 (!) AI researchers all over the world - four of them focusing on neuromorphic computing/engineering:



ZF is one of ~40 partners in the EU- and BMBF-funded StorAIge project (July 1, 2021 - June 30, 2024); their specific interest in this project lies in wind turbine monitoring / predictive maintenance.


“StorAIge aims to develop and industrialize FDSOI 28nm and next generation embedded Phase Change Memory (ePCM) world-class semiconductor technologies enabling competitive Artificial Intelligence for Edge applications.”




And when looking for a link between Conti and neuromorphic, this AI robotics engineer came up in my search, and I am pretty sure he won’t be the only one:



Out of those three tech giants, my bet would be on Bosch to come out of the shadows first.
Not in the automotive sector initially, though (as automotive-grade chips are a prerequisite for that and would need to satisfy, e.g., the functional safety standard ISO 26262), but maybe for some kind of wearables, olfactory sensors, household appliances, etc.

Bosch, Conti & ZF can’t just sit back and rest on their laurels, but need to be innovative to survive, embracing technologies for a sustainable future. Otherwise we’ll see further laying off staff or even closing of plants and resp relocation of industry to low-wage countries.

Or to aptly say it with ZF’s motto: see.think.act.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 14 users

wilzy123

Founding Member
 
  • Like
  • Fire
  • Love
Reactions: 39 users

Damo4

Regular
  • Like
  • Fire
  • Love
Reactions: 13 users

rgupta

Regular

"Intel builds world’s largest neuromorphic system​

(Same Network World article as quoted in full earlier in the thread.)
9,216 Akida 1000 will be 10 billion neurons and 100 trillion synapses, while 9,216 Loihi will be 10 billion neurons and only 1 trillion synapses. Is that a reason for the better performance of Akida? Akida's neurons are better connected than those of Loihi and Loihi 2.
 
  • Like
  • Wow
  • Thinking
Reactions: 12 users

IloveLamp

Top 20
  • Like
  • Love
Reactions: 19 users

Damo4

Regular
9,216 Akida 1000 will be 10 billion neurons and 100 trillion synapses, while 9,216 Loihi will be 10 billion neurons and only 1 trillion synapses. Is that a reason for the better performance of Akida? Akida's neurons are better connected than those of Loihi and Loihi 2.

Hi rgupta,

It all comes down to Synaptic Density.
This is a good read to understand the relationship: What Is the Akida Event Domain Neural Processor?

I think, however, there is more to it than just a neuron/synapse ratio.
See here; there are explanations that go over my head and could point towards the reason for Loihi 2 having fewer max synapses per neuron than Loihi 1:
Taking Neuromorphic Computing to the Next Level with Loihi 2 Technology Brief
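
To put rough numbers on the synaptic-density point, this is the synapses-per-neuron ratio computed from the commonly quoted per-chip figures; the figures themselves are assumptions taken from vendor material, not anything verified here.

```python
# Synapses-per-neuron ("fan-in") ratio from commonly quoted per-chip figures.
# The numbers are assumptions taken from vendor marketing, not measured values.
chips = {
    "Loihi":   (128_000,   128_000_000),     # ~1,000 synapses per neuron
    "Loihi 2": (1_000_000, 120_000_000),     # ~120 synapses per neuron
    "AKD1000": (1_200_000, 10_000_000_000),  # ~8,300 synapses per neuron
}

for name, (neurons, synapses) in chips.items():
    print(f"{name:8s}: {synapses / neurons:,.0f} synapses per neuron")
```

If those numbers are in the right ballpark, Loihi 2 traded synapses per neuron for more and faster neurons, which fits the trade-offs described in the tech brief linked above.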

 
  • Like
  • Fire
  • Love
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Could someone please, please, PLEEEEEEEAAAASE ask Steven Thorne or Rob Telson to get on the blower to Sam Altman ASAP?!

Didn't Sam Altman attend Intel’s IFS Connect Forum in February earlier this year? If he did, then wouldn't he have seen our demo? And if he saw our demo, then he'd have to know that we're like the Betty Ford clinic for those suffering from crippling addictions to NVIDIA's expensive and energy-guzzling GPUs.

I'd call Sam myself, but I haven't got his phone number at present.

Thanks in advance to Steve or Rob! Do us proud lads!




Microsoft and OpenAI Will Spend $100 Billion to Wean Themselves Off Nvidia GPUs​

The companies are working on an audacious data center for AI that's expected to be operational in 2028.
By Josh Norem April 18, 2024

Credit: Microsoft
Microsoft was the first big company to throw a few billion at ChatGPT-maker OpenAI. Now, a new report states the two companies are working together on a very ambitious AI project that will cost at least $100 billion. Both companies are currently huge Nvidia customers; Microsoft uses Nvidia hardware for its Azure cloud infrastructure, while OpenAI uses Nvidia GPUs for ChatGPT. However, the new data center will host an AI supercomputer codenamed "Stargate," which might not include any Nvidia hardware at all.
The news of the companies' plans to ditch Nvidia hardware comes from a variety of sources, as noted by Windows Central. The report details a five-phase plan developed by Microsoft and OpenAI to advance the two companies' AI ambitions through the end of the decade, with the fifth phase being the so-called Stargate AI supercomputer. This computer is expected to be operational by 2028 and will reportedly be outfitted with future versions of Microsoft's custom-made Arm Cobalt processors and Maia XPUs, all connected by Ethernet.

Microsoft is reportedly planning on using its custom-built Cobalt and Maia silicon to power its future AI ambitions. Credit: Microsoft
This future data center, which will house Stargate, will allow both companies to pursue their AI ambitions far into the future; reports say it will cost around $115 billion. That level of investment shows both companies have no plans to move their respective feet off the AI gas pedal any time soon and that they expect this market to continue to expand far into the future. TechRadar also notes that the amount required to get this supercomputer running is more than triple what Microsoft spent on CapEx last year, so the company is tripling down on AI, it seems.
What's also notable is at least one source says the data center itself will be the computer, as opposed to just housing it. Multiple data centers may link together, like Voltron, to form the supercomputer. This futuristic machine will reportedly push the boundaries of AI capabilities. Given how fast things are advancing in this field, it's impossible to imagine what that will even mean four years from now.


This situation, where massive companies abandon Nvidia for custom-made AI accelerators, will likely become a significant issue for Nvidia soon. Long wait times for Nvidia GPUs and exorbitant pricing have resulted in many companies reportedly beginning to look elsewhere to satisfy their AI hardware needs, which is why Nvidia is already looking to capture this market. OpenAI CEO Sam Altman is reportedly looking to build a global infrastructure of fabs and power sources to make custom silicon, so its plans with Microsoft might be aligned along this front.



 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Haha
  • Love
Reactions: 31 users

Diogenese

Top 20
All that glitters ...



Exclusive: Apple acquires Xnor.ai, edge AI spin-out from Paul Allen’s AI2, for price in $200M range

BY ALAN BOYLE, TAYLOR SOPER & TODD BISHOP on January 15, 2020

Apple buys Xnor.ai, an edge-centric AI2 spin-out, for price in $200M range (geekwire.com)


Apple has acquired Xnor.ai, a Seattle startup specializing in low-power, edge-based artificial intelligence tools, sources with knowledge of the deal told GeekWire.

The acquisition echoes Apple’s high-profile purchase of Seattle AI startup Turi in 2016. Speaking on condition of anonymity, sources said Apple paid an amount similar to what was paid for Turi, in the range of $200 million.

Xnor.ai didn’t immediately respond to our inquiries, while Apple emailed us its standard response on questions about acquisitions: “Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans.” (The company sent the exact same response when we broke the Turi story.)



The arrangement suggests that Xnor’s AI-enabled image recognition tools could well become standard features in future iPhones and webcams.

Xnor.ai’s acquisition marks a big win for the Allen Institute for Artificial Intelligence, or AI2, created by the late Microsoft co-founder Paul Allen to boost AI research. It was the second spin-out from AI2’s startup incubator, following Kitt.ai, which was acquired by the Chinese search engine powerhouse Baidu in 2017 for an undisclosed sum.

The deal is a big win as well for the startup’s early investors, including Seattle’s Madrona Venture Group; and for the University of Washington, which serves as a major source of Xnor.ai’s talent pool.

The three-year-old startup’s secret sauce has to do with AI on the edge — machine learning and image recognition tools that can be executed on low-power devices rather than relying on the cloud. “We’ve been able to scale AI out of the cloud to every device out there,” co-founder Ali Farhadi, who is the venture’s CXO (chief Xnor officer) as well as a UW professor, told GeekWire in 2018.


This Apple patent is for compressing AI models. One of the inventors is ex-Xnor.

US11651192B2 Compressed convolutional neural network models 20190212 Rastegari nee Xnor

Systems and processes for training and compressing a convolutional neural network model include the use of quantization and layer fusion. Quantized training data is passed through a convolutional layer of a neural network model to generate convolutional results during a first iteration of training the neural network model. The convolutional results are passed through a batch normalization layer of the neural network model to update normalization parameters of the batch normalization layer. The convolutional layer is fused with the batch normalization layer to generate a first fused layer and the fused parameters of the fused layer are quantized. The quantized training data is passed through the fused layer using the quantized fused parameters to generate output data, which may be quantized for a subsequent layer in the training iteration.

[0018] A convolutional neural network (CNN) model may be designed as a deep learning tool capable of complex tasks such as image classification and natural language processing. CNN models typically receive input data in a floating point number format and perform floating point operations on the data as the data progresses through different layers of the CNN model. Floating point operations are relatively inefficient with respect to power consumed, memory usage and processor usage. These inefficiencies limit the computing platforms on which CNN models can be deployed. For example, field-programmable gate arrays (FPGA) may not include dedicated floating point modules for performing floating point operations and may have limited memory bandwidth that would be inefficient working with 32-bit floating point numbers.

[0019] As described in further detail below, the subject technology includes systems and processes for building a compressed CNN model suitable for deployment on different types of computing platforms having different processing, power and memory capabilities.
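
The core trick in that patent excerpt, folding a batch-norm layer into the preceding convolution and then quantizing the fused weights, is a standard one. Below is a minimal NumPy sketch of the folding algebra with a simple 8-bit symmetric quantizer; it illustrates the general technique, not Apple’s or Xnor’s actual implementation, and the quantization scheme is an assumption.

```python
# Standard conv + batch-norm folding, followed by simple 8-bit symmetric quantization.
# A sketch of the general technique only, not the patented implementation.
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN(y) = gamma*(y - mean)/sqrt(var + eps) + beta into conv weights/bias.
    W has shape (out_channels, ...); b, gamma, beta, mean, var have shape (out_channels,)."""
    scale = gamma / np.sqrt(var + eps)                       # per-output-channel scale
    W_fused = W * scale.reshape(-1, *([1] * (W.ndim - 1)))   # scale each output channel
    b_fused = (b - mean) * scale + beta
    return W_fused, b_fused

def quantize_symmetric_int8(x):
    """Map a float tensor to int8 with one per-tensor scale (assumed scheme)."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale   # dequantize with q * scale

# Example: one conv layer with 16 output channels and 3x3 kernels over 8 input channels
W = np.random.randn(16, 8, 3, 3).astype(np.float32)
b = np.zeros(16, dtype=np.float32)
gamma, beta = np.ones(16), np.zeros(16)
mean, var = np.zeros(16), np.ones(16)

W_fused, b_fused = fuse_conv_bn(W, b, gamma, beta, mean, var)
W_q, w_scale = quantize_symmetric_int8(W_fused)
```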


Of course, that does not absolutely preclude the possibility that Apple are running the NN model on a more efficient SoC.


Apple have been developing NNs for several years, albeit with MACs.

US11487846B2 Performing multiply and accumulate operations in neural network processor 20180504

a neural processor circuit including a plurality of neural engine circuits, a data buffer, and a kernel fetcher circuit. At least one of the neural engine circuits is configured to receive matrix elements of a matrix as at least the portion of the input data from the data buffer over multiple processing cycles. The at least one neural engine circuit further receives vector elements of a vector from the kernel fetcher circuit, wherein each of the vector elements is extracted as a corresponding kernel to the at least one neural engine circuit in each of the processing cycles. The at least one neural engine circuit performs multiplication between the matrix and the vector as a convolution operation to produce at least one output channel of the output data.
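
Stripped of the hardware detail, that claim describes a matrix-vector product built from multiply-accumulate (MAC) steps, with one vector element streamed in as the “kernel” each cycle. A bare-bones sketch of that MAC pattern (not Apple’s neural-engine microarchitecture):

```python
# Bare-bones multiply-accumulate (MAC) view of a matrix-vector product,
# streaming one vector element ("kernel") per processing cycle.
import numpy as np

def matvec_mac(matrix: np.ndarray, vector: np.ndarray) -> np.ndarray:
    rows, cols = matrix.shape
    acc = np.zeros(rows)              # one accumulator per output element
    for cycle in range(cols):         # one vector element fetched per cycle
        k = vector[cycle]             # the "kernel" for this cycle
        acc += matrix[:, cycle] * k   # multiply and accumulate in place
    return acc                        # equals matrix @ vector

# Quick check against the library implementation
M = np.arange(12, dtype=float).reshape(3, 4)
v = np.array([1.0, 0.5, -1.0, 2.0])
assert np.allclose(matvec_mac(M, v), M @ v)
```

The “albeit with MACs” point is that every element gets multiplied regardless of activity, which is the contrast being drawn with event-based processing, where work on inactive (zero) inputs is skipped.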
 
  • Like
  • Wow
  • Love
Reactions: 13 users
To
Apple have been developing NNs for several years, albeit with MACs.

US11487846B2 Performing multiply and accumulate operations in neural network processor 20180504

View attachment 61092


View attachment 61093

a neural processor circuit including a plurality of neural engine circuits, a data buffer, and a kernel fetcher circuit. At least one of the neural engine circuits is configured to receive matrix elements of a matrix as at least the portion of the input data from the data buffer over multiple processing cycles. The at least one neural engine circuit further receives vector elements of a vector from the kernel fetcher circuit, wherein each of the vector elements is extracted as a corresponding kernel to the at least one neural engine circuit in each of the processing cycles. The at least one neural engine circuit performs multiplication between the matrix and the vector as a convolution operation to produce at least one output channel of the output data.
In layman's terms, what does that mean as far as BRN being a part of it? Are we out, or still in with a chance?
 
  • Like
Reactions: 6 users

Frangipani

Regular
That means approx. 9,216 Loihi chips, because Loihi 2 is a stack of 8 Loihi chips. Think: if we combine 9,216 Akida 1000, that will be a human brain: 11.2 billion neurons and 100 trillion synapses. But surprisingly BrainChip is not trying the same. Maybe the company is aware that bigger models may give extra revenue, but misuse of the technology could also be there.
Dyor

Hi rgupta,

Loihi 2 is not a stack of 8 Loihi chips…

Loihi 2 (introduced in 2021) is Intel’s 2nd generation of their neuromorphic research chip, so it is a completely different chip - significantly enhanced, but only (roughly) half the size of its predecessor, the original Loihi (unveiled in 2017), fabricated on a different process.

You must be confusing this with Kapoho Point, which is a compact 8-chip Loihi 2 development board “that can be stacked for large-scale workloads and connect directly to low-latency event-based vision sensors”.




Regards
Frangipani
 
  • Like
  • Love
  • Wow
Reactions: 12 users