BRN Discussion Ongoing

Frangipani

Regular
That means approx. 9,216 Loihi chips, because Loihi 2 is a stack of 8 Loihi chips. Think: if we combined 9,216 Akida 1000s, that would be roughly a human brain, with 11.2 billion neurons and 100 trillion synapses. But surprisingly, BrainChip is not trying the same. Maybe the company is aware that bigger models may bring extra revenue, but misuse of the technology could also be a risk.
Dyor
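
A quick back-of-the-envelope check of the arithmetic in the quoted post, assuming the commonly quoted AKD1000 figures of roughly 1.2 million neurons and 10 billion synapses per chip (the human-brain comparison itself is the poster's, not an established equivalence):

```python
# Sanity-check of the scaling claim above (assumed, vendor-quoted AKD1000 specs).
CHIPS = 9216                       # chip count claimed in the post
AKIDA_NEURONS = 1.2e6              # ~1.2 million neurons per AKD1000
AKIDA_SYNAPSES = 10e9              # ~10 billion synapses per AKD1000

total_neurons = CHIPS * AKIDA_NEURONS
total_synapses = CHIPS * AKIDA_SYNAPSES

print(f"{total_neurons / 1e9:.1f} billion neurons")       # ~11.1 billion
print(f"{total_synapses / 1e12:.0f} trillion synapses")    # ~92 trillion
```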

Hi rgupta,

Loihi 2 is not a stack of 8 Loihi chips…

Loihi 2 (introduced in 2021) is Intel’s 2nd generation of their neuromorphic research chip, so it is a completely different chip - significantly enhanced, but only (roughly) half the size of its predecessor, the original Loihi (unveiled in 2017), fabricated on a different process.

You must be confusing this with Kapoho Point, which is a compact 8-chip Loihi 2 development board “that can be stacked for large-scale workloads and connect directly to low-latency event-based vision sensors”.




Regards
Frangipani
 
  • Like
  • Love
  • Wow
Reactions: 12 users

Diogenese

Top 20
To

In layman's terms, what does that mean as far as BRN being a part of it? Are we out, or still in with a chance?
It means Apple have their own inferior in-house NN.

They have also purchased Xnor, whose tech involves a trade-off between efficiency and accuracy:

US10691975B2 Lookup-based convolutional neural network 20170717
[0019] … Lookup-based convolutional neural network (LCNN) structures are described that encode convolutions by few lookups to a dictionary that is trained to cover the space of weights in convolutional neural networks. For example, training an LCNN may include jointly learning a dictionary and a small set of linear combinations. The size of the dictionary naturally traces a spectrum of trade-offs between efficiency and accuracy.
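
As a rough illustration of the mechanism described in [0019], here is a minimal NumPy sketch of a lookup-based 1x1 convolution: the input is convolved once with a small shared dictionary, and each filter is then formed as a sparse linear combination of those precomputed dictionary responses. All names, shapes and the 1x1 simplification are my own assumptions, not taken from the patent:

```python
import numpy as np

# Minimal LCNN-style 1x1 convolution sketch (illustrative only).
# Instead of storing a full weight per filter, each filter is a sparse
# linear combination of k shared dictionary atoms.
C_in, C_out, k, s = 64, 128, 16, 3           # channels, dictionary size, atoms per filter
H, W = 8, 8

rng = np.random.default_rng(0)
x = rng.standard_normal((C_in, H, W))        # input feature map
D = rng.standard_normal((k, C_in))           # shared dictionary of atoms
idx = rng.integers(0, k, size=(C_out, s))    # per-filter lookup indices
coef = rng.standard_normal((C_out, s))       # per-filter combination weights

# Step 1: convolve the input with the dictionary once (k "lookup" maps).
lookup = np.einsum("kc,chw->khw", D, x)

# Step 2: each output channel is only a few lookups into those maps.
y = np.zeros((C_out, H, W))
for o in range(C_out):
    y[o] = np.tensordot(coef[o], lookup[idx[o]], axes=1)

# Reference: the equivalent dense 1x1 convolution with reconstructed weights.
W_full = np.einsum("os,osc->oc", coef, D[idx])
y_ref = np.einsum("oc,chw->ohw", W_full, x)
assert np.allclose(y, y_ref)
```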


They have also developed an LLM compression technique which can be applied to edge applications:

US11651192B2 Compressed convolutional neural network models 20190212 Rastegari nee Xnor:

[0019] As described in further detail below, the subject technology includes systems and processes for building a compressed CNN model suitable for deployment on different types of computing platforms having different processing, power and memory capabilities.
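
The patent text stays at the systems level, but the underlying idea of tailoring one model to several device classes can be sketched very simply; the grid of pruning ratios and bit-widths below is purely illustrative and not Apple's actual method:

```python
from dataclasses import dataclass

@dataclass
class DeviceBudget:
    max_model_bytes: int       # memory available for weights on the target device

def compressed_size(n_params: int, keep_ratio: float, bits: int) -> int:
    """Approximate weight storage after channel pruning plus quantization."""
    return int(n_params * keep_ratio * bits / 8)

def pick_variant(n_params: int, device: DeviceBudget):
    """Search a small grid of (pruning, bit-width) settings and return the
    least aggressive one that fits the device budget (illustrative only)."""
    for keep in (1.0, 0.75, 0.5, 0.25):          # fraction of channels kept
        for bits in (16, 8, 4):                  # weight precision
            if compressed_size(n_params, keep, bits) <= device.max_model_bytes:
                return keep, bits
    return None

# Example: a 25M-parameter CNN targeted at a device with 8 MB of weight memory.
print(pick_variant(25_000_000, DeviceBudget(8 * 1024 * 1024)))
```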

Thus Apple have in-house technology which, on the face of it, is capable of implementing GenAI at the edge. That presents a substantial obstacle to Akida getting a look in.
 
  • Like
  • Sad
  • Fire
Reactions: 27 users
Must be the first time since I've been invested in BRN that the SP has fallen before a quarterly, thanks to LDA. Hopefully this drop in the SP continues, as my super should be ready some time next week.

 
  • Haha
  • Like
Reactions: 11 users

rgupta

Regular
Could someone please, please, PLEEEEEEEAAAASE ask Steven Thorne or Rob Telson to get on the blower to Sam Altman ASAP?!

Didn't Sam Altman attend Intel's IFS Connect Forum in Feb earlier this year? If he did, then wouldn't he have seen our demo? And if he saw our demo, then he'd have to know that we're like the Betty Ford clinic for those suffering from crippling addictions to NVIDIA's expensive and energy-guzzling GPUs.

I'd call Sam myself, but I haven't got his phone number at present.

Thanks in advance to Steve or Rob! Do us proud lads!

View attachment 61091


Microsoft and OpenAI Will Spend $100 Billion to Wean Themselves Off Nvidia GPUs

The companies are working on an audacious data center for AI that's expected to be operational in 2028.
By Josh Norem April 18, 2024

Credit: Microsoft
Microsoft was the first big company to throw a few billion at ChatGPT-maker OpenAI. Now, a new report states the two companies are working together on a very ambitious AI project that will cost at least $100 billion. Both companies are currently huge Nvidia customers; Microsoft uses Nvidia hardware for its Azure cloud infrastructure, while OpenAI uses Nvidia GPUs for ChatGPT. However, the new data center will host an AI supercomputer codenamed "Stargate," which might not include any Nvidia hardware at all.
The news of the companies' plans to ditch Nvidia hardware comes from a variety of sources, as noted by Windows Central. The report details a five-phase plan developed by Microsoft and OpenAI to advance the two companies' AI ambitions through the end of the decade, with the fifth phase being the so-called Stargate AI supercomputer. This computer is expected to be operational by 2028 and will reportedly be outfitted with future versions of Microsoft's custom-made Arm Cobalt processors and Maia XPUs, all connected by Ethernet.

Microsoft is reportedly planning on using its custom-built Cobalt and Maia silicon to power its future AI ambitions. Credit: Microsoft
This future data center, which will house Stargate, will allow both companies to pursue their AI ambitions far into the future; reports say it will cost around $115 billion. That level of investment shows both companies have no plans to move their respective feet off the AI gas pedal any time soon and that they expect this market to continue to expand far into the future. TechRadar also notes that the amount required to get this supercomputer running is more than triple what Microsoft spent on CapEx last year, so the company is tripling down on AI, it seems.
What's also notable is at least one source says the data center itself will be the computer, as opposed to just housing it. Multiple data centers may link together, like Voltron, to form the supercomputer. This futuristic machine will reportedly push the boundaries of AI capabilities. Given how fast things are advancing in this field, it's impossible to imagine what that will even mean four years from now.


This situation, where massive companies abandon Nvidia for custom-made AI accelerators, will likely become a significant issue for Nvidia soon. Long wait times for Nvidia GPUs and exorbitant pricing have resulted in many companies reportedly beginning to look elsewhere to satisfy their AI hardware needs, which is why Nvidia is already looking to capture this market. OpenAI CEO Sam Altman is reportedly looking to build a global infrastructure of fabs and power sources to make custom silicon, so its plans with Microsoft might be aligned along this front.



Should we seriously believe Microsoft and OpenAI would talk specifically about BrainChip's involvement in their $100 billion project?
On the other hand, the market is becoming very, very competitive, and it will be difficult to ignore better technology.
 
  • Like
Reactions: 7 users
How's our 3-year lead going?
 
  • Like
  • Sad
  • Thinking
Reactions: 7 users

Iseki

Regular
Here is a follow-up of my post on Ericsson (see above).

When I recently searched for info on Ericsson's interest in neuromorphic technology besides the Dec 2023 paper, in which six Ericsson researchers described how they had built a prototype of an AI-enabled ZeroEnergy-IoT device utilising Akida, I not only came across an Ericsson Senior Researcher for Future Computing Platforms who was very much engaged with Intel's Loihi (he even gave a presentation at the Intel Neuromorphic Research Community's Virtual Summer 2023 Workshop), but also an Intel podcast with Ericsson's VP of Emerging Technologies, Mischa Dohler.

I also spotted the following LinkedIn post by a Greek lady, who had had a successful career at Ericsson spanning more than 23 years before taking the plunge into self-employment two years ago:

View attachment 61042

View attachment 61043





Since Maria Boura concluded her post by sharing that very Intel podcast with Mischa Dohler mentioned earlier, my gut feeling was that those Ericsson 6G researchers she had talked to at MWC (Mobile World Congress) 2024 in Barcelona at the end of February had most likely been collaborating with Intel, but a quick Google search didn’t come up with any results at the time I first saw that post of hers back in March.

Then, last night, while reading an article on Intel’s newly revealed Hala Point (https://www.eejournal.com/industry_...morphic-system-to-enable-more-sustainable-ai/), there it was - the undeniable evidence that those Ericsson researchers had indeed been utilising Loihi 2:

“Advancing on its predecessor, Pohoiki Springs, with numerous improvements, Hala Point now brings neuromorphic performance and efficiency gains to mainstream conventional deep learning models, notably those processing real-time workloads such as video, speech and wireless communications. For example, Ericsson Research is applying Loihi 2 to optimize telecom infrastructure efficiency, as highlighted at this year’s Mobile World Congress.”

The blue link connects to the following article on the Intel website, published yesterday:


Ericsson Research Demonstrates How Intel Labs' Neuromorphic AI Accelerator Reduces Compute Costs



Philipp_Stratmann (Employee), 04-17-2024
Philipp Stratmann is a research scientist at Intel Labs, where he explores new neural network architectures for Loihi, Intel’s neuromorphic research AI accelerator. Co-author Péter Hága is a master researcher at Ericsson Research, where he leads research activities focusing on the applicability of neuromorphic and AI technologies to telecommunication tasks.

Highlights
  • Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications AI models to optimize telecom architecture.
  • Ericsson Research developed a radio receiver prototype for Intel’s Loihi 2 neuromorphic AI accelerator based on neuromorphic spiking neural networks, which reduced the data communication by 75 to 99% for energy efficient radio access networks (RANs).
  • As a member of Intel’s Neuromorphic Research Community, Ericsson Research is searching for new AI technologies that provide energy efficiency and low latency inference in telecom systems.

Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications artificial intelligence (AI) models to optimize telecom architecture. Ericsson currently uses AI-based network performance diagnostics to analyze communications service providers’ radio access networks (RANs) to resolve network issues efficiently and provide specific parameter change recommendations. At Mobile World Congress (MWC) Barcelona 2024, Ericsson Research demoed a radio receiver algorithm prototype targeted for Intel’s Loihi 2 neuromorphic research AI accelerator, demonstrating a significant reduction in computational cost to improve signals across the RAN.

In 2021, Ericsson Research joined the Intel Neuromorphic Research Community (INRC), a collaborative research effort that brings together academic, government, and industry partners to work with Intel to drive advances in real-world commercial usages of neuromorphic computing.

Ericsson Research is actively searching for new AI technologies that provide low latency inference and energy efficiency in telecom systems. Telecom networks face many challenges, including tight latency constraints driven by the need for data to travel quickly over the network, and energy constraints due to mobile system battery limitations. AI will play a central role in future networks by optimizing, controlling, and even replacing key components across the telecom architecture. AI could provide more efficient resource utilization and network management as well as higher capacity.

Neuromorphic computing draws insights from neuroscience to create chips that function more like the biological brain instead of conventional computers. It can deliver orders of magnitude improvements in energy efficiency, speed of computation, and adaptability across a range of applications, including real-time optimization, planning, and decision-making from edge to data center systems. Intel's Loihi 2 comes with Lava, an open-source software framework for developing neuro-inspired applications.

Radio Receiver Algorithm Prototype

Ericsson Research’s working prototype of a radio receiver algorithm was implemented in Lava for Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded.

During training, researchers rewarded the model based on accuracy and the amount of communication between neurons. As a result, the neural communication was reduced, or sparsified, by 75 to 99% depending on the difficulty of the radio environment and the amount of work needed by the AI to correct the environmental effects on the signal.
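
The "reward accuracy and the amount of communication" objective described above can be read as a task loss plus an activity penalty on spike traffic. A minimal sketch under my own assumptions (the actual Ericsson/Intel loss formulation has not been published in this level of detail):

```python
import numpy as np

def composite_loss(logits, targets, spike_counts, lam=1e-3):
    """Cross-entropy task loss plus a penalty on total neural communication.

    Penalising spike_counts (messages between neurons) during training is
    what drives the kind of 75-99% sparsification described above.
    lam and the exact penalty form are assumptions for illustration.
    """
    # softmax cross-entropy over a batch of examples
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(targets)), targets] + 1e-12).mean()
    # activity regulariser: average number of spikes emitted per example
    activity = spike_counts.mean()
    return ce + lam * activity

# Example: 4 samples, 3 classes, and per-sample spike counts from a forward pass.
logits = np.array([[2.0, 0.1, -1.0], [0.2, 1.5, 0.3],
                   [0.0, 0.0, 3.0], [1.0, 0.9, 0.8]])
targets = np.array([0, 1, 2, 0])
spike_counts = np.array([1200.0, 800.0, 450.0, 950.0])
print(composite_loss(logits, targets, spike_counts))
```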

Loihi 2 is built to leverage such sparse messaging and computation. With its asynchronous spike-based communication, neurons do not need to compute or communicate information when there is no change. Furthermore, Loihi 2 can compute with substantially less power due to its tight compute-memory integration. This reduces the energy and latency involved in moving data between the compute unit and the memory.
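
As a toy illustration of why sparse, event-based messaging saves work: in an event-driven update, only the weight columns of neurons that actually spiked are touched, so compute scales with the number of events rather than with layer size. This is a deliberately simplified sketch, not Loihi's actual microarchitecture:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 1024, 512
W = rng.standard_normal((n_post, n_pre))            # synaptic weights

# Dense, clock-driven update: touches every weight on every timestep.
x_dense = rng.standard_normal(n_pre)
dense_out = W @ x_dense                             # O(n_pre * n_post) work per step

# Event-driven update: only pre-neurons that spiked contribute this step.
spikes = np.flatnonzero(rng.random(n_pre) < 0.02)   # ~2% of neurons fire
event_out = W[:, spikes].sum(axis=1)                # work proportional to len(spikes)

print(f"{len(spikes)} events -> {len(spikes) / n_pre:.1%} of the dense work")
```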

Like the human brain’s biological neural circuits that can intelligently process, respond to, and learn from real-world data at microwatt power levels and millisecond response times, neuromorphic computing can unlock orders of magnitude gains in efficiency and performance.

Neuromorphic computing AI solutions could address the computational power needed for future intelligent telecom networks. Complex telecom computation results must be produced in tight deadlines down to the millisecond range. Instead of using GPUs that draw substantial amounts of power, neuromorphic computing can provide faster processing and improved energy efficiency.

Emerging Technologies and Telecommunications

Learn more about emerging technologies and telecommunications in this episode of InTechnology. Host Camille Morhardt interviews Mischa Dohler, VP of Emerging Technologies at Ericsson, about neuromorphic computing, quantum computing, and more.



While Ericsson being deeply engaged with Intel even in the area of neuromorphic research doesn’t preclude them from also knocking on BrainChip’s door, this new reveal reaffirms my hesitation about adding Ericsson to our list of companies above the waterline, given the lack of any official acknowledgment by either party to date.

So to sum it up: Ericsson collaborating with Intel in neuromorphic research is a verifiable fact, while an NDA with BrainChip is merely speculation so far.
Mmm, that person photographed next to Mischa looks like a young Pia. I wonder if they are related.
 

Frangipani

Regular
New LinkedIn post by Laurent Hili from ESA who is looking forward to attending the 2024 Hardware & Edge AI Summit in San Jose in September…


Note that the pictures were taken at the 2022 Hardware & Edge AI Summit.







 
  • Love
  • Like
  • Fire
Reactions: 21 users

BrainShit

Regular
BrainChip selected by U.S. Air Force Research Laboratory to develop AI-based radar

2024-04-19 17:52

BrainChip, the world's first commercial producer of neuromorphic artificial intelligence chips and IP, today announced that Information Systems Laboratories (ISL) is developing an AI-based radar research solution for the U.S. Air Force Research Laboratory (AFRL) based on its Akida™ neural networking processor.

ISL is a specialist in expert research and complex analysis, software and systems engineering, advanced hardware design and development, and high-quality manufacturing for a variety of clients worldwide.

Military and Space are always leaders of new technologies 😉


 
  • Like
  • Fire
  • Love
Reactions: 51 users
How's our 3-year lead going?
All this secrecy is wearing a little thin, especially when everyone else is discussing what everyone else is doing with whomever.
 
Last edited:
  • Like
  • Fire
  • Sad
Reactions: 14 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 21 users

Frangipani

Regular
BrainChip selected by U.S. Air Force Research Laboratory to develop AI-based radar

2024-04-19 17:52

BrainChip, the world's first commercial producer of neuromorphic artificial intelligence chips and IP, today announced that Information Systems Laboratories (ISL) is developing an AI-based radar research solution for the U.S. Air Force Research Laboratory (AFRL) based on its Akida™ neural networking processor.

ISL is a specialist in expert research and complex analysis, software and systems engineering, advanced hardware design and development, and high-quality manufacturing for a variety of clients worldwide.

Military and Space are always leaders of new technologies 😉



Hi BrainShit,

While it looks like hot-off-the-press news due to today's publication date on the website, this press release is actually more than two years old:


Regards
Frangipani
 
  • Like
  • Love
  • Fire
Reactions: 22 users

IloveLamp

Top 20
Hi BrainShit,

While it looks like hot-off-the-press news due to today's publication date on the website, this press release is actually more than two years old:


Regards
Frangipani
Sorry Frangi, but I have to say that has to be a record for your shortest post, doesn't it?!
 
  • Haha
  • Like
Reactions: 17 users

Frangipani

Regular
  • Haha
  • Love
  • Like
Reactions: 15 users

manny100

Regular
[Quoting Frangipani's earlier post in full on Ericsson Research, Intel's Hala Point and the Loihi 2 radio receiver prototype; see above.]
It would be grossly negligent for the Ericsson team to test only the Intel chip (which, according to BRN, is still a research chip) when there are others available.
I very much doubt they would not be aware of BRN. I would also be very surprised if Ericsson and other telcos have not had high level contact from BRN.
The word slipped that we were tied with Mercedes. After that you can bet that every Auto is testing AKIDA. No one wants to be caught short.
The use for AKIDA in Autos is obvious.
For the 'layman' it's a little harder to identify Telco company uses.
BRN has worked hard to set up an ecosystem, and together with its enormous industry exposure it's hard to imagine any decent research department of a big business being unaware of BRN. It would just be a matter of how, if at all, they see AKIDA improving their business.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Esq.111

Fascinatingly Intuitive.
BrainChip selected by U.S. Air Force Research Laboratory to develop AI-based radar

2024-04-19 17:52

BrainChip, the world's first commercial producer of neuromorphic artificial intelligence chips and IP, today announced that Information Systems Laboratories (ISL) is developing an AI-based radar research solution for the U.S. Air Force Research Laboratory (AFRL) based on its Akida™ neural networking processor.

ISL is a specialist in expert research and complex analysis, software and systems engineering, advanced hardware design and development, and high-quality manufacturing for a variety of clients worldwide.

Military and Space are always leaders of new technologies 😉


Evening BrainShit,

1. If one could find a more salubrious forum name, that would be great.
But yes, truly sense & feel one's torment.

2. Not 100% certain, but pretty sure this is a rehashed announcement from some time ago..... with the present-day date attached, purely to confuse.

Regards,
Esq.
 
  • Like
Reactions: 15 users

DK6161

Regular
Me every Friday after the market closes.
Sad Season 3 GIF by The Lonely Island

Oh well. Next week perhaps fellow chippers.
 
  • Haha
  • Like
Reactions: 15 users
[Quoting Diogenese's post above on Apple's in-house NN and the Xnor patents.]
Assuming Apple make the decision to pursue their own way and manage to cobble together something workable (which is highly likely, given their resources)...

What are the chances of them offering this tech to their competitors to use as well?...

The problems Apple had with their flagship iPhone 15 overheating last year go to show they are maybe not as "hot shit" as they think they are.

Maybe they are literally...

Happy to change my opinion, if they come to their senses 😛
 
Last edited:
  • Like
  • Fire
Reactions: 8 users

Diogenese

Top 20
Evening BrainShit,

1. If one could find a more salubrious forum name, that would be great.
But yes, truly sense & feel one's torment.

2. Not 100% certain, but pretty sure this is a rehashed announcement from some time ago..... with the present-day date attached, purely to confuse.

Regards,
Esq.
I wonder how long ISL had been playing with Akida before they got the Air Force radar SBIR?

This patent was filed in mid 2021, so 5 months before the announcement.

US11256988B1 Process and method for real-time sensor neuromorphic processing 20210719



[0003] The invention relates to the general field of deep learning neural networks, which has enjoyed a great deal of success in recent years. This is attributed to more advanced neural architectures that more closely resemble the human brain. Neural networks work with functionalities similar to the human brain. The invention includes both a training cycle and a live (online) operation. The training cycle includes five elements and comprises the build portion of the deep learning process. The training cycle requirements ensure adequate convergence and performance. The live (online) operation includes the live operation of a Spiking Neural Network (SNN) designed by the five steps of the training cycle. The invention is part of a new generation of neuromorphic computing architectures, including Integrated Circuits (IC). This new generation of neuromorphic computing architectures includes IC, deep learning and machine learning.



1. A method of providing real-time sensor neuromorphic processing, the method comprising the steps of:
providing a training cycle and a live operation cycle;
wherein the training cycle includes:
(1) the establishment of a build portion or training cycle of a deep learning process with AI;
wherein the build portion or training cycle begins with the process taking performance requirements in the form of generation of a scenario as inputs;
(2) selecting a sensor model application, with associated performance specifications set forth in the generation of a scenario;
(3) providing a Hi-Fi Radio-Frequency (RF) sensor model which is used to augment any real data for training;
(4) providing a computer model surrogate which is used instead of, or in addition to, a non-surrogate computer model;
(5) the sensor model application being one or more of radar, sonar or LIDAR;
(6) specifying an operating environment details wherein the Hi-Fi sensor model generates requisite training data and/or training environment;
(7) the Hi-Fi sensor model generates training data in a quantity to ensure convergence of DNN neuron weights, wherein as an input enters the node, the input gets multiplied by a weight value and the resulting output is either observed, or passed to the next layer in the neural network;
(8) raw sensor training data is preprocessed into a format suitable for presentation to a DNN from a DNN interface, the sensor model application training data is forwarded to the DNN;
(9) the training environment information is then output from the DNN to a DNN-to-SNN operation through a DNN-to-SNN conversion; and
(10) thereafter, the DNN is converted to an SNN and the SNN outputs the SNN information to an neuromorphic integrated circuit (IC), creating a neuromorphic sensor application;
(11) providing and utilizing a statistical method which ensures reliable performance of the neuromorphic integrated circuit, wherein reliability is 99%.
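
For readers unfamiliar with the DNN-to-SNN conversion step in the claim, here is a minimal rate-coding sketch of the general technique (a generic textbook-style conversion, not ISL's specific process): the ReLU activations of a trained network are mapped onto the firing rates of integrate-and-fire neurons, so the converted SNN approximates the DNN's non-negative outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "trained" ReLU network standing in for the converged DNN of the claim.
W1 = rng.standard_normal((16, 8)) * 0.3
W2 = rng.standard_normal((4, 16)) * 0.3
x = rng.random(8)                                   # one preprocessed sensor sample

relu = lambda v: np.maximum(v, 0.0)
dnn_out = W2 @ relu(W1 @ x)                         # reference DNN output

# Rate-coded SNN: integrate-and-fire neurons whose firing rates track activations.
T, thr = 500, 1.0                                   # timesteps, firing threshold
v1, v2 = np.zeros(16), np.zeros(4)
out_spikes = np.zeros(4)
for _ in range(T):
    v1 += W1 @ x                                    # constant input current
    s1 = (v1 >= thr).astype(float)                  # spike where threshold is crossed
    v1 -= s1 * thr                                  # reset by subtraction
    v2 += W2 @ s1
    s2 = (v2 >= thr).astype(float)
    v2 -= s2 * thr
    out_spikes += s2

# Firing rates approximate only the non-negative part of the DNN output and
# saturate near 1 spike/step; practical conversions add weight/threshold
# normalisation to tighten the match.
print(relu(dnn_out))
print(out_spikes / T)
```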
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 39 users