BRN Discussion Ongoing

Damo4

Regular
Mmmh, nice article about event-based cameras and their potential use cases, but what to think of those last two paragraphs? 🤔


Human Vision Inspires a New Generation of Cameras—And More​

October 11, 2023 Pat Brans
Thanks to a few lessons in biology, researchers have developed new sensor technology that opens up a world of new opportunities—including high-speed cameras that operate at low data rates.

In the broadest sense, the term neuromorphic applies to any computing system that borrows engineering concepts from biology. One set of notions that is particularly interesting for the development of electronic sensors is the spiking nature of neurons. Rather than fire right away, neurons build potential each time they receive a certain stimulus, firing only when a threshold is passed. The neurons are also leaky, losing membrane potential, which produces a filtering effect: If nothing new happens, the level goes down over time. “These behaviors can be emulated by electronics,” said Ilja Ocket, program manager for Neuromorphic Computing at imec. “And this is the basis for a new generation of sensors.”
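For concreteness, here is a minimal Python sketch of the leaky integrate-and-fire behavior described above. The threshold, leak, and gain values are illustrative only, not taken from imec's designs.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameter values are illustrative, not from any real device.

def lif_run(inputs, threshold=1.0, leak=0.9, gain=0.4):
    """Feed a sequence of stimulus values to one LIF neuron.

    Returns the time steps at which the neuron fired.
    """
    potential = 0.0
    spikes = []
    for t, stimulus in enumerate(inputs):
        potential = potential * leak + gain * stimulus  # leak, then integrate
        if potential >= threshold:                      # threshold crossing
            spikes.append(t)
            potential = 0.0                             # reset after firing
    return spikes

# A burst of stimuli drives the neuron over threshold; isolated
# stimuli leak away without ever producing a spike.
print(lif_run([1, 1, 1, 0, 0, 0, 1, 0, 0, 1]))  # -> [2]
```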

Ilja Ocket, program manager for Neuromorphic Computing at imec


The best illustration of how these ideas improve sensors is the event-based camera, also called the retinomorphic camera. Rather than accumulate photons in capacitive buckets and propagate them as images to a back-end system, these cameras treat each pixel autonomously. Each pixel can decide whether enough change has occurred in photon streams to convey that information downstream in the form of an event.
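As a hedged illustration of that per-pixel decision, the sketch below uses the log-intensity contrast-threshold model common in the event-camera literature. The threshold value is made up for the example; this is not Prophesee's actual pixel circuit.

```python
import math

# Toy model of a single event-camera pixel: an event is emitted only
# when log intensity has changed by more than a contrast threshold
# since the last event. Threshold value is illustrative.

CONTRAST_THRESHOLD = 0.15

def pixel_events(intensities):
    """Yield (time, polarity) events for one pixel's intensity samples."""
    ref = math.log(intensities[0])              # reference level at start
    for t, value in enumerate(intensities[1:], start=1):
        delta = math.log(value) - ref
        if abs(delta) > CONTRAST_THRESHOLD:
            yield (t, +1 if delta > 0 else -1)  # ON / OFF event
            ref = math.log(value)               # reset reference level

# A nearly constant pixel stays silent; only the step at t=3 fires.
print(list(pixel_events([100, 101, 100, 140, 141, 140])))  # -> [(3, 1)]
```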

“Imec gets involved when sensors do not produce classical arrays or tensors or matrices, but rather events,” Ilja Ocket said. “We figure out how to adapt the AI to absorb event-based data and perform the necessary processing. Our spiking neural networks do not work with regular data. Instead, they take input from a time encoded stream.”

“One of the important benefits of these techniques is the reduced energy consumption—completely changing the game,” Ocket said. “We do a lot of work on AI and application development in areas where this benefit is the greatest— including robotics, smart sensors, wearables, AR/VR and automotive.”

One of the companies imec has been working with is Prophesee, a nine-year-old business based in Paris. Its 120 employees in France, China, Japan and the U.S. design vision sensors and develop software to overcome some of the challenges that plague traditional cameras.

Event-based vision sensors

“Our sensor is fundamentally different from a conventional image sensor,” said Luca Verre, CEO of Prophesee. “It produces events in response to changes in the scene, as opposed to a full frame at a fixed point in time. A regular camera captures images one after the other at a fixed point in time, maybe 20 frames per second.”

Luca Verre, CEO of Prophesee

This method, which is as old as cinematography, works fine if you just want to display an image or make a movie. But it has three major shortcomings for more modern use cases, especially when AI is involved. The first is that, because entire frames are captured and propagated even when there is very little change to most of the scene, a lot of redundant data is sent for processing.

The second problem is that movement between frames is missed. Since snapshots are taken at regular intervals several times a second, anything that happens between data capture events doesn’t get picked up.

The third problem is that traditional cameras have a fixed exposure time, which means each pixel could have a compromised acquisition depending on the lighting conditions. If there are bright and dark areas in the same scene, you may end up with some pixels overexposed and others underexposed at the same time.

“Our approach, which is inspired by the human eye, is to have the acquisition driven by the scene, rather than having a sensor that acquires frames regardless of what’s changing,” Verre said. “Our pixels are independent and asynchronous, making for a very fast and efficient system. This suppresses data redundancy at the sensor level, and it captures movement, regardless of when it occurs—with microsecond precision.”

“While this is not time continuous, it is a very high time granularity for any natural phenomenon,” Verre said. “Most of the applications we target don’t need such high time precision. We don’t capture unnecessary data and we don’t miss information—two features that make a neuromorphic camera a high-speed camera, but at a low data rate.”

“Because the pixels are independent, we don’t have the problem of fixed exposure time,” Verre added. “Pixels that look at the dark part of the scene are independent from the ones looking at bright parts of the scene, so we have a very wide dynamic range.”

Because less redundant data is transmitted to AI systems, less processing is needed, and less power is consumed. It becomes much easier to implement edge AI, putting inference closer to the sensor.

The IMX 636 event-based camera module, developed with Sony, is a fourth-generation product. Last year, Prophesee released the EVK4 evaluation kit for the IMX 636; it targets industrial vision with a rugged housing, but it will work for all applications. (Source: Prophesee)

Audio sensors and beyond neuromorphic

“Automotive is an important market for companies like Prophesee, but it’s a long play,” Ocket said. “If you want to develop a product for autonomous cars, you’ll need to think seven to 10 years ahead. And you’ll need the patience and deep pockets to sustain your company until the market really takes off.”

In the meantime, event-based cameras are meeting the needs of several other markets. These include industrial use cases that require ultra-high-speed counting, particle size monitoring and vibration monitoring for predictive maintenance. Other applications include eye tracking, visual odometry and gesture detection for AR and VR. And in China, there is a growing market for small cameras in toy animals. The cameras need to operate at low power—and the most important thing for them to detect is movement. Neuromorphic cameras meet this need, operating on very little power, and fitting nicely into toys.

Neuromorphic principles can also be applied to audio sensors. Like the retina, the cochlea does not sample spectrograms at fixed intervals. It just conveys changes in sensory input. So far, there are not many examples of neuromorphic audio sensors, but that’s likely to change soon since audio-based AI is now in high demand. Neuromorphic principles can also be applied to sensors with no biological counterpart, like radar or LiDAR.

But researchers are increasingly convinced that making a silicon version of the biological structures is not the best idea. The biggest impact may lie beyond neuromorphic, making the best use of both biology and electronics.

“If you strip it down to its computational behavior, you could improve biology,” Ocket said. “Instead of emulating spiking neurons with thresholds, you can just apply time-based computational behavior on very simple timing circuits—technology from the 1950s and 1960s. If you hook them together and find a way to train them, you can go much lower in power consumption than if you simply emulate spiking neurons in electronic form.”
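Ocket doesn't spell out what those timing circuits compute, but one textbook family of time-based primitives is race logic, where a value is encoded as the arrival time of a signal edge and ordinary gates operate on delays. The toy below is a speculative sketch of that general idea, not a description of imec's circuits.

```python
# Speculative sketch of time-based computation (race logic): a value is
# the arrival time of an edge. An OR gate fires on the first edge, so it
# computes min(); an AND gate fires once all edges arrive, so it
# computes max(); a delay line adds a constant. Illustrative only.

def or_gate(*arrival_times):
    return min(arrival_times)    # first edge wins

def and_gate(*arrival_times):
    return max(arrival_times)    # waits for the last edge

def delay_line(arrival_time, delay):
    return arrival_time + delay  # fixed extra delay

# min / max / add suffice for e.g. shortest-path style computations
# without any multiply-accumulate hardware.
print(or_gate(3, 7, 5))               # -> 3
print(and_gate(3, 7, 5))              # -> 7
print(or_gate(delay_line(3, 2), 7))   # -> 5
```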

Seems to me that he is just pointing out there's an under-developed, under-researched idea that could potentially use less power.
It also doesn't sound like he knows how it would work either, as he questions whether or not it can be trained.

I think he was just pointing out that we may not need to naively replicate the brain to the best of our ability, and could instead look into a hybrid system.
Almost as if to say: stop assuming the brain is the best learning computer, there might be something better.

Either way, I don't think it matters; his work at imec is neuromorphic-focused, so I doubt he would act against his own interests or imply NM is a waste of effort.

Great post though @Frangipani, it's a great summary of how the technology is being adopted!
 
Last edited:
  • Like
  • Fire
Reactions: 8 users

Tothemoon24

Top 20

SiFive Announces Differentiated Solutions for Generative AI and ML Applications Leading RISC-V into a New Era of High-Performance Innovation​

SiFive’s Performance P870 and Intelligence X390 product debut sets new bar for high-performance compute in consumer, infrastructure, and automotive applications
Santa Clara, Calif., Oct. 11, 2023 – SiFive, Inc., the pioneer and leader of RISC-V computing, today announced two new products designed to address new requirements for high-performance compute. The SiFive Performance™ P870 and SiFive Intelligence™ X390 offer a new level of low power, compute density, and vector compute capability, and when combined provide the necessary performance boost for increasingly data-intensive compute. Together, the new products create a powerful mix of scalar and vector computing to meet the needs of today’s dataflow- and computation-intensive AI applications across consumer, automotive, and infrastructure markets.
The announcement took place at an in-person press and analyst event in Santa Clara today, where the company also provided an update on several of its product lines currently shipping in silicon to customers around the world. Company executives offered insight into SiFive’s product roadmap and discussed how the overall RISC-V ecosystem continues to expand rapidly as new applications call for the benefits of RISC-V-based high-performance compute solutions.
“SiFive is leading the industry into a new era of high-performance RISC-V innovation, and closing the gap with other instruction set architectures with our unparalleled portfolio, while recent silicon tape-outs are demonstrating the tremendous benefits of SiFive RISC-V solutions,” said Patrick Little, SiFive Chairman, President and CEO. “As the Arm IPO showed, there is a fast-growing demand for semiconductors across many sectors, particularly processors for consumer and infrastructure markets. The flexibility of SiFive’s RISC-V solutions allows companies to address the unique computing requirements of these segments and capitalize on the momentum around generative AI, where we have seen double-digit design wins, and for other cutting-edge applications.”
The SiFive Performance P870
Ideal for high performance consumer applications, or when used in conjunction with a vector processor in the datacenter, the P870 core sets an impressive new RISC-V performance bar across instruction set architecture availability, throughput, parallelism, and memory bandwidth. Bringing a 50% peak single-thread performance upgrade (specINT2k6) over the previous generation of SiFive Performance processors, the P870 is a six-wide out-of-order core that meets RVA23 and offers a shared cluster cache enabling up to a 32-core cluster. High execution throughput comes with more instructions per cycle, more ALUs, and more branch units. The P870 is fully compatible with Google’s platform requirements for Android on RISC-V. The P870 also offers additional proven SiFive features:
· x 128b VLEN RVV
· Vector crypto and hypervisor extensions
· IOMMU and AIA
· Non-inclusive L3 cache
· Proven RISC-V WorldGuard security
The SiFive Intelligence X390
Building on the highly popular SiFive Intelligence X280’s success in coupling AI/ML applications with hardware accelerators in mobile, infrastructure, and automotive applications, the new X390 brings a 4x improvement to vector computation with its single-core configuration, doubled vector length, and dual vector ALUs. This allows quadruple the sustained data bandwidth. With the SiFive Vector Coprocessor Interface eXtension (VCIX), companies can easily add their own vector instructions and/or acceleration hardware, bringing unprecedented flexibility and allowing users to greatly increase performance with custom instructions. Features include:
· 1024-bit VLEN, 512-bit DLEN
· Single / Dual Vector ALU
· VCIX (2048-bit out, 1024-bit in)
An Agile Hardware Solution for Generative AI applications
Bringing the P870 high-performance general compute SoC together with a high-performance NPU cluster, consisting of the X390 and customer AI hardware engines, offers product designers a highly flexible, low-power, and programmable solution with superior compute density for complex workloads.
The company highlighted how interest in these combined SiFive solutions is high, with a number of customers achieving silicon success and in various stages of commercialization using high performance products.
SiFive continues to actively work across the ecosystem (see attached quote sheet) with partners who are ensuring the software, security, and flexibility benefits of the open standard ecosystem are in place for SiFive processors as companies move to commercialize their SiFive-powered products.
Supporting quotations from industry partners:
SiFive has assembled an array of ecosystem partners to help customers speed their time to commercialization.
"We have collaborated with SiFive to deliver Cadence AI-driven digital full flow Rapid Adoption Kits (RAKs) for previous generation SiFive Performance™ and Intelligence™ RISC-V processors and are looking forward to producing them for the upcoming P870 and X390 processors" said KT Moore, vice president of Corporate Marketing, Cadence. "The RAKs utilize our leading Generative AI solutions that optimize power, performance and area while our system verification solutions enable optimal verification throughput and productivity. This empowers SiFive customers to accelerate time-to-market, enhance product quality, and deliver innovative solutions for high-performance computing, AI, automotive, and mobile applications."
“Canonical’s strategic alliance with SiFive, a RISC-V CPU IP leader, grants us exclusive privileges, including early access to their cutting-edge processors under development. Canonical has ported Ubuntu to SiFive development systems in the past and is working to have Ubuntu ready at launch with the SiFive HiFive Pro P550 and future platforms,” said Cindy Goldberg, Vice President, Silicon Alliances at Canonical. “We see a growing demand for SiFive RISC-V processors and recognize the opportunity across consumer, automotive and infrastructure markets. Ubuntu is the operating system of choice for infrastructure and cloud use cases. This year with the introduction of Ubuntu Pro we have enhanced security, compliance and support coverage across a broad portfolio of open source software and platform architectures. The combination of SiFive’s RISC-V IP and Canonical’s software is a combination that will lead the transformative future in computing, on RISC-V.”
“As an early RISC-V adopter and industry leader for delivering production-proven, safety-certified development tools, C/C++ compilers and operating systems for RISC-V, Green Hills Software is excited to be expanding its close working relationship with SiFive by adding optimized support for the P870 and X390,” said Dan Mender, VP of Business Development at Green Hills Software. “Together, Green Hills and SiFive will help companies realize the maximum performance, power, and area benefit possible for these new SiFive offerings.”
“IAR welcomes the new SiFive Performance P870 and Intelligence X390 RISC-V processors and recognizes their opportunity for generative AI and ML as well as high-performance computing applications addressing consumer, automotive, and infrastructure. IAR and SiFive have a strong partnership and stand out in the RISC-V ecosystem. SiFive gives IAR early access to its leading commercial RISC-V IP processors while they are under development, enabling co-optimizations benefiting mutual customers. IAR’s complete development solution for all the leading RISC-V core IP from SiFive helps embedded software developers around the world maximize the energy efficiency, simplicity, security, and flexibility upsides that RISC-V and SiFive offer, like the latest additions for Generative AI/ML applications.”
“As the world leader in debugging and trace tools used by all major and well-known technology companies, Lauterbach has been committed to supporting the RISC-V ecosystem from the beginning and is a close long-term partner of SiFive, a leading provider of RISC-V CPU IP. Currently, we see strong and growing global demand for RISC-V based processors including generative AI and ML applications as well as high performance compute across consumer, automotive, and infrastructure markets, all markets in which we have been successfully active for many years. Our early access to SiFive's processors under development allows both SiFive and Lauterbach to co-optimize their products for an optimal user experience.” Norbert Weiss, Managing Director, Lauterbach GmbH
"SiFive has been instrumental in bringing the RISC-V architecture to Automotive Grade Linux and providing additional hardware options for automakers and suppliers, many of whom are already using the open source AGL platform in production," said Dan Cauchy, Executive Director of Automotive Grade Linux (AGL), an open source project at The Linux Foundation. "SiFive is an active AGL member, and we look forward to their continued collaboration with the broader community."
“The growth of AI and machine learning systems is driving significant compute demands in application-specific processors. Our collaboration with SiFive to provide co-optimized solutions including Synopsys.ai™ full-stack AI-driven EDA suite and Fusion QuickStart Implementation Kits, along with Synopsys Interface and Foundation IP, hardware-assisted verification, and virtual prototyping solutions help mutual customers accelerate the design of high-performance, RISC-V-based SoCs.” Kiran Vittal, Senior Director of Partner Alliances Marketing for the EDA Group, Synopsys.
 
  • Like
  • Fire
  • Thinking
Reactions: 22 users

TECH

Regular
Good morning,

Within the next 14 business days Brainchip will be required to deliver its 2nd quarter results and short-term guidance. This upcoming 4C will determine our share price movement in the immediate short term. My gut feeling, like that of a number of posters, is once again very little income, as nothing has been announced that would suggest otherwise, apart from "watch the financials", which I still personally think is some way off yet.

All the news that's come out of the company has been positive, there's absolutely no denying that fact, and for a small start-up we are 100% making great progress establishing our company's name and superb technology. The learning continues for all.

Our accountant has a very sharp pencil at the ready to write down the 7-digit figures we are all awaiting, so what I'm trying to say in a roundabout way is: try to contain your excitement when the share price does move north, it can't be sustained if the back of house haven't resharpened their pencil because of lack of use. :rolleyes:

Have a good day...Texta ;)
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Diogenese

Top 20

Hi Frangipani,

So Prophesee are still working with Synsense?
And in China, there is a growing market for small cameras in toy animals. The cameras need to operate at low power—and the most important thing for them to detect is movement. Neuromorphic cameras meet this need, operating on very little power, and fitting nicely into toys.

I wonder how far along the automotive path Prophesee has gone:
“Automotive is an important market for companies like Prophesee, but it’s a long play,” Ocket said. “If you want to develop a product for autonomous cars, you’ll need to think seven to 10 years ahead. And you’ll need the patience and deep pockets to sustain your company until the market really takes off.”
...
and how does it relate to: "One of the companies imec has been working with is Prophesee, a nine-year-old business based in Paris."

Now, turning to the last 2 paragraphs, and in particular to:
"Instead of emulating spiking neurons with thresholds, you can just apply time-based computational behavior on very simple timing circuits—technology from the 1950s and 1960s." ... you could go back to 1927 and Edgar Douglas Adrian's research demonstrating that the leading spike carries the information and that the subsequent spike sequence carries redundant information in the spike rate, a fact which was identified by Simon Thorpe's group (Spikenet) in developing N-of-M coding, subsequently licensed to Brainchip by Spikenet after PvdM recognized the potential of the technique to greatly improve the performance of digital spiking neural networks. Brianchip acquired Spikenet in 2016.
 
  • Like
  • Love
  • Fire
Reactions: 36 users
Good morning,

Within the next 14 business days Brainchip will be required to deliver its 2nd quarter results and short-term guidance. This upcoming 4C will determine our share price movement in the immediate short term. My gut feeling, like that of a number of posters, is once again very little income, as nothing has been announced that would suggest otherwise, apart from "watch the financials", which I still personally think is some way off yet.

All the news that's come out of the company has been positive, there's absolutely no denying that fact, and for a small start-up we are 100% making great progress establishing our company's name and superb technology. The learning continues for all.

Our accountant has a very sharp pencil at the ready to write down the 7-digit figures we are all awaiting, so what I'm trying to say in a roundabout way is: try to contain your excitement when the share price does move north, it can't be sustained if the back of house haven't resharpened their pencil because of lack of use. :rolleyes:

Have a good day...Texta ;)
Are you paid to lift everybody's spirits all the time? The company needs to deliver, simple as that, or the share price will be raped again.
 

Damo4

Regular


Here's a Real Nut Job: Getting Stubborn Macadamias Out of Trees - WSJ
 
  • Haha
  • Like
  • Fire
Reactions: 17 users

Vladsblood

Regular
Just thinking 💭 The real protection against our Tech being used/overtaken is our Patents.
This Patent protection stops others from being competitive against Akida… Maybe this is why they aren't getting worried about the SP.
Hoping all our Patents are swiftly approved asap.
Vlad
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Gies

Regular
Just thinking 💭 The real protection against our Tech being used/overtaken is our Patents.
This Patent protection stops others from being competitive against Akida… Maybe this is why they aren't getting worried about the SP.
Hoping all our Patents are swiftly approved asap.
Vlad
Hi Vladsblood
Most of the patents are in Peter's personal name and not the company's. So whatever shares they buy, they can't own the patents.
I have endless trust and belief in Peter that he won't sell his dream for any price
 
  • Like
  • Fire
  • Thinking
Reactions: 27 users
Just thinking 💭 The real protection against our Tech being used/overtaken is our Patents.
This Patent protection stops others from being competitive against Akida… Maybe this is why they aren't getting worried about the SP.
Hoping all our Patents are swiftly approved asap.
Vlad
You may be onto something there, Vladsblood.

I can't remember the exact words, but I do remember PVDM in one of his interviews saying something along the lines that having these patents means absolutely everything to the survival and success of the business and its technology.

Definitely looking forward to the approvals of the other pending patents, great signs 😊
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Vladsblood

Regular
Hi Vladsblood
Most of the patents are in Peter's personal name and not the company's. So whatever shares they buy, they can't own the patents.
I have endless trust and belief in Peter that he won't sell his dream for any price
I'm also very thankful for that too, keeping you and me safe. Vlad
 
  • Like
  • Fire
Reactions: 4 users

Vladsblood

Regular
You may be onto something there, Vladsblood.

I can't remember the exact words, but I do remember PVDM in one of his interviews saying something along the lines that having these patents means absolutely everything to the survival and success of the business and its technology.

Definitely looking forward to the approvals of the other pending patents, great signs 😊
Fully agree and am with you, SharesForBrekky, on this safety net of ours. Vlad
 
  • Like
  • Fire
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
Equanimous,

Shaken but not stirred.

Esq
 
  • Haha
  • Like
Reactions: 14 users

buena suerte :-)

BOB Bank of Brainchip
You may be onto something there, Vladsblood.

I can't remember the exact words, but I do remember PVDM in one of his interviews saying something along the lines that having these patents means absolutely everything to the survival and success of the business and its technology.

Definitely looking forward to the approvals of the other pending patents, great signs 😊
Yes indeed... they (the patents) contain "the secret sauce". I think/recall also that our little Akida can be copied to a certain level, but to get past that, Peter and Anil have a 'secret encryption' (code) that only they know... so it's just about impossible to emulate our ground-breaking tech!!! :)

Also, any patent submitted by other companies that comes anywhere near our tech/formula will simply get rejected!

IMO... we are still 3-5 years ahead of any competition.

That is my understanding of it anyway :)
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Vladsblood

Regular
For a long time I couldn't get my head around them not worrying about the SP… Lightbulb moment: it's gotta be the patent safety net!!!
Thank dear God for our Founder PVDM.
Vlad
 
  • Like
  • Love
Reactions: 12 users

Diogenese

Top 20
Hi Gies,

That's not correct.

Peter is named as the inventor on most of the patents, but the company is the assignee.
 
  • Like
  • Love
  • Fire
Reactions: 46 users

HopalongPetrovski

I'm Spartacus!
You may be onto something there, Vladsblood.

I can't remember the exact words, but I do remember PVDM in one of his interviews saying something along the lines that having these patents means absolutely everything to the survival and success of the business and its technology.

Definitely looking forward to the approvals of the other pending patents, great signs 😊
As I recall, Peter had an unpleasant learning-curve experience in an earlier part of his career relating to the beneficial ownership of some of his ideas.
Which is part of why he eschewed the well-worn path of working for another established concern, or the Venture (Vulture) capital-raising private company route, and instead opted for the public ownership company structure which grew into the Brainchip we know today.
Much clumsier, but he gets to retain control and (I think) spread the potential benefit wider than would otherwise be possible.
Personally controlling the patents is a major part of that.
 
  • Like
  • Love
  • Fire
Reactions: 18 users
From Dell Tech presso this month.

Wonder if they're just watching the general horizon or talking about their own horizon :unsure:

HERE




We also know from a previous post that Dell has been aware of us since at least early 2022.

Was from one of the Directors at Dell.


  • From a hardware perspective, the accelerators of Domain Specific Architectures (DSA) [12] enable the third-wave AI algorithms to operate in a hybrid ecosystem consisting of Edge, Core, and Cloud, and to run anywhere in the system. Specifically, accelerators for specific domain architectures include the following examples: Nvidia's GPU, Xilinx's FPGA, Google's TPU, and artificial intelligence acceleration chips such as BrainChip's Akida Neural Processor, GraphCore's Intelligent Processing Unit (IPU), Cambricon's Machine Learning Unit (MLU) and more. These types of domain-specific architecture accelerators will be integrated into more information devices, architectures, and ecosystems by requiring less training data and being able to operate at lower power when needed. In response to this trend, the area where we need to focus on development is to develop a unified heterogeneous architecture approach that enables information systems to easily integrate and configure various different types of domain-specific architecture hardware accelerators. For Dell Technologies, we can leverage Dell's vast global supply chain and sales network to attract domain-specific architecture accelerator suppliers to adhere to the standard interfaces defined by Dell to achieve a unified heterogeneous architecture.
 
  • Like
  • Fire
  • Love
Reactions: 52 users

Gies

Regular
Hi Gies,

That's not correct.

Peter is named as the inventor on most of the patents, but the company is the assignee.
Oké, I was under the impression that some of the patents were in his personal name. I could be wrong.
 
  • Like
Reactions: 3 users
At least this author takes the time to contact someone like Nandan to discuss our solution, unlike some out there.

Under low power edge processing.


OCTOBER 1, 2023 | INTERNET OF THINGS | SENSORS/DATA ACQUISITION | CONNECTIVITY

Wireless Sensor Networking for the IIoT

Figure 1. Analyzing maintenance data to forecast machine maintenance. (Image: Kristian/Adobe Stock)

Factories of all sizes are incorporating automation at ever-increasing rates. Among the reasons for that is reshoring: the idea that automating factories is a way of lowering labor costs for U.S. manufacturing so that domestic manufacturing becomes more cost-effective compared to offshoring. Offshoring can take advantage of the much lower labor rates in many countries, but you have to add in the costs of more complex management and logistics, as well as the costs of shipping.
And then there’s the U.S. labor shortage, the difficulty of finding enough people who are willing to work in factories, while at the same time, a significant portion of the existing workforce is aging toward retirement.
And of course, the increasing productivity gains due to automation are good for the bottom line. Automation not only reduces the costs of production in the long run, but it also helps maintain reliable high-quality results and consistently predictable time frames.
The downside of automating an existing factory is the initial investment, not just in dollars, but also in the necessary downtime for making such basic changes. However, most factories already have some automated processes using PLCs and other industrial controllers, generally running independently of each other, so that’s a head start. The next step is to integrate all of that into a single network — the Industrial Internet of Things (IIoT).
Ideally the factory network should also connect with the office network, to enable management to make more informed decisions. And it should enable connecting to the cloud for complicated analytics and large data storage, as well as to the internet for connectivity beyond the factory.

WHY WIRELESS?​

To begin with, installing a wireless network is much less expensive. The costs in labor, materials, and downtime, for wiring a factory are far greater than for setting up a wireless system. And once in place, a wireless network is much more flexible. As processes change or new equipment is added, it is relatively simple to add or reprogram sensor nodes. Also, wireless sensors can be installed in locations that would be difficult to reach with cabling, for example on rotating machinery.

DESIGNING THE SYSTEM​

One of the main challenges with setting up wireless sensors is powering them. Even if you use some sort of power harvesting, you still need power storage, usually with batteries. If the batteries have to be changed often, wireless is a non-starter, so keeping power low is of primary importance.
One strategy for keeping sensor power low is to reduce the amount of data transmitted from each sensor because streaming data uses a relatively large amount of power. The trouble is that once an IIoT network has been installed, the maximum benefit comes from obtaining as much data as possible and sending it to a local server or to a cloud data center for analysis. But in general, only a small percentage of the data is relevant. External analytic data-crunching can sort the wheat from the chaff, but the size of the data stream and the amount of computing can overwhelm systems. A solution is to do preprocessing at the “edge” — right at the sensor — to determine what data is significant and only send that.
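As a minimal sketch of that edge-filtering idea, the code below implements a send-on-delta filter: a reading is forwarded only when it differs from the last transmitted value by more than a deadband. The deadband value and the transmit callback are placeholders, not any vendor's API.

```python
# Send-on-delta edge filter sketch: transmit a reading only when it
# moves more than DEADBAND away from the last value actually sent.
# Deadband and transport are placeholders.

DEADBAND = 0.5  # illustrative units

def make_edge_filter(transmit):
    """Wrap a transmit function so only significant changes are sent."""
    last_sent = None

    def on_sample(value):
        nonlocal last_sent
        if last_sent is None or abs(value - last_sent) > DEADBAND:
            last_sent = value
            transmit(value)   # e.g. a radio uplink; a plain list here

    return on_sample

sent = []
sensor = make_edge_filter(sent.append)
for reading in [20.0, 20.1, 20.2, 21.0, 21.1, 25.0]:
    sensor(reading)
print(sent)  # -> [20.0, 21.0, 25.0]: six samples in, three transmitted
```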

LOW POWER EDGE PROCESSING​

But for edge processing to provide a net improvement in power reduction, the processing itself has to be done at low power. On that subject, I had a discussion with Nandan Nayampally, Chief Marketing Officer of BrainChip, makers of the Akida IP platform, a neural processor designed to provide ultra-low-power edge AI sensor network preprocessing. It starts with fully digital neuromorphic event-based AI and can learn on the device to trigger outputs only when there is significant information. It has its own memory that it uses to analyze the data, thus avoiding the energy-intensive transfer of data back and forth to utilize remote data storage. It also significantly reduces the amount of bandwidth required for the processing.
Figure 2. (Image: BrainChip)
According to Nayampally, the series is focused on three general configurations (See Figure 2). The Akida-E is the most basic of the solutions, dealing with sensor inputs like vibration detection, anomaly detection, keyword spotting, and sensor fusion. Akida-S is more mid-range. It can do microcontroller (MCU)-level machine learning for more complex tasks such as presence detection, object classification, and biometric recognition. Finally, on the right-hand side of the figure, the Akida-P can perform higher-level tasks using a microprocessor (MPU) for tasks like advanced object detection or sequence prediction.

WI-FI NETWORKING​

Figure 3. InnoPhase IoT Talaria TWO™ Low Power Wi-Fi plus BLE5.0 Module with associated ML and MCU. (Image: InnoPhase IoT)
While the Brainchip solutions save power by doing advanced AI processing at the edge, InnoPhase IoT, Inc. focuses on the network architecture. Their Talaria Two Wi-Fi and BLE System on Chip (SoC) and module enables sensors to be networked via Wi-Fi. According to Deepal Mehta, InnoPhase IoT Senior Director of Business Development, since Wi-Fi enabled IoT end points use TCP/IP connectivity, they can communicate directly to the cloud without any need for intervening gateways. The chip also includes a Bluetooth Low Energy (BLE) gateway that can be used in two different ways. It enables legacy BLE connected devices to connect to the Wi-Fi network and facilitates provisioning the Wi-Fi enabled end points using an app on a cell phone.
In addition to saving power by eliminating gateways, they use a low-power radio to transmit the Wi-Fi signal. Key to the low-power radio is their method of digitally encoding and decoding the RF waveform. This method, which they call PolaRFusion™, is distinguished from other techniques that use higher power-consuming analog processing.

USE CASE​

Figure 4. Typical sensor application. (Image: InnoPhase IoT)
A typical simple use case is temperature sensing. If a sensor mounted on a machine shows a trending rise in temperature, that could be an indication that there is a problem. However, instead of sending all the temperature data all the time, a local neural processor like BrainChip’s Akida can actively learn the standard temperature envelope for a particular installation and set an alarm level based on that. Then only the alarm needs to be transmitted, possibly via InnoPhase Wi-Fi, to the local server or to the cloud. Alternatively, once the alarm level has been reached, the continuous stream of temperature data can then be transmitted.
Consider now a series of temperature sensors mounted on different assets. Each of those pieces of equipment might have different standard operating temperatures. And even identical assets located in different places in the building may have variations based on draft air, sunlight, or other factors. So being able to use AI analytics separately at each sensor, setting different alarm levels, will be extremely efficient. The value added to the manufacturing process will be multiplied if you use the same approach for other data such as vibration, pressure, levels, and flows. And even more if, in addition to all the sensor data, you can track products and processes using cameras and then analyze that information using the video analytics that can be performed with the Akida platform.
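A hedged sketch of that per-asset alarm idea, assuming a plain running mean and variance (Welford's algorithm) stands in for the on-device learning described above. The 4-sigma band and warm-up length are invented for the example; this is not BrainChip's algorithm, just the same end effect.

```python
# Each sensor learns its own "normal" envelope from a running mean and
# variance (Welford's algorithm) and raises an alarm only when a
# reading leaves that envelope. Constants are illustrative.

class EnvelopeAlarm:
    def __init__(self, sigmas=4.0, warmup=50):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0            # running sum of squared deviations
        self.sigmas = sigmas
        self.warmup = warmup

    def update(self, x):
        """Return True (alarm) if x falls outside the learned envelope."""
        if self.n >= self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if abs(x - self.mean) > self.sigmas * std:
                return True      # transmit the alarm, skip the update
        self.n += 1              # otherwise fold x into the statistics
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return False

# 100 readings oscillating around this asset's own normal of 60.0,
# then one genuine excursion to 75.0, which is the only alarm raised.
readings = [60.0, 60.2, 59.8, 60.1, 59.9] * 20 + [75.0]
alarm = EnvelopeAlarm()
print([i for i, x in enumerate(readings) if alarm.update(x)])  # -> [100]
```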

THE BOTTOM LINE​

The industrial internet of things — sharing, collecting, and analyzing information across a complete manufacturing enterprise — can significantly enhance the bottom line. Not only in monetary terms but also in the quality and reliability of the products and the ability to deliver them on time.
This article was written by Ed Brown, editor of Sensor Technology. For more information, go to www.brainchip.com and www.innophaseiot.com .
 
  • Like
  • Love
  • Fire
Reactions: 27 users