BRN Discussion Ongoing


"Trikarenos aims to address these challenges with a 32-bit RISC-V CPU architecture and "advanced" 28nm manufacturing technology provided by TSMC."

"In comparison to RAD750, which consumes five watts of power, Trikarenos can achieve the same computing results with just 15.7 milliwatts, making it advantageous for energy-constrained missions like CubeSat satellite launches."

"In the coming years, NASA intends to replace RAD750 with a high-performance RISC-V CPU designed by SiFive."
 
  • Like
  • Thinking
Reactions: 17 users

Damo4

Regular
Mmmh, nice article about event-based cameras and their potential use cases, but what to think of those last two paragraphs? 🤔


Human Vision Inspires a New Generation of Cameras—And More

October 11, 2023 Pat Brans
Thanks to a few lessons in biology, researchers have developed new sensor technology that opens up a world of new opportunities—including high-speed cameras that operate at low data rates.

In the broadest sense, the term neuromorphic applies to any computing system that borrows engineering concepts from biology. One set of notions that is particularly interesting for the development of electronic sensors is the spiking nature of neurons. Rather than fire right away, neurons build potential each time they receive a certain stimulus, firing only when a threshold is passed. The neurons are also leaky, losing membrane potential, which produces a filtering effect: If nothing new happens, the level goes down over time. “These behaviors can be emulated by electronics,” said Ilja Ocket, program manager for Neuromorphic Computing at imec. “And this is the basis for a new generation of sensors.”
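To make the spiking behavior concrete, here is a minimal leaky integrate-and-fire sketch in Python; the threshold and leak values are illustrative assumptions, not anything imec has published:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: potential builds with
# each stimulus, leaks over time, and a spike fires only when the
# threshold is passed. All parameter values are illustrative.
def simulate_lif(inputs, threshold=1.0, leak=0.95):
    """inputs: per-timestep stimulus values. Returns spike times."""
    potential = 0.0
    spikes = []
    for t, stimulus in enumerate(inputs):
        potential = potential * leak + stimulus  # leak, then integrate
        if potential >= threshold:               # threshold passed
            spikes.append(t)                     # fire
            potential = 0.0                      # reset
    return spikes

# A sustained stimulus fires quickly; sparse stimulus leaks away unfired.
print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.0, 0.2, 0.0, 0.2]))  # -> [2]
```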

Ilja Ocket, program manager for Neuromorphic Computing at imec


The best illustration of how these ideas improve sensors is the event-based camera, also called the retinomorphic camera. Rather than accumulate photons in capacitive buckets and propagate them as images to a back-end system, these cameras treat each pixel autonomously. Each pixel can decide whether enough change has occurred in photon streams to convey that information downstream in the form of an event.
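As a toy model of that per-pixel decision (a frame-based simulation only; real event pixels are asynchronous analog circuits, and the log-intensity contrast threshold used here is an assumption):

```python
import numpy as np

# Toy event generation: each pixel remembers the log intensity it last
# signalled and emits an ON/OFF event only when the change since then
# exceeds a contrast threshold. Unchanged pixels send nothing.
def events_from_frames(frames, threshold=0.15):
    reference = np.log1p(frames[0].astype(float))
    events = []  # (frame_index, row, col, polarity)
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log1p(frame.astype(float))
        delta = log_i - reference
        for r, c in zip(*np.where(np.abs(delta) > threshold)):
            events.append((t, int(r), int(c), 1 if delta[r, c] > 0 else -1))
            reference[r, c] = log_i[r, c]  # re-arm this pixel only
    return events

dark = np.zeros((4, 4), dtype=np.uint8)
bright = dark.copy()
bright[1, 2] = 200  # one pixel brightens
print(events_from_frames([dark, dark, bright]))  # -> [(2, 1, 2, 1)]
```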

“Imec gets involved when sensors do not produce classical arrays or tensors or matrices, but rather events,” Ilja Ocket said. “We figure out how to adapt the AI to absorb event-based data and perform the necessary processing. Our spiking neural networks do not work with regular data. Instead, they take input from a time encoded stream.”

“One of the important benefits of these techniques is the reduced energy consumption—completely changing the game,” Ocket said. “We do a lot of work on AI and application development in areas where this benefit is the greatest— including robotics, smart sensors, wearables, AR/VR and automotive.”

One of the companies imec has been working with is Prophesee, a nine-year-old business based in Paris. Its 120 employees in France, China, Japan and the U.S. design vision sensors and develop software to overcome some of the challenges that plague traditional cameras.

Event-based vision sensors

“Our sensor is fundamentally different from a conventional image sensor,” said Luca Verre, CEO of Prophesee. “It produces events from changes in the scene, as opposed to a full frame at a fixed point in time. A regular camera captures images one after the other at fixed points in time, maybe 20 frames per second.”

Luca Verre, CEO of Prophesee

This method, which is as old as cinematography, works fine if you just want to display an image or make a movie. But it has three major shortcomings for more modern use cases, especially when AI is involved. The first is that, because entire frames are captured and propagated even when there is very little change to most of the scene, a lot of redundant data is sent for processing.

The second problem is that movement between frames is missed. Since snapshots are taken at regular intervals several times a second, anything that happens between data capture events doesn’t get picked up.

The third problem is that traditional cameras have a fixed exposure time, which means each pixel could have a compromised acquisition depending on the lighting conditions. If there are bright areas and dark areas in the same scene, you may end up with some pixels being overexposed and others underexposed—often at the same time.

“Our approach, which is inspired by the human eye, is to have the acquisition driven by the scene, rather than having a sensor that acquires frames regardless of what’s changing,” Verre said. “Our pixels are independent and asynchronous, making for a very fast and efficient system. This suppresses data redundancy at the sensor level, and it captures movement, regardless of when it occurs—with microsecond precision.”

“While this is not time continuous, it is a very high time granularity for any natural phenomenon,” Verre said. “Most of the applications we target don’t need such high time precision. We don’t capture unnecessary data and we don’t miss information—two features that make a neuromorphic camera a high-speed camera, but at a low data rate.”

“Because the pixels are independent, we don’t have the problem of fixed exposure time,” Verre added. “Pixels that look at the dark part of the scene are independent from the ones looking at bright parts of the scene, so we have a very wide dynamic range.”

Because less redundant data is transmitted to AI systems, less processing is needed and less power consumption too. It becomes much easier to implement edge AI, putting inference closer to the sensor.

The IMX 636 event-based camera module, developed with Sony, is a fourth-generation product. Last year, Prophesee released the EVK4 evaluation kit for the IMX 636, aimed at industrial vision with a rugged housing, but it will work for all applications. (Source: Prophesee)

Audio sensors and beyond neuromorphic

“Automotive is an important market for companies like Prophesee, but it’s a long play,” Ocket said. “If you want to develop a product for autonomous cars, you’ll need to think seven to 10 years ahead. And you’ll need the patience and deep pockets to sustain your company until the market really takes off.”

In the meantime, event-based cameras are meeting the needs of several other markets. These include industrial use cases that require ultra-high-speed counting, particle size monitoring and vibration monitoring for predictive maintenance. Other applications include eye tracking, visual odometry and gesture detection for AR and VR. And in China, there is a growing market for small cameras in toy animals. The cameras need to operate at low power—and the most important thing for them to detect is movement. Neuromorphic cameras meet this need, operating on very little power, and fitting nicely into toys.

Neuromorphic principles can also be applied to audio sensors. Like the retina, the cochlea does not sample spectrograms at fixed intervals. It just conveys changes in sensory input. So far, there are not many examples of neuromorphic audio sensors, but that’s likely to change soon since audio-based AI is now in high demand. Neuromorphic principles can also be applied to sensors with no biological counterpart, like radar or LiDAR.
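In that spirit, here is a minimal send-on-delta (level-crossing) encoder for a 1-D signal such as audio, emitting events only when the signal moves by a set amount; the delta value is arbitrary and this sketches the idea rather than any particular sensor's design:

```python
# Send-on-delta encoding: emit an event whenever the signal has moved by
# more than `delta` since the last event, instead of sampling at fixed
# intervals. Silence or a constant signal produces no events at all.
def send_on_delta(samples, delta=0.1):
    last = samples[0]
    events = []  # (sample_index, direction)
    for i, x in enumerate(samples[1:], start=1):
        while x - last >= delta:   # rising: one +1 event per delta step
            last += delta
            events.append((i, +1))
        while last - x >= delta:   # falling: one -1 event per delta step
            last -= delta
            events.append((i, -1))
    return events

print(send_on_delta([0.0, 0.05, 0.25, 0.25, 0.0]))
# -> [(2, 1), (2, 1), (4, -1), (4, -1)]; the flat stretch emits nothing
```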

But researchers are increasingly convinced that making a silicon version of the biological structures is not the best idea. The biggest impact may lie beyond neuromorphic, making the best use of both biology and electronics.

“If you strip it down to its computational behavior you could improve biology,” Ocket said. “Instead of emulating spiking neurons with thresholds, you can just apply time-based computational behavior on very simple timing circuits—technology from the 1950s and 1960s. If you hook them together and find a way to train them, you can go much lower in power consumption than if you simply emulate spike neurons in electronic form.”

Seems to me that he is just pointing out there’s an under-developed/under-researched idea that could potentially use less power.
It also doesn’t sound like he knows how it would work either, as he questions whether or not it can be trained.

I think he was just pointing out that we may not need to naively replicate the brain to the best of our ability, and could instead look into a hybrid system.
Almost as if to say: stop assuming the brain is the best learning computer; there might be something better.

Either way, I don’t think it matters; his work at imec is neuromorphic-focused, so I doubt he would act against his own interests or imply NM is a waste of effort.

Great post though @Frangipani, it's a great summary of how the technology is being adopted!
 
Last edited:
  • Like
  • Fire
Reactions: 8 users

Tothemoon24

Top 20

SiFive Announces Differentiated Solutions for Generative AI and ML Applications Leading RISC-V into a New Era of High-Performance Innovation

SiFive’s Performance P870 and Intelligence X390 product debut sets new bar for high-performance compute in consumer, infrastructure, and automotive applications
Santa Clara, Calif., Oct. 11, 2023 – SiFive, Inc., the pioneer and leader of RISC-V computing, today announced two new products designed to address new requirements for high-performance compute. The SiFive Performance™ P870 and SiFive Intelligence™ X390 offer a new level of low power, compute density, and vector compute capability, and when combined provide the necessary performance boost for increasingly data-intensive compute. Together, the new products create a powerful mix of scalar and vector computing to meet the needs of today’s dataflow- and computation-intensive AI applications across consumer, automotive, and infrastructure markets.
The announcement took place at an in-person press and analyst event in Santa Clara today, where the company also provided an update on several of its product lines currently shipping in silicon to customers around the world. Company executives offered insight into SiFive’s product roadmap and discussed how the overall RISC-V ecosystem continues to expand rapidly as new applications call for the benefits of RISC-V-based high-performance compute solutions.
“SiFive is leading the industry into a new era of high-performance RISC-V innovation, and closing the gap with other instruction set architectures with our unparalleled portfolio, while recent silicon tape-outs are demonstrating the tremendous benefits of SiFive RISC-V solutions,” said Patrick Little, SiFive Chairman, President and CEO. “As the Arm IPO showed, there is fast-growing demand for semiconductors across many sectors, particularly processors for consumer and infrastructure markets. The flexibility of SiFive’s RISC-V solutions allows companies to address the unique computing requirements of these segments and capitalize on the momentum around generative AI, where we have seen double-digit design wins, and for other cutting-edge applications.”
The SiFive Performance P870
Ideal for high-performance consumer applications, or when used in conjunction with a vector processor in the datacenter, the P870 core sets an impressive new RISC-V performance bar across instruction set architecture availability, throughput, parallelism, and memory bandwidth. Bringing a 50% peak single-thread performance upgrade (specINT2k6) over the previous generation of SiFive Performance processors, the P870 is a six-wide, out-of-order core that meets RVA23 and offers a shared cluster cache enabling up to a 32-core cluster. High execution throughput comes from more instructions per cycle, more ALUs, and more branch units. The P870 is fully compatible with Google’s platform requirements for Android on RISC-V. The P870 also offers additional proven SiFive features:
  • x 128b VLEN RVV
  • Vector crypto and hypervisor extensions
  • IOMMU and AIA
  • Non-inclusive L3 cache
  • Proven RISC-V WorldGuard security
The SiFive Intelligence X390
Building on the highly popular SiFive Intelligence X280’s success in coupling AI/ML applications with hardware accelerators in mobile, infrastructure, and automotive applications, the new X390 brings a 4x improvement in vector computation with its single-core configuration, doubled vector length, and dual vector ALUs. This allows quadruple the sustained data bandwidth. With the SiFive Vector Coprocessor Interface eXtension (VCIX), companies can easily add their own vector instructions and/or acceleration hardware, bringing unprecedented flexibility and allowing users to greatly increase performance with custom instructions. Features include:
  • 1024-bit VLEN, 512-bit DLEN
  • Single / Dual Vector ALU
  • VCIX (2048-bit out, 1024-bit in)
An Agile Hardware Solution for Generative AI applications
Bringing the P870 high-performance general compute SoC together with a high-performance NPU cluster, consisting of the X390 and customer AI hardware engines, offers product designers a highly flexible, low-power, and programmable solution with superior compute density for complex workloads.
The company highlighted how interest in these combined SiFive solutions is high, with a number of customers achieving silicon success and at various stages of commercialization using high-performance products.
SiFive continues to actively work across the ecosystem (see attached quote sheet) with partners who are ensuring the software, security, and flexibility benefits of the open standard ecosystem are in place for SiFive processors as companies move to commercialize their SiFive-powered products.
Supporting quotations from industry partners:
SiFive has assembled an array of ecosystem partners to help customers speed their time to commercialization.
"We have collaborated with SiFive to deliver Cadence AI-driven digital full flow Rapid Adoption Kits (RAKs) for previous generation SiFive Performance™ and Intelligence™ RISC-V processors and are looking forward to producing them for the upcoming P870 and X390 processors" said KT Moore, vice president of Corporate Marketing, Cadence. "The RAKs utilize our leading Generative AI solutions that optimize power, performance and area while our system verification solutions enable optimal verification throughput and productivity. This empowers SiFive customers to accelerate time-to-market, enhance product quality, and deliver innovative solutions for high-performance computing, AI, automotive, and mobile applications."
“Canonical’s strategic alliance with SiFive, a RISC-V CPU IP leader, grants us exclusive privileges, including early access to their cutting-edge processors under development. Canonical has ported Ubuntu to SiFive development systems in the past and is working to have Ubuntu ready at launch with the SiFive HiFive Pro P550 and future platforms,” said Cindy Goldberg, Vice President, Silicon Alliances at Canonical. “We see a growing demand for SiFive RISC-V processors and recognize the opportunity across consumer, automotive and infrastructure markets. Ubuntu is the operating system of choice for infrastructure and cloud use cases. This year with the introduction of Ubuntu Pro we have enhanced security, compliance and support coverage across a broad portfolio of open source software and platform architectures. The combination of SiFive’s RISC-V IP and Canonical’s software is a combination that will lead the transformative future in computing, on RISC-V.”
“As an early RISC-V adopter and industry leader for delivering production-proven, safety-certified development tools, C/C++ compilers and operating systems for RISC-V, Green Hills Software is excited to be expanding its close working relationship with SiFive by adding optimized support for the P870 and X390,” said Dan Mender, VP of Business Development at Green Hills Software. “Together, Green Hills and SiFive will help companies realize the maximum performance, power, and area benefit possible for these new SiFive offerings.”
“IAR welcomes the new SiFive Performance P870 and Intelligence X390 RISC-V processors and recognizes their opportunity for generative AI and ML as well as high-performance computing applications addressing consumer, automotive, and infrastructure. IAR and SiFive have a strong partnership and stand out in the RISC-V ecosystem. SiFive enables IAR with early access to its leading commercial RISC-V IP processors while they are under development, enabling co-optimizations benefiting mutual customers. IAR’s complete development solution for all the leading RISC-V core IP from SiFive helps embedded software developers around the world maximize the energy efficiency, simplicity, security, and flexibility upsides that RISC-V and SiFive offer, like the latest additions for generative AI/ML applications.”
“As the world leader in debugging and trace tools used by all major and well-known technology companies, Lauterbach has been committed to supporting the RISC-V ecosystem from the beginning and is a close long-term partner of SiFive, a leading provider of RISC-V CPU IP. Currently, we see strongly growing global demand for RISC-V-based processors, including generative AI and ML applications as well as high-performance compute across consumer, automotive, and infrastructure markets, all markets in which we have been successfully active for many years. Our early access to SiFive’s processors under development allows both SiFive and Lauterbach to co-optimize their products for an optimal user experience,” said Norbert Weiss, Managing Director, Lauterbach GmbH.
"SiFive has been instrumental in bringing the RISC-V architecture to Automotive Grade Linux and providing additional hardware options for automakers and suppliers, many of whom are already using the open source AGL platform in production," said Dan Cauchy, Executive Director of Automotive Grade Linux (AGL), an open source project at The Linux Foundation. "SiFive is an active AGL member, and we look forward to their continued collaboration with the broader community."
“The growth of AI and machine learning systems is driving significant compute demands in application-specific processors. Our collaboration with SiFive to provide co-optimized solutions, including the Synopsys.ai™ full-stack AI-driven EDA suite and Fusion QuickStart Implementation Kits, along with Synopsys Interface and Foundation IP, hardware-assisted verification, and virtual prototyping solutions, helps mutual customers accelerate the design of high-performance, RISC-V-based SoCs,” said Kiran Vittal, Senior Director of Partner Alliances Marketing for the EDA Group, Synopsys.
 
  • Like
  • Fire
  • Thinking
Reactions: 22 users

TECH

Regular
Good morning,

Within the next 14 business days Brainchip will be required to deliver its 2nd quarter results and short-term guidance. This upcoming 4C will determine our share price movement in the immediate short term. My gut feeling, like that of a number of posters, is once again very little income, as nothing has been announced that would suggest otherwise, apart from "watch the financials", which I still personally think is some way off yet.

All the news that's come out of the company has been positive, there's absolutely no denying that fact, and for a small start-up we are 100% making great progress establishing our company's name and superb technology. The learning continues for all.

Our accountant has a very sharp pencil at the ready to write down the seven-digit figures we are all awaiting, so what I'm trying to say in a roundabout way is: try to contain your excitement when the share price does move north, as it can't be sustained if the back of house haven't resharpened their pencil through lack of use. :rolleyes:

Have a good day...Texta ;)
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Diogenese

Top 20
(Quoting @Frangipani’s post above, including the October 11, 2023 Pat Brans article “Human Vision Inspires a New Generation of Cameras—And More”.)

Hi Frangipani,

So Prophesee are still working with SynSense?
And in China, there is a growing market for small cameras in toy animals. The cameras need to operate at low power—and the most important thing for them to detect is movement. Neuromorphic cameras meet this need, operating on very little power, and fitting nicely into toys.

I wonder how far along the automotive path Prophesee has gone:
“Automotive is an important market for companies like Prophesee, but it’s a long play,” Ocket said. “If you want to develop a product for autonomous cars, you’ll need to think seven to 10 years ahead. And you’ll need the patience and deep pockets to sustain your company until the market really takes off.”
...
and how does it relate to: "One of the companies imec has been working with is Prophesee, a nine-year-old business based in Paris."

Now, turning to the last 2 paragraphs, and in particular to:
"Instead of emulating spiking neurons with thresholds, you can just apply time-based computational behavior on very simple timing circuits—technology from the 1950s and 1960s." ... you could go back to 1927 and Edgar Douglas Adrian's research demonstrating that the leading spike carries the information and that the subsequent spike sequence carries redundant information in the spike rate, a fact which was identified by Simon Thorpe's group (Spikenet) in developing N-of-M coding, subsequently licensed to Brainchip by Spikenet after PvdM recognized the potential of the technique to greatly improve the performance of digital spiking neural networks. Brianchip acquired Spikenet in 2016.
 
  • Like
  • Love
  • Fire
Reactions: 36 users
(Quoting TECH’s “Good morning” post above.)
Are you paid to lift everybody's spirits all the time? The company needs to deliver, simple as that, or the share price will be raped again.
 

Damo4

Regular


Here's a Real Nut Job: Getting Stubborn Macadamias Out of Trees - WSJ
 
  • Haha
  • Like
  • Fire
Reactions: 17 users

Vladsblood

Regular
Just thinking 💭 The real protection against our tech being used/overtaken is our patents.
This patent protection stops others from being competitive against Akida… Maybe this is why they aren’t getting worried about the SP.
Hoping all our patents are swiftly approved ASAP.
Vlad
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Gies

Regular
Vladsblood said:
Just thinking 💭 The real protection against our tech being used/overtaken is our patents.
This patent protection stops others from being competitive against Akida… Maybe this is why they aren’t getting worried about the SP.
Hoping all our patents are swiftly approved ASAP.
Vlad
Hi Vladsblood,
Most of the patents are in Peter’s personal name, not the company’s. So whatever shares they buy, they can’t own the patents.
I have endless trust and belief in Peter that he won’t sell his dream for any price.
 
  • Like
  • Fire
  • Thinking
Reactions: 27 users
Vladsblood said:
Just thinking 💭 The real protection against our tech being used/overtaken is our patents.
This patent protection stops others from being competitive against Akida… Maybe this is why they aren’t getting worried about the SP.
Hoping all our patents are swiftly approved ASAP.
Vlad
You may be onto something there, Vladsblood.

I can’t remember the exact words, but I do remember PVDM in one of his interviews saying something along the lines that having these patents means absolutely everything to the survival and success of the business and its technology.

Definitely looking forward to the approvals of the other pending patents, great signs 😊
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Vladsblood

Regular
Gies said:
Hi Vladsblood,
Most of the patents are in Peter’s personal name, not the company’s. So whatever shares they buy, they can’t own the patents.
I have endless trust and belief in Peter that he won’t sell his dream for any price.
I’m also very thankful for that too keeping you and me safe. Vlad
 
  • Like
  • Fire
Reactions: 4 users

Vladsblood

Regular
SharesForBrekky said:
You may be onto something there, Vladsblood.

I can’t remember the exact words, but I do remember PVDM in one of his interviews saying something along the lines that having these patents means absolutely everything to the survival and success of the business and its technology.

Definitely looking forward to the approvals of the other pending patents, great signs 😊
Fully agree and with you SharesForBrekky on this safety net of ours. Vlad
 
  • Like
  • Fire
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
Equanimous,

Shaken but not stirred.

Esq
 
  • Haha
  • Like
Reactions: 14 users

buena suerte :-)

BOB Bank of Brainchip
SharesForBrekky said:
You may be onto something there, Vladsblood.

I can’t remember the exact words, but I do remember PVDM in one of his interviews saying something along the lines that having these patents means absolutely everything to the survival and success of the business and its technology.

Definitely looking forward to the approvals of the other pending patents, great signs 😊
Yes indeed... They (the patents) contain "the secret sauce". I think/recall that our little Akida can be copied to a certain level, but to get past that, Peter and Anil have a 'secret encryption' (code) that only they know... so it's just about impossible to emulate our ground-breaking tech!!!!!!!! :)

Also, any patent submitted by other companies that comes anywhere near our tech/formula... will simply get rejected!

IMO... we are still 3-5 years ahead of any competition.

That is my understanding of it anyway :)
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Vladsblood

Regular
For a long time I couldn’t get my head around them not worrying about the SP… Lightbulb moment: it’s gotta be the patent safety net!!!
Thank dear God for our Founder PVDM.
Vlad
 
  • Like
  • Love
Reactions: 12 users

Diogenese

Top 20
Hi Gies,

That's not correct.

Peter is named as the inventor on most of the patents, but the company is the assignee.
 
  • Like
  • Love
  • Fire
Reactions: 46 users

HopalongPetrovski

I'm Spartacus!
SharesForBrekky said:
You may be onto something there, Vladsblood.

I can’t remember the exact words, but I do remember PVDM in one of his interviews saying something along the lines that having these patents means absolutely everything to the survival and success of the business and its technology.

Definitely looking forward to the approvals of the other pending patents, great signs 😊
As I recall, Peter had an unpleasant learning-curve experience earlier in his career relating to the beneficial ownership of some of his ideas.
That is part of why he eschewed the well-worn path of working for another established concern, or the venture (vulture) capital route of a privately funded company, and instead opted for the public-ownership company structure which grew into the Brainchip we know today.
Much clumsier, but he gets to retain control and (I think) spread the potential benefit wider than would otherwise be possible.
Personally controlling the patents is a major part of that.
 
  • Like
  • Love
  • Fire
Reactions: 18 users
From Dell Tech presso this month.

Wonder if they’re just watching the general horizon or talking about their own horizon :unsure:

HERE




We also know from a previous post that Dell has been aware of us since at least early 2022.

That was from one of the directors at Dell.


  • From a hardware perspective, the accelerators of Domain Specific Architectures (DSA) [12] enable the third-wave AI algorithms to operate in a hybrid ecosystem consisting of Edge, Core, and Cloud, and to run anywhere in the system. Specifically, accelerators for specific domain architectures include the following examples: Nvidia’s GPU, Xilinx’s FPGA, Google’s TPU, and artificial intelligence acceleration chips such as BrainChip’s Akida Neural Processor, GraphCore’s Intelligent Processing Unit (IPU), Cambricon’s Machine Learning Unit (MLU) and more. These types of domain-specific architecture accelerators will be integrated into more information devices, architectures, and ecosystems by requiring less training data and being able to operate at lower power when needed. In response to this trend, the area where we need to focus on development is to develop a unified heterogeneous architecture approach that enables information systems to easily integrate and configure various different types of domain-specific architecture hardware accelerators. For Dell Technologies, we can leverage Dell’s vast global supply chain and sales network to attract domain-specific architecture accelerator suppliers to adhere to the standard interfaces defined by Dell to achieve a unified heterogeneous architecture.
 
  • Like
  • Fire
  • Love
Reactions: 52 users

Gies

Regular
Diogenese said:
Hi Gies,

That's not correct.

Peter is named as the inventor on most of the patents, but the company is the assignee.
OK, I was under the impression that some of the patents were in his personal name. I could be wrong.
 
  • Like
Reactions: 3 users
Top Bottom