BRN Discussion Ongoing

Hi Fmf,

I haven't tracked down the iniLabs patents yet, so I don't know their earliest date or what they cover (probably filed in the name of Zurich Uni), but this is an iniVation DVS patent application from 20210322:

EP4064686A1 EVENT SENSOR AND METHOD FOR PRODUCING A SIGNAL STREAM OF EVENT DATA

View attachment 42313

View attachment 42314


The invention relates to an event sensor and a method for producing a signal stream of event data in reaction to light incident on a pixel array (10) of pixels. The event sensor comprises: for each pixel of the pixel array (10) at least one photodetector (1) configured to produce a detector signal in reaction to light incident on the pixel; for each pixel or a group of the pixels a signal converter (2) connected to the photodetector (1) and configured to repeatedly produce and store digital sample values dependent on the detector signal sampled at sampling intervals; and a readout processor (4) connected to the signal converter (2). The readout processor (4) is configured: to derive a digital accumulated pixel value based on one or multiple of the sample values, wherein the accumulated pixel value corresponds to an accumulation of the detector signal over a sampling count of the sampling intervals; and to generate a pixel event of the event data dependent on the accumulated pixel value and the sampling count.

1. Event sensor comprising a pixel array (10) of pixels and configured to produce a signal stream of event data in reaction to light incident on said pixel array (10), comprising:
- for each pixel of said pixel array (10) at least one photodetector (1) configured to produce a detector signal in reaction to light incident on said pixel;
- for each pixel or a group of said pixels a signal converter (2) connected to said photodetector (1) and configured to repeatedly produce and store digital sample values dependent on said detector signal sampled at sampling intervals; and
- a readout processor (4) connected to said signal converter (2) and configured:
- to derive a digital accumulated pixel value based on one or multiple of said sample values, wherein said accumulated pixel value corresponds to an accumulation of said detector signal over a sampling count of said sampling intervals, and
- to generate a pixel event of said event data dependent on said accumulated pixel value and said sampling count.
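For what it's worth, the claimed readout logic is easy to picture in code. Here's a minimal Python sketch of one pixel's readout, assuming a simple fixed threshold triggers the event; the threshold rule, names and numbers are my own illustration, not taken from the patent:

```python
# Hypothetical sketch of the claimed readout: digital sample values are
# accumulated over sampling intervals, and a pixel event carrying both the
# accumulated value and the sampling count is emitted once a threshold is
# crossed. The threshold mechanism is an assumption for illustration only.

def generate_events(samples, threshold=100):
    """Yield (accumulated_value, sampling_count) events from a stream of
    digital sample values for one pixel."""
    accumulated = 0
    count = 0
    for s in samples:
        accumulated += s   # accumulate the detector signal
        count += 1         # track the number of sampling intervals
        if accumulated >= threshold:
            yield (accumulated, count)  # the event carries both values
            accumulated = 0
            count = 0

events = list(generate_events([30, 30, 50, 120, 10, 95]))
```

Note the event rate adapts to the light: a bright pixel (the 120 sample) fires after one interval, a dim one only after several, which is presumably the point of reporting the sampling count alongside the accumulated value.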

Prophesee have patents dating from at least 2013.

US11212470B2 Dynamic, single photodiode pixel circuit and operating method thereof

View attachment 42316



The invention relates to a pixel circuit and an operating method thereof, comprising:
- a front-end circuit (1) comprising a single photodiode (PD) and having an output (4), said front-end circuit (1) being configured for delivering on said output a photoreceptor signal derived from a light exposure of said single photodiode (PD);
- a transient detector circuit (2) configured for detecting a change in said photoreceptor signal delivered on said output (4);
- an exposure measurement circuit (3) configured for measuring said photoreceptor signal delivered on said output (4) upon detection by the transient detector circuit (2) of a change in the photoreceptor signal.
The invention also relates to an image sensor comprising a plurality of pixel circuits.
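Again, just to make the principle concrete: the change-triggered exposure measurement reads roughly like this in Python. The contrast threshold, names and values are my own assumptions, not from the patent:

```python
# Rough sketch of the claimed operating principle: a transient detector
# watches the photoreceptor signal for a change exceeding a contrast
# threshold, and only then does the exposure-measurement circuit sample
# the same single photodiode's signal. Threshold and names are illustrative.

def pixel_events(photoreceptor, contrast_threshold=0.2):
    """Yield exposure measurements triggered by detected signal changes."""
    last = photoreceptor[0]
    for value in photoreceptor[1:]:
        if abs(value - last) > contrast_threshold:  # transient detected
            yield value   # measure the photoreceptor signal on the same output
            last = value  # reset the change reference

measurements = list(pixel_events([1.0, 1.05, 1.4, 1.45, 0.9]))
```

The single-photodiode angle is the key claim: both the change detection and the exposure measurement hang off the same output, rather than needing two photodiodes per pixel.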


Here's a 2016 Brainchip patent which uses a DVS/event camera, probably from the time we were working with iniVation:
US2017236027A1 INTELLIGENT BIOMORPHIC SYSTEM FOR PATTERN RECOGNITION WITH AUTONOMOUS VISUAL FEATURE EXTRACTION

View attachment 42312

Embodiments of the present invention provide a hierarchical arrangement of one or more artificial neural networks for recognizing visual feature pattern extraction and output labeling. The system comprises a first spiking neural network and a second spiking neural network. The first spiking neural network is configured to autonomously learn complex, temporally overlapping visual features arising in an input pattern stream. Competitive learning is implemented as spike time dependent plasticity with lateral inhibition in the first spiking neural network. The second spiking neural network is connected by means of dynamic synapses with the first spiking neural network, and is trained for interpreting and labeling output data of the first spiking neural network. Additionally, the output of the second spiking neural network is transmitted to a computing device, such as a CPU for post processing.
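The competitive-learning rule described there (spike-timing-dependent plasticity with lateral inhibition) can be shown with a toy sketch. All parameters and function names below are my own, purely to illustrate the idea, not BrainChip's implementation:

```python
# Toy illustration of the learning rule named in the abstract: STDP
# (potentiate a synapse when the input spike precedes the output spike,
# depress it otherwise) plus winner-take-all lateral inhibition so neurons
# compete for features. Learning rate and update form are assumptions.

def stdp_update(weight, dt, lr=0.1):
    """dt = t_post - t_pre; positive dt potentiates, negative depresses.
    Updates are scaled so weights stay within [0, 1]."""
    if dt > 0:
        return weight + lr * (1.0 - weight)  # pre before post: strengthen
    return weight - lr * weight              # post before pre: weaken

def winner_take_all(potentials):
    """Lateral inhibition: only the most active neuron is allowed to fire."""
    winner = max(range(len(potentials)), key=lambda i: potentials[i])
    return [i == winner for i in range(len(potentials))]
```

The combination is what makes the first network's learning "autonomous": no labels are needed, because inhibition forces different neurons to specialise on different temporally overlapping features.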

We are only using the DVS. We did not have any inventive input to the camera, but I'm sure the company is aware of the potential litigation risk around Prophesee.
Hi D

Agree, not so much about us, and I suggest you're right about Zurich given the excerpt below. I don't think iniVation is implying anything on their behalf, more so a general sense or possibility from others.

"....has long been contention about whether the Prophesee camera is too similar to earlier designs, particularly those invented at INI Zurich."

It'd be nice to have a bet each way and be doing something with iniVation as well haha
 
  • Like
Reactions: 7 users

TECH

Regular
Good evening all,

I don't think I'll get into trouble for sharing this, but I consider Peter a friend, he is a great guy, dedicated to seeing our company
succeed, working long hours, making himself available for the team is just part of his day, next week he heads off to Toulouse, France to
oversee work on MetaTF 2.0.

We are so lucky to have such a great team back in Perth, with Valentina and Tony holding the fort, allowing Peter to travel to the US to
help out with Akida 2.0 design work etc (he's really enjoyed being there) to now be heading off to France to oversee things there.

It's the stuff that goes on behind the scenes, the stuff that isn't readily visible, maybe I'll get into trouble for sharing this, but I love our
company, we will succeed, why, because of the Founder/ Co-Founder and all our staff, we are a professional operation now, circled by
great individuals with fantastic life values...today's Podcast reconfirmed my belief companies have tested the waters elsewhere, and
realized that BrainChip is the real deal, the three "P's"....Performance, Power and Price...we nail all three, listen to Antonio carefully, the
customers/potential customers are starting to realize who the real leaders are in the race to the edge.

Honesty, Integrity and of course revolutionary technology will always triumph.

Love Brainchip 💞 Tech.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 81 users

Zedjack33

Regular
TECH said: "Good evening all, … Love Brainchip 💞 Tech."
So if Mr Peter is heading over to help with design work, is B Chip missing the timeline of this quarter?

Asking for a friend.
 
  • Like
  • Haha
  • Thinking
Reactions: 7 users
I always wonder, like the chicken & egg, what came first?

With Teksun, I see they have Edge Impulse, MegaChips, Renesas and of course us as partners (and some other notables like Qualcomm, AWS, Google Cloud).

Did Teksun approach us, us them or a mutual intro by one of our connected partners?

I find it interesting, or read it as odd, that BRN is listed under semiconductor partners like Qualcomm and Renesas, whilst MegaChips is under AI :unsure:

 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 19 users

Mt09

Regular
TECH said: "Good evening all, … Love Brainchip 💞 Tech."
Fingers crossed the design work for meta tf 2.0 is complete, and Peter is going over to dot the i’s before the go switch is flicked.
 
  • Like
  • Fire
Reactions: 18 users

Zedjack33

Regular
Fingers crossed the design work for meta tf 2.0 is complete, and Peter is going over to dot the i’s before the go switch is flicked.
Yep. Fingers crossed. Cos that’s all we got.
 
  • Like
Reactions: 5 users

Makeme 2020

Regular
TECH said: "Good evening all, … Love Brainchip 💞 Tech."
Please explain.
 
  • Like
  • Haha
Reactions: 7 users

equanimous

Norse clairvoyant shapeshifter goddess
Please explain.

 
  • Haha
  • Like
Reactions: 14 users

equanimous

Norse clairvoyant shapeshifter goddess
can anyone explain this?

 
  • Wow
  • Like
  • Thinking
Reactions: 9 users

Tothemoon24

Top 20

Links to SoundHound via Mercedes, chance of involvement here..🤷🏻‍♂️

SoundHound Collaborates with Togg to Deliver Voice AI Experience to New Smart Vehicles

MACHINE LEARNING NEWS
By Business Wire, Aug 18, 2023

First Togg smart vehicles include a fully branded AI voice assistant accessible through natural speech

SoundHound AI, a global leader in voice artificial intelligence, announced its advanced AI technology, including innovative Edge+Cloud connectivity, multiple language capabilities, and a custom-branded voice assistant, is live in a new line of smart vehicles from Turkish mobility provider, Togg.
Launched in Q2 2023, Togg vehicles have been described as a “new living space” that will redefine mobility. As part of Togg’s tech-forward vision to create a more user-centric experience, the mobility provider has collaborated with SoundHound to deliver a sophisticated, best-in-class voice AI.
Togg’s in-vehicle AI-powered voice assistant is capable of tracking and understanding the context of speech in real-time (even before the user has finished speaking), as well as addressing multiple questions and filtering results using SoundHound’s proprietary Speech-to-Meaning® and Deep Meaning Understanding technologies.
Through SoundHound’s extensive library of content domains – including co-developed vehicle control services – Togg users can use voice commands to play music, lock doors, open the trunk, change the temperature, switch lights, and access information about local charging stations and battery range. SoundHound’s voice technology is also available in 25 different languages.
As a result, this collaboration between SoundHound and Togg provides users with the ability to control their smart vehicle experience by speaking naturally – just as they would to another person.
In addition to adopting this uniquely fluent, conversational AI, Togg’s in-vehicle voice technology will also have personal touches – including custom wake words, like “Hello Togg.”
“It’s exciting to be involved in a project like this, working at the very cutting edge of the future of mobility,” says Keyvan Mohajer, CEO at SoundHound. “Our technology works in synergy with the tech-forward components of Togg’s vehicles to provide drivers and their passengers with an effortless, entertaining, and safe way to control the in-car experience.”
 
Last edited:
  • Like
  • Fire
Reactions: 6 users

Diogenese

Top 20
  • Like
  • Wow
Reactions: 10 users

miaeffect

Oat latte lover
  • Like
Reactions: 6 users

Tothemoon24

Top 20
Smart Technology

Top Trends Shaping the Future of Data Science and Machine Learning


TLME News Service

Published: 17th Aug, 2023 at 7:17 PM

Gartner, Inc. today highlighted the top trends impacting the future of data science and machine learning (DSML) as the industry rapidly grows and evolves to meet the increasing significance of data in artificial intelligence (AI), particularly as the focus shifts towards generative AI investments.
Peter Krensky, Director Analyst at Gartner said: “As machine learning adoption continues to grow rapidly across industries, DSML is evolving from just focusing on predictive models, toward a more democratized, dynamic and data-centric discipline.
"This is now also fueled by the fervor around generative AI. While potential risks are emerging, so too are the many new capabilities and use cases for data scientists and their organizations.”
According to Gartner, the top trends shaping the future of DSML include:
Cloud Data Ecosystems
Data ecosystems are moving from self-contained software or blended deployments to full cloud-native solutions. By 2024, Gartner expects 50% of new system deployments in the cloud will be based on a cohesive cloud data ecosystem rather than on manually integrated point solutions.
Gartner recommends organizations evaluate data ecosystems based on their ability to resolve distributed data challenges, as well as to access and integrate with data sources outside of their immediate environment.
Edge AI
Demand for Edge AI is growing to enable the processing of data at the point of creation at the edge, helping organizations to gain real-time insights, detect new patterns and meet stringent data privacy requirements.
Edge AI also helps organizations improve the development, orchestration, integration and deployment of AI.
Gartner predicts that more than 55% of all data analysis by deep neural networks will occur at the point of capture in an edge system by 2025, up from less than 10% in 2021.
Organizations should identify the applications, AI training and inferencing required to move to edge environments near IoT endpoints.
Responsible AI
Responsible AI makes AI a positive force, rather than a threat to society and to itself. It covers many aspects of making the right business and ethical choices when adopting AI that organizations often address independently, such as business and societal value, risk, trust, transparency and accountability.
Gartner predicts the concentration of pretrained AI models among 1% of AI vendors by 2025 will make responsible AI a societal concern.
Gartner recommends organizations adopt a risk-proportional approach to deliver AI value and take caution when applying solutions and models.
Seek assurances from vendors to ensure they are managing their risk and compliance obligations, protecting organizations from potential financial loss, legal action and reputational damage.
Data-Centric AI
Data-centric AI represents a shift from a model and code-centric approach to being more data focused to build better AI systems.
Solutions such as AI-specific data management, synthetic data and data labeling technologies, aim to solve many data challenges, including accessibility, volume, privacy, security, complexity and scope.
The use of generative AI to create synthetic data is one area that is rapidly growing, relieving the burden of obtaining real-world data so machine learning models can be trained effectively.
By 2024, Gartner predicts 60% of data for AI will be synthetic to simulate reality, future scenarios and derisk AI, up from 1% in 2021.
Accelerated AI Investment
Investment in AI will continue to accelerate by organizations implementing solutions, as well as by industries looking to grow through AI technologies and AI-based businesses.
By the end of 2026, Gartner predicts that more than $10 billion will have been invested in AI startups that rely on foundation models – large AI models trained on huge amounts of data.
A recent Gartner poll of more than 2,500 executive leaders found that 45% reported that recent hype around ChatGPT prompted them to increase AI investments.
70% said their organization is in investigation and exploration mode with generative AI, while 19% are in pilot or production mode.
 
  • Like
  • Fire
Reactions: 10 users

Easytiger

Regular
  • Love
  • Like
  • Fire
Reactions: 15 users

Cartagena

Regular
I personally hope that ARM, once it actually lists on the Nasdaq in the very near future, will have substantial funds set aside for company acquisitions, mergers, etc. as part of its ongoing commercial and corporate strategy. It would be beneficial and make sense for ARM to take some kind of financial stake in BrainChip, whatever the size (big or small, but meaningful), especially given that they know of our company, and most importantly to incorporate/promote our Akida AI technology within their own ARM products.

The company (Renesas) says it is developing a new series of its RA MCUs based on the Cortex-M85 processor, planned for release in 2023.
 

Attachments

  • Screenshot_20230819-015541.png
  • Like
  • Fire
  • Love
Reactions: 35 users

cosors

👀

What a beautiful family member! They're almost always the only sensible ones in the family, heart fully with the pack, and always honest. And just look at this magnificent specimen! How can you not fall for that look...?

'Mine' made me a better person, I think.
 
  • Like
  • Love
Reactions: 11 users
Whilst only a small mention, it's great to see confirmation we're still in the mix with some heavy hitters over at NASA's Ames Research Center.

Gotta say something about the tech :)

The original neuromorphic flight "test" was with Loihi on the TechEdSat from what I read, but the paper states they're moving to the next phases with more neuromorphic hardware.

BRAINSTACK – A Platform for Artificial Intelligence & Machine Learning Collaborative Experiments on a Nano-Satellite

Date Acquired
August 2, 2023
Publication Date
August 11, 2023
Publication Information
Publication: SmallSat Conference Proceedings


Part of the conclusion...

Building on earlier successes with GPUs, and more recently a neuromorphic processor flight test, the notion of a collaborative BrainStack orbital AI/ML laboratory module is presented. The intention is to be able to perform experiments on multiple hardware and software AI/ML elements on the same flight with different collaborative teams.

The TES-n Common AI/ML Software Interface will permit a menu-driven set of experiments across individual elements. The next set of TES BrainStack experiments will host combinations of GPUs and neuromorphic processors, with flexibility to support upcoming novel systems and their unique interfaces. Such a collaborative BrainStack system will greatly expand the use of these remarkable new tools and methods in the space sector.


Full paper:

HERE
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Learning

Learning to the Top 🕵‍♂️
  • Like
  • Love
  • Fire
Reactions: 21 users

Dallas

Regular
 
  • Like
  • Fire
  • Love
Reactions: 32 users