BRN Discussion Ongoing

Taproot

Regular
Just for interest.

Pedro Julian
This bloke founded Oculi.

3D-CMOS Neuromorphic Imager with Cortical Layer Interconnects

Prof. Pedro Julián, an internationally recognized researcher, has recently joined as the head of the research unit "Embedded Artificial Intelligence" at Silicon Austria Labs (SAL) in Linz. Julián has been working on the implementation of neural networks in digital hardware ("Neuromorphic Computing") for 20 years and has also worked at the University of California Berkeley and Johns Hopkins University in the US state of Maryland, where he still holds a position as visiting professor.


05/15/2023

PROJECT DIGINEURON: CHIPS MODELED ON THE HUMAN BRAIN​

“We are pleased that the neural network blocks implemented on the chip have already been successfully used in an application, namely in the cooperative FFG project Firesat (Artificial Intelligence on Earth Observation Satellites), on which SAL worked together with Ororatech GmbH and Joanneum Research. An image taken by a satellite was analyzed by an AI to detect fire on the earth’s surface,” says Pedro Julian.


 
  • Like
  • Fire
Reactions: 8 users
The USPTO has cited a handful of prior patents against the Fris application, including a couple of Sony patent documents.

This may raise the possibility that Fris (Oculi) infringes a Sony patent.
Supposing the application is thwarted, does that leave Oculi dead in the water?

Or just needing to licence their tech from Sony?

Which would then be completely at Sony's discretion..
 
  • Like
  • Thinking
Reactions: 3 users

IloveLamp

Top 20
Oculi, a spinout of Johns Hopkins University, is developing smart programmable vision sensors based on the company’s Real-Time Vision Intelligence developed by founder and CEO Charbel Rizk at Johns Hopkins. The OCULI SPU (Sensing and Processing Unit) is an integrated sensing and processing module designed to mimic the human eye in selectively delivering information to make computer vision faster and more efficient.

I have noticed staff at Johns Hopkins Uni liking BRN posts on LinkedIn for the past couple of years...
 
  • Like
  • Fire
  • Love
Reactions: 18 users

Diogenese

Top 20
Supposing the application is thwarted, does that leave Oculi dead in the water?

Or just needing to licence their tech from Sony?

Which would then be completely at Sony's discretion..
Hi DB,

There are 2 distinct issues:

1. Invalidation of Oculi's patent claims by prior publication: Whether the Oculi patent claims something which is inventive over and above what is disclosed in the cited patents, or, to put it another way, whether all the features of the Oculi patent claims are disclosed in the prior documents; and

2. Patent infringement: Whether the Oculi NN powered pixel array uses all the features of one of the claims of the cited patents.

Just because a patent is prior published by an earlier patent document does not automatically make it an infringement of the earlier patent.

Oculi do not need to have a patent to make their pixel array. However, they would not be able to prevent others from copying their product.

Oculi will have problems if 2 prevails, in which case, the patent owner could stop Oculi from making their pixel array. The alternative is, as you say, at the patentee's discretion as to whether to grant Oculi a licence.
 
  • Like
  • Wow
  • Thinking
Reactions: 8 users

Diogenese

Top 20
Early testing of the required backpack has been met with approval, so don't discount it yet...


View attachment 52537
I remember doing the Cradle Mt - Lake St Clair walk with that exact backpack.
 
  • Haha
  • Like
Reactions: 15 users

IloveLamp

Top 20

Screenshot_20231223_171939_LinkedIn.jpg
 
  • Like
  • Fire
Reactions: 13 users

IloveLamp

Top 20
Last edited:
  • Like
  • Fire
  • Love
Reactions: 64 users
Merry Christmas all 🎄🎅

View attachment 52550 View attachment 52552
My opinion: PvdM stepping back a touch tells me '24 will be a blast. Without him the company is nothing, so his stepping back must mean things are in place. Sean to Peter: "Hey Pete, take a step back, piano piano, and enjoy. BrainChip is about to take this world on."
 
  • Like
  • Fire
Reactions: 24 users

Perhaps

Regular
Except that the German display didn’t promote the virtual assistant component of the CLA to the same degree.
Software updates of the MB.OS, not a change of the hardware within 3 months.
As CES is coming soon, we will just wait and see.
 
  • Like
  • Fire
Reactions: 7 users

Getupthere

Regular
  • Like
  • Thinking
Reactions: 6 users
Morning Pom down under,

Presume you are referring to shorters?

No is the short answer, but they have certainly made a motza.

To keep it simple, you will have noticed that the short position has only averaged, say, 5% of the floated BRN shares.

Sooooo... if they had all started / taken their shorts out at the peak of our market cap & simply held to present (which they have not), these scum-dwelling creatures would have captured say 5% of the market capital which BRN once was, then subtract borrowing fees and then the final buyback price of said shares.

Amazing the damage deviants can create by playing with such a small % of shares.

The above example is extremely simplified, as they open and close positions at will, so gross return would also diminish.

Yet another way some of this lost market capital at the peak could have been captured, albeit as a crystallised loss to carry forward, is to sell one's shares then purchase more at a lower price.
This technique can seriously grow one's portfolio, but you have to get the timing correct.




Regards,
Esq.
So at 7% of the 1.8b total shares on issue shorted, they have made $10.5 million?
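For anyone wanting to sanity-check that arithmetic, here is a minimal back-of-the-envelope sketch in Python. Every number in it (entry and cover prices, borrow fee, holding period) is an illustrative assumption, not actual BRN trading data:

```python
# Back-of-the-envelope short-position P&L (all prices/rates are assumptions).
shares_on_issue = 1.8e9   # approx. BRN shares on issue
short_fraction  = 0.07    # 7% of shares reported short
entry_price     = 0.60    # assumed average entry price (AUD)
exit_price      = 0.50    # assumed average cover price (AUD)
borrow_fee_rate = 0.05    # assumed annualised stock-borrow fee
holding_years   = 0.5     # assumed holding period

shorted_shares = shares_on_issue * short_fraction
gross_profit   = shorted_shares * (entry_price - exit_price)
borrow_cost    = shorted_shares * entry_price * borrow_fee_rate * holding_years
net_profit     = gross_profit - borrow_cost

print(f"Shorted shares: {shorted_shares:,.0f}")
print(f"Gross profit:   ${gross_profit:,.0f}")
print(f"Borrow cost:    ${borrow_cost:,.0f}")
print(f"Net profit:     ${net_profit:,.0f}")
```

With these made-up prices the net lands near the $10.5 million figure above, which mainly shows how sensitive the result is to the assumed entry and cover prices.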
 
  • Thinking
Reactions: 2 users

wilzy123

Founding Member
  • Like
  • Haha
Reactions: 2 users

jtardif999

Regular
Software updates of the MB.OS, not a change of the hardware within 3 months.
As CES is coming soon, we will just wait and see.
They have yet to describe the hardware implementation behind the virtual assistant; it would be great if they did that to any degree.
 
  • Fire
  • Like
Reactions: 2 users

Perhaps

Regular
They have yet to describe the hardware implementation behind the virtual assistant; it would be great if they did that to any degree.
Infotainment hardware by Qualcomm, ADAS by Nvidia (sorry, only in German)


The updates are the integration of ChatGPT and the Unity game engine for visualisation.

 
  • Like
Reactions: 5 users

"Qualcomm and four other significant semiconductor firms have officially joined forces to establish Quintauris, a company focused on developing "next-generation hardware" based on the RISC-V open-standard architecture (via Business Wire)."

"Qualcomm is just one of five partners involved in Quintauris, with the other four being Bosch, Infineon, Nordic Semiconductor, and NXP Semiconductors."

"According to its official website, Quintauris says its products will initially focus on the automotive industry and then later cater to mobile and Internet of Things (IoT) applications. The company is also focused on promoting standards for the RISC-V hardware-software ecosystem, and the implication appears to be that the launch of successful products developed by several big companies will help realize standardization."
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

Colorado23

Regular
I hope you all have a safe and relaxing festive break, team, and I look forward to digesting more of your insightful content in the new year.
Thought I'd share a clip from a band I saw in Perth last week. Their music can be very calming, which goes nicely with the prospects in 2024.
Fair to say, the pain is almost over for some.
 
  • Like
  • Love
Reactions: 7 users

Gemmax

Regular
Merry Xmas Chippers!
Best wishes. Next year will be our year! (Hang on... I may have said that last year!!) :rolleyes:
 
  • Like
  • Haha
  • Love
Reactions: 22 users
Hi Diogenese
I try to read all your posts and do admire your patience and the way you have consistently attempted to educate even those who seem blinkered to the reality.

The bottom line on Brainchip’s commercial strategy is that it is unashamedly following the ARM business model as first outlined by Peter van der Made in 2015.

ARM sells IP or as Google tells it:

People also ask:
Do Arm make their own chips?

Arm Limited, the company behind the Arm processor, designs the core CPU components and licenses the intellectual property to partner organizations, which then build Arm-based chips according to their own requirements.
1703370054478.png

https://www.techtarget.com › whatis

What is an Arm processor? - TechTarget


ARM listed on the NASDAQ with this commercial model, and the world was so comfortable with ARM persisting with it that it opened with a market capitalisation of approximately US$50 billion; the last I read, a couple of weeks ago, it had been pushed out to US$63 billion odd.

As it so happens, Brainchip, like about 1,000 other companies, is partnered with ARM and sits side by side with ARM selling IP (not chips) to ARM customers who come to ARM to buy???

You guessed it: IP to include in their own chips, which they are going to have made, but not by ARM.

A company called EDGE IMPULSE is an ARM partner as well. It however does not sell IP or chips. It sells design services to those customers of ARM who do not have the size and scale to undertake the design of the chips in which they propose to use the IP purchased from ARM.

Strangely Brainchip is also partnered with EDGE IMPULSE for exactly the same purpose.

EDGE IMPULSE actually has a design platform used by well over 100,000 engineers working on one to several projects with the intention of bringing commercial product ideas to life and marketing them to someone to produce them and bring them to market.

As to how this model works Renesas has licensed AKIDA IP and taped out Silicon for the purpose of manufacturing its own chip which as it so happens will also include ARM Cortex IP.

MegaChips' Douglas Fairbairn was interviewed last year and asked whether they were building AKIDA chips. He said NO, they were concentrating on their larger customers who wanted IP to include in their existing chips, but they may do so in the future if demand or enquiries from smaller customers continued to grow.

Socionext is also a fabless semiconductor supplier and it sells IP and design services and it too is selling AKIDA IP not chips to its customers.

Then we have TATA ELXSI again selling IP and design services driving the use of AKIDA IP into medical and industrial applications.

The revolutionary nature of the AKIDA technology has required demonstration AKD1000 and 1500 chips to confirm to customers that this is not fake technology carrying a fatal flaw.

The VVDN EDGE Box using AKD1000 chips is a demonstrator which coincidentally once it was recently proven out saw Unigen jump and enter a partnership with Brainchip. Did they get an early view of the VVDN Edge Box???

Anyway I for one am content to trust that ARM has a sensible roadmap to commercial success and that Brainchip by following that model will also be best placed to succeed.

Having said this, at the AGM a shareholder asked a question based on the premise that Brainchip did not have a flexible pricing model for its IP to meet the needs of different customers. This assumption was shown to be incorrect, as the Chair made clear that Brainchip's pricing model is very flexible and tailored to individual customers.

So with a flexible pricing model, a successful commercial roadmap to follow, over 50 publicly revealed relationships and hundreds of potential customers trialling AKIDA, it seems, as many have stated, just a matter of time, unless ARM's success was just a fluke.

My opinion only DYOR
FACT FINDER

MERRY CHRISTMAS 🎄🎄🎄
 
  • Like
  • Love
  • Fire
Reactions: 125 users

jla

Regular
Hi Diogenese
I try to read all your posts and do admire your patience and the way you have consistently attempted to educate even those who seem blinkered to the reality.
…
MERRY CHRISTMAS 🎄🎄🎄
Thank you Fact Finder for all your help through the years. May you and your family have a Merry Christmas. Also, a big thank you to all the contributors. From Len.
 
  • Like
  • Love
  • Fire
Reactions: 34 users

Tothemoon24

Top 20

Recently released Sony patent

I’ve copied some points of interest below; there are many more if you care to click on the link.



Sony Patent | Deployment of dynamic vision sensor hybrid element in method for tracking a controller and simultaneous body tracking, SLAM or safety shutter.



Eye Tracking

Aspects of the present disclosure may be applied to eye tracking. Generally, eye tracking image analysis takes advantage of characteristics distinctive to how light is reflected off of the eyes to determine eye gaze direction from the image. For example, the image may be analyzed to identify eye location based on corneal reflections in the image data, and the image may be further analyzed to determine gaze direction based on a relative location of the pupils in the image.

Two common gaze tracking techniques for determining eye gaze direction based on pupil location are known as Bright Pupil tracking and Dark Pupil tracking. Bright Pupil tracking involves illumination of the eyes with a light source that is substantially in line with the optical axis of the DVS, causing the emitted light to be reflected off of the retina and back to the DVS through the pupil. The pupil presents in the image as an identifiable bright spot at the location of the pupil, similar to the red eye effect which occurs in images during conventional flash photography. In this method of gaze tracking, the bright reflection from the pupil itself helps the system locate the pupil if the contrast between pupil and iris is not sufficient.

Dark Pupil tracking involves illumination with a light source that is substantially offline from the optical axis of the DVS, causing light directed through the pupil to be reflected away from the optical axis of the DVS, resulting in an identifiable dark spot in the Event at the location of the pupil. In alternative Dark Pupil tracking systems, an infrared light source and cameras directed at the eyes can look at corneal reflections. Such DVS-based systems track the location of the pupil and the corneal reflections; the parallax due to the different depths of the reflections gives additional accuracy.

FIG. 18A depicts an example of a dark pupil gaze tracking system 1800 that may be used in the context of the present disclosure. The gaze tracking system tracks the orientation of a user’s eye E relative to a display screen 1801 on which visible images are presented. While a display screen is utilized in the example system of FIG. 18A, certain alternative embodiments may utilize an image projection system capable of projecting images directly into the eyes of a user. In these embodiments, the user’s eye E would be tracked relative to the images projected into the user’s eyes. In the example of FIG. 18A, the eye E gathers light from the screen 1801 through a variable iris I and a lens L projects an image on the retina R. The opening in the iris is known as the pupil. Muscles control rotation of the eye E in response to nerve impulses from the brain. Upper and lower eyelid muscles ULM, LLM respectively control upper and lower eyelids UL LL in response to other nerve impulses.

Light sensitive cells on the retina R generate electrical impulses that are sent to the user’s brain (not shown) via the optic nerve ON. The visual cortex of the brain interprets the impulses. Not all portions of the retina R are equally sensitive to light. Specifically, light-sensitive cells are concentrated in an area known as the fovea.

The illustrated image tracking system includes one or more infrared light sources 1802, e.g., light emitting diodes (LEDs) that direct non-visible light (e.g., infrared light) toward the eye E. Part of the non-visible light reflects from the cornea C of the eye and part reflects from the iris. The reflected non-visible light is directed toward a DVS 1804 sensitive to infrared light by a wavelength-selective mirror 1806. The mirror transmits visible light from the screen 1801 but reflects the non-visible light reflected from the eye.

The DVS 1804 produces an event of the eye E which may be analyzed to determine a gaze direction GD from the relative position of the pupil. This event may be produced with a processor 1805. The DVS 1804 is advantageous in this implementation as the extremely fast update rate for events provides near real time information on changes in the user’s gaze.

As seen in FIG. 18B, the event 1811 showing a user’s head H may be analyzed to determine a gaze direction GD from the relative position of the pupil. For example, analysis may determine a 2-dimensional offset of the pupil P from a center of the eye E in the image. The location of the pupil relative to the center may be converted to a gaze direction relative to the screen 1801, by a straightforward geometric computation of a three-dimensional vector based on the known size and shape of the eyeball. The determined gaze direction GD is capable of showing the rotation and acceleration of the eye E as it moves relative to the screen 1801.
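The "straightforward geometric computation" mentioned above is easy to sketch. Below is a minimal Python illustration, not the patent's actual method: it treats the eyeball as a sphere of known apparent radius and maps the pupil's 2D offset from the eye centre onto a 3D unit gaze vector. All names and numbers are made up for illustration:

```python
import numpy as np

def gaze_direction(pupil_xy, eye_center_xy, eyeball_radius_px):
    # 2D pupil offset from the eye centre, in image pixels.
    dx = pupil_xy[0] - eye_center_xy[0]
    dy = pupil_xy[1] - eye_center_xy[1]
    # Clamp the planar offset so noisy detections stay on the sphere.
    planar = min(np.hypot(dx, dy), eyeball_radius_px)
    # The remaining component points out of the image plane, toward the screen.
    dz = np.sqrt(eyeball_radius_px ** 2 - planar ** 2)
    v = np.array([dx, dy, -dz], dtype=float)
    return v / np.linalg.norm(v)  # unit gaze vector

# Example: pupil 10 px right of and 5 px above the eye centre, 40 px radius.
print(gaze_direction((110, 95), (100, 100), 40.0))
```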

As also seen in FIG. 18B, the event may also include reflections 1807 and 1808 of the non-visible light from the cornea C and the lens L, respectively. Since the cornea and lens are at different depths, the parallax and refractive index between the reflections may be used to provide additional accuracy in determining the gaze direction GD. An example of this type of eye tracking system is a dual Purkinje tracker, wherein the corneal reflection is the 1st Purkinje Image and the lens reflection is the 4th Purkinje Image. There may also be reflections 1808 from a user’s eyeglasses 1809, if these are worn by a user.

Performance of eye tracking systems depends on a multitude of factors, including the placement of light sources (IR, visible, etc.) and the DVS, whether the user is wearing glasses or contacts, headset optics, tracking system latency, rate of eye movement, shape of the eye (which changes during the course of the day or can change as a result of movement), eye conditions, e.g., lazy eye, gaze stability, fixation on moving objects, the scene being presented to the user, and user head motion. The DVS provides an extremely fast update rate for events with reduced extraneous information output to the processor. This allows for quicker processing and faster gaze tracking state and error parameter determination.

Error parameters that may be determined from gaze tracking data may include, but are not limited to, rotation velocity and prediction error, error in fixation, confidence interval regarding the current and/or future gaze position, and errors in smooth pursuit. State information regarding a user’s gaze involves the discrete state of the user’s eyes and/or gaze. Accordingly, example state parameters that may be determined from gaze tracking data may include, but are not limited to, blink metrics, saccade metrics, depth of field response, color blindness, gaze stability, and eye movement as a precursor to head movement.

In certain implementations, the gaze tracking error parameters can include a confidence interval regarding the current gaze position. The confidence interval can be determined by examining the rotational velocity and acceleration of a user’s eye for change from the last position. In alternative embodiments, the gaze tracking error and/or state parameters can include a prediction of future gaze position. The future gaze position can be determined by examining the rotational velocity and acceleration of the eye and extrapolating the possible future positions of the user’s eye. In general terms, the gaze tracking system may produce a small error between the predicted future position and the actual future position for a user with large values of rotational velocity and acceleration; because the update rate of the DVS is so high, this error may be significantly smaller than in existing camera-based systems.
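As a toy illustration of that extrapolation step, the sketch below (my own, not from the patent) applies constant-acceleration kinematics to the eye's rotational state and shows why a millisecond-scale DVS update interval keeps the prediction error small even during a fast saccade:

```python
import numpy as np

def predict_gaze(theta, omega, alpha, dt):
    # Constant-acceleration extrapolation of the gaze angle (radians):
    # theta(t + dt) = theta + omega*dt + 0.5*alpha*dt^2
    return theta + omega * dt + 0.5 * alpha * dt ** 2

# Assumed DVS-grade update interval of ~1 ms; a fast saccade (~500 deg/s)
# then rotates the eye only ~0.5 degrees before the next update arrives,
# so any extrapolation error stays correspondingly small.
dt = 1e-3
future = predict_gaze(0.0, np.radians(500.0), 0.0, dt)
print(np.degrees(future))  # ~0.5
```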



Finger Position Tracking
SNN




In an alternative implementation, finger tracking may be performed without the use of one or more light sources. A machine learning model may be trained with a machine learning algorithm to detect finger position from events generated from ambient light changes due to finger movement. The machine learning model may be a general machine learning model such as a CNN, RNN or DNN as discussed above. In some implementations a specialized machine learning model, such as, for example and without limitation, a spiking neural network (SNN), may be trained with a specialized machine learning algorithm. An SNN mimics biological NNs by having an activation threshold and a weight that is adjusted according to the relative spike time within an interval, also known as spike-timing-dependent plasticity (STDP). When the activation threshold is achieved, the SNN is said to spike and transmit its weight to the next layer. An SNN may be trained via STDP and supervised or unsupervised learning techniques. More information about SNNs can be found in Tavanaei, Amirhossein et al., “Deep Learning in Spiking Neural Networks”, Neural Networks (2018), arXiv:1804.08150, the contents of which are incorporated herein by reference for all purposes.
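To make the STDP description concrete, here is a minimal, self-contained Python sketch of a pair-based STDP weight update together with a leaky integrate-and-fire neuron that spikes once its activation threshold is crossed. The constants and the simple pair-based rule are illustrative choices, not the patent's (or Tavanaei et al.'s) specific formulation:

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes
    the post-synaptic spike, depress otherwise (constants illustrative)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

def lif_spike_times(input_current, threshold=1.0, leak=0.95, dt=1e-3):
    """Leaky integrate-and-fire neuron: integrates input, leaks between
    steps, and 'spikes' (transmitting to the next layer) at threshold."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v = leak * v + i_in
        if v >= threshold:
            spikes.append(step * dt)
            v = 0.0  # reset membrane potential after spiking
    return spikes

pre = lif_spike_times([0.30] * 20)   # pre-synaptic neuron, stronger drive
post = lif_spike_times([0.25] * 20)  # post-synaptic neuron, fires later
if pre and post:
    print("weight change:", stdp_dw(pre[0], post[0]))  # positive: potentiation
```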

Alternatively, a high dynamic range (HDR) image may be constructed using aggregated events from ambient data, and a machine learning model trained to recognize hand position or controller position and orientation from HDR images. The trained machine learning model may be applied to HDR images generated from the events to determine the hand/finger position or controller position and orientation. The machine learning model may be a general machine learning model trained with supervised learning techniques as discussed in the general neural network training section.
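A crude sketch of the event-aggregation idea (again mine, not the patent's): accumulate signed DVS events over a short time window into a single normalised frame that a conventionally trained image model could then consume. The event tuple layout and the window length are assumptions:

```python
import numpy as np

def events_to_frame(events, height, width, window=0.05):
    """Aggregate DVS events (t, x, y, polarity) over a time window into a
    single intensity-change frame -- a crude stand-in for the HDR
    reconstruction the patent mentions; real pipelines are more elaborate."""
    frame = np.zeros((height, width), dtype=np.float32)
    t0 = events[0][0] if events else 0.0
    for t, x, y, pol in events:
        if t - t0 > window:
            break  # events assumed time-ordered; stop past the window
        frame[y, x] += 1.0 if pol else -1.0
    # Normalise to [0, 1] so the frame can feed an image-trained model.
    span = np.ptp(frame)
    return (frame - frame.min()) / span if span > 0 else frame

events = [(0.000, 3, 4, 1), (0.010, 3, 4, 1), (0.020, 7, 2, 0)]
print(events_to_frame(events, height=8, width=8).max())
```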
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 20 users