BRN Discussion Ongoing

GazDix

Regular
Does anyone know when the quarterly is due?
End of this month. But
Fed up with our Jurassic banking system - I transferred $2500 to my trading account on Saturday and it still isn’t available. I’m sure it’ll be clear tomorrow once the price has pumped up again 🫣
This is one thing that blockchain technology will solve. Using Bitcoin or any crypto (including stablecoins), you can already transfer money and have it arrive in five minutes or under on most chains, weekends included. The technology is still in its infancy, but it works.

What will be really cool, though, is that with blockchain tech some emerging decentralised banks are looking at paying wages in real time rather than weekly, fortnightly or monthly. For example, for the five minutes I spent typing this post I would be able to see, and access, the five cents of wages landing in my bank account. This can be implemented now, and I think some countries/banks will introduce it soon.
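To put rough numbers on the idea (back-of-envelope figures assumed here, not anything a bank or exchange has published), streaming a salary just means dividing it by the seconds in a working year and crediting it continuously:

```python
# Back-of-envelope sketch of "streamed" wages (illustrative figures only).
SECONDS_PER_MINUTE = 60

annual_salary = 65_000.00          # assumed salary, dollars
working_weeks = 48                 # assumed working weeks per year
hours_per_week = 38
working_seconds = working_weeks * hours_per_week * 3600

rate_per_second = annual_salary / working_seconds

minutes_typing = 5
accrued = rate_per_second * minutes_typing * SECONDS_PER_MINUTE
print(f"Accrued while typing for {minutes_typing} minutes: ${accrued:.2f}")
# Roughly $3 under these assumptions; the "5 cents in 5 minutes" above just
# corresponds to a lower rate. The point is the granularity, not the amount.
```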

That being said, it might turn out like the show Beyond 2000: nothing will really change, haha.

Apologies that this isn't about Brainchip, but it is still tech-centred. Hope your money comes soon @robsmark so you can buy more cheap shares!
 
Reactions: 9 users

robsmark

Regular
This is one thing that blockchain technology will solve. …
I’m with you mate - I’ve been a massive BTC advocate since 2017.
 
Reactions: 4 users

(Remember the old one-shot 2D image of a tiger, after which AKD1000 can recognise tigers?...)

Accurate depth estimation across different modalities

Depth estimation and 3D reconstruction is the perception task of creating 3D models of scenes and objects from 2D images. Our research leverages input configurations including a single image, stereo images, and 3D point clouds. We’ve developed SOTA supervised and self-supervised learning methods for monocular and stereo images with transformer models that are not only highly efficient but also very accurate. Beyond the model architecture, our full-stack optimization includes using neural architecture search with DONNA (Distilling Optimal Neural Networks Architectures) and quantization with the AI Model Efficiency Toolkit (AIMET). As a result, we demonstrated the world’s first real-time monocular depth estimation on a phone that can create 3D images from a single image. Watch my 3D perception webinar for more details.
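If it helps to see what "monocular depth estimation" looks like in code, here is a minimal, self-contained PyTorch sketch: a toy encoder-decoder (not Qualcomm's DONNA-searched or AIMET-quantized network) that turns a single RGB frame into a dense depth map.

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Toy encoder-decoder: one RGB image in, one dense depth map out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        # Predict inverse depth, then convert to positive depth values.
        inv_depth = torch.sigmoid(self.decoder(self.encoder(rgb)))
        return 1.0 / (inv_depth + 1e-3)

model = TinyDepthNet().eval()
image = torch.rand(1, 3, 128, 160)        # stand-in for a camera frame
with torch.no_grad():
    depth = model(image)                  # shape: (1, 1, 128, 160)
print(depth.shape, float(depth.min()), float(depth.max()))
```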

Efficient and accurate 3D object detection

3D object detection is the perception task of finding positions and regions of individual objects. For example, the goal could be detecting the corresponding 3D bounding boxes of all vehicles and pedestrians on LiDAR data for autonomous driving. We are making possible efficient object detection in 3D point clouds. We’ve developed an efficient transformer-based 3D object detection architecture that utilizes 2D pseudo-image features extracted in the polar space. With a smaller, faster, and lower power model, we’ve achieved top accuracy scores in the detection of vehicles, pedestrians, and traffic signs on LiDAR 3D point clouds.
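As a rough illustration of the "2D pseudo-image in polar space" idea (my own numpy sketch, not the architecture described above): LiDAR points are binned by range and azimuth, and each cell keeps a simple feature such as maximum height, giving a compact 2D grid that an ordinary 2D backbone can consume.

```python
import numpy as np

def polar_pseudo_image(points, n_range=64, n_azimuth=128, max_range=50.0):
    """Bin (x, y, z) LiDAR points into a range-azimuth grid of max heights."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2)
    azi = np.arctan2(y, x)                           # [-pi, pi)
    keep = rng < max_range
    r_idx = (rng[keep] / max_range * n_range).astype(int).clip(0, n_range - 1)
    a_idx = ((azi[keep] + np.pi) / (2 * np.pi) * n_azimuth).astype(int).clip(0, n_azimuth - 1)

    grid = np.full((n_range, n_azimuth), -np.inf)
    np.maximum.at(grid, (r_idx, a_idx), z[keep])     # max height per cell
    grid[np.isinf(grid)] = 0.0                       # empty cells get 0
    return grid

cloud = np.random.uniform([-40, -40, -2], [40, 40, 3], size=(100_000, 3))
bev = polar_pseudo_image(cloud)
print(bev.shape)                                     # (64, 128)
```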

Low latency and accurate 3D pose estimation

3D pose estimation is the perception task of finding the orientation and key-points of objects. For XR applications, accurate and low-latency hand and body pose estimation are essential for intuitive interactions with virtual objects within a virtual environment. We’ve developed an efficient neural network architecture with dynamic refinements to reduce the model size and latency for hand pose estimation. Our models can interpret 3D human body pose and hand pose from 2D images, and our computationally scalable architecture iteratively improves key-point detection with less than 5mm error – achieving the best average 3D error.
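To make "iteratively improves key-point detection" concrete, here is a toy refinement loop under my own assumptions (the correction step here cheats by using the ground truth; a real correction head is learned): each extra stage shrinks the mean 3D error, which is what "computationally scalable" buys you.

```python
import numpy as np

rng = np.random.default_rng(0)
true_keypoints = rng.uniform(-0.1, 0.1, size=(21, 3))     # 21 hand joints, metres

# Stage 0: a coarse initial estimate (e.g. from a small backbone).
estimate = true_keypoints + rng.normal(0, 0.02, size=(21, 3))

def refinement_stage(current, target, step=0.5):
    """Toy stand-in for a learned correction head: move part-way to the target."""
    return current + step * (target - current)

for stage in range(4):
    error_mm = np.linalg.norm(estimate - true_keypoints, axis=1).mean() * 1000
    print(f"stage {stage}: mean 3D error = {error_mm:.2f} mm")
    estimate = refinement_stage(estimate, true_keypoints)
```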

3D scene understanding

3D scene understanding is the perception task of decomposing a scene into its 3D and physical components. We’ve developed the world’s first transformer-based inverse rendering for scene understanding. Our end-to-end trained pipeline estimates physically-based scene attributes from an indoor image, such as room layout, surface normal, albedo (surface diffuse reflectivity), material type, object class, and lighting estimation. Our AI model leads to better handling of global interactions between scene components, achieving better disambiguation of shape, material, and lighting. We achieved SOTA results on all 3D perception tasks and enable high-quality AR applications such as photorealistic virtual object insertion into real scenes.
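The "inverse rendering" idea is easiest to see through the forward model it inverts. In a minimal Lambertian sketch (my illustration, not the pipeline described above), a pixel is roughly albedo times the cosine between surface normal and light direction, and the same image can be explained by different albedo/lighting combinations, which is exactly the ambiguity the model has to resolve.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 4, 4
albedo = rng.uniform(0.2, 0.9, size=(H, W, 3))          # surface diffuse colour
normals = rng.normal(size=(H, W, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])                    # single distant light

# Forward (rendering): image = albedo * max(0, n . l)
shading = np.clip(normals @ light_dir, 0.0, None)[..., None]
image = albedo * shading

# The ambiguity inverse rendering must resolve: halving the albedo and
# doubling the light intensity reproduces exactly the same image.
image_alt = (albedo * 0.5) * (shading * 2.0)
print(np.allclose(image, image_alt))                      # True
```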

Our method correctly estimates lighting to realistically insert objects, such as a bunny.

More 3D perception breakthroughs to come

Looking forward, our ongoing research in 3D perception is expected to produce additional breakthroughs in neural radiance fields (NeRF), 3D imitation learning, neuro-SLAM (Simultaneous Localization and Mapping), and 3D scene understanding in RF (Wi-Fi/5G). In addition, our perception research is much broader than 3D perception as we continue to drive high-impact machine learning research efforts and invent technology enablers in several areas. We are focused on enabling advanced use cases for important applications, including XR, camera, mobile, autonomous driving, IoT, and much more. The future is looking bright as more perceptive devices become available that enhance our everyday lives.
Very nice find @MC🐠 (or Rob) but seriously it fits very nicely with what Anil Mankar said at the 2021 AI Field Day:

"Similarly 3D point cloud by definition 3D point clouds are very sparse. Lidar data is very sparse.
Today people are taking Lidar data and converting it into a 2D kind of image because it's much easier to process the image and detect the object. There is no reason why we can't do that directly in a 3D point cloud and take advantage of that. WE ARE WORKING ON SOME OF THOSE APPLICATIONS and also there are other sensors that send 3D point cloud points and THAT'S ACTUALLY ONE OF THE APPLICATIONS THAT WE HAVE (AND) IS COMING IN BUT BECAUSE IT'S NEUROMORPHIC AND BECAUSE IT'S SO SMALL AND IT REACTS TO THE PIXEL WHERE THE THINGS ARE CHANGING."
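The sparsity Anil is describing is easy to quantify. A quick numpy sketch (illustrative numbers only, not an Akida workload): voxelise a LiDAR-sized point cloud and count how many cells actually contain points; the overwhelming majority are empty, which is what event-driven, neuromorphic processing is built to exploit.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for one LiDAR sweep (real sweeps lie on surfaces, so are even sparser).
points = rng.uniform([-50, -50, -3], [50, 50, 3], size=(120_000, 3))

voxel = 0.2                                          # 20 cm voxels
idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
occupied = len(np.unique(idx, axis=0))

grid_dims = idx.max(axis=0) + 1
total = int(np.prod(grid_dims))
print(f"occupied voxels: {occupied:,} of {total:,} ({100 * occupied / total:.3f}%)")
```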

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 31 users

VictorG

Member
… Apologies that this isn't about Brainchip, but it is still tech-centred. Hope your money comes soon @robsmark so you can buy more cheap shares!
No need for apologies; blockchain will form an integral part of the Brainchip story when Brainchip starts paying dividends.
 
Reactions: 13 users

alwaysgreen

Top 20
No need for apologies; blockchain will form an integral part of the Brainchip story when Brainchip starts paying dividends.

Instead of dividends being paid bi-annually, we could have 0.001 cents transferred to our digital wallet every time Brainchip makes a sale.

I like the sound of that.
 
Reactions: 18 users

VictorG

Member
Instead of dividends being paid bi-annually, we could have 0.001 cents transferred to our digital wallet every time Brainchip makes a sale.

I like the sound of that.
I'm in.
 
Reactions: 4 users

Iseki

Regular
I'm really liking this. XOL Automation has bought into Oculi, and they look after a lot of factory automation. This looks ready to get big straight away - no waiting to see if some engineer puts us in a product in 2025. I wonder if Anil regrets dropping this lead?!
I think Oculi have won a second award *today* at Vision Stuttgart:
 
Reactions: 17 users

robsmark

Regular
Instead of dividends being paid bi-annually, we could have 0.001 cents transferred to our digital wallet every time Brainchip makes a sale.

I like the sound of that.
If Brainchip is adopted as widely as I hope it will be, then the network will need many additional layers to handle that volume of transactions!
 
Reactions: 3 users

Diogenese

Top 20

… Very nice find @MC🐠 (or Rob) but seriously it fits very nicely with what Anil Mankar said at the 2021 AI Field Day …

These Qualcomm patents ...

US9083960B2 Real-time 3D reconstruction with power efficient depth sensor usage [2013]

1665365955113.png


Embodiments disclosed facilitate resource utilization efficiencies in Mobile Stations (MS) during 3D reconstruction. In some embodiments, camera pose information for a first color image captured by a camera on an MS may be obtained and a determination may be made whether to extend or update a first 3-Dimensional (3D) model of an environment being modeled by the MS based, in part, on the first color image and associated camera pose information. The depth sensor, which provides depth information for images captured by the camera, may be disabled, when the first 3D model is not extended or updated.


US10484697B2 Simultaneous localization and mapping for video coding [2019]

1665367946929.png


Video encoding and decoding techniques are described in which a predictive image is formed from texture mapping a composite image to a proxy geometry that provides an approximation of a three-dimensional structure of a current image or a previously encoded or decoded image. A residual between the predictive image and the current image is used to encode or decode the current image.


... do not mention NNs.
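Reading the first abstract (US9083960B2) loosely, the power saving comes from a simple gate: only wake the depth sensor when the camera pose says the existing 3D model actually needs extending. A hedged sketch of that control loop, with the coverage test stubbed out since the patent's actual criterion isn't spelled out here:

```python
from dataclasses import dataclass

@dataclass
class DepthSensorState:
    enabled: bool = True
    reads: int = 0

def coverage_of_view(pose_id: int) -> float:
    """Stand-in for 'how much of this view the existing 3D model already explains'."""
    # Pretend later views are well covered because earlier frames built the model up.
    return min(1.0, 0.2 * pose_id)

def step(pose_id: int, sensor: DepthSensorState, threshold: float = 0.9) -> None:
    if coverage_of_view(pose_id) < threshold:
        sensor.enabled = True          # model must be extended: use depth
        sensor.reads += 1
    else:
        sensor.enabled = False         # view already modelled: save power

sensor = DepthSensorState()
for pose_id in range(10):
    step(pose_id, sensor)
print(f"depth sensor used on {sensor.reads} of 10 frames")
```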

So, if Qualcomm have been working on this from at least 2013, what took them so long?
... maybe the clunky old GPU?
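For completeness, the second abstract (US10484697B2) is essentially prediction-plus-residual coding, with the prediction coming from re-projecting already-coded texture onto a rough 3D proxy of the scene. A stripped-down numpy sketch of the residual step, with the proxy-geometry warp faked as a simple pixel shift (my illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
previous = rng.uniform(0, 255, size=(64, 64))              # already-decoded frame
current = np.roll(previous, shift=2, axis=1) + rng.normal(0, 2, size=(64, 64))

# Stand-in for "texture-map the composite image onto the proxy geometry":
# here the predictor just applies the known 2-pixel shift.
predicted = np.roll(previous, shift=2, axis=1)

residual = current - predicted                              # what actually gets coded
print(f"energy of raw frame: {np.abs(current).mean():8.2f}")
print(f"energy of residual:  {np.abs(residual).mean():8.2f}")
```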
 
Reactions: 25 users
I think Oculi have won a second award *today* at Vision Stuttgart:
The more I look into Oculi, the more I like their product offer / tech and where we could fit if we are indeed involved......I do hope so for mine.

They still have a bit of business structuring / funding to work through but looks very positive.

I highlighted some sections I also liked and wouldn't be averse to BRN chucking in some coin maybe in the seed round :)

Recent article.



Oculi eyes computer vision revolution​

Sep 6, 2022 • Thierry Heles​

Charbel Rizk has spun Oculi out of Johns Hopkins University to commercialise technology that gives computer vision the capabilities of the human eye.

Oculi’s technology is able to identify important data within a scene – in this case, a human face – processing only that data and ignoring the rest
As you are reading this text, your eyes are prioritising the screen and deprioritising your surroundings. That sounds obvious when you think about it: you don’t currently need to know if the leaves on your desk plant look a bit brown, but if you just looked away to check your eyes would’ve easily adjusted their priority and your screen would’ve faded into the background.

The likelihood is you’ve never really thought about this. It is simply how sighted people perceive the world. Yet eyes, rather than merely capturing light and passing it all straight through to the brain, in fact do a lot of pre-processing and they do it in parallel very efficiently, sending only relevant details on to the brain.

Computer vision does not currently work this way. Instead, it is a dumb sensor that captures everything and passes it all onto a processor. That means that the limit of what machine vision can achieve is reliant on the data pipeline and how powerful the processor is — the more data that needs to be transferred and processed, the larger the pipeline and memory and the faster a processor and, importantly, the more electricity and time are required.

Charbel Rizk

“When we say gigabits per second, people say ‘there is an Intel processor that can do so many teraflops per second’, but the one detail they miss is that the processor cannot handle that data at once. It has to buffer the data back and forth into memory in order to process all of it — moving data back and forth between memory and processing is what consumes most of the power and time,” Charbel Rizk, founder and chief executive of Johns Hopkins University (JHU) spinout Oculi explains.
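Rizk's "gigabits per second" is easy to sanity-check with a worked example (typical camera figures assumed here, not numbers from the article):

```python
# Raw data rate of a conventional camera front end (illustrative figures).
width, height = 1920, 1080       # pixels
bits_per_pixel = 24              # 8-bit RGB
fps = 60

bits_per_second = width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s")   # ~2.99 Gbit/s before any compression
```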

In addition to processing useless data, this architecture also limits the amount of data that can be accessed from the sensor. Essentially, machine vision today gets you the worst of both worlds: too much useless data and not enough useful data.

There hasn’t really been any innovation in how machine vision is done until now. We’ve merely added ever higher resolution sensors, more powerful processors, and artificial intelligence software that tries to understand the massive amounts of data after it has been captured.

“That architecture is very inefficient and has remained the same for decades. It is why machines have outperformed humans in just about every task subdivision, but human vision remains 40,000 times more efficient than the best computer vision out there.”
Oculi’s founding members include Philippe Pouliquen, a co-inventor on four patents and assistant research professor at Johns Hopkins, and JHU graduate Chad Howard, who serves as lead solutions engineer.

How to save lives by counting raindrops​

Oculi’s technology is an integrated sensing and processing architecture for imaging, dubbed sensing and processing unit (SPU) that is capable of performing tasks at the pixel level and at the edge (in other words, data doesn’t need to be sent back to a server to be processed). It uses parallel processing and in-memory compute in the pixel — it calls that technology IntelliPixel — massively reducing the amount of data that needs to be analysed or stored, and thereby reducing power requirements and latency to the point of the analysis happening in real time. In other words, the SPU knows what to look for in a scene and ignores the rest.

“You reduce a gigabits-per-second problem into a kilobits-per-second problem. So, anything that happens after can be handled effectively, because you can manage kilobits per second in real time without using a lot of power and resources. You can’t do that with gigabits per second.”
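A crude way to see that gigabit-to-kilobit reduction (a toy numpy sketch; Oculi's IntelliPixel does this inside the pixel itself, not in software after readout): keep only the pixels whose intensity changed meaningfully since the last frame and transmit those as (x, y, value) events.

```python
import numpy as np

rng = np.random.default_rng(3)
prev_frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
# Next frame: identical except a small 20x20 moving object plus mild sensor noise.
next_frame = prev_frame.copy()
next_frame[100:120, 200:220] = 255
noise = rng.integers(-2, 3, size=prev_frame.shape)
next_frame = np.clip(next_frame.astype(int) + noise, 0, 255).astype(np.uint8)

threshold = 10
changed = np.abs(next_frame.astype(int) - prev_frame.astype(int)) > threshold
ys, xs = np.nonzero(changed)
events = np.stack([xs, ys, next_frame[ys, xs]], axis=1)    # (x, y, value) per event

full_bytes = next_frame.size                               # 307,200 bytes per frame
event_bytes = events.size * 2                              # rough ~2 bytes per field
print(f"full frame: {full_bytes:,} B, events: {event_bytes:,} B "
      f"({100 * event_bytes / full_bytes:.2f}% of the raw frame)")
```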
Rizk stresses that all of this happens on the same chip — a reality that keeps defying expectations every time he explains it to a new person, he says, because the current model of separate sensor and processor is so ingrained in people’s minds.
While the spinout outsources the manufacturing of the chips, its embedded software is created entirely in-house. In fact, half of Oculi’s staff is focused on the latter, Rizk says.

Rizk gives a real-world example of Oculi’s partnership with a Japan-based company, which was looking for a way to predict regular occurrences of flash flooding that often proved fatal because it was impossible to know exactly when an area was becoming too dangerous.

Oculi’s SPU was able to “count the rain drops and estimate how much rain is falling”, Rizk explains. “That is the kind of application that I personally get excited about.”

The company did not need to adapt its hardware for this, he underlines. “We are the only intelligent software-defined vision sensor. Our intelligence starts at the pixel level, not at the edge of the array, not next to it. The pixel is smart and it’s software-defined because just about every aspect of the sensor can change in real time, which is a flexibility that we’ve never had before. This is why the same chip is used to track a bullet in flight, to do rain measurement and to do gesture control.”

It is the consumer sector that Oculi is targeting first with a view to generating significant revenue. Interacting with virtual reality devices involves “tracking your eyes, your face, your hands, your body movement,” Rizk says. “You would think it’s a simple problem. It is if you don’t care about how much power and money you put into it. But if you’re trying to do it effectively with a small solution that doesn’t cost a lot of money, there is none out there outside of our technology.”

Oculi recently secured its first sizeable deal in the consumer sector when a UK-based company signed a letter of intent for 250,000 units, Rizk notes.

But its gesture tracking specifically also has applications in healthcare, Rizk continues, where Oculi has worked with a client to develop touchless check-in screens in medical facilities.

The pursuit of efficiency​

Rizk is “obsessed with efficiency”, he says. It is partially what motivated him to develop the new vision architecture in a white paper some 20 years ago and as an engineer he found Oculi’s solution through iteration to optimise the architecture (nowadays this approach is called systems engineering) rather than emulating the human eye, having not known how the eye functions when he set out to create his technology.

That drive for efficiency can be a source of frustration as CEO, he admits, because he’s unable to optimise the fundraising process: after all, no entrepreneur can know if an investor will give them money when they walk into a room.

Another challenge for Oculi is the Baltimore and Washington DC ecosystem (the two cities are within an hour’s drive of each other), which has historical strengths in healthcare and cybersecurity. It means finding investors locally in Oculi’s space has proven tricky.
But even though Johns Hopkins University’s strengths also lie in healthcare, Rizk highlights the importance of support from tech transfer office Johns Hopkins Technology Ventures and in particular Brian Stansky, director of innovation hub FastForward.

Due to the intensive demands of leading a startup, Rizk – an associate research professor in the Whiting School of Engineering since 2016 – did not teach this past academic year. Still, he says: “I was teaching two courses that I had developed myself and they are electives, so I can pick and choose the students. That type of teaching is definitely going to be part of my life.”
Finding capable technical staff has been a breeze, Rizk notes, because of his engineering background and extensive network, even if as a first-time entrepreneur finding businesspeople required more work.

One chip to rule them all​

Although Oculi is Rizk’s first startup, he has been entrepreneurial for much of his life. “I developed the first four-rotor drone in the early 90s and ever since then, I’ve had that startup mentality even when I worked for large enterprises like Boeing.”

His ambition is as big as that of any startup founder and he hopes that eventually, Oculi’s technology will become the default for machine vision – in any sector: “If Waymo or Cruise want to put machine vision in their autonomous car, instead of buying a camera and a computer, and then putting it together, we are going to sell them something that already does all the lower level processing and outputs the information that they want: lane markings, traffic lights, pedestrians…

“Everybody wants the same basic information, what is relevant in the scene. They don’t really care about the low-level processing that gets you that information. In fact, it is more of a pain than value added. Because they want to make money with their product, selling you a full solution. They don’t want to drown in low-level integration details. There is a significant value that only systems engineers or integrators really appreciate.

“Ask any automotive original equipment manufacturers or tier 1 suppliers and they’ll tell you the nightmare in integrating sensors from different vendors. Oculi technology will be plug and play because of the significant reduction in data transfer, the interface will be very simple and standard. With that said, the fact that the SPU is programmable, it does not limit access to any and all raw sensor data as needed.

“We want to sell the same SPU to everyone and all they have to do is program it to, say, track the eye.”
With applications in essentially every sector and a chief executive as driven as Rizk at the helm, it seems a safe bet to assume Oculi will pull it off. And now with customers and orders in the pipeline, the focus is shifting to production, Rizk explains. Oculi has secured a strategic partnership and investment with a foundry to mass produce the first of three Oculi products planned for the next five years.

The spinout already has a handful of investors to help it get there: telecoms provider Mada, diversified engineering firm XOL Automation, semiconductor-focused incubator Silicon Catalyst and the CYNC programme, a partnership between aerospace and defence company Northrop Grumman and the Cyber Incubator@bwtech. Rizk is in the process of raising bridge financing while negotiating with multiple lead investors to close the seed round.
 
Reactions: 39 users

MDhere

Regular
Instead of dividends being paid bi-annually, we could have 0.001 cents transferred to our digital wallet every time Brainchip makes a sale.

I like the sound of that.
That's fair 🙂
 
Reactions: 5 users
The more I look into Oculi, the more I like their product offer / tech and where we could fit if we are indeed involved......I do hope so for mine. …
Great article and love the highlights you have chosen.

As a reasonably long time fan, starting in 2016, of Brainchip what immediately stood out for me was that someone unconnected to Brainchip was answering the same old technology thinking that Brainchip, Peter van der Made and Anil Mankar have confronted since they entered the ASX in 2015.

In these two paragraphs Charbel Rizk elegantly disposes of this criticism:

“When we say gigabits per second, people say ‘there is an Intel processor that can do so many teraflops per second’, but the one detail they miss is that the processor cannot handle that data at once. It has to buffer the data back and forth into memory in order to process all of it — moving data back and forth between memory and processing is what consumes most of the power and time,” Charbel Rizk, founder and chief executive of Johns Hopkins University (JHU) spinout Oculi explains.

In addition to processing useless data, this architecture also limits the amount of data that can be accessed from the sensor.
Essentially, machine vision today gets you the worst of both worlds: too much useless data and not enough useful data.”

And herein we find the answer as to why Brainchip AKIDA is ESSENTIAL.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 56 users
Great article and love the highlights you have chosen. … And herein we find the answer as to why Brainchip AKIDA is ESSENTIAL.
Then you'll probs like some of these highlights... "agnostic" sounds familiar ;)




CEO Interview: Charbel Rizk of Oculi
by Daniel Nenni on 11-19-2021 at 6:00 am
Categories: CEO Interviews

Charbel Rizk is CEO of Oculi®, a spinout from Johns Hopkins University, a fabless semiconductor startup commercializing technology to address the high power and latency challenges of vision technology. Dr. Rizk recognized these as barriers to effective AI in his years of experience as a Principal Systems Engineer, Lead Innovator, and Professor at Rockwell Aerospace, McDonnell Douglas, Boeing, JHUAPL and Johns Hopkins University. The Oculi vision solution reduces latency, bandwidth, and/or power consumption by up to 30x.

Why did you decide to create this technology?

Our original motivation was simply to enable more effective autonomy. Our perspective is that the planet needs the “human eye” in AI for energy efficiency and safety. Machines outperform humans in most tasks but human vision remains far superior despite technology advances. Cameras, being the predominant sensors for machine vision, have mega-pixels of resolution. Advanced processors can perform trillions of operations per second. With this combination, one would expect vision architecture (camera + computer) today to be on par with human vision. However, current technology is as much as ~40,000x behind, when looking at the combination of time and energy wasted in extracting the required information. There is a fundamental tradeoff between time and energy, and most solutions optimize one at the expense of the other. Just like biology, machine vision must generate the “best” actionable information very efficiently (in time and power consumption) from the available signal (photons).

What are the major problems with the current technology available in the market?

Cameras and processors operate very differently compared to the eye+brain combination, largely because they have been historically developed for different purposes. Cameras are for accurate communication and reproduction of a scene. Processors have evolved over time with certain applications in mind, with the primary performance measure being operations per second. The latest trend is domain specific architectures (i.e. custom chips), driven by demand from applications such as image processing.

Another important disconnect, albeit less obvious, is the architecture itself. When a solution is developed from existing components (i.e. off-the-self cameras and processors), it becomes difficult to integrate into a flexible solution and more importantly to dynamically optimize in real-time which is a key aspect of human vision.

As the world of automation grows exponentially and the demand for imaging sensors skyrockets, efficient (time and resources) vision technology becomes even more critical to safety (reducing latency) and to conserving energy.

What are the solutions proposed by Oculi?

Oculi has developed an integrated sensing and processing architecture for imaging or vision applications. Oculi patented technology is agnostic to both the sensing modality on the front end (linear, Geiger, DVS, infrared, depth or TOF) and the post-processing (CPU, GPU, AI Processors…) that follows. We have also demonstrated key IP in silicon that can materialize this architecture into commercial products within 12-18 months.

A processing platform that equals the brain is an important step in matching human perception, but it will not be sufficient to achieve human vision without “eye-like” sensors. In the world of vision technology, the eye represents the power and effectiveness of parallel edge processing and dynamic sensor optimization. The eye not only senses the light, it also performs a good bit of parallel processing and only transfers to the brain relevant information. It also receives feedback signals from the brain to dynamically adjust to changing conditions and/or objectives. Oculi has developed a novel vision architecture that deploys parallel processing and in-memory compute in the pixel (zero-distance between sensing and processing) that delivers up to 30x improvements in efficiency (time and/or energy).

The OCULI SPU™ (Sensing & Processing Unit), is a single chip complete vision solution delivering real-time Vision Intelligence (VI) at the edge with software-defined features and an output compatible with most computer vision ecosystems of tools and algorithms. Being fitted with the IntelliPixel™ technology, the OCULI SPU reduces bandwidth and external post-processing down to ~1% with zero loss of relevant information.

The OCULI SPU S12, our GEN 1 go-to-market product, is the industry’s first integrated neuromorphic (eye+brain) silicon deploying sparse sensing, parallel processing + memory, and dynamic optimization.

It offers Efficient Vision Intelligence (VI) that is a prerequisite for effective Artificial Intelligence (AI) for edge applications.
OCULI SPU is the first single-chip vision solution on a standard CMOS process that delivers unparalleled selectivity, efficiency, and speed.

There is significant room for improvement in today’s products by simply optimizing the architecture, in particular the signal processing chain from capture to action, and human vision is a perfect example of what’s possible. At Oculi, we have developed a new architecture for computer and machine vision that promises efficiency on par with human vision but outperforms in speed.

Do you want to talk about the potential markets? R&D?

We have developed a healthy pipeline of customers/partners engagements over a variety of markets from industrial and intelligent transportation to consumers to automotive. Our initial focus is on edge applications for eye, gesture, and face tracking for interactive/smart display and AR/VR markets.
These are near term market opportunities with high volume and Oculi technology offers a clear competitive edge. As biology and nature have been the inspiration for much of the technology innovations, developing imaging technology that mimics human vision in efficiency but outperforms in speed is a logical path. It is a low hanging fruit (performance versus price) as Oculi has successfully demonstrated in multiple paid pilot projects with large international customers. Also unlike photos and videos we collect for personal consumption, machine vision is not about pretty images and the most number of pixels.
 
Reactions: 36 users

MDhere

Regular
Aarggh, I tell ya, I'm having a rant, it will be over soon. My other half, if I must be at all diplomatic right now, says to me "I don't like women with tatts."
Ummm, lol, excuse me, I'm the one who introduced you to AGY and BRN... SO SHUT the F up about my desire to hallmark each BRN dollar. Geeez, ungrateful sod! lol, ok rant over.
What's he gonna do, divorce me when I have 5 tatts? Pppfff
 
Reactions: 25 users

Slade

Top 20
It’s exciting to see the sophisticated, high-end use cases for Akida gaining traction. But I reckon the first Akida product to market will be in something like a doorbell, which is very fine by me. The other use case that I can see being fast-tracked is vibration analysis that detects faults in industrial machinery.
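For anyone curious what that vibration use case looks like computationally, here is a minimal FFT-based sketch (my own toy example, with made-up frequencies and nothing to do with any actual Akida implementation): a healthy machine shows energy at its running speed, while a developing bearing fault adds energy in a characteristic higher band that a simple threshold, or a small neural network, can flag.

```python
import numpy as np

fs = 2_000                                   # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)

def machine_signal(fault: bool) -> np.ndarray:
    signal = np.sin(2 * np.pi * 50 * t)                      # 50 Hz running speed
    if fault:
        signal += 0.4 * np.sin(2 * np.pi * 347 * t)          # bearing-fault tone
    return signal + 0.05 * np.random.default_rng(0).normal(size=t.size)

def fault_band_energy(signal: np.ndarray, lo=300, hi=400) -> float:
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[band].sum())

for label, fault in [("healthy", False), ("faulty", True)]:
    print(label, round(fault_band_energy(machine_signal(fault)), 1))
```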
 
Reactions: 16 users

Slade

Top 20
My opinions are random and ever changing.
 
Reactions: 11 users

Slade

Top 20
I am wondering if anyone has asked BrainChip about Anil’s Oculi/Oculii comment? I think it’s worth asking for confirmation.
 
Reactions: 7 users

Vojnovic

Regular
… What's he gonna do, divorce me when I have 5 tatts? Pppfff
Only five?? He better get used to this:
1665372088236.png
 
Reactions: 12 users

MDhere

Regular
My opinions are random and ever changing.
Trust me, if yr opinions are random and ever changing... yr last one had me thinking: attach a vibration sensor to beds and it will instantly tell u where yr marriage is headed. Lol, is that random enough?
 
Reactions: 14 users

VictorG

Member
Reactions: 4 users