It's POETS day for the graphbot.

Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime...
If you don't have dreams, you can't have dreams come true!
@Tothemoon24 your post is exciting. Oculi's technology is the same technology developed at Johns Hopkins University as described in your post. Brainchip is currently engaged with Oculi. @chapman89's post today shows that Oculi has entered into a strategic agreement with GlobalFoundries (as we all know, Brainchip recently taped out the Akida 1500 on GlobalFoundries technology). Oculi's new chip will be used in smart devices and homes, industrial, IoT, automotive markets and wearables including AR/VR. Prophesee is an Oculi competitor. No wonder NDAs are so well guarded.

Tracking How the Event Camera is Evolving
Event camera processing is advancing and enabling a new wave of neuromorphic technology.
Sony, Prophesee, iniVation, and CelePixel are already working to commercialize event (spike-based) cameras. Even more important, however, is the task of processing the data these cameras produce efficiently so that it can be used in real-world applications. While some are using relatively conventional digital technology for this, others are working on more neuromorphic, or brain-like, approaches.
Though more conventional techniques are easier to program and implement in the short term, the neuromorphic approach has more potential for extremely low-power operation.
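To ground the comparison, it helps to see what an event camera actually emits: not frames, but a sparse, asynchronous stream of per-pixel change events. A minimal sketch of that representation (the field names are illustrative, not any vendor's format):

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp in seconds
    polarity: int  # +1 for a brightness increase, -1 for a decrease

# Instead of dense frames, the sensor emits events only where
# and when a pixel's brightness changes.
stream = [Event(10, 4, 0.000100, +1), Event(11, 4, 0.000150, -1)]
```

Because nothing is emitted for unchanging pixels, the data rate, and hence the downstream processing load, scales with scene activity rather than with resolution times frame rate.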
By processing the incoming signal before having to convert from spikes to data, the load on digital processors can be minimized. In addition, spikes can be used as a common language with sensors in other modalities, such as sound, touch or inertia. This is because when things happen in the real world, the most obvious thing that unifies them is time: When a ball hits a wall, it makes a sound, causes an impact that can be felt, deforms and changes direction. All of these cluster temporally. Real-time, spike-based processing can therefore be extremely efficient for finding these correlations and extracting meaning from them.
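As a toy illustration of that temporal-correlation idea, consider pairing events from two modalities purely by how close their timestamps are. The function below is a hypothetical sketch for illustration, not any published algorithm:

```python
def coincident(times_a, times_b, window=0.01):
    """Pair timestamps (in seconds) from two sensor modalities that
    fall within `window` of each other -- a crude, frame-free stand-in
    for spike-based coincidence detection."""
    return [(ta, tb)
            for ta in times_a
            for tb in times_b
            if abs(ta - tb) <= window]

# A visual event at 0.100 s and a sound event at 0.102 s cluster
# temporally (the ball hitting the wall); the 0.500 s event does not.
pairs = coincident([0.100, 0.500], [0.102])
```

A spiking system gets this pairing almost for free: spikes that arrive together are processed together, with no need to buffer and align whole frames first.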
Last time, on Nov. 21, we looked at the advantage of the two-cameras-in-one approach (DAVIS cameras), which uses the same circuitry to capture both event images, including only changing pixels, and conventional intensity images. The problem is that these two types of images encode information in fundamentally different ways.
Common language
Researchers at Peking University in Shenzhen, China, recognized that to optimize that multi-modal interoperability all the signals should ideally be represented in the same way. Essentially, they wanted to create a DAVIS camera with two modes, but with both of them communicating using events. Their reasoning was both pragmatic—it makes sense from an engineering standpoint—and biologically motivated. The human vision system, they point out, includes both peripheral vision, which is sensitive to movement, and foveal vision for fine details. Both of these feed into the same human visual system.
The Chinese researchers recently described what they call retinomorphic sensing or super vision that provides event-based output. The output can provide both dynamic sensing like conventional event cameras and intensity sensing in the form of events. They can switch back and forth between the two modes in a way that allows them to capture the dynamics and the texture of an image in a single, compressed representation that humans and machines can easily process.
These representations include the high temporal resolution you would expect from an event camera, combined with the visual texture you would get from an ordinary image or photograph.
They have achieved this performance using a prototype that consists of two sensors: a conventional event camera (DVS) and a Vidar camera, a new event camera from the same group that can efficiently create conventional frames from spikes by aggregating over a time window. They then use a spiking neural network for more advanced processing, achieving object recognition and tracking.
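The Vidar idea of recovering intensity by aggregating spikes over a time window can be emulated digitally in a few lines. This is a simplified sketch of the principle (brighter pixels spike more often), not the group's actual pipeline:

```python
import numpy as np

def frame_from_spikes(spikes, shape, t0, t1):
    """Aggregate per-pixel spike counts over the window [t0, t1)
    into an intensity frame: the more spikes a pixel fires in the
    window, the brighter it is rendered."""
    frame = np.zeros(shape, dtype=np.int32)
    for x, y, t in spikes:
        if t0 <= t < t1:
            frame[y, x] += 1
    return frame

# Pixel (0, 0) spikes twice inside the window, pixel (1, 1) once;
# the spike at t = 0.9 falls outside [0.0, 0.5) and is ignored.
spikes = [(0, 0, 0.1), (0, 0, 0.4), (1, 1, 0.2), (0, 0, 0.9)]
frame = frame_from_spikes(spikes, (2, 2), 0.0, 0.5)
```

Widening the window trades temporal resolution for a cleaner intensity estimate, which is exactly the knob that lets one representation serve both dynamics and texture.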
The other kind of CNN
At Johns Hopkins University, Andreas Andreou and his colleagues have taken event cameras in an entirely different direction. Instead of focusing on making their cameras compatible with external post-processing, they have built the processing directly into the vision chip. They use an analog, spike-based cellular neural network (CNN) structure where nearest-neighbor pixels talk to each other. Cellular neural networks share an acronym with convolutional neural networks, but are not closely related.
In cellular CNNs, the input/output links between each pixel and its eight nearest neighbors are built directly in hardware and can be specified to perform symmetrical processing tasks (see figure). These can then be sequentially combined to produce sophisticated image-processing algorithms.
Two things make them particularly powerful. One is that the processing is fast because it is performed in the analog domain. The other is that the computations across all pixels are local. So while there is a sequence of operations to perform an elaborate task, this is a sequence of fast, low-power, parallel operations.
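A cellular CNN template step can be emulated digitally to see what the analog hardware computes. The sketch below applies one symmetric 3x3 template (a Laplacian-style point/edge detector, chosen here only as an example) to every pixel and its eight nearest neighbors:

```python
import numpy as np

# One symmetric 3x3 template; each weight is the coupling between
# a pixel and one of its eight nearest neighbors (centre = self).
TEMPLATE = np.array([[-1.0, -1.0, -1.0],
                     [-1.0,  8.0, -1.0],
                     [-1.0, -1.0, -1.0]])

def cnn_step(image, template=TEMPLATE):
    """One cellular-CNN update: every pixel combines itself and its
    eight nearest neighbors through the same local template. The
    chip does this for all pixels in parallel in analog; here it is
    emulated with an explicit (slow) loop to make the locality obvious."""
    h, w = image.shape
    padded = np.pad(image, 1)          # zero boundary condition
    out = np.zeros_like(image, dtype=float)
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * template)
    return out

# A single bright pixel: the template responds strongly at the pixel
# and negatively at its immediate neighbors.
img = np.zeros((5, 5))
img[2, 2] = 1.0
response = cnn_step(img)
```

Chaining several such steps, each with a different template, gives the sequence of fast, low-power, parallel local operations described above.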
A nice feature of this work is that the chip has been implemented in three dimensions using Chartered 130nm CMOS and Tezzaron interconnection technology. Unlike many 3D systems, in this case the two tiers are not designed to work separately (e.g. processing on one layer, memory on the other, and relatively sparse interconnects between them). Instead, each pixel and its processing infrastructure are built on both tiers operating as a single unit.
Andreou and his team were part of a consortium, led by Northrop Grumman, that secured a $2 million contract last year from the Defense Advanced Research Projects Agency (DARPA). While exactly what they are doing is not public, one can speculate that the technology they are developing will have some similarities to the work they’ve published.
Shown is the 3D structure of a cellular neural network cell (right) and layout (bottom left) of the Johns Hopkins University event camera with local processing.
In the dark
We know DARPA has strong interest in this kind of neuromorphic technology. Last summer the agency announced that its Fast Event-based Neuromorphic Camera and Electronics (FENCE) program granted three contracts to develop very-low-power, low-latency search and tracking in the infrared. One of the three teams is led by Northrop Grumman.
Whether or not the FENCE project and the contract announced by Johns Hopkins University are one and the same, it is clear that event imagers are becoming increasingly sophisticated.
Click "BRN Quotes" for an alternative.
No one can tell me that Akida is not being used by Oculi.
It's happy days. Perhaps we will get an update on this next week, either in the podcast that comes out at 6am on Monday!! or in our annual report due out sometime next week.
Oculi eyes computer vision revolution - Global University Venturing
Charbel Rizk has spun Oculi out of Johns Hopkins to commercialise technology that gives computer vision the capabilities of the human eye. (globalventuring.com)
Oculi Forms strategic partnership with GlobalFoundries to advance edge sensing technology
Oculi is putting the "Eye" in AI. OCULI SPU - Sensing and Processing Unit. BALTIMORE, MD, UNITED STATES, February 15, 2023 /EINPresswire.com/ -- Oculi today announced a strategic partnership with GlobalFoundries Inc. (GF) (Nasdaq: GFS), a global leader in feature-rich semiconductor manufacturing... (www.8newsnow.com)
BrainChip Tapes Out AKD1500 Chip in GlobalFoundries 22nm FD SOI Process
Design And Reuse - Catalog of IP Cores and Silicon on Chip solutions for IoT, Automotive, Security, RISC-V, AI, ... and ASIC Design Platforms and Resources (www.design-reuse.com)
Hi @Diogenese

Hi Salde,
On your hypothesis, you still need a processor, because Akida1500 doesn't have one.
Is there any connection through Xilinx SOC?
We used their COTS FPGA for a Studio accelerator 6 years ago.
We did some early work with them, didn't we?
View attachment 29881
Cheers.
You want coincidences?

Hypothetical question: could taping out Akida 1500 on GlobalFoundries be enough evidence that Akida works on GF technology, and thus give Oculi the confidence to develop their own chip through GF that incorporates Akida IP?
I feel there are too many coincidences (at least in my head) to ignore.
41:50 is where Anil mentions we've "directly worked with" Oculi.
You mean this one?
Check out the NASA SBIR for 22nm FD-SoI NN sans processor.
Thanks Fmf,
BRN Discussion Ongoing
Pretty sure (but not double-checked) that some of this may have already been posted. Jan '23 NASA SBIR solicitations. There are a couple of points in here I have noticed and highlighted. Coincidence? NASA preferred... such as 22-nm FDSOI. Section towards bottom of copy speaks of the... (thestockexchange.com.au)
Welcome.
My short term memory is somewhat deficient.
My reason for the question is the paragraph below that was written in the Akida 1500 tape-out article:
I don't think it's just Oculi who were interested in seeing Akida performing on GlobalFoundries' 22FDX platform. It's a popular platform for efficient industrial and consumer applications and has had several years to mature and gain adoption by the industry.
"The tape-out was completed using GlobalFoundries’ 22nm fully depleted silicon-on-insulator (FD-SOI) technology and is being described as a milestone in validating BrainChip’s IP across different processes and foundries, providing its partners with varied global manufacturing options."
The Brainchip/GlobalFoundries tape-out was announced on 29th Jan 2023.
Two days ago, Oculi, with whom Brainchip is working, announced that they are using GlobalFoundries to help produce their latest chip (did the Akida 1500 tape-out on GlobalFoundries technology give them what they needed to go ahead with their own GF chip?):
Oculi say this about their future chip:
"Oculi’s new vision is ideal for edge applications such as always-on gesture/face/people tracking and low-latency eye tracking, while alternative solutions are too slow, big, and power inefficient. GF is an excellent partner to enable us to quickly get our product to our customers."