BRN Discussion Ongoing

Murphy

Life is not a dress rehearsal!
Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime......


If you don't have dreams, you can't have dreams come true!
 
  • Like
Reactions: 4 users

Diogenese

Top 20
Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime......


If you don't have dreams, you can't have dreams come true!
It's POETS day for the graphbot.
 
  • Haha
  • Like
Reactions: 3 users

Slade

Top 20
Tracking How the Event Camera is Evolving


Event camera processing is advancing and enabling a new wave of neuromorphic technology.

Sony, Prophesee, iniVation, and CelePixel are already working to commercialize event (spike-based) cameras. Even more important, however, is the task of processing the data these cameras produce efficiently so that it can be used in real-world applications. While some are using relatively conventional digital technology for this, others are working on more neuromorphic, or brain-like, approaches.

Though more conventional techniques are easier to program and implement in the short term, the neuromorphic approach has more potential for extremely low-power operation.

By processing the incoming signal before having to convert from spikes to data, the load on digital processors can be minimized. In addition, spikes can be used as a common language with sensors in other modalities, such as sound, touch or inertia. This is because when things happen in the real world, the most obvious thing that unifies them is time: When a ball hits a wall, it makes a sound, causes an impact that can be felt, deforms and changes direction. All of these cluster temporally. Real-time, spike-based processing can therefore be extremely efficient for finding these correlations and extracting meaning from them.
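
As a concrete illustration of that temporal clustering, here is a minimal Python sketch; the event streams, the timestamps, and the 10 ms coincidence window are illustrative assumptions, not anything specified in the article:

```python
# Minimal sketch: finding temporal correlations between spike streams
# from two modalities (e.g., vision and sound). The 10 ms window and
# all data below are illustrative assumptions.
import numpy as np

def coincident_events(t_vision, t_audio, window=0.010):
    """Return (vision, audio) spike-time pairs that fall within
    `window` seconds of each other -- the temporal clustering the
    article describes when a ball audibly hits a wall."""
    pairs = []
    for tv in t_vision:
        # Index where tv would slot into the sorted audio spike times;
        # only the two nearest audio spikes can be coincident.
        i = np.searchsorted(t_audio, tv)
        for j in (i - 1, i):
            if 0 <= j < len(t_audio) and abs(t_audio[j] - tv) <= window:
                pairs.append((tv, t_audio[j]))
    return pairs

# Example: the spikes near 1.000 s cluster across modalities; the rest do not.
vision = np.array([0.500, 1.000, 2.300])
audio = np.array([0.950, 1.002, 3.100])
print(coincident_events(vision, audio))  # [(1.0, 1.002)]
```

In a genuinely spike-based system this matching falls out of the dynamics rather than an explicit search; the sketch only shows how little information beyond timing is needed to bind the two modalities.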

Last time, on Nov. 21, we looked at the advantage of the two-cameras-in-one approach (DAVIS cameras), which uses the same circuitry to capture both event images, including only changing pixels, and conventional intensity images. The problem is that these two types of images encode information in fundamentally different ways.

Common language

Researchers at Peking University in Shenzhen, China, recognized that to optimize that multi-modal interoperability all the signals should ideally be represented in the same way. Essentially, they wanted to create a DAVIS camera with two modes, but with both of them communicating using events. Their reasoning was both pragmatic—it makes sense from an engineering standpoint—and biologically motivated. The human vision system, they point out, includes both peripheral vision, which is sensitive to movement, and foveal vision for fine details. Both of these feed into the same human visual system.

The Chinese researchers recently described what they call retinomorphic sensing or super vision that provides event-based output. The output can provide both dynamic sensing like conventional event cameras and intensity sensing in the form of events. They can switch back and forth between the two modes in a way that allows them to capture the dynamics and the texture of an image in a single, compressed representation that humans and machines can easily process.

These representations include the high temporal resolution you would expect from an event camera, combined with the visual texture you would get from an ordinary image or photograph.

They have achieved this performance using a prototype that consists of two sensors: a conventional event camera (DVS) and a Vidar camera, a new event camera from the same group that can efficiently create conventional frames from spikes by aggregating over a time window. They then use a spiking neural network for more advanced processing, achieving object recognition and tracking.
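
As a rough sketch of that aggregation step, the code below counts spikes per pixel over a time window to recover a conventional intensity frame; the (t, x, y) event layout and the window length are assumptions for illustration, not Vidar's actual interface:

```python
# Minimal sketch of the Vidar-style idea: reconstruct an intensity
# frame from spike events by counting spikes per pixel over a time
# window. Brighter pixels fire more often, so the per-pixel spike
# count serves as an intensity estimate.
import numpy as np

def spikes_to_frame(events, width, height, t_start, t_window):
    """events: iterable of (t, x, y) spike records."""
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y in events:
        if t_start <= t < t_start + t_window:
            frame[int(y), int(x)] += 1.0
    peak = frame.max()
    return frame / peak if peak > 0 else frame  # normalize to [0, 1]

# Example: one bright pixel (3 spikes) and one dim pixel (1 spike).
events = [(0.001, 2, 1), (0.002, 2, 1), (0.003, 2, 1), (0.002, 0, 0)]
print(spikes_to_frame(events, width=4, height=3, t_start=0.0, t_window=0.01))
```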

The other kind of CNN

At Johns Hopkins University, Andreas Andreou and his colleagues have taken event cameras in an entirely different direction. Instead of focusing on making their cameras compatible with external post-processing, they have built the processing directly into the vision chip. They use an analog, spike-based cellular neural network (CNN) structure where nearest-neighbor pixels talk to each other. Cellular neural networks share an acronym with convolutional neural networks, but are not closely related.

In cellular CNNs, the input/output links between each pixel and its eight nearest neighbors are built directly in hardware and can be specified to perform symmetrical processing tasks (see figure). These can then be sequentially combined to produce sophisticated image-processing algorithms, as the sketch below illustrates.
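
A minimal software emulation of one such template step follows; the 3x3 edge-detection template is a standard textbook choice, not the template set used on the Johns Hopkins chip, and the sequential Python loop stands in for what the analog hardware does continuously and in parallel:

```python
# Minimal sketch of one cellular-CNN step: every pixel combines its
# own value with its eight nearest neighbors through a fixed 3x3
# template -- the local, parallel operation described above.
import numpy as np

def cnn_step(state, template):
    """Apply one synchronous template update with zero boundary cells."""
    h, w = state.shape
    padded = np.pad(state, 1)
    out = np.zeros_like(state)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * template)
    return np.clip(out, -1.0, 1.0)  # saturating output nonlinearity

# Edge-detecting template: excite self, inhibit the 8 neighbors.
edge = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], dtype=float)

img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0                 # a bright square
print(cnn_step(img, edge))          # interior goes to 0; boundary stands out
```

Chaining several such steps with different templates is how the sequential combinations mentioned above build up more elaborate image-processing algorithms.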

Two things make them particularly powerful. One is that the processing is fast because it is performed in the analog domain. The other is that the computations across all pixels are local. So while there is a sequence of operations to perform an elaborate task, this is a sequence of fast, low-power, parallel operations.

A nice feature of this work is that the chip has been implemented in three dimensions using Chartered 130nm CMOS and Tezzaron interconnection technology. Unlike many 3D systems, in this case the two tiers are not designed to work separately (e.g. processing on one layer, memory on the other, and relatively sparse interconnects between them). Instead, each pixel and its processing infrastructure are built on both tiers operating as a single unit.

Andreou and his team were part of a consortium, led by Northrop Grumman, that secured a $2 million contract last year from the Defense Advanced Research Projects Agency (DARPA). While exactly what they are doing is not public, one can speculate that the technology they are developing will have some similarities to the work they've published.


Shown is the 3D structure of a Cellular Neural Network cell (right) and layout (bottom left) of the Johns Hopkins University event camera with local processing.
In the dark

We know DARPA has strong interest in this kind of neuromorphic technology. Last summer the agency announced that its Fast Event-based Neuromorphic Camera and Electronics (FENCE) program granted three contracts to develop very-low-power, low-latency search and tracking in the infrared. One of the three teams is led by Northrop Grumman.

Whether or not the FENCE project and the contract announced by Johns Hopkins University are one and the same, it is clear that event imagers are becoming increasingly sophisticated.
@Tothemoon24 your post is exciting. Oculi's technology is the same technology developed at Johns Hopkins University as described in your post. Brainchip is currently engaged with Oculi. @chapman89's post today shows that Oculi has entered into a strategic agreement with GlobalFoundries (as we all know, Brainchip recently taped out the Akida 1500 on GlobalFoundries technology). Oculi's new chip will be used in smart devices and homes, industrial, IoT and automotive markets, and wearables including AR/VR. Prophesee is an Oculi competitor. No wonder NDAs are so well guarded.

No one can tell me that Akida is not being used by Oculi.

It's happy days. Perhaps we will get an update on this next week, either in the podcast that comes out at 6am on Monday or in our annual report due out sometime next week.



 
  • Like
  • Fire
  • Love
Reactions: 32 users

Diogenese

Top 20
Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime......


If you don't have dreams, you can't have dreams come true!
Click "BRN Quotes" for an alternative
 
  • Like
Reactions: 1 user

Diogenese

Top 20
@Tothemoon24 your post is exciting. Oculi's technology is the same technology developed at Johns Hopkins University as described in your post. Brainchip is currently engaged with Oculi. @chapman89's post today shows that Oculi has entered into a strategic agreement with GlobalFoundries (as we all know, Brainchip recently taped out the Akida 1500 on GlobalFoundries technology). Oculi's new chip will be used in smart devices and homes, industrial, IoT and automotive markets, and wearables including AR/VR. Prophesee is an Oculi competitor. No wonder NDAs are so well guarded.

No one can tell me that Akida is not being used by Oculi.

It's happy days. Perhaps we will get an update on this next week, either in the podcast that comes out at 6am on Monday or in our annual report due out sometime next week.



Hi Salde,

On your hypothesis, you still need a processor, because Akida1500 doesn't have one.
 
  • Like
Reactions: 2 users

Slade

Top 20
Hi Salde,

On your hypothesis, you still need a processor, because Akida1500 doesn't have one.
Hi @Diogenese
I am not a vegetarian.
I will leave the technical side to you.
I am not saying that the Akida 1500 chip has anything to do with Oculi's new chip. But what I am saying is that there is a very high probability that Oculi is using Akida IP.
 
  • Haha
  • Like
Reactions: 10 users
Hi Salde,

On your hypothesis, you still need a processor, because Akida1500 doesn't have one.
Is there any connection through a Xilinx SoC?

We did some early work with them, didn't we?

[Screenshot: snip from a 2021 slide]
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

Colorado23

Regular
  • Like
  • Thinking
Reactions: 5 users

Diogenese

Top 20
  • Like
  • Fire
  • Thinking
Reactions: 10 users
We used their COTS FPGA for a Studio accelerator 6 years ago.
Cheers.

Thought I'd seen it mentioned by LTHs previously.

That snip is from a 2021 slide.
 
  • Like
Reactions: 3 users

Slade

Top 20
Hypothetical question. Could taping out Akida 1500 on GlobalFoundries be enough evidence that Akida works on GF technology for Oculi to have the confidence to proceed with developing their own chip incorporating Akida IP through GF?
I feel there are too many coincidences (at least in my head) to ignore.
 
  • Like
  • Thinking
  • Fire
Reactions: 16 users

JB49

Regular


41:50 is where Anil mentions we've "directly worked with" Oculi.
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Diogenese

Top 20
Hypothetical question. Could taping out Akida 1500 on GlobalFoundries be enough evidence that Akida works on GF technology and thus give Oculi the confidence to develop their own chip through GF that incorporates Akida IP?
I feel there are too many coincidences (at least in my head) to ignore.
You want coincidences?

Check out the NASA SBIR for 22nm FD-SOI NN sans processor.
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Slade

Top 20


41:50 is where Anil mentions we've "directly worked with" Oculi.

Thank you @JB49
Just to add, it has also been confirmed in an email that Brainchip is engaged with Oculi (not Oculii).
 
Last edited:
  • Like
  • Fire
Reactions: 16 users
You want coincidences?

Check out the NASA SBIR for 22nm FD-SOI NN sans processor.
You mean this one ;)


 
  • Like
  • Love
  • Fire
Reactions: 10 users

Diogenese

Top 20
You mean this one ;)


Thanks Fmf,

My short-term memory is somewhat deficient.
 
  • Like
  • Haha
Reactions: 2 users
  • Haha
  • Like
Reactions: 6 users

Slade

Top 20
Hypothetical question. Could taping out Akida 1500 on GlobalFoundries be enough evidence that Akida works on GF technology for Oculi to have the confidence to proceed with developing their own chip incorporating Akida IP through GF?
I feel there are too many coincidences (at least in my head) to ignore.
My reason for the question is the paragraph below, from the Akida 1500 tape-out article:

"The tape-out was completed using GlobalFoundries’ 22nm fully depleted silicon-on-insulator (FD-SOI) technology and is being described as a milestone in validating BrainChip’s IP across different processes and foundries, providing its partners with varied global manufacturing options."

The Brainchip/GlobalFoundries tape-out was announced on 29th Jan 2023.
Two days ago, Oculi, with whom Brainchip is working, announced that they are using GlobalFoundries to help produce their latest chip (did the Akida 1500 tape-out on GlobalFoundries technology give them what they needed to go ahead with their own GF chip?):

Oculi say this about their future chip:
"Oculi’s new vision is ideal for edge applications such as always-on gesture/face/people tracking and low-latency eye tracking, while alternative solutions are too slow, big, and power inefficient. GF is an excellent partner to enable us to quickly get our product to our customers."
 
  • Like
  • Love
  • Fire
Reactions: 39 users

JDelekto

Regular
My reason for the question is the paragraph below, from the Akida 1500 tape-out article:

"The tape-out was completed using GlobalFoundries’ 22nm fully depleted silicon-on-insulator (FD-SOI) technology and is being described as a milestone in validating BrainChip’s IP across different processes and foundries, providing its partners with varied global manufacturing options."

The Brainchip/GlobalFoundries tape-out was announced on 29th Jan 2023.
Two days ago, Oculi, with whom Brainchip is working, announced that they are using GlobalFoundries to help produce their latest chip (did the Akida 1500 tape-out on GlobalFoundries technology give them what they needed to go ahead with their own GF chip?):

Oculi say this about their future chip:
"Oculi’s new vision is ideal for edge applications such as always-on gesture/face/people tracking and low-latency eye tracking, while alternative solutions are too slow, big, and power inefficient. GF is an excellent partner to enable us to quickly get our product to our customers."
I don't think it's just Oculi that was interested in seeing Akida performing on GlobalFoundries' 22FDX platform. It's a popular platform for efficient industrial and consumer applications and has had several years to mature and gain adoption by the industry.

I think this latest tape-out of AKD1500 and the choice to create the next batch of reference chips using GlobalFoundries tech is likely the linchpin that will get IP contracts signed with some unnamed customers.

What I am hoping for at this point is that said customers have been working on prototypes of whatever products they are choosing to bring to market, and that the time from prototype to consumer product will be dramatically shortened, pending the successful testing of this generation of Akida.

While companies may still choose to remain tight-lipped through NDAs in order to retain a competitive advantage in the present, I do think that the day will come when BrainChip's technology will become a lot more mainstream and companies will start to advertise its use in order to attract consumers. I like to call that the "Intel Inside" effect.
 
  • Like
  • Love
  • Fire
Reactions: 41 users