BRN Discussion Ongoing

stuart888

Regular
  • Like
  • Fire
Reactions: 3 users

BaconLover

Founding Member
Some of you may have been paying attention to the increased number of derailments and train/freight accidents occurring across the globe.
A massive one in Ohio, one in Adelaide a couple of days ago, and a number of others elsewhere.
Obviously the environmentalists, media and green activists are silent.

Anyway, I can't wait for Akida to become a standard across these key parts of the economy and human lives, so that such accidents could be avoided with vibration analysis.
It saves humans, water, our food, and so much flora and fauna!
Every time I see these accidents I think "there's a problem and we need a solution".
Good thing is now we've got one.
Time for the marketing gurus to smash a six out of the park while we have just enough overs left in the match.
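For anyone wondering what that vibration analysis looks like in practice, here is a minimal sketch, assuming a single axle-mounted accelerometer and a hand-set band-energy threshold (the sensor placement, frequency band and threshold are all illustrative assumptions; a deployed system would run a trained model on-device rather than a fixed rule):

```python
import numpy as np

def band_energy(samples, rate_hz, lo_hz, hi_hz):
    """RMS spectral energy of a vibration trace inside one frequency band."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    mask = (freqs >= lo_hz) & (freqs <= hi_hz)
    return float(np.sqrt(np.mean(spectrum[mask] ** 2)))

def is_anomalous(window, rate_hz, baseline, factor=3.0):
    """Flag a window whose 500-2000 Hz band energy exceeds the healthy
    baseline by `factor`; band and factor are illustrative, not tuned."""
    return band_energy(window, rate_hz, 500, 2000) > factor * baseline

# Hypothetical usage: 1 s of axle-box accelerometer data at 8 kHz.
rate = 8000
healthy = np.random.randn(rate) * 0.1           # stand-in for a healthy trace
baseline = band_energy(healthy, rate, 500, 2000)
print(is_anomalous(np.random.randn(rate) * 0.1, rate, baseline))
```

The shape of the pipeline is the point: window the signal, extract a spectral feature, compare it against a known-healthy baseline, and raise an alert early instead of after the derailment.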
 
  • Like
  • Love
  • Fire
Reactions: 28 users

stuart888

Regular
Plenty of good vibes over in my camp. Yeah, Brainchip: low power, nothing to do with text! All sorts of sensors.

 
  • Like
  • Fire
Reactions: 3 users

stuart888

Regular
Tractor Ride!

 
  • Like
Reactions: 1 user

stuart888

Regular
Cadence preaching the AI love.

 
  • Like
  • Fire
Reactions: 5 users

stuart888

Regular
Sorry to toss in some Stripe.

This guy has the wow factor. It helps me learn how Brainchip is growing, as they deal with similar challenges.

 

Mugen74

Regular
Big Tesla recall: faulty FSD crash risk, 362,000 cars! Wow.
 
  • Like
Reactions: 4 users

Pappagallo

Regular
  • Haha
  • Like
  • Love
Reactions: 20 users

Tothemoon24

Top 20
Tata Motors.
2023 Harrier & Safari
New feature: 360-degree camera


Tata Harrier and Safari with ADAS to launch in March 2023
By Utkarsh Deshmukh | Published: February 6, 2023


Tata Motors, the third-largest carmaker in India, recently confirmed that the Red Editions of its two highly popular SUVs, the Harrier and Safari, will be launched in March of this year. The company unveiled these new edition models at the 2023 Auto Expo, where it also displayed a plethora of production and concept vehicles. At the time it was anticipated that Tata would launch these SUVs in the second half of the year, but they will now be coming a lot sooner.

Like every other special edition of these SUVs that the company has launched in the last few years, there are no mechanical changes this time either. However, unlike the other editions, these two SUVs will now get some much-anticipated features apart from minor aesthetic changes on the outside and major changes on the inside.


To begin with, the new Harrier and Safari have received an update for the 2023 model year that includes the addition of advanced driver assistance systems (ADAS). The automaker is now providing ADAS technology including autonomous emergency braking (AEB), forward collision alert, lane assist, and even traffic sign recognition. The Red Dark model also comes with six airbags in addition to these state-of-the-art collision avoidance technologies.

One of the standout features of the new 2023 Red Edition automobiles is the inclusion of a substantially larger 10.25-inch touchscreen, which is once again combined with a nine-speaker JBL sound system. Additionally, the SUV siblings now get a fully digital instrument cluster in place of the previous semi-digital one, and they now include the eagerly awaited 360-degree camera.


The new “Carnelian” red seat upholstery with a quilted pattern, red leatherette grab handles, grey dashboard trim, and piano black accents on the steering wheel are additional upgrades to the interior of the Red Edition SUVs. Additionally, the front seats in the new Edition cars are now ventilated, and the driver’s seat is electrically adjustable with memory.

In addition to ambient lighting surrounding the panoramic sunroof, the three-row Safari now boasts a ventilated second row, a motorised front passenger seat with a “Boss” mode that allows the rear passenger to pull the seat forward, and motorised seats in all three rows.

The SUV’s exterior, on the other hand, has seen few alterations; it now just features a slightly different shade of “Oberon Black” and a faint tinge of red on the front grille. With the exception of the red-painted brake calipers, the new versions come with the same 18-inch alloy wheel style.

The forthcoming Red Edition SUVs will use the same 2.0-litre turbocharged diesel engine, as previously mentioned, and have not undergone any mechanical alterations. This engine produces a maximum output of 170 PS and maximum torque of 350 Nm. In both the Harrier and Safari it is available with a 6-speed manual or 6-speed automatic transmission.

Tata Harrier, Safari with ADAS launching soon:
 
  • Like
  • Thinking
Reactions: 10 users

Xray1

Regular
Great find. This may explain, in part, the issue of employee incentive shares.
I'm quite surprised that neither Peter nor Anil was listed as one of the inventors on that list!
 
  • Like
Reactions: 3 users

stuart888

Regular
Searching YouTube for "neuromorphic spiking" uploaded in the last 24 hours is interesting. Seems like a hot topic!

I cannot keep up.
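(For anyone who wants to reproduce that kind of last-24-hours search outside the YouTube UI, the YouTube Data API v3 supports a publishedAfter filter; a minimal sketch, assuming you have an API key — the key value below is a placeholder:)

```python
import datetime
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder; get one from Google Cloud

# RFC 3339 timestamp for 24 hours ago, as the API expects.
cutoff = (datetime.datetime.now(datetime.timezone.utc)
          - datetime.timedelta(hours=24)).strftime("%Y-%m-%dT%H:%M:%SZ")

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/search",
    params={
        "part": "snippet",
        "q": "neuromorphic spiking",
        "type": "video",
        "order": "date",
        "publishedAfter": cutoff,
        "maxResults": 25,
        "key": API_KEY,
    },
)
for item in resp.json().get("items", []):
    print(item["snippet"]["publishedAt"], item["snippet"]["title"])
```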


 
  • Like
Reactions: 4 users

MrNick

Regular
  • Like
  • Haha
Reactions: 2 users

Taproot

Regular
Brainchip stays at home alone, is that what you mean?
But why?
It's all OK, Sirod.
Prophesee and Brainchip are going to achieve great things together.

"Patience is a form of wisdom. It demonstrates that we understand and accept the fact that sometimes things must unfold in their own time "



16:35 - 18:41
Solving motion blur.

19:42 - 20:07
Rob letting US know where the Prophesee / Brainchip collaboration is headed.

25:35 - 28:35
Jerry Maguire "You complete me" moment!

 
  • Like
  • Love
  • Haha
Reactions: 11 users

Tothemoon24

Top 20
Tracking How the Event Camera is Evolving


Event camera processing is advancing and enabling a new wave of neuromorphic technology.

Sony, Prophesee, iniVation, and CelePixel are already working to commercialize event (spike-based) cameras. Even more important, however, is the task of processing the data these cameras produce efficiently so that it can be used in real-world applications. While some are using relatively conventional digital technology for this, others are working on more neuromorphic, or brain-like, approaches.

Though more conventional techniques are easier to program and implement in the short term, the neuromorphic approach has more potential for extremely low-power operation.

By processing the incoming signal before having to convert from spikes to data, the load on digital processors can be minimized. In addition, spikes can be used as a common language with sensors in other modalities, such as sound, touch or inertia. This is because when things happen in the real world, the most obvious thing that unifies them is time: When a ball hits a wall, it makes a sound, causes an impact that can be felt, deforms and changes direction. All of these cluster temporally. Real-time, spike-based processing can therefore be extremely efficient for finding these correlations and extracting meaning from them.
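As a toy illustration of that point (not from the article), matching spike timestamps across two modalities reduces to clustering events that land within a few milliseconds of each other; a minimal sketch, with the window size an assumption:

```python
def coincident_events(spikes_a, spikes_b, window_s=0.005):
    """Pair spike timestamps (in seconds) from two sensor modalities that
    fall within `window_s` of each other. Both lists must be sorted."""
    pairs, j = [], 0
    for t_a in spikes_a:
        # skip spikes in b that are too early to ever match t_a
        while j < len(spikes_b) and spikes_b[j] < t_a - window_s:
            j += 1
        if j < len(spikes_b) and abs(spikes_b[j] - t_a) <= window_s:
            pairs.append((t_a, spikes_b[j]))
    return pairs

# The ball hitting the wall: camera and microphone spike almost together.
vision = [0.100, 0.350, 0.900]
audio = [0.102, 0.600, 0.901]
print(coincident_events(vision, audio))  # [(0.1, 0.102), (0.9, 0.901)]
```

Because both streams are sorted, the matching is a single linear pass, which is part of what makes this kind of temporal correlation cheap to do in real time.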

Last time, on Nov. 21, we looked at the advantage of the two-cameras-in-one approach (DAVIS cameras), which uses the same circuitry to capture both event images, including only changing pixels, and conventional intensity images. The problem is that these two types of images encode information in fundamentally different ways.

Common language

Researchers at Peking University in Shenzhen, China, recognized that to optimize that multi-modal interoperability all the signals should ideally be represented in the same way. Essentially, they wanted to create a DAVIS camera with two modes, but with both of them communicating using events. Their reasoning was both pragmatic—it makes sense from an engineering standpoint—and biologically motivated. The human vision system, they point out, includes both peripheral vision, which is sensitive to movement, and foveal vision for fine details. Both of these feed into the same human visual system.

The Chinese researchers recently described what they call “retinomorphic sensing” or “super vision” that provides event-based output. The output can provide both dynamic sensing like conventional event cameras and intensity sensing in the form of events. They can switch back and forth between the two modes in a way that allows them to capture the dynamics and the texture of an image in a single, compressed representation that humans and machines can easily process.

These representations include the high temporal resolution you would expect from an event camera, combined with the visual texture you would get from an ordinary image or photograph.

They have achieved this performance using a prototype that consists of two sensors: a conventional event camera (DVS) and a Vidar camera, a new event camera from the same group that can efficiently create conventional frames from spikes by aggregating over a time window. They then use a spiking neural network for more advanced processing, achieving object recognition and tracking.
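The aggregation idea is simple enough to sketch: each pixel's spike count over a time window is treated as its brightness. This is a toy reconstruction to show the principle, not the published Vidar algorithm:

```python
import numpy as np

def frame_from_spikes(spike_events, shape, t0, t1):
    """Build an intensity frame from (timestamp, row, col) spike tuples by
    counting each pixel's spikes in [t0, t1): more spikes => brighter."""
    frame = np.zeros(shape, dtype=np.float32)
    for t, r, c in spike_events:
        if t0 <= t < t1:
            frame[r, c] += 1.0
    if frame.max() > 0:
        frame /= frame.max()  # normalize to [0, 1] for display
    return frame

# Toy usage: one pixel spikes often (bright), another rarely (dim).
events = [(0.001, 0, 0), (0.002, 0, 0), (0.003, 0, 0), (0.004, 1, 1)]
print(frame_from_spikes(events, (2, 2), 0.0, 0.01))
```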

The other kind of CNN

At Johns Hopkins University, Andreas Andreou and his colleagues have taken event cameras in an entirely different direction. Instead of focusing on making their cameras compatible with external post-processing, they have built the processing directly into the vision chip. They use an analog, spike-based cellular neural network (CNN) structure where nearest-neighbor pixels talk to each other. Cellular neural networks share an acronym with convolutional neural networks, but are not closely related.

In cellular CNNs, the input/output links between each pixel and its eight nearest neighbors are built directly in hardware and can be specified to perform symmetrical processing tasks (see figure). These can then be sequentially combined to produce sophisticated image-processing algorithms.

Two things make them particularly powerful. One is that the processing is fast because it is performed in the analog domain. The other is that the computations across all pixels are local. So while there is a sequence of operations to perform an elaborate task, this is a sequence of fast, low-power, parallel operations.
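In software, one synchronous step of that local computation looks something like the sketch below: every pixel is updated from a weighted sum of its 3x3 neighborhood under a single shared template, then saturated. The edge-extraction template is an illustrative choice, and the Python loop is of course a slow stand-in for what the chip does in parallel analog hardware:

```python
import numpy as np

def cellular_cnn_step(image, template):
    """One update of a cellular CNN: each pixel becomes the template-weighted
    sum of its 3x3 neighborhood, passed through a saturating nonlinearity."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=np.float32)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * template)
    return np.clip(out, -1.0, 1.0)  # saturate, as the analog cells do

# Illustrative symmetric edge-extraction template (Laplacian-like).
edge = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], dtype=np.float32)

img = np.zeros((5, 5), dtype=np.float32)
img[2, 2] = 1.0  # a single bright pixel
print(cellular_cnn_step(img, edge))
```

Chaining several such steps, each with a different template, is how the sequential "sophisticated image-processing algorithms" mentioned above are composed.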

A nice feature of this work is that the chip has been implemented in three dimensions using Chartered 130nm CMOS and Tezzaron interconnect technology. Unlike many 3D systems, in this case the two tiers are not designed to work separately (e.g. processing on one layer, memory on the other, and relatively sparse interconnects between them). Instead, each pixel and its processing infrastructure are built on both tiers, operating as a single unit.

Andreou and his team were part of a consortium, led by Northrop Grumman, that secured a $2 million contract last year from the Defense Advanced Research Projects Agency (DARPA). While exactly what they are doing is not public, one can speculate that the technology they are developing will have some similarities to the work they’ve published.


Shown is the 3D structure of a cellular neural network cell (right) and the layout (bottom left) of the Johns Hopkins University event camera with local processing.
In the dark

We know DARPA has strong interest in this kind of neuromorphic technology. Last summer the agency announced that its Fast Event-based Neuromorphic Camera and Electronics (FENCE) program granted three contracts to develop very-low-power, low-latency search and tracking in the infrared. One of the three teams is led by Northrop Grumman.

Whether or not the FENCE project and the contract announced by Johns Hopkins University are one and the same, it is clear that event imagers are becoming increasingly sophisticated.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

mrgds

Regular
Big Tesla recall: faulty FSD crash risk, 362,000 cars! Wow.
When the media stop using the word "RECALL", the general population will be better informed.
It's an "OVER-THE-AIR UPDATE FOR THE FSD BETA"... NOT A PHYSICAL RECALL OF 362,000 VEHICLES. :rolleyes:
 
Last edited:
  • Like
  • Thinking
Reactions: 9 users

Mugen74

Regular
Cheers Mrgds, typical dodgy media.
 
  • Like
  • Thinking
Reactions: 6 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Our partner Prophesee talking big... with a mention for us as well.


Smartphones are one field where event cameras may make an unexpected entrance, but Verre says this is just the tip of the iceberg. He is looking forward to a paradigm shift and is most excited about all the applications that will soon pop up for event cameras – some of which we probably cannot yet envision.

“I see these technologies and new tech sensing modalities as a new paradigm that will create a new standard in the market. And in serving many, many applications, so we will see more event-based cameras all around us. This is so exciting."
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Murphy

Life is not a dress rehearsal!
Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime...


If you don't have dreams, you can't have dreams come true!
 
  • Like
Reactions: 4 users

Diogenese

Top 20
Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime...


If you don't have dreams, you can't have dreams come true!
It's POETS day for the graphbot.
 
  • Haha
  • Like
Reactions: 3 users

Slade

Top 20
@Tothemoon24 your post is exciting. Oculi's technology is the same technology developed at Johns Hopkins University as described in your post. Brainchip is currently engaged with Oculi. @chapman89's post today shows that Oculi has entered into a strategic agreement with GlobalFoundries (as we all know, Brainchip recently taped out the Akida 1500 on GlobalFoundries technology). Oculi's new chip will be used in smart devices and homes, industrial, IoT and automotive markets, and wearables including AR/VR. Prophesee is an Oculi competitor. No wonder NDAs are so well guarded.

No one can tell me that Akida is not being used by Oculi.

It's happy days. Perhaps we will get an update on this next week, either in the podcast that comes out at 6 am on Monday or in our annual report, due out sometime next week.



 
  • Like
  • Fire
  • Love
Reactions: 32 users