BRN Discussion Ongoing

Taproot

Regular

Is Digital Radar the Answer to ADAS Interference?​


When our cars brake automatically to avoid an accident, or tuck themselves neatly into a tight parking space without our hands touching the steering wheel, we don’t experience it as a technological miracle as much as the fulfillment of a consumer expectation. This fact is a testament to just how far we have come in the push toward autonomous mobility, that the installation of advanced driver-assistance systems (ADAS) has quietly become an industry standard in the production of new vehicles.

But the technology that makes ADAS work, and that will ultimately carry us into the next, more advanced phase of commercial AVs, still has serious limitations. In order to achieve continued progress and bring about the next generation of mobility, OEMs and system developers will need to make hard decisions about where to focus. In the end, some of the technologies that have been instrumental in bringing us to this point may need to be reimagined—or even replaced—if we hope to reach that next phase in the foreseeable future.

Among the immediate challenges for developers is the use of radar in ADAS, which has recently emerged as a cause of concern as the technology becomes more widespread. Radar is an essential component for achieving accurate sensing in a variety of systems. It is critical to detecting distant objects as well as the speed at which pedestrians or other vehicles may be travelling.

Distance and speed may arguably be the most important aspects of any ADAS. As the volume of radars on roadways increases, many experts worry about potential signal interference—a real liability, and one that would be unconscionable to overlook.

Radar interference


We must first frame and understand the problem: What exactly is radar interference, and what is the extent of the difficulty it represents for AV developers?

Radar interference has been described as the “Achilles’ heel” of automated and driver-assisted vehicles. Radar interference in the context of AVs is surprisingly easy to understand. An emitter transmits a signal that reflects off an object and is collected and processed by the radar’s receiver. But when two or more signals from separate radars cross paths, they can interfere with each other, causing any number of complications.

Interference raises safety issues given that radar is often an indispensable component of ADAS sensing capabilities. Essential functions such as parking-assist, blind-spot detection, adaptive cruise control, forward collision warnings and automatic emergency brakes are all made possible through the use of radar.

The issue gets even trickier when you consider that the latter function is becoming mandatory in as many as 40 countries, as adopted by the United Nations Economic Commission for Europe (UNECE), China, and most recently the U.S.

The worst-case scenario is that the interference problem won’t be sufficiently mitigated before our roads are filled with radar-equipped vehicles in support of more ADAS deployments, potentially resulting in avoidable accidents and fatalities due to signal interference. The phenomenon is referred to as “radar congestion,” occurring when too many radar signals degrade sensor performance.

Radar interference can occur between radars on the same vehicle as well as among radars on adjacent vehicles. If it weren’t quite literally a matter of life and death, we might be able to appreciate the irony of something like this happening as the result of a global mandate. The irony is, of course, that lives would be endangered by a technology meant to protect us.

Possible solutions

The possibility of widespread radar interference diminishing the integrity of safety features constitutes a very real challenge for ADAS designers. Still, no industry consensus exists for solving the interference problem.

The automotive sector has been aware of potential issues related to interference for nearly a decade, informed by a report released by a Europe-based project known as MOSARIM (More Safety for All by Radar Interference Mitigation). The problem has also been studied by various agencies, and the common conclusion is that radar interference poses little to no threat in environments with few competing radars.

When congestion occurs, however, the possibility of an error vastly increases.

As stated in a report by the National Highway Traffic Safety Administration, “systems that operate well in environments with few other radars may suffer significant degradation of performance in radar congested environments… in scenarios with many vehicles operating radars in the 76-81GHz band, the power from other radars will likely exceed the power of echoes from targets needed for specific performance, by several order of magnitude.”
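A rough free-space link-budget sketch (an illustration added here, not from the article; every number is an assumption) shows why a directly received signal from another radar can dwarf a target echo: echo power falls off as 1/R^4, while the direct one-way path falls off only as 1/R^2.

```python
# Back-of-envelope comparison of a target echo versus a direct interfering signal.
# Free-space, generic illustrative numbers only; not measurements of any sensor.
import math

c = 3e8
f = 77e9                       # 77 GHz, within the 76-81 GHz automotive band
lam = c / f                    # wavelength, roughly 3.9 mm

Pt = 1.0                       # transmit power, W (illustrative)
G = 10 ** (20 / 10)            # 20 dBi antenna gain (illustrative)
rcs = 10.0                     # target radar cross-section, m^2 (roughly a car)
R = 100.0                      # target and interferer both at 100 m

# Radar equation: two-way echo power from a target at range R.
P_echo = Pt * G**2 * lam**2 * rcs / ((4 * math.pi) ** 3 * R**4)

# Friis equation: direct one-way power from another radar at the same range.
P_direct = Pt * G**2 * lam**2 / ((4 * math.pi) ** 2 * R**2)

ratio_db = 10 * math.log10(P_direct / P_echo)
print(f"echo power           : {P_echo:.3e} W")
print(f"direct power         : {P_direct:.3e} W")
print(f"interferer advantage : {ratio_db:.0f} dB")
```

With both the target and the interferer at 100 m, the direct signal arrives roughly 40 dB (about four orders of magnitude) stronger, in line with the NHTSA observation above.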

To date, the auto industry has done little more than rely on an allocated frequency spectrum for vehicles (the 76- to 81-GHz range noted above) while leaving the details to individual automotive radar developers, despite the fact that all must operate together within this limited bandwidth. Unfortunately, regulating radar developers would be difficult, and even the best results would likely emerge only over time, after overcoming industry resistance.

The truth is that radar interference needs to be mitigated now, not later. The most logical path forward might be re-evaluating current radar technologies, determining how those technologies might be improved or whether they should be replaced by technologies better equipped to cope with interference.

Many ADAS-equipped vehicles today use frequency modulated continuous wave (FMCW) technologies, or “analog radar”. FMCW-based systems lack the adaptability to function properly in certain environments, including those congested with radar signals. Current FMCW radars rely on techniques such as frequency hopping and timing jitter on the transmit side, as well as time-domain excision on the receive end to mitigate mutual interference from nearby radars.

Those techniques, however, are not without their limitations, and many analog radars still run the risk of detecting a “ghost” target, resulting in “false alarms” that trigger unnecessary activation of automatic braking.
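As a minimal, illustrative sketch of one of the mitigation techniques named above, time-domain excision, the snippet below fakes a de-chirped FMCW beat signal, adds a short high-power interference burst, and blanks any samples that exceed a robust amplitude threshold before the range FFT. It is a toy model, not any production radar's processing chain.

```python
# Toy time-domain excision for FMCW interference (illustrative only).
# One beat tone stands in for a target echo; a brief strong burst stands in for a
# crossing chirp from another radar. Corrupted samples are blanked before the FFT.
import numpy as np

fs = 1.0e6                      # sample rate of the de-chirped (beat) signal, Hz
n = 1024                        # samples per chirp
t = np.arange(n) / fs

beat = np.cos(2 * np.pi * 50e3 * t)                # target echo -> 50 kHz beat tone
noise = 0.1 * np.random.randn(n)                   # receiver noise
interference = np.zeros(n)
interference[400:460] = 20 * np.random.randn(60)   # brief, much stronger crossing chirp

rx = beat + noise + interference

# Robust threshold: flag samples far above the typical amplitude.
thresh = 6 * np.median(np.abs(rx))
corrupted = np.abs(rx) > thresh
excised = np.where(corrupted, 0.0, rx)             # time-domain excision (blanking)

def peak_to_floor_db(x):
    """Ratio of the strongest range bin to the median bin, in dB."""
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    return 20 * np.log10(spec.max() / np.median(spec))

print(f"samples blanked       : {corrupted.sum()}")
print(f"peak/floor before, dB : {peak_to_floor_db(rx):.1f}")
print(f"peak/floor after, dB  : {peak_to_floor_db(excised):.1f}")
```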

By contrast, digital radar technologies are natively better at mitigating interference compared with their analog counterparts.

Digital radars differ from analog systems in many ways, most notably in their use of a unique code for each transmit signal. This is a key element of digital code modulation (DCM), allowing radars to distinguish their own signal from multiple other signals in congested environments.

That feature is absolutely crucial for the widespread adoption of AVs and ADAS technologies. While frequency-hopping techniques used by analog radars remain problematic due to bandwidth availability and lack of standardization, digital radar is comparatively unbounded through its use of 10¹⁸ different unique identifier codes. In fact, DCM makes it inherently more immune to mutual interference, reducing ghost targets that can trigger a false activation of automated braking systems.
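A toy correlation sketch of the idea behind per-radar codes (a generic PMCW-style illustration, not Uhnder's actual DCM waveform): a receiver that correlates against its own pseudo-random code compresses its own echo into a sharp peak, while a stronger signal built on a different code stays near the floor.

```python
# Why per-radar codes separate signals: correlating against your own pseudo-random
# code pulls your own echo out of the clutter, while another radar's code does not
# line up and stays near the noise floor. Generic sketch, not a vendor waveform.
import numpy as np

rng = np.random.default_rng(0)
code_len = 4096
own_code = rng.choice([-1.0, 1.0], size=code_len)     # this radar's unique code
other_code = rng.choice([-1.0, 1.0], size=code_len)   # a nearby radar's code

delay = 300                                            # round-trip delay in samples
rx = np.zeros(code_len + 512)
rx[delay:delay + code_len] += 0.5 * own_code           # our own echo (attenuated)
rx[100:100 + code_len] += 2.0 * other_code             # stronger interfering radar
rx += 0.2 * rng.standard_normal(rx.size)               # receiver noise

# Matched-filter (cross-correlation) against our own code.
corr = np.correlate(rx, own_code, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))

print(f"true delay      : {delay}")
print(f"estimated delay : {est_delay}")
print(f"peak / sidelobe : {np.abs(corr).max() / np.median(np.abs(corr)):.1f}x")
```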

In the absence of regulations, radar interference will at best be a major obstacle to the expansion of autonomous mobility. Worst case, it creates extreme liability and public safety concerns. With regulatory intervention far from certain, OEMs and developers will need to reevaluate radar technologies installed in their vehicles. At the very least, digital radar’s potential role in mitigating interference deserves serious consideration.

–Max Liberman is a vice president at digital automotive radar vendor Uhnder Inc.
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Fisker is frisky for advanced autonomous driving technologies.



Screen Shot 2022-05-02 at 6.31.20 pm.png
 
  • Like
  • Fire
Reactions: 12 users

Diogenese

Top 20
Final annoying question: assuming you gave AKIDA a stopwatch, and going back to our original discussion of LiDAR, could it ignore those pulses not travelling at the correct speed, i.e. the pen laser fired directly into the middle of the sensor?

FF

AKIDA BALLISTA
Assuming that each pixel is only polled for 0.00005 seconds (4000 pixels at 50% duty cycle) every 1/25th of a second, the interfering incoming laser pulse would need to strike the pixel during that 0.00005-second window for it to generate a point in the point cloud. So, in one second, there are 25 times 0.00005 seconds (0.00125 seconds) when a false cloud point event could be registered from a direct rogue laser pulse. That's 1 in 800, which we shall refer to as the temporal probability.

Now that is also dependent on the probability of the interfering beam impinging on the LiDAR receiver's lens, which has an area of less than, say, 3 square inches. I don't know what that probability (let's call it the geometric probability) is, but it would be quite small, say, for example, less than 1 in 10.

So the probability of an interfering beam striking the lens would be less than 1 in 8000, based on the above assumptions about temporal and geometric probabilities.

And remember that that will be only for one frame of the 25 frames per second, which means the system could be programmed to ignore such rogue pulses. Indeed, the system should ignore any incoming laser pulse which saturates the dynamic range of the pixel (if we can somehow account for the instance of a reflection from a retroreflecting bicycle-type reflector, which is designed to reflect most of the incoming light back in the direction from whence it came, but this would still be much smaller than a direct laser strike).

Now rogue scattered reflections may be a different matter - but my head hurts ...
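A quick back-of-envelope check of those figures, using the same assumptions (25 frames per second, a 0.00005-second window per frame, and an assumed 1-in-10 geometric probability):

```python
# Reproduces the figures above under the same assumptions: 25 frames per second,
# each offering one 0.00005 s window in which a rogue pulse could register, and an
# assumed 1-in-10 chance of the beam hitting the lens at all.
frames_per_second = 25
window_s = 0.00005                 # per-frame vulnerability window, seconds

exposed_time = frames_per_second * window_s          # 0.00125 s of exposure per second
temporal_probability = exposed_time / 1.0            # 1 in 800
geometric_probability = 1 / 10                       # assumed upper bound

combined = temporal_probability * geometric_probability
print(f"exposed time per second : {exposed_time} s")
print(f"temporal probability    : 1 in {1/temporal_probability:.0f}")
print(f"combined probability    : 1 in {1/combined:.0f}")   # 1 in 8000
```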
 
  • Like
  • Love
  • Fire
Reactions: 11 users

Taproot

Regular

Is Digital Radar the Answer to ADAS Interference? (Uhnder article quoted in full above)

"Digital lidar is built on the idea that if you can consolidate all of the important functionality of a lidar sensor into semiconductors fabricated in a standard CMOS process, you can put your core technology on a radically different price/performance improvement curve than is possible with other analog, MEMS, or silicon photonics-based approaches.

At this point you may be asking, if using SPADs and VCSELs has all of these performance and cost advantages, why doesn’t everyone use them? The short answer is that it’s really hard to make them work. Back in 2015 when we first began down the path of digital lidar, the current state of the art detectors and lasers would have produced a sensor with a range of only a few meters. Today, our OS2 has a range of over 200 meters and will continue to improve significantly over time. Crucially, its cost – and the cost of all our sensors – will come down at the same time.

At Ouster, we envision a future where lidar-powered solutions are ubiquitous, with high-performing and affordable 3D perception capabilities built for every industry. We are convinced CMOS digital lidar technology is what will get us there."



As a side note to this radar / lidar discussion,
Some will recognise Ouster
Manny + Ouster
Manny + Brainchip
 
  • Like
  • Fire
  • Love
Reactions: 10 users

Diogenese

Top 20

Is Digital Radar the Answer to ADAS Interference? (Uhnder article quoted in full above)

Thanks Taproot,

This may help convince @Fact Finder that I wasn't blathering on just to air my tonsils.

And, of course, Akida is perfect for sieving out the specific digitally coded radar signals from each individual vehicle.
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Taproot

Regular
"Digital lidar is built on the idea that if you can consolidate all of the important functionality of a lidar sensor into semiconductors fabricated in a standard CMOS process, you can put your core technology on a radically different price/performance improvement curve than is possible with other analog, MEMS, or silicon photonics-based approaches.

At this point you may be asking, if using SPADs and VCSELs has all of these performance and cost advantages, why doesn’t everyone use them? The short answer is that it’s really hard to make them work. Back in 2015 when we first began down the path of digital lidar, the current state of the art detectors and lasers would have produced a sensor with a range of only a few meters. Today, our OS2 has a range of over 200 meters and will continue to improve significantly over time. Crucially, its cost – and the cost of all our sensors – will come down at the same time.

At Ouster, we envision a future where lidar-powered solutions are ubiquitous, with high-performing and affordable 3D perception capabilities built for every industry. We are convinced CMOS digital lidar technology is what will get us there."



As a side note to this radar / lidar discussion,
Some will recognise Ouster
Manny + Ouster
Manny + Brainchip

Ouster Ships First Digital Flash Series A-Sample, Achieving Major Milestone on Path to Automotive Readiness

Ouster shipped the first DF A-sample to its global automotive OEM partner, and plans to present A-samples to over thirty automotive OEMs, Tier 1s, and AV companies this year

With zero moving parts, Ouster’s DF series is the first true solid-state flash lidar on the market. The DF A-sample achieved an 8x reduction in size in less than six months, resulting in a more compact sensor than its predecessor, while delivering market-leading performance on range, resolution, and field-of-view. Ouster Automotive plans to present its first A-sample to over 30 automotive OEMs, Tier 1 suppliers, and autonomous vehicle companies in 2022. While the Company is in advanced negotiations for multiple series production programs, it expects the A-sample to significantly accelerate its commercial progress by allowing prospective customers and partners to validate the breakthrough form-factor and performance of the DF sensor.
 
  • Like
  • Fire
  • Love
Reactions: 11 users
D

Deleted member 118

Guest
(Quoting Diogenese's LiDAR probability post above.)

 
  • Haha
  • Like
Reactions: 9 users

Taproot

Regular

Ouster Ships First Digital Flash Series A-Sample, Achieving Major Milestone on Path to Automotive Readiness (press release quoted in full above)
For those who prefer to watch rather than read, from 50 seconds in.




Ouster + Sense Photonics = Ouster Automotive
 
  • Like
  • Fire
Reactions: 12 users
Thanks Taproot,

This may help convince @Fact Finder that I wasn't blathering on just to air my tonsils.

And, of course, Akida is perfect for sieving out the specific digitally coded radar signals from each individual vehicle.
Hi @Diogenese

I believe everything you say but I miss examining expert witnesses and I assumed you missed being examined.

So here we have the best of both worlds and I certainly enjoy your explanations and expanding my understanding.

I will stop being a pest if you prefer. 😂🤣😂

FF

AKIDA BALLISTA
 
  • Haha
  • Like
  • Love
Reactions: 17 users

M_C

Founding Member
  • Like
  • Fire
Reactions: 4 users

Jumpchooks

Regular
On 22 March 2022, Brainchip published on its website this paid-for update by Pitt Street Research under the heading Momentum Keeps Building:


In this report Pitt Street Research acknowledge the developments up to the date of the report, but clearly had no official knowledge of the pending SiFive and Nviso partnerships, or of the MegaChips marketing campaign in the USA that spearheads its offering with AKIDA AI as the advantage it brings to the market.

At the time of the report, Pitt Street Research pointed out that the then share price was sitting at a 58% discount to their true-value assessment of $1.50.

Leaving aside whether Brainchip's true value has been increased beyond $1.50 by the above three events, they have most certainly reinforced the $1.50 true-value proposition being argued by Pitt Street Research.

In other words, if Pitt Street Research were right on 22 March 2022 with their valuation and the price discount at that time, they are on even more solid ground now with these additional assets added to the Brainchip arsenal.

We all know, or at least those who get out of bed know, that overseas institutions have been taking an interest and accumulating shares in Brainchip, and I personally think the reason is dead obvious. However you look at Brainchip, it is undervalued and a buy at these levels, as we have seen occurring.

Global events should always be considered as potential risk factors, but what are they actually, and how do they relate to Brainchip in the short to medium term:

1. Global interest rates:

Brainchip does not have any borrowings and has plenty of cash on hand. Rising interest rates do only one thing to Brainchip: they increase the returns on the cash reserves, which sit at over $AUD40 million.

Will Brainchip's customers who have debt be affected? Yes, of course.

Will this deter them from pursuing AKIDA technology? I would argue no, because if you understand the technology, adoption saves you money on power and reduces the number of semiconductors you need to carry out your activities, and you do not have to throw out your current technology to take advantage.

Also remember that the world is legislating compulsory power reduction across industry, which also places AKIDA technology in the right marketplace.

2. Global semiconductor supply chain issues:

Well, just as all boats go up and down with the tide, this is an industry-wide issue and so is not a specific risk for Brainchip. And, as they have said, because they offer IP as well as chips, it is open to their customers to buy the IP and include it on their own chips. Socionext also appears to be a very good strategic partner when it comes to booking foundry time at TSMC.

3. Russia and Ukraine:

Again, all boats go up and down with the tide, so the risks it creates are equal across the industry, unless you are in an industry that thrives on military conflicts and supplies defence forces.

The versatility of Brainchip's technology offering means that it is being taken up by the defence industry, and so, while this is in its early stages, it is sadly one which is being accelerated by the drawdown of existing armaments to supply weapons to Ukraine.

I have not heard the outcome, but the US President has asked Congress to approve giving Ukraine an additional 40-plus billion dollars' worth of weapons on top of the 2 billion dollars' worth already sent. This means that weapons worth more than US$40 billion need to be replaced, as they were part of the inventory considered necessary to defend the USA. This is effectively a US$40 billion-plus stimulus package for the defence industry.

When it runs down its inventory, the US military does not purchase outdated equipment; it buys the updated, improved versions. This will actually create a faster track to the development and take-up of AKIDA than would otherwise have been the case. Added to this demand is the realisation by NATO countries that Russia is not a benign neighbour, and they are stepping up and also spending large sums to upgrade their militaries.

The US build-up in the Pacific to counter China will also be a factor in increasing this demand advantage that Brainchip has stumbled into, sad as it is in every other respect. Again, countries like Japan, South Korea and Taiwan are also joining in the arms race, not to mention Australia.

4. The Australian Election:

Before the vote in May we will be led by politicians, and sadly after the election we will be led by politicians. The press will run around running bad-news stories, but despite their efforts, whoever we get will inherit the lucky country, with under 4% unemployment and OECD-predicted growth of 4.2% for the coming year.

As AKIDA drives energy efficiency, it is on trend as far as all politics goes, so I personally cannot see any headwinds for Brainchip as a result of the election either way.

5. Covid:

Not much to say really.

Brainchip is in an industry tailor-made for remote working.

Its technology has a place in the global health sector so any global health emergency will be good news for Brainchip's bottom line in due course.

Good health never goes out of fashion; in fact, it is something that everyone, from the poorest in society to the most advantaged, craves constantly.

There are even medical conditions that turn this craving into a disorder where we humans feign illness just so we can seek medical assistance. In some extreme cases we even make those we love ill so that they can receive medical treatment.

I cannot think of any other industry where demand is so assured and so ferocious.



The above are just my uninformed opinions so please treat them as such and as always DYOR
FF

AKIDA BALLISTA
Australia needs a person like you in politics. I have no idea of your leaning, but I would love to see a brain like yours supporting the Left; you would be an asset and the country would benefit from your contribution. You sound like a nice bloke. Just imagine FF for Attorney-General ... whhooo whoooo!
 
  • Like
  • Haha
  • Love
Reactions: 12 users
D

Deleted member 118

Guest
Australia needs a person like you in Politics, I have no idea of your leaning but I would love to see a brain like yours supporting the Left, you would be an asset and the country would benefit for your contribution. You sound like a nice bloke. Just imagine FF for Attorney General ,,, whhooo whoooo !
I’d vote for FF all day long

 
  • Haha
  • Like
Reactions: 7 users

Jumpchooks

Regular
I have a question about Lidar or radar for AVs.

It's great if only 1 AV is using it, but what about when there are two or more cars in the 200 m range, whether going in the same direction or going the other way?

How does each AV know which reflections are from its Lidar, let alone the directly impinging beams from oncoming traffic?

The receiver is going to need to distinguish one set of reflections from maybe 50 sets of reflections and direct beams.
First of all, I am a Luddite with some science knowledge. Is that the equivalent of accidental radio jamming, or interference?
 
  • Love
  • Like
Reactions: 4 users

Dhm

Regular
I’d vote for FF all day long


I'm meeting your Blackadder and raising you a python ptang ptang ole biscuit barrel election result. Not sure if FF is involved, but probably is.
 
  • Haha
Reactions: 3 users

Jumpchooks

Regular
Morning Diogenese,

Interesting thought.

If most, if not all, future cars, drones etc. have this newfangled technology, how does one distinguish their own output / input results from those of another machine or machines?

That's a hell of a lot of laser / lidar / radar / sonar signals being fired around willy-nilly.

Not being at all clued up in this area, I would think each autonomous machine must have a unique signature somehow incorporated into its onboard navigation system.

On the same train of thought, nature seems to have figured it out....

Dolphins and whales, and bats (not sure about jellyfish), all use echolocation to navigate and communicate.

Imagine several thousand bats flying in a tight formation, all sending out little echoes which allow them to navigate blind (not intoxicated), at night, without accident.

Amazing.

I'm certain that if nature has figured out how to overcome this problem, then the boffins at Mercedes-Benz would have a fair idea, never mind those folks at NASA and DARPA.

Regards,
Esq.
I love the analogy, I love your thinking.
 
  • Thinking
  • Like
Reactions: 2 users
Hope it comes in other colours. 🤪 FF
You're starting to sound like Diana Fisher from The Inventors, which aired on the ABC back in the day; she loved pink lol 😂
 
  • Like
  • Love
Reactions: 4 users

Diogenese

Top 20
"Digital lidar is built on the idea that if you can consolidate all of the important functionality of a lidar sensor into semiconductors fabricated in a standard CMOS process, you can put your core technology on a radically different price/performance improvement curve than is possible with other analog, MEMS, or silicon photonics-based approaches.

At this point you may be asking, if using SPADs and VCSELs has all of these performance and cost advantages, why doesn’t everyone use them? The short answer is that it’s really hard to make them work. Back in 2015 when we first began down the path of digital lidar, the current state of the art detectors and lasers would have produced a sensor with a range of only a few meters. Today, our OS2 has a range of over 200 meters and will continue to improve significantly over time. Crucially, its cost – and the cost of all our sensors – will come down at the same time.

At Ouster, we envision a future where lidar-powered solutions are ubiquitous, with high-performing and affordable 3D perception capabilities built for every industry. We are convinced CMOS digital lidar technology is what will get us there."



As a side note to this radar / lidar discussion,
Some will recognise Ouster
Manny + Ouster
Manny + Brainchip
Well Ouster have heard of neural networks, and there is plenty of scope for them to incorporate Akida:

WO2021046547A1 PROCESSING OF LIDAR IMAGES




a kernel-based coprocessor; and a scanning light ranging device comprising: a transmission circuit comprising a plurality of light sources that emit light pulses; a detection circuit comprising: an array of photosensors that detect reflected light pulses and output signals measured over time; and a signal processor connected to the array of photosensors and configured to determine depth values from measurements using the array of photosensors; and an image reconstruction circuit communicably coupled with the detection circuit and configured to: assign a sensor ID to each of first depth values for a first scan of the scanning light ranging device; construct a first lidar image using the first depth values by: mapping, using the sensor IDs, the first depth values to first lidar pixels in the first lidar image, the first lidar image being a rectilinear image, wherein the mapping uses a mapping table that specifies a lidar pixel based on a corresponding sensor ID; and store the first lidar pixels of the first lidar image in a local image buffer of the scanning light ranging device; and send the first lidar pixels of a local frame of the first lidar image or of a complete frame of the first lidar image to the kernel-based coprocessor.


[0177] Classifier 1514 can provide classification information (e.g., classified lidar images with certain pixels identified as corresponding to a same object) to a signal processor 1516, which may be optional or have functions that are implemented in classifier 1514 instead. Various models can be used for classifier 1514, including convolution neural networks, which can include convolutional kernels that can be implemented by a filter kernel (e.g., 1414 of FIG. 14). A classified color image can assign each lidar pixel to an object (e.g., using an ID) so that all pixels corresponding to a same object can be determined via the contents of the classified lidar image. Accordingly, the classification information can indicate which lidar pixels of a lidar image correspond to the same object.

...

For example, classifications using various models can be applied to a same lidar image, e.g., a decision tree and a neural network, or different types of such models, or use of different parameters for a same model, such as number of nodes or hidden layers.

...

[0216] ...
Machine learning techniques, e.g., a neural network may be used.
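As a minimal sketch of the mapping-table idea in the quoted claim (each depth value carries a sensor ID, and a table maps that ID to a pixel in a rectilinear lidar image): the array sizes, IDs and table contents below are invented for illustration and are not taken from the patent.

```python
# Illustration of a sensor-ID to lidar-pixel mapping table, as described in the
# quoted claim. Sizes, IDs and the table contents are hypothetical, not from the
# patent; a real table could also encode per-channel azimuth offsets.
import numpy as np

ROWS, COLS = 64, 1024                      # hypothetical rectilinear image size
image = np.zeros((ROWS, COLS), dtype=np.float32)

# Hypothetical mapping table: sensor_id -> image row.
mapping_table = {sensor_id: sensor_id for sensor_id in range(ROWS)}

def write_measurements(image, column, measurements):
    """Place (sensor_id, depth) pairs from one firing into the image buffer."""
    for sensor_id, depth in measurements:
        row = mapping_table[sensor_id]
        image[row, column] = depth
    return image

# One simulated firing at scan column 10: every channel reports roughly 20 m.
firing = [(sid, 20.0 + 0.01 * sid) for sid in range(ROWS)]
write_measurements(image, column=10, measurements=firing)
print(image[:4, 10])    # the first few channels' depths at that column
```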
 
  • Like
  • Fire
Reactions: 17 users

Diogenese

Top 20
  • Haha
  • Fire
  • Like
Reactions: 7 users

Jumpchooks

Regular
Hi @Diogenese

I read somewhere that an autonomous vehicle, to be given life, will need more than one source of sensory input, and if one of the inputs is in conflict, the majority will rule.

This need for multiple sources necessitates ultra-low-latency processing, hence why AKIDA technology is essential.

The second thing I would say is that I remember from high school science something about angles of incidence equalling angles of reflection. So, assuming my LiDAR sends out one pulse of light which collides with an object, I can expect that pulse to come back at a known angle and in a known time frame, which will tell me, by reference to these two things, time and angle, the distance and the location of the object.

As I am very clever, if the angle does not match the time, then I will know that the pulse of light hitting my sensor is not the pulse of light I sent out, and therefore it must be @Diogenese fooling around with the laser pointer he got for Christmas.

Now, in there somewhere, well above my pay grade, is the Doppler effect, but I think I will leave that to someone who knows what they are talking about.

Suffice it to say, I think random pulses of light must always be in play, even if @Diogenese is the only one who could not sleep and has gone for a drive in the early hours to watch the transit of Venus or something.

Having more than one sensor and majority rules will deal with this issue of random inputs.
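A minimal sketch of those two ideas, a timing gate that rejects returns arriving outside the window of the pulse actually fired, and a simple majority vote across independent sensors; the tolerances, numbers and function names are made up for illustration.

```python
# Toy sketch: (1) accept a return only if its implied range is physically plausible
# for the pulse we actually fired, and (2) let the majority of independent sensors
# rule when one input disagrees. All numbers are illustrative assumptions.
C = 299_792_458.0  # speed of light, m/s

def plausible_return(arrival_time_s, fire_time_s, max_range_m=200.0):
    """Accept a return only if its implied range is positive and within max range."""
    tof = arrival_time_s - fire_time_s
    implied_range_m = C * tof / 2.0
    return 0.0 < implied_range_m <= max_range_m

def majority_rules(votes):
    """True if more than half of the independent sensors report an obstacle."""
    return sum(votes) > len(votes) / 2

# A genuine echo from ~50 m arrives ~334 ns after firing; a rogue pulse showing up
# 5 microseconds later would imply an impossible ~750 m range and is rejected.
print(plausible_return(arrival_time_s=334e-9, fire_time_s=0.0))   # True
print(plausible_return(arrival_time_s=5e-6, fire_time_s=0.0))     # False
print(majority_rules([True, True, False]))  # lidar + radar outvote a doubting camera
```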

My opinion only made up completely out of my own head with nothing but high school science DYOR
FF


AKIDA BALLISTA
The best example I can give is reflectors. Reflector tape only shines back at the light source.
 
  • Like
  • Thinking
Reactions: 3 users