BRN Discussion Ongoing


Deleted member 118

Guest
I suppose we all have bottom lines below which we will never go. I have been uncomfortable with neuralink and Elon Musk experiments on animals but the latest news regarding the deaths of chimpanzees is just too much.

A brief Google will highlight how for at least three decades mainstream medical scientists have been opposed to Chimpanzee research except in the most extreme cases of need where no other alternative exists.

The brutality he has been prepared to inflict on primates fits with his failure to front up for a flight in space. He is a thoroughly unpleasant little person with a very poor moral compass and I pray Brainchip and Mercedes send him back to obscurity.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 14 users

SERA2g

Founding Member
  • Like
  • Haha
Reactions: 26 users

Diogenese

Top 20
Just looked up who Hesai is... A Chinese-headquartered LiDAR solution that looks very, very refined and ready to go / be implemented

I've never heard of them before today (I'd never heard of Renesas before BRN's announcement either), so there definitely is a lot I haven't heard of in this space and I'm learning as much as I can while digging for potential links.😉

wouldn't it be amazing if there is some form of relationship there.. all my research has found is that Hesai have a strong link with Bosch....that's all I could find..😢

this is their flagship LiDAR solution.. looks very, very clean


apologies if someone else has posted any of this.

What did that Chinese patent cover?? I forget.........I need to go back and check and refresh my memory...does anyone have a link?
Where's barrelsitter when I need a patent link?
Hi TT,

Hesai has a lot of patents in China. Several of them are "utility models" (petty patent).

Here is a lidar patent:
CN111983587A Laser radar and transmitting module, receiving module and detection method thereof
The invention discloses a laser radar and a transmitting module, a receiving module and a detection method thereof, the transmitting module of the laser radar comprises a plurality of lasers, the light emitting wavelengths of the lasers comprise a first wavelength and a second wavelength, and the lasers suitable for emitting light at the same time have different light emitting wavelengths; the receiving module of the laser radar comprises a plurality of detectors, and each detector is suitable for receiving an echo light beam, reflected by a target object, of a laser beam emitted by a corresponding laser in an emission module. By adopting the scheme, the anti-interference capability among a plurality of detectors can be improved, and the signal receiving performance of the laser radar can be effectively guaranteed.

 
  • Like
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Not too long ago someone was discussing the Samsung fridge used in some BrainChip presentation slides. Just curious as to whether or not this is the same model?


Posted on 9th February 2022 - Samsung Family Hub fridge

 
Last edited:
  • Like
Reactions: 11 users

Eirexpat

Member
Is the tide coming back in? Bottom hit?

Possibly heading back towards greener pastures.....
 
  • Like
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Haha
Reactions: 28 users

buena suerte :-)

BOB Bank of Brainchip
Is the tide coming back in? Bottom hit?

Possibly heading back towards greener pastures.....
🤞🤞 hope so! Just a snippet of good news would do nicely ...Please .. Thank you..📰 :cool:
 
  • Like
Reactions: 6 users

Wickedwolf

Regular
I suppose we all have bottom lines below which we will never go. I have been uncomfortable with neuralink and Elon Musk experiments on animals but the latest news regarding the deaths of chimpanzees is just too much.

A brief Google will highlight how for at least three decades mainstream medical scientists have been opposed to Chimpanzee research except in the most extreme cases of need where no other alternative exists.

The brutality he has been prepared to inflict on primates fits with his failure to front up for a flight in space. He is a thoroughly unpleasant little person with a very poor moral compass and I pray Brainchip and Mercedes send him back to obscurity.

My opinion only DYOR
FF

AKIDA BALLISTA
I wouldn’t believe everything you read about Musk…he’s upsetting a lot of very powerful people. I own shares in only 2 companies, BrainChip n Tesla and can confidently say that 99% of mainstream news about Tesla and Musk usually turns out to be rubbish (he still may be an unpleasant little man). Think of all legacy auto, all fossil fuels n all the unions including sleepy Joe are keen to see him come undone.
 
  • Like
Reactions: 13 users

Eirexpat

Member
$1.45 heading north!! ;)
 
  • Like
  • Fire
Reactions: 8 users

Dhm

Regular
Well-written article from the VP of GTI on GSA Global.



EDGE AI COMPUTING ADVANCEMENTS DRIVING AUTONOMOUS VEHICLE POTENTIAL

Written by Manouchehr Rafie, Ph.D. | VP of Advanced Technologies | Gyrfalcon Technology Inc.
Large numbers of sensors, massive amounts of data, ever-increasing computing power, real-time operation and security concerns required for autonomous vehicles are driving the core of computation from the cloud to the edge of the network. Autonomous vehicles are constantly sensing and sending data on road conditions, location and the surrounding vehicles. Self-driving cars generate roughly 1 GB of data per second – it is impractical to send even a fraction of the terabytes of data for analysis to a centralized server because of the processing bandwidth and latency.
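To put the quoted ~1 GB/s sensor rate in perspective, a quick back-of-the-envelope calculation (the figures derived here, and the 2 h/day driving assumption, are mine, not the article's):

```python
# Data volume implied by the quoted sensor rate of ~1 GB/s.
GB_PER_SECOND = 1
tb_per_hour = GB_PER_SECOND * 3600 / 1000  # GB per hour -> TB per hour
tb_per_day = tb_per_hour * 2               # assuming ~2 h of driving per day
print(tb_per_hour, tb_per_day)             # 3.6 TB/h, 7.2 TB/day
```

Even a single hour of driving produces terabytes, which is why uploading raw sensor data for remote inference is a non-starter.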
Due to the high volume of data transfer, latency issues and security, the current cloud computing service architecture hinders the vision of providing real-time artificial intelligence processing for driverless cars. Thus, deep learning, as the main representative of artificial intelligence, can be integrated into edge computing frameworks. Edge AI computing addresses latency-sensitive monitoring such as object tracking and detection, location-awareness, as well as privacy protection challenges faced in the cloud computing paradigm.
The real value of edge AI computing can only be realized if the collected data can be processed locally and decisions and predictions can be made in real-time with no reliance on remote resources. This can only happen if the edge computing platforms can host pre-trained deep learning models and have the computational resources to perform real-time inferencing locally. Latency and locality are key factors at the edge since data transport latencies and upstream service interruptions are intolerable and raise safety concerns (ISO26262) for driverless cars. As an example, the camera sensors on a vehicle should be able to detect and recognize its surrounding environment without relying on computational resources in the cloud within 3ms and with high reliability (99.9999%). For a vehicle with 120 km/h speed, 1ms round-trip latency corresponds to 3 cm between a vehicle and a static object or 6 cm between two moving vehicles.
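The latency-to-distance figures above follow from simple kinematics; a minimal sketch (the function name is my own, not from the article):

```python
# Distance a vehicle covers during a given network latency.
def distance_during_latency_cm(speed_kmh: float, latency_ms: float) -> float:
    speed_m_per_s = speed_kmh * 1000 / 3600   # km/h -> m/s
    metres = speed_m_per_s * (latency_ms / 1000)
    return metres * 100                       # m -> cm

# At 120 km/h, 1 ms of latency corresponds to ~3.3 cm relative to a static
# object, and ~6.7 cm of closure between two vehicles approaching head-on
# (the article rounds these to 3 cm and 6 cm).
print(round(distance_during_latency_cm(120, 1), 2))      # 3.33
print(round(2 * distance_during_latency_cm(120, 1), 2))  # 6.67
```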
Currently, most existing onboard AI computing tasks for autonomous vehicle applications including object detection, segmentation, road surface tracking, sign and signal recognition are mainly relying on general-purpose hardware – CPUs, GPUs, FPGAs or generic processors. However, power consumption, speed, accuracy, memory footprint, die size and BOM cost should all be taken into consideration for autonomous driving and embedded applications. High power consumption of GPUs magnified by the cooling load to meet the thermal constraints, can significantly degrade the driving range and fuel efficiency of the vehicle. Fancy packaging, fan-cooling, and general-purpose implementations have to go. Therefore, there is a need for cheaper, more power-efficient, and optimized AI accelerator chips such as domain-specific AI-based inference ASIC as a practical solution for accelerating deep learning inferences at the edge.
Advantages of Edge computing for AI automotive
Significant efforts have been recently spent on improving vehicle safety and efficiency. Advances in vehicular communication and 5G vehicle to everything (V2X) can now provide reliable communication links between vehicles and infrastructure networks (V2I). Edge computing is most suitable for bandwidth-intensive and latency-sensitive applications such as driverless cars where immediate action and reaction are required for safety reasons.
Autonomous driving systems are extremely complex; they tightly integrate many technologies, including sensing, localization, perception, decision making, as well as the smooth interactions with cloud platforms for high-definition (HD) map generation and data storage. These complexities impose numerous challenges for the design of autonomous driving edge computing systems.
Vehicular edge computing (VEC) systems need to process an enormous amount of data in real time. Since VEC systems are mobile, they often have very strict energy consumption restrictions. Thus, it is imperative to deliver sufficient computing power with reasonable energy consumption, to guarantee the safety of autonomous vehicles, even at high speed.
The overarching challenge of designing an edge computing ecosystem for autonomous vehicles is to deliver real-time processing, enough computing power, reliability, scalability, cost and security to ensure the safety and quality of the user experience of the autonomous vehicles.
[Figure 2]

Low Latency
Zero (low) latency for automotive safety is a must. Many of the self-driving car makers are envisioning that sensor data will flow up into the cloud for the further data processing, deep learning, training and analysis required for their self-driving cars. This allows automakers to collect tons of driving data and use machine learning to improve AI self-driving practices and learning. Estimates suggest that sending data back and forth across a network would take at least 150-200ms. This is a huge amount of time, given that the car is in motion and that real-time decisions need to be made about the control of the car.
According to Toyota, the amount of data transmitted between cars and the cloud could reach 10 exabytes a month by 2025. That’s 10,000 times the current amount. The cloud wasn’t designed to process massive amounts of data quickly enough for autonomous cars.
The self-driving car will be doing time-sensitive processing tasks such as lane tracking, traffic monitoring, object detection or semantic segmentation at the local (edge) level in real-time and taking driving actions accordingly. Meanwhile for longer-term tasks, it is sending the sensor data up to the cloud for data processing and eventually sending the analysis result back down to the self-driving car.
Edge computing technology will thus provide an end-to-end system architecture framework used to distribute computation processes to localized networks. A well-designed AI self-driving and connected car will be a collaborative edge-cloud computing system, efficient video/image processing, and multi-layer distributed (5G) network – a mixture of localized and cloud processing. Edge AI computing is meant to complement the cloud, not completely replace it.
[Figure 5]

Speed
Given the massive volume of data that would otherwise be transmitted back and forth over a network, for safety reasons much of the processing has to occur onboard the vehicle. Computing continuous sensor data onboard, without the need to transfer it, reduces latency and improves accuracy by removing the dependence on connectivity and data transfer speeds.
The interdependency between humans and machines means the velocity of information transfer in real-time is essential. Using edge AI computing involves having enough localized computational processing and memory capacities to be able to ensure that the self-driving car and the AI processor can perform their needed tasks.
Reliability
The safety of autonomous cars is critical. Edge computing reduces the strain on clogged cloud networks and provides better reliability by reducing the lag between data processing and the vehicle. It didn’t take long for autonomous vehicle manufacturers to realize the limitations of the cloud. While the cloud is a necessity, autonomous cars require a more decentralized approach.
With edge computing and edge data centers positioned closer to vehicles, there is less chance of a network problem in a distant location affecting the local vehicles. Even in the event of a nearby data center outage, onboard intelligent edge inferencing of autonomous vehicles will continue to operate effectively on their own because they handle vital processing functions natively.
Today, automakers provide multiple layers of protection and redundancy for power failure, network failure and even compute failure. Vehicles also have the ability to dynamically re-route and power network traffic and even decision-making to bring an autonomous car to a safe stop. Driverless cars with edge AI computing can support onboard diagnostics with predictive analytics, a system that can grow and evolve in features over its lifecycle.
With so many edge computing vehicles connected to the network, data can be rerouted through multiple pathways to ensure vehicles retain access to the information they need. Effectively incorporating Internet of Vehicles (IoV) and edge computing into a comprehensive distributed edge architecture provides unparalleled reliability and availability.
Security
The ultimate challenge of designing an edge computing ecosystem for autonomous vehicles is to deliver enough computing power, redundancy, and security to guarantee the safety of autonomous vehicles. Thus, protecting autonomous driving edge computing systems against attacks at different layers of the sensing and computing stack is of paramount concern.
Security of autonomous vehicles should cover different layers of the autonomous driving edge computing stack. These securities include sensor security, operating system security, control system security, and communication security.
In addition, AI at edge gateways reduces communication overhead and less communication results in an increase in data security.
Scalability
Vehicular edge computing inherently has a distributed architecture that can help bring data to the edge of networks, where vehicles can analyze and interact with the data in real time, as if it were local.
While the cloud is a necessity for certain tasks, autonomous cars require a more decentralized approach. For example, intelligent sensors can have the capability to analyze their own video feeds, determine which frames of a video require attention and send only that data to the server. This decentralized architecture reduces network latency during the data transfer process as data no longer has to traverse the network to the cloud for immediate processing. AI vehicles are being equipped with more onboard computing power than in the past and can perform more tasks on their own, with higher predictability and less latency.
Cost
The increasing number of roadside units (RSUs) equipped with powerful local AI processors can help lower energy consumption, maintenance and operational costs as well as the associated high bandwidth cost of transferring data to the cloud. Meanwhile, one of the key drivers making edge computing a more viable reality today is that the cost of computing and sensors continues to plunge.
AI Automotive Processor Technology
The automotive industry is undergoing key technological transformations, advancing towards higher automation levels. Intelligent driving requires more efficient and powerful AI processors. According to Horizon Robotics’ summary of OEM demands, a higher level of automated driving requires more orders of magnitude tera operations per second (TOPS), namely, 2 TOPS for L2 autonomy, 24 TOPS for L3, 320 TOPS for L4 and 4,000+TOPS for L5.
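A small sketch tabulating those quoted figures (the constants are the article's Horizon Robotics numbers; the code itself is illustrative only):

```python
# Compute-demand figures per autonomy level, as quoted from Horizon Robotics.
TOPS_BY_LEVEL = {"L2": 2, "L3": 24, "L4": 320, "L5": 4000}

def min_tops(level: str) -> int:
    """Minimum quoted tera-operations/second for a given autonomy level."""
    return TOPS_BY_LEVEL[level]

# Each step up is roughly an order of magnitude more compute:
for lo, hi in [("L2", "L3"), ("L3", "L4"), ("L4", "L5")]:
    print(f"{lo} -> {hi}: {TOPS_BY_LEVEL[hi] / TOPS_BY_LEVEL[lo]:.1f}x")
```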
Automotive processors typically fall into three broad categories:
  1. CPU and GPU-based processors: tend to have good flexibility but generally consume more power
  2. FPGAs: require fewer computational resources, but are more costly and offer limited programmability compared to GPUs
  3. ASICs: usually custom-designed, are more efficient in terms of performance, cost and power consumption
[Graph 2]

Conventional CPUs and GPUs are struggling to meet the increasing computing requirements of L4 and L5 autonomous driving levels, where FPGAs/ASICs both outperform CPUs/GPUs. Autonomous vehicles will need enough computing power to become a "data center on wheels". Taking the complexity of automotive applications into consideration, computing power alone is not enough. The energy efficiency, performance and cost-effectiveness of AI automotive processors should also be taken into consideration. Full-custom ASICs are by far superior to GPUs/FPGAs in terms of lower power consumption, performance and cost. That is why the integration of AI-specific ASIC accelerators in autonomous driving computing platforms is booming.
High-Performing Accelerator Chips
The inference accelerator chips of Gyrfalcon Technology, Inc (GTI) have a Convolutional Neural Network Domain-Specific Architecture (CNN-DSA) with a dedicated Matrix Processing Engine (MPE) and an efficient AI Processing in Memory (APiM) technology. As an example, GTI's LightSpeeur 2803S provides a power efficiency of 24 TOPS/Watt with all CNN processing done in internal memory instead of external DRAM. It can classify 448×448 RGB image inputs at more than 16.8 TOPS with a peak power consumption of less than 700mW and with an accuracy comparable to the VGG benchmark. Gyrfalcon's CNN-DSA accelerators are reconfigurable to support CNN model coefficients of various layer sizes and layer types.
For more computationally intensive edge computing applications such as driverless-car AI platforms, GTI's PCIe-based AI accelerator cards using 16x 2803S chips, delivering 270 TOPS at 9.9 TOPS/W power efficiency, can be used for Level 4 AI auto performance demand. Using 4x GTI 2803S PCIe cards (64 chips) can provide top performance of 1080 TOPS for L5 AI auto performance and beyond.
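Those scaling figures can be cross-checked arithmetically; a sketch in which the per-card power is my derived estimate, not a quoted spec:

```python
# Sanity-check the card-level scaling quoted above for GTI's 2803S.
CHIP_TOPS = 16.8         # peak TOPS per 2803S chip (quoted)
CHIPS_PER_CARD = 16      # chips per PCIe accelerator card (quoted)
CARD_TOPS_PER_WATT = 9.9 # card-level efficiency (quoted)

card_tops = CHIP_TOPS * CHIPS_PER_CARD         # ~268.8, quoted as ~270
l5_tops = card_tops * 4                        # 4 cards / 64 chips, quoted as ~1080
card_power_w = card_tops / CARD_TOPS_PER_WATT  # ~27 W implied per card (derived)
print(round(card_tops, 1), round(l5_tops, 1), round(card_power_w, 1))
```

The quoted 270 and 1080 TOPS figures line up with the per-chip numbers once rounded.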
GTI’s AI-based chips have a flexible and scalable architecture and can be easily arranged in either parallel or cascades for any given performance/model size application. Cascading capability provides flexibility and reduces the host workload. Cascading enables support for larger and more complex models (i.e. ResNet-101, ResNet-152, …).
[Figure 8]
The underlying function of many autonomous vehicle applications is deep-learning technology, such as convolutional neural networks (CNNs) that are typically used for vehicle and pedestrian detection, road surface tracking, sign and signal recognition, and voice-command interpretation. GTI’s AI-based architecture is “silicon-proven” standalone accelerator technology that can be used with any type of sensor output, such as visual, audio and other forms of data. This can include high data rates from machine learning cameras and high-resolution LiDAR as well as low data rates from RADAR and ultrasonic sensors.
___________________________________________

About Manouchehr Rafie, Ph.D.

Dr. Rafie is the Vice President of Advanced Technologies at Gyrfalcon Technology Inc. (GTI), where he is driving the company’s advanced technologies in the convergence of deep learning, AI Edge computing, and visual data analysis. He is also serving as the co-chair of the emerging Video Coding for Machines (VCM) at MPEG-VCM standards.
Thanks for posting this FMF, it really explains the necessity for edge chips to enable EV expansion without drowning the cloud. And for the slightly cerebrally challenged (me) a great explanation. I can repeat much of this to others and help spread the word. Cheers.
 
  • Like
Reactions: 10 users

Eirexpat

Member
BOOOOOOOOM!
 
  • Like
Reactions: 2 users

kenjikool

Regular
Sorry posted this on the Tweet forum, wrong spot....

Ok guys, are you ready to see the Akida chip in actual use? Go to 7.40 in this video. Doing some other research on how the "Hey Mercedes" and the screen work was interesting. The screen, while large, doesn't use black, an interesting concept. When black is needed, it actually just turns off the pixels where black would normally be used. A simple idea, but another energy-saving method.

 
  • Like
Reactions: 33 users
I wouldn’t believe everything you read about Musk…he’s upsetting a lot of very powerful people. I own shares in only 2 companies, BrainChip n Tesla and can confidently say that 99% of mainstream news about Tesla and Musk usually turns out to be rubbish (he still may be an unpleasant little man). Think of all legacy auto, all fossil fuels n all the unions including sleepy Joe are keen to see him come undone.
So is the report that 15 chimpanzees have died untrue? FF
 
  • Sad
Reactions: 1 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Anyone got access to this video?






 
  • Like
Reactions: 7 users

Taproot

Regular
  • Like
Reactions: 5 users

ketauk

Emerged
So is the report that 15 chimpanzees have died untrue? FF

They are rhesus macaque monkeys, not chimpanzees

This is an article detailing the response from Neuralink - https://www.teslarati.com/neuralink-inhumane-treatment-animal-testing-response/

Bit difficult to integrate hardware into an actual brain without testing the hardware throughout the development process; they have also tested on pigs

I too don't want to be cruel to animals needlessly

But if that sort of tech could restore the use of limbs to a crippled human in the future (or some other benefit we cannot deliver today), then surely it's worth investigating, and I wouldn't want them to experiment on humans?
 
  • Like
  • Thinking
Reactions: 7 users

Wickedwolf

Regular

McHale

Regular
Just putting it our there @zeeb0t, is there any dev opportunity to stamp all posts made under the BRN ticker on this Discussion Thread? Replies could still be limited as thread specific. Just thinking out loud, would save jumping from thread to thread. Cheers.
I would still like to see the "Old hot crapper" format made available.
 
  • Like
Reactions: 11 users

McHale

Regular
Yeah agreed…
I don't like having to move from thread to thread either, it is really clunky, time consuming etc etc
 
  • Like
  • Sad
Reactions: 10 users

jla

Regular
I would still like to see the "Old hot crapper" format made available.
Yes, I loved the old hot crapper site.
 
  • Like
Reactions: 3 users