BRN Discussion Ongoing

RobjHunt

Regular
Good Morning Chippers,

Getting that strange feeling again, though I have been wrong in the past.

Announcement ?????? Maybe..... Maybe not

?

Regards,
Esq.
Once again mate, great darts!! ;) 👌
 
  • Like
  • Haha
Reactions: 5 users

Boab

I wish I could paint like Vincent
  • Haha
  • Like
Reactions: 8 users

RobjHunt

Regular
  • Like
  • Haha
  • Fire
Reactions: 9 users

cosors

👀
@Esq.111 do you already have the hardcover edition?
:)

___
Does this exist as a hard cover as in the picture? I would love to have it for my library.
SparkFun has it as a soft cover, it seems. Now the Brainchip Premium Gold Edition on vellum ❤️‍🔥


https://www.sparkfun.com/news/7566
 
  • Like
  • Haha
  • Love
Reactions: 14 users

Kachoo

Regular
emotion3D

emotion3D's First Half of 2023

Jul 13, 2023 | Blog

As we reflect on the first half of 2023, the emotion3D team is thrilled to highlight a number of achievements and milestones. From winning new customers and forging new collaborations to securing certifications and participating in prestigious trade shows, our company has been making solid progress in advancing the field of driver and occupant analysis.
We kicked off the year with a trip to Las Vegas. CES, as every year, is our first trip after our Christmas holidays. Not only did we have countless highly productive meetings with customers, partners and other industry stakeholders, but we also announced our collaboration with SAT & Garmin. Together, we devised an innovative solution for drowsiness detection by integrating our Cabin Eye software stack, SAT's sleep onset prediction algorithm, and Garmin's smartwatch technology.
In February, we embarked on an exciting collaboration with Brainchip, a leading provider of neuromorphic processors. By combining our expertise in driver and occupant analysis with Brainchip's state-of-the-art processor, we aimed to revolutionize driving safety and enhance the overall driving experience. Our joint efforts focused on maximizing efficiency and precision while minimizing energy consumption to deliver unparalleled results.
In March, we were awarded another large series production project with a new customer (stay tuned for exciting announcements).
While we dedicated considerable efforts to external collaborations, we also prioritized the enhancement of our internal processes and quality management. We are delighted to announce that our commitment has been recognized through our successful achievement of the TISAX Level 3 certification, which also encompasses the protection of prototype components. This certification proves our dedication to delivering top-tier solutions while upholding the highest industry standards.
Furthermore, we received invitations from prestigious international trade shows and events. These platforms provided us with a remarkable opportunity to showcase our technologies and expertise. Notable among these events were EAS IMS Tech 2023 and InCabin Brussels 2023, where our CTO, Michael Hödlmoser, delivered speeches outlining our expertise in deriving 3D occupant information.
Through our participation in trade shows such as the Automotive Testing Expo 2023 in Stuttgart and the highly specialized InCabin Brussels 2023 event, we had the opportunity to showcase our newest solutions for driver and occupant monitoring to key stakeholders in the industry. Our technology was also present in our partners' booths, as BHTC, Varroc and SAT showcased our joint demos. Moreover, during InCabin Brussels 2023, our CEO, Florian Seitner, joined the press briefing to announce our latest partnership with SAT and Chuhang Tech. This collaboration aims to create a multi-sensor fusion solution that combines camera and radar technologies, complemented by SAT's sleep onset prediction algorithms. Together, we strive to deliver highly accurate drowsiness detection solutions, ensuring utmost safety for drivers and passengers.
With a successful first half of 2023 behind us, we look forward to the second half of the year, when many exciting events such as IAA Mobility 2023 and CES 2024 are already being planned. Stay tuned as we share the newest developments for emotion3D and the automotive in-cabin industry!
Nice one! Things are actually heating up.
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Tothemoon24

Top 20
 
  • Like
  • Fire
  • Love
Reactions: 19 users
Bang on the money Hop!

Institutional Investors & Mutual Funds have increased their holdings.....



Insto's and Mutual Funds have just bought up approx another 5 million shares since Tuesday!
They now hold approx 316 million BRN shares.



 
Last edited:
  • Like
  • Wow
  • Fire
Reactions: 41 users

Mccabe84

Regular
  • Like
  • Fire
Reactions: 12 users

Proga

Regular
There are no published patent applications indicating Luminar has a NN in silicon.

Their latest published application:

US2022309685A1 NEURAL NETWORK FOR OBJECT DETECTION AND TRACKING

claims a method of tracking multiple objects using a processor, i.e., it is software running on a processor. Their processing hardware includes a segmentation module, a classification module, and a tracking module, each of which can include a NN:


[FIG. 3 of US2022309685A1: block diagram of the sensor control architecture 100]


[0076] As seen in FIG. 3, the vehicle includes N different sensors 102 , with N being any suitable integer (e.g., 1, 2, 3, 5, 10, 20, etc.). At least "Sensor 1" of the sensors 102 is configured to sense the environment of the autonomous vehicle by physically interacting with the environment in some way, such as transmitting and receiving lasers that reflect off of objects in the environment (e.g., if the sensor is a lidar device), transmitting and receiving acoustic signals that reflect off of objects in the environment (e.g., if the sensor is a radio detection and ranging (radar) device), simply receiving light waves generated or reflected from different areas of the environment (e.g., if the sensor is a camera), and so on. Depending on the embodiment, all of the sensors 102 may be configured to sense portions of the environment, or one or more of the sensors 102 may not physically interact with the external environment (e.g., if one of the sensors 102 is an inertial measurement unit (IMU)). The sensors 102 may all be of the same type, or may include a number of different sensor types (e.g., multiple lidar devices with different viewing perspectives, and/or a combination of lidar, camera, radar, and thermal imaging devices, etc.).

[0077] The data generated by the sensors 102 is input to a perception component 104 of the sensor control architecture 100 , and is processed by the perception component 104 to generate perception signals 106 descriptive of a current state of the vehicle's environment. It is understood that the term "current" may actually refer to a very short time prior to the generation of any given perception signals 106 , e.g., due to the short processing delay introduced by the perception component 104 and other factors. To generate the perception signals 106 , the perception component 104 may include a segmentation module 110 , a classification module 112 and a tracking module 114 .

[0078] The segmentation module 110 is generally configured to identify distinct objects within the environment, as represented by the sensor data (or a portion of the sensor data). Depending on the embodiment and/or scenario, the segmentation task may be performed separately for each of a number of different types of sensor data (e.g., the segmentation module 110 may include a number of modules operating in parallel), or may be performed jointly on a fusion of multiple types of sensor data. In some embodiments where lidar devices are used, the segmentation module 110 analyzes point cloud frames to identify subsets of points within each frame that correspond to probable physical objects in the environment. In other embodiments, the segmentation module 110 jointly analyzes lidar point cloud frames in conjunction with camera (and/or other) image frames to identify objects in the environment. Examples of lidar devices/systems and point clouds are discussed in further detail below. Other suitable techniques, and/or data from other suitable sensor types, may also be used to identify objects. As used herein, references to different or distinct "objects" may encompass physical things that are entirely disconnected (e.g., with two vehicles being two different "objects"), as well as physical things that are connected or partially connected (e.g., with a vehicle being a first "object" and the vehicle's hitched trailer being a second "object").

[0079] The segmentation module 110 may use predetermined rules or algorithms to identify objects. For example, the segmentation module 110 may identify as distinct objects, within a point cloud, any clusters of points that meet certain criteria (e.g., having no more than a certain maximum distance between all points in the cluster, etc.). Alternatively, the segmentation module 110 may utilize a neural network that has been trained to identify distinct objects within the environment (e.g., using supervised learning with manually generated labels for different objects within test data point clouds, etc.), or another suitable type of machine learning based model. Example operation of the segmentation module 110 is discussed in more detail below in FIG. 5B, for an embodiment in which the perception component 104 processes point cloud data.

[0080] The classification module 112 is generally configured to determine classes (labels, categories, etc.) for different objects that have been identified by the segmentation module 110 . Like the segmentation module 110 , the classification module 112 may perform classification separately for different sets of the sensor data (e.g., the classification module 112 may include a number of modules operating in parallel), or may classify objects based on a fusion of data from multiple sensors, etc. Moreover, and also similar to the segmentation module 110 , the classification module 112 may execute predetermined rules or algorithms to classify objects, use a neural network that has been trained to classify identified objects within the environment (e.g., using supervised learning with manually generated labels for different point cloud representations of distinct objects, etc.), or use another suitable machine learning based model to classify objects. Example operation of the classification module 112 is discussed in more detail below in FIG. 5B, for an embodiment in which the perception component 104 processes point cloud data.

[0081] The tracking module 114 is generally configured to track distinct objects over time (e.g., across multiple lidar point cloud or camera image frames). The tracked objects are generally objects that have been identified by the segmentation module 110 , but may or may not be objects that were classified by the classification module 112 , depending on the embodiment and/or scenario. The segmentation module 110 may assign identifiers to identified objects, and the tracking module 114 may associate existing identifiers with specific objects where appropriate (e.g., for lidar data, by associating the same identifier with different clusters of points, at different locations, in successive point cloud frames). Like the segmentation module 110 and the classification module 112 , the tracking module 114 may perform separate object tracking based on different sets of the sensor data (e.g., the tracking module 114 may include a number of modules operating in parallel), or may track objects based on a fusion of data from multiple sensors. Moreover, and also similar to the segmentation module 110 and the classification module 112 , the tracking module 114 may execute predetermined rules or algorithms to track objects, may use a neural network that has been trained to track identified (and possibly classified) objects within the environment (e.g., using supervised learning with manually generated labels for different pairs or sets of point cloud frames, etc.), or another suitable machine learning model to track objects.

[0082] Because the blocks of FIG. 3 (and various other figures described herein) depict a software architecture rather than physical components, it is understood that, when any reference is made herein to a particular neural network or other software architecture component being "trained," or to the role of any software architecture component (e.g., sensors 102 ) in conducting such training, the operations or procedures described may have occurred on a different computing system (e.g., using specialized development software). Thus, for example, neural networks of the segmentation module 110 , classification module 112 and/or tracking module 114 may have been trained on a different computer system before being implemented within any vehicle. Put differently, the components of the sensor control architecture 100 may be included in a "final" product within a particular vehicle, without that vehicle or its physical components (sensors 102 , etc.) necessarily having been used for any training processes.
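
To make concrete what that software pipeline amounts to, here is a rough Python sketch of the segment -> classify -> track flow the patent describes. The function names, thresholds and the clustering/matching heuristics are placeholders of my own for illustration; Luminar's actual modules would use trained neural networks and far more sophisticated logic.

```python
# Illustrative sketch only: a toy segmentation -> classification -> tracking
# pipeline of the kind paras [0078]-[0081] describe, running as plain software
# on a host processor. Names and thresholds are made up for this example.
from dataclasses import dataclass
import math

@dataclass
class TrackedObject:
    obj_id: int
    centroid: tuple          # (x, y, z) in metres
    label: str = "unknown"

def segment(points, max_gap=0.5):
    """Cluster lidar points: a point joins a cluster if it lies within
    max_gap metres of any existing member (cf. para [0079])."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) <= max_gap for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def classify(cluster):
    """Stand-in for the trained classifier of para [0080]: label by point count."""
    n = len(cluster)
    return "vehicle" if n > 50 else "pedestrian" if n > 10 else "debris"

def track(clusters, previous, next_id, max_jump=2.0):
    """Associate clusters with existing track IDs by nearest centroid (para [0081])."""
    tracks = []
    for c in clusters:
        centroid = tuple(sum(axis) / len(c) for axis in zip(*c))
        nearest = min(previous, key=lambda t: math.dist(t.centroid, centroid), default=None)
        if nearest and math.dist(nearest.centroid, centroid) <= max_jump:
            tracks.append(TrackedObject(nearest.obj_id, centroid, classify(c)))
        else:
            tracks.append(TrackedObject(next_id, centroid, classify(c)))
            next_id += 1
    return tracks, next_id

# Per frame: clusters = segment(frame_points); tracks, next_id = track(clusters, tracks, next_id)
```

In the real system the rule-based stand-ins for classify() and the nearest-centroid matching would be replaced by the trained NNs the application mentions, but the point stands: it is all software iterating over point clouds on a host processor, frame after frame.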

I think it is improbable that Mercedes will revert to a software NN.

Now there are some interesting features of Luminar's laser projector for lidar, such as foveation, which enables the laser pulses to be concentrated more densely on objects of interest, and this could well attract MB, but I doubt they would adopt software to process the reflected pulses.
G'day mate,

Their processing hardware includes a segmentation module, a classification module, and a tracking module, each of which can include a NN. I thought they were using Snapdragon?

The question is, does Luminar's Lidar work? They're in cahoots with Volvo, and in April 2022 Nissan announced they would use Luminar technology to integrate advanced autonomous functionality in all their cars by 2030.

 
  • Thinking
  • Like
  • Fire
Reactions: 3 users

Proga

Regular
Proga, it takes at the bare minimum 3 years, sometimes 4-5 years, for automotive.
Here is a video from the Renesas CEO from 2 months ago outlining how long it takes.
I encourage all here to listen to what he says about how long it takes for automotive.

Have a listen from 55 min mark-

They know all about it. Daimler AG truck unit took a minority stake in Luminar in October 2020, investing in the company as part of its efforts to develop self-driving trucks

As I said to Dio in my last post, I don't even know if Luminar's Lidar works. It's supposed to be plug-in according to their website.
 
  • Like
Reactions: 1 users

stockduck

Regular
  • Like
Reactions: 3 users

Diogenese

Top 20
G'day mate,

Their processing hardware includes a segmentation module, a classification module, and a tracking module, each of which can include a NN. I thought they were using Snapdragon?

The question is, does Luminar's Lidar work? They're in cahoots with Volvo, and in April 2022 Nissan announced they would use Luminar technology to integrate advanced autonomous functionality in all their cars by 2030.

As I said, the Luminar laser pulse transmitter can be controlled to focus more pulses (foveation) on objects of interest to get a better point cloud picture. That aspect of their system would be of interest to auto makers. It probably contributes to a longer range.

[0082] Because the blocks of FIG. 3 (and various other figures described herein) depict a software architecture rather than physical components, it is understood that, when any reference is made herein to a particular neural network or other software architecture component being "trained," or to the role of any software architecture component (e.g., sensors 102 ) in conducting such training, the operations or procedures described may have occurred on a different computing system (e.g., using specialized development software). Thus, for example, neural networks of the segmentation module 110 , classification module 112 and/or tracking module 114 may have been trained on a different computer system before being implemented within any vehicle. Put differently, the components of the sensor control architecture 100 may be included in a "final" product within a particular vehicle, without that vehicle or its physical components (sensors 102 , etc.) necessarily having been used for any training processes.

Their NN is software, as they state in their patent application. I doubt that car makers would be interested in that. It is too slow and too power hungry.

Akida would be able to handle the foveated point cloud processing, so it is possible that Mercedes are adopting the foveated laser pulse transmitter with Akida in the receiver to classify the reflected pulses.
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Tothemoon24

Top 20

ARM Expands Application Range of edgeConnector Products from Softing Industrial

The new version 3.50 of Softing Industrial's edgeConnector products is now compatible with ARM processors. This significantly expands the application possibilities.

by Editorial | July 13, 2023

The Docker-based software modules of Softing's edgeConnector product family provide access to process data in SIMATIC S7, SINUMERIK 840D and Modbus TCP controllers.

Version 3.50 of edgeConnector Siemens, edgeConnector 840D and edgeConnector Modbus is now compatible with the 64-bit version of ARM (Advanced RISC Machines) processors. This extends the application possibilities to devices like Raspberry Pi, Cisco IR1101, Orange Pi 5 or RevPi Connect.
By using container technology, edgeConnector products are very quickly ready for use. They are operated on standard hardware and can be easily administered centrally. This gives users a simple and secure way to integrate data from production into innovative and flexible Industrial IoT solutions. All edgeConnector products support state-of-the-art security standards such as SSL/TLS, X.509 certificates, authentication, and data encryption. They can be easily configured locally via an integrated web interface or managed remotely via a REST API. The individual edgeConnector products are available for download and free trial from online directories such as Docker Hub or Microsoft Azure Marketplace.
For more information, visit: https://industrial.softing.com
Tags: ARM processors, edgeConnector, Modbus TCP controllers, SIMATIC S7, SINUMERIK 840D, Softing Industrial
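
For anyone who wants to try something like this on an ARM board, here is a minimal sketch using the Docker SDK for Python. The image name, container name and port mapping are placeholders I have made up for illustration, and I am assuming Softing publishes multi-architecture images; check Docker Hub for their actual repository, tags and settings before running anything.

```python
# Minimal sketch: start an edgeConnector-style container on an ARM64 host
# (e.g. a Raspberry Pi) using the Docker SDK for Python (pip install docker).
# The image name, container name and port mapping below are hypothetical.
import docker

client = docker.from_env()

# Assuming a multi-architecture image, Docker on an arm64 host pulls the
# matching arm64 variant automatically.
image = "softing/edgeconnector-siemens:latest"   # placeholder image name
client.images.pull(image)

container = client.containers.run(
    image,
    detach=True,
    name="edgeconnector",
    ports={"443/tcp": 8443},                      # expose the web UI / REST API
    restart_policy={"Name": "unless-stopped"},
)
print(container.short_id, container.status)
```

The same thing can be done with plain docker pull / docker run commands; the point is simply that, because the modules ship as container images, deployment on a Raspberry Pi looks much the same as on an x86 edge PC.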
 
  • Like
  • Thinking
Reactions: 8 users

Diogenese

Top 20
They know all about it. Daimler AG truck unit took a minority stake in Luminar in October 2020, investing in the company as part of its efforts to develop self-driving trucks

As I said to Dio in my last post, I don't even know if Luminar's Lidar works. It's supposed to be plug-in according to their website.
Maybe a few extra kilowatts isn't a problem for a truck or ICE, but for EVs, every Watt counts. With a CPU or GPU, speed is traded for power. To speed up software processing, they run several processors in parallel, to get a faster result, burning more power in the process.
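
To put rough numbers on that trade-off, here is a toy calculation. The figures are made up for illustration and are not measurements of any particular chip: under ideal parallel scaling, adding cores cuts latency but multiplies power draw, so the energy spent per frame barely moves, and in practice it gets worse because parallelisation overhead is never free.

```python
# Toy illustration of the speed-vs-power trade-off for a software NN spread
# across parallel cores. All numbers are invented for this example.
single_core_latency_s = 0.040   # one inference on one core
single_core_power_w = 10.0      # power drawn by that one core

for n_cores in (1, 2, 4, 8):
    latency = single_core_latency_s / n_cores      # ideal parallel speed-up
    power = single_core_power_w * n_cores          # every extra core draws power
    energy_per_frame = latency * power             # joules: stays ~constant
    print(f"{n_cores} cores: {latency * 1000:.0f} ms/frame, {power:.0f} W, {energy_per_frame:.2f} J/frame")
```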

Their software patent for object tracking was only filed in mid-2020.
 
  • Like
  • Love
  • Fire
Reactions: 16 users

manny100

Regular
I don't think Mercedes would change direction within 1.5 months like that; these products are out soon, in 6 months lol. They would likely be starting up production very soon lol. I feel some may have been convinced by the shorters that 20 cents was on the cards, and now the price has moved north.
Agree, it takes 2 years or more to test products in vehicles. No way MB will be changing now.
I think this type of ' I feel' negative post is generated by shorters hoping to sow seeds of doubt.
Take no notice.
 
  • Like
  • Haha
  • Thinking
Reactions: 24 users

Proga

Regular
Agree, it takes 2 years or more to test products in vehicles. No way MB will be changing now.
I think this type of ' I feel' negative post is generated by shorters hoping to sow seeds of doubt.
Take no notice.
🤣
 
  • Like
Reactions: 1 users
Well, that was a nice catch-up on today's posts....much more enjoyable when we're heading in the right direction :)

Was checking out emotion3D as well @Tothemoon24

Found a GitHub meet up presso from Nov 22 by various people including a couple from emotion3D.

Couple snips below and full presso HERE

Appears they had a crack at CNN with a Jetson and Ambarella. Maybe they started to see the error of their ways and figured geez...this Akida SNN gig might just be better :LOL:

 
  • Like
  • Love
  • Fire
Reactions: 19 users

cosors

👀
Welcome aboard!
A new, and maybe future, employee has been on board for two months.
And as far as I can see, he has far more followers than any of his 'superiors', except for PVM.
Read the congratulations/comments and recommendations. He seems to be a real talent.
Hopefully they can keep him and he stays on board.


"Mujahir Abbasi
Machine Learning Intern at Brainchip | X-Data Scientist at Accenture Applied Intelligence
2w Edited

I am absolutely thrilled to announce that amidst these unprecedented times, I have been fortunate enough to receive summer internship offers from three incredible companies. After careful consideration, I am delighted to accept the opportunity to join Brainchip as a Machine Learning intern! Brainchip, a pioneering company specializing in Neuromorphic computing, has truly captured my passion for cutting-edge technology and artificial intelligence. Their commitment to pushing the boundaries of innovation aligns perfectly with my own aspirations in the field of machine learning. I am eager to contribute my skills and knowledge to their remarkable team and embark on an exciting journey of learning and growth. I want to thank my friends and family for their unwavering support. Your constant encouragement and belief in me have been my driving force. A special shout-out goes to Dr. Eun-Young Elaine Kang, Cambrian Sorel and James Fukaye, MA, who went above and beyond to assist me in processing my application for the CPT. I would also like to extend my heartfelt appreciation to Nandan Nayampally, Jon Gallegos, Todd Vierra, Nikunj Kotecha, and Sheila Sabanal-Lau,MSHR,PHR for granting me this wonderful opportunity and guiding me throughout the process. I am truly honored to be joining your esteemed organization, and I am eager to contribute my skills and dedication to achieving our shared goals. #machinelearning #intern #artificialintelligence #neuromorphic #neuralnetwork #edgecomputing #summerinternship California State University, Los Angeles"

https://www.linkedin.com/posts/mujahir-abbasi-80b13112a_machinelearning-intern-artificialintelligence-activity-7079659403754287104-u7OF


"Mujahir Abbasi

Machine Learning Intern at Brainchip | X-Data Scientist at Accenture Applied Intelligence
Los Angeles Metropolitan Area
6K followers 500+ connections


About


I am a Masters student at California State University, Los Angeles, previously worked at Accenture Applied Intelligence on identifying defects in GUMS in the production line using state-of-the-art Deep Learning algorithms. I was also responsible for setting up the data pipeline and building interactive PowerBI dashboards. Additionally, I have worked on building a Machine Learning based Demand Planning solution for a multinational consumer goods company.

I was engaged with a US-based consumer intelligence company named Mobilewalla, where I worked on multidomain projects. Prior to Mobilewalla, I worked in the Analytics and Insights group at Tata Consultancy Services (TCS, India's biggest software firm). At TCS, I was on a project of machine learning based sales demand forecasting for a European retail giant. One of my main responsibilities was to deal with enormous amounts of data using Big Data technologies. Apart from this, I also worked on data engineering, data analysis as well as data modeling. Another stint at TCS was with TATA Digital (Tata Neu*), where I implemented the dedupe logic for the "TATA SuperApp".

At AIsee Technologies, I played a very important role for one of their government clients. I actively worked in the area of Deep Learning involving road traffic counting and classification for measuring the traffic volume.

Prior to AIsee Technologies, I worked at Tika Data and was involved in automating data annotation processes.

I have also worked for an educational startup on a tool which helped students predict the colleges where they had the maximum chance of getting selected based on their academic scores.

I have completed my graduation in Information Technology from a reputed university in India followed by a number of online courses in Machine Learning and Deep Learning.

Technology Expertise :- Machine Learning, Deep Learning, Artificial Intelligence, Computer Vision, Python, Azure Databricks, Azure Datalake Storage, AWS S3, AWS EMR, PySpark, MS-Excel, GitLab and Big Data.

Business Domains :- Retail, Advertisement, Consumer Goods, Government Organization and Education."
https://www.linkedin.com/in/mujahir-abbasi-80b13112a?trk=public_post_feed-actor-name

*https://en.wikipedia.org/wiki/Tata_Neu


and before 👇

Data Science Analyst
Accenture Strategy & Consulting
Feb 2021 - Aug 2022
1 year 7 months
Bengaluru, Karnataka, India


His favourite is a Galton board.

Not the original video, as I don't know how to embed it, but it's the exact same board.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 32 users

Proga

Regular
Maybe a few extra kilowatts isn't a problem for a truck or ICE, but for EVs, every Watt counts. With a CPU or GPU, speed is traded for power. To speed up software processing, they run several processors in parallel, to get a faster result, burning more power in the process.

Their software patent for object tracking was only filed in mid-2020.
I was just replying to @chapman89. MB have known about, researched and tested Luminar for 4 years, so it isn't new to them, as he was trying to suggest. From memory, someone posted a fortnight ago that MB was always going to use Luminar in one of their vehicles but may have increased the number, which prompted me to take another look.

Volvo are already using it in their SUV EX90 so this isn't new tech being developed or only for trucks. The fully electric successor to Volvo Cars' XC90, to be revealed in 2022, will come with state-of-the-art sensors, including LiDAR technology developed by Luminar and an autonomous driving computer powered by the NVIDIA DRIVE Orin™ system-on-a-chip, as standard.

Do they ever have a chip, @Slade, but what a mistake using the wrong pictures on your presentations and website. @Diogenese, you looked into the NVIDIA DRIVE Orin™ system-on-a-chip last year. It uses CNNs.
 
Last edited:
  • Like
Reactions: 4 users

Diogenese

Top 20
I was just replying to @chapman89. MB have known about, researched and tested Luminar for 4 years, so it isn't new to them, as he was trying to suggest. From memory, someone posted a fortnight ago that MB was always going to use Luminar in one of their vehicles but may have increased the number, which prompted me to take another look.

Volvo are already using it in their SUV EX90 so this isn't new tech being developed or only for trucks. The fully electric successor to Volvo Cars' XC90, to be revealed in 2022, will come with state-of-the-art sensors, including LiDAR technology developed by Luminar and an autonomous driving computer powered by the NVIDIA DRIVE Orin™ system-on-a-chip, as standard.

Do they ever have a chip, @Slade, but what a mistake using the wrong pictures on your presentations and website. @Diogenese, you looked into the NVIDIA DRIVE Orin™ system-on-a-chip last year.
Thanks Proga,

Luminar's patent for their tracking NN software was filed in mid-2020. It may be that the part of their lidar Volvo are using is the capability to focus laser pulses on objects of interest, but if, as you say, they are running it on Nvidia GPU, then the new graph neural network software could be included in an update after being verified for safety, as it is a critical safety function.

Running on Nvidia, it will use a lot more power and be significantly slower than Akida in classifying objects.

I think that the foveated lidar will probably allow autonomous driving in excess of 100 kph because it will enable the receiver to detect distant objects with greater certainty than vanilla lidar.

The further away an object is, the fewer lidar pulses strike it, because the scan angle is quite broad (>100 degrees), so each pulse ray diverges from its adjacent pulses by 100/n degrees, where n is the number of pulses in a scan row. The laser pulse rays themselves do not get broader; it is the spacing between adjacent rays that spreads out with distance. This means that, the further away the object is, the larger the horizontal and vertical gaps between the pulses which hit the object.

Thus, decreasing the angle between the pulse rays (by increasing the rate at which pulses directed at the objects of interest are generated while keeping the scan rate constant) means that more pulses strike the object and send reflections back to the receiver.
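
A quick back-of-envelope calculation shows how fast those gaps grow with range, and what foveation buys back. The numbers here (a 100-degree horizontal field of view, 1,000 pulses per scan row, a 4x local pulse rate) are illustrative assumptions of mine, not Luminar's specifications.

```python
# Rough point-spacing arithmetic for the argument above; all parameters are
# illustrative assumptions, not any vendor's specifications.
import math

def point_spacing(range_m, fov_deg=100.0, pulses_per_row=1000):
    """Approximate gap (metres) between adjacent lidar returns on a target
    at the given range, for a uniform scan across fov_deg."""
    step_rad = math.radians(fov_deg / pulses_per_row)
    return range_m * step_rad

for r in (50, 150, 250):
    base = point_spacing(r)
    foveated = point_spacing(r, pulses_per_row=4000)   # 4x local pulse density
    print(f"{r:>3} m: ~{base:.2f} m between returns, ~{foveated:.2f} m with foveation")
```

At 250 m that works out to roughly 0.44 m between neighbouring returns for the uniform scan versus about 0.11 m with the denser local pulse rate, which is the difference between a car-width object collecting a handful of points per row and collecting enough to classify with some confidence.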

There was a recent test quoted where Akida 1 proved to be 30 times better than Nvidia Jetson. I think tracking multiple objects using software on Nvidia will burn a lot of power.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 36 users