BRN Discussion Ongoing

A very exciting company to watch: Alat

It is Saudi Arabia's ambition to build a world-class manufacturing hub in the Kingdom through next-generation technologies and sustainable practices.

Alat, headquartered in Riyadh, has been established to create a global champion in electronics and advanced industrial segments, and is mandated to create world-class manufacturing enabled by global innovation and technology leadership. Alat is partnering with global technology leaders to transform industries such as semiconductors, smart devices and next-generation infrastructure while establishing world-class businesses in the Kingdom, powered by clean energy.

They have received US$100 billion in funding from Saudi Arabia's Public Investment Fund (PIF).

Alat is led by His Royal Highness Mohammed bin Salman bin Abdulaziz Al-Saud, Crown Prince and Prime Minister of the Kingdom of Saudi Arabia.

On the Alat Executive Leadership team is Ross Jatou, President of its Semiconductors Business Unit. He formally announced his appointment only yesterday.

Ross came from Onsemi, where he spent eight years, most recently as Senior Vice President and General Manager of its Intelligent Sensing Group. Prior to Onsemi, he was with Nvidia for 14 years.

Ross is well aware of BrainChip: he re-posted this on LinkedIn three weeks ago.

[screenshot of Ross's LinkedIn re-post]



Watch the Alat CEO video here https://www.alat.com/en/about/what-is-alat/

Alat has partnered for a joint venture with SoftBank: "Alat and SoftBank Group form a strategic partnership to manufacture groundbreaking industrial robots in the Kingdom" | SoftBank Group Corp.

“The new JV will build industrial robots based on intellectual property developed by SoftBank Group and its affiliates that will perform tasks with minimal additional programming, that are ideally suited for industrial assembly and applications in manufacturing and production. The robot manufacturing factory that the JV will create in the Kingdom is a lighthouse factory, that will use the latest technology to manufacture unprecedented next generation robots to perform a wide variety of tasks”. The first factory is targeted to open in December 2024.


This is what Alex Divinsky (Ticker Symbol: YOU) posted about Alat earlier today.

[screenshot of the LinkedIn post]

https://www.linkedin.com/posts/acti...qv?utm_source=share&utm_medium=member_desktop


Chippers, it would be massive if we got in with Alat!

And I’ve got positive vibes about it.

DYOR.
Hi @Terroni2105

I just saw the below, and the previous LinkedIn post by Ross on their ToF (time-of-flight) sensor being used in our demo with Onsemi.

Did a search and saw you already picked up on them.

Agree, would be a nice hook up and I like who he is also speaking with. Would like BRN to have a seat at that table for sure :)


Ross Jatou
President - Semiconductors at ALAT
1mo

It was an honor to meet with Cristiano Amon, the CEO of Qualcomm, during his visit to Riyadh. Our discussions centered around exploring potential collaboration opportunities between Qualcomm and Alat. It was an insightful conversation, and we hope to continue our dialogue in the future. Thank you for taking the time to meet with us.
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Iseki

Regular
[Quoted: the post above on Ross Jatou's meeting with Qualcomm CEO Cristiano Amon]
So why isn't Sean booking the first premium economy ticket to see if they want to invest?
 
  • Like
Reactions: 2 users

Tothemoon24

Top 20





On May 9, 2024, the U.S. National Highway Traffic Safety Administration (NHTSA) issued a final rule mandating that all passenger vehicles and light trucks sold in the United States after September 2029 must be equipped with an Automated Emergency Braking (AEB) system. This is a significant step forward in mainstreaming technology that is already standard in all new luxury vehicles and available as an enhanced safety upgrade in most mass-market models. However, while the NHTSA's decision is welcome for driver, pedestrian, and cyclist safety, the effectiveness of these systems, particularly in night driving conditions, remains a concern due to the limitations of the cost-efficient sensors used in mass-market vehicles.

The automotive electronics development cycle is considerably longer than that of complex consumer electronics like smartphones, often compared to dog years. The NHTSA 2029 mandate is expected to trigger a push among automotive OEMs to meet the new requirements economically within the next five years. AEB technology, first introduced by Volvo in 2010, has proven effective enough over time to become pervasive. The most advanced AEB systems combine a variety of sensors and sensor types (radar, camera, lidar, ultrasonic) and the silicon processing power to enhance accuracy and reduce false positives, which can potentially cause collisions that the systems are designed to prevent. However, last month’s mandate is bound to have an impact on some OEMs, forcing them to balance accuracy, BOM cost, and system complexity for mass market vehicles.
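To make the fusion point concrete, here is a minimal sketch of the simplest form of late fusion: only trigger braking when at least two independent sensor types corroborate a nearby object. Everything here (the Detection record, should_brake, the thresholds) is an invented illustration of the general idea, not any OEM's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One obstacle report from one sensor (illustrative only)."""
    sensor: str        # e.g. "radar", "camera", "lidar"
    distance_m: float  # estimated range to the object
    confidence: float  # sensor-specific confidence in [0, 1]

def should_brake(detections, min_sensors=2, max_distance_m=30.0,
                 min_confidence=0.6):
    """Trigger AEB only when independent sensor types agree.

    Requiring corroboration from at least `min_sensors` distinct sensor
    types suppresses single-sensor false positives (e.g. a camera ghost
    at night) at the cost of some sensitivity.
    """
    corroborating = {
        d.sensor for d in detections
        if d.distance_m <= max_distance_m and d.confidence >= min_confidence
    }
    return len(corroborating) >= min_sensors

# Radar and camera both report a close object -> brake.
dets = [Detection("radar", 12.0, 0.9), Detection("camera", 11.5, 0.7)]
print(should_brake(dets))   # True
# A lone low-confidence camera detection would not trigger on its own.
print(should_brake([Detection("camera", 11.5, 0.7)]))  # False
```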

A critical issue for AEB systems is their performance in low-light conditions. Research supports the need for improved nighttime AEB performance. According to Jessica Cicchino's study in Accident Analysis & Prevention (AAP), "AEB with pedestrian detection was associated with significant reductions of 25%-27% in pedestrian crash risk and 29%-30% in pedestrian injury crash risk. However, there was no evidence that the system was effective in dark conditions without street lighting…"【Jessica Cicchino, AAP May 2022】

The effectiveness of the automotive CMOS image sensors commonly used in these systems diminishes after dark. This is particularly concerning since drivers, with limited visibility and reaction time, are most dependent on AEB and other ADAS systems at night. Pedestrian fatalities in the U.S. have nearly doubled since 2001, with over 7,500 deaths nationwide in 2021, and about 77% of pedestrian fatalities happen after dark. Although the NHTSA's ruling is a positive move towards improving safety, the challenge of cost-effective solutions for nighttime driving remains for the nearly 80% of vehicle-on-pedestrian fatalities that occur after dark, which the mandate surely seeks to mitigate.

Fortunately, AI-based computational imaging offers a promising solution. By applying real-time denoising using neural networks and embedded neural network processors (NPUs), the nighttime range and accuracy of automotive sensors can be significantly enhanced. This AI denoising software runs on existing automotive SoCs with embedded NPUs and removes temporal and spatial noise from the RAW image feed from the sensor before processing, allowing the analog gain and exposure time to be increased without increasing the associated sensor noise.
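The article doesn't disclose Visionary.ai's network, so the sketch below is only a stand-in to show where such a stage sits in the pipeline: temporal noise reduction applied to RAW frames before the untouched ISP. A production system would run a trained neural network on the SoC's embedded NPU; this toy TemporalDenoiser uses a motion-gated running average instead, which captures the same intuition (average static regions over time, pass moving regions through).

```python
import numpy as np

class TemporalDenoiser:
    """Toy pre-ISP temporal denoiser for a stream of RAW frames."""

    def __init__(self, blend=0.8, motion_thresh=0.1):
        self.blend = blend                  # weight on accumulated history
        self.motion_thresh = motion_thresh  # fraction of full scale
        self.accum = None

    def __call__(self, raw):
        raw = raw.astype(np.float32)
        if self.accum is None:
            self.accum = raw.copy()
            return self.accum
        # Where the scene changed a lot, trust the new frame (no ghosting);
        # where it is static, average over time to cancel temporal noise.
        moving = np.abs(raw - self.accum) > self.motion_thresh * raw.max()
        blended = self.blend * self.accum + (1.0 - self.blend) * raw
        self.accum = np.where(moving, raw, blended)
        return self.accum

# Simulated static night scene with per-frame sensor noise (sigma = 0.05,
# i.e. roughly 0.04 mean absolute error in each raw frame).
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, (4, 4)).astype(np.float32)
denoise = TemporalDenoiser()
for _ in range(20):
    frame = (scene + rng.normal(0.0, 0.05, scene.shape)).astype(np.float32)
    out = denoise(frame)
print(float(np.abs(out - scene).mean()))  # well below the ~0.04 raw noise level
```

Because the stage only cleans the RAW feed, the ISP downstream sees the same interface it always did, which is the point the next paragraph makes about avoiding recalibration.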

This method does not require any modifications or recalibration of the existing image signal pipeline (ISP). In initial OEM road tests, AI denoising works effectively with both high-cost low-light-capable sensors and mainstream automotive CMOS sensors, effectively giving them "steroids" for better and more accurate night vision. This improved night vision translates into earlier and more accurate computer vision results such as nighttime pedestrian detection in AEB systems.

Since this is a software upgrade to existing and planned ECUs leveraging existing/roadmap Tier-2 fabless SoCs, the time required for integration, testing, and productization is much lower compared to hardware-based alternatives.

I am proud to be part of a dynamic team of AI computational image scientists and software engineers who are changing the world by delivering technology that will potentially mitigate thousands of fatalities in the coming years.

For more information on how AI-based computational imaging can improve the nighttime performance and accuracy of ADAS, as well as human vision-assist systems, contact me via LinkedIn or consult one of our Tier-2 fabless partners about their adoption plans for AI-based computational imaging from Visionary.ai.
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Rach2512

Regular
From 6 days ago, sorry if already posted.


 
  • Like
  • Love
  • Fire
Reactions: 10 users

MegaportX

Regular
  • Like
Reactions: 2 users

Learning

Learning to the Top 🕵‍♂️
Sounds like the professor is impressed enough to become a shareholder in our Company.

We can't know how big his "bet" is though.

And anyway, what would he know?
He's probably just caught up in the "hype" like the rest of us 🙄..
Hi Dingo,

I'd rather follow the "hype" of a professor than a FOOL, methinks. 😏🤔🫡

😁😁😁😁

Learning 🪴
 
  • Like
  • Haha
  • Fire
Reactions: 10 users

Earlyrelease

Regular
So that nefarious traders don't steal shares from investors who are new to the game, I remind those new investors that the Australian tax year ends in June. Those who have taken profits on shares and face a tax bill tend to sell some other shares, which may be at a loss, to offset the gain against the loss so that no net tax is paid. Why do I say this? Well, in the last two weeks of June there is often a bit of turbulence in some stocks' share prices and traded volumes. Bear this in mind before you make any rash decisions. Further, if there is no news from the company soon, then we are prime targets for shorters to take advantage of the trading situation that will present itself.

So do what you think is right for your own circumstances, but knowledge allows you to make informed decisions and not be hoodwinked into parting with your shares.
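To make the offsetting arithmetic above concrete, here is a toy example (the numbers are invented, and this is general information, not tax advice):

```python
# Toy illustration of tax-loss offsetting around the June year-end.
# Numbers are invented; this is not tax advice.
realised_gain = 5000    # profit already taken on one holding this tax year
realised_loss = -5000   # another holding sold at a loss before 30 June
net_capital_gain = max(realised_gain + realised_loss, 0)  # losses offset gains
print(net_capital_gain)  # 0 -> no net capital gain left to be taxed
```

Selling the losing position before 30 June is what creates the late-June volume and price turbulence described above.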
 
  • Like
  • Fire
  • Love
Reactions: 30 users
Whilst this interview is from late last year, I hadn't seen or read it, and I'm pretty sure most would know this gentleman's name from TCS in our dot-joining.

If you don't, just search Arpan and you'll get about 6 pages of posts to wade through.

I can understand why the BRN relationship commenced, and also an element of why it can take some time, given his thoughts on the software-hardware co-design process. Unfortunately, it's not all just plug-and-play.




Embedded Systems: A Journey with Arpan Pal, TCS Research’s Distinguished Chief Scientist​

Arpan Pal, a distinguished Chief Scientist and Research Area Head at TCS Research, has carved a prominent niche in Intelligent Sensing, Signal Processing & AI, and Edge Computing. With an impressive career spanning over three decades, he has contributed significantly to the advancements in Embedded Devices and Intelligent Systems. His expertise lies at the intersection of hardware and software, where he thrives, making significant contributions to embedded systems.

In this interview, Arpan delves into the intricacies of his career journey, shedding light on the inspirations that led him to pursue a path in embedded systems and the subsequent evolution of his expectations. Furthermore, he generously shares insights into the surprises and challenges encountered, emphasizing the critical balance between technological innovation and real-world applications. As a seasoned professional, Arpan offers invaluable advice for aspiring scientists and engineers in the field, providing a roadmap for success that revolves around a deep understanding of hardware-software co-design and the adaptability to emerging technologies.

Arpan envisions a future for Embedded Systems that is deeply rooted in the principles of power efficiency and sustainable computing. With his visionary perspective on the transformative potential of brain-inspired neuromorphic computing and spiking neural networks, Arpan anticipates a paradigm shift towards energy-conscious AI systems. Pal’s remarkable contributions guide the future of technology and innovation as we delve deeper into the world of embedded intelligent systems.

What inspired you to go into embedded systems? Would you say your career has matched what your original expectations were? If so, what? If not, why not?​


For me, embedded systems had been a natural fit since I liked both hardware and programming and my first two jobs were in hardware-driven embedded systems – one was to build a microcontroller-based PSTN telephone exchange monitoring system, and the other was to build missile seeker systems. As I began working on these projects, I realized that I loved embedded systems because it is the only field that allows one to work at the intersection of hardware and software and requires knowledge of both electronics and software programming.

So far, the experience and the journey have exceeded my expectations. The biggest satisfaction is to see how embedded systems are making a comeback in the form of Edge Computing, and how the concept of "AI at the Edge" is becoming mainstream for IoT, Robotics and AR/VR, as it enables reliable, low-latency, low-power yet privacy-preserving intelligence at the edge.

What in your career has surprised you the most? Are there any challenges you overcame that you’d like to share?​


The biggest surprise in the early part of my career was discovering that, for a given use case, one can often build a computationally lighter version of a sophisticated, complex algorithm to meet the compute/memory/power constraints of embedded systems without any significant compromise on algorithm performance. I have applied this understanding again and again in my career.
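Pal doesn't name a specific algorithm here, so as a purely hypothetical illustration of the pattern: replace an exact sliding-window statistic, which needs an N-sample buffer and O(N) work per update, with a recursive estimate that needs O(1) time and memory and tracks it closely enough for many use cases.

```python
from collections import deque

def exact_window_mean(stream, n=256):
    """Exact sliding-window mean: keeps an n-sample buffer, O(n) per sample."""
    buf = deque(maxlen=n)
    for x in stream:
        buf.append(x)
        yield sum(buf) / len(buf)

def recursive_mean(stream, alpha=1.0 / 256):
    """Exponential moving average: O(1) time and memory per sample."""
    est = None
    for x in stream:
        est = x if est is None else est + alpha * (x - est)
        yield est

# On a slowly varying signal the cheap estimator stays close to the
# exact one (a handful of units on a 0-99 scale), with no buffer at all.
signal = [i % 100 for i in range(3000)]
exact = list(exact_window_mean(signal))
cheap = list(recursive_mean(signal))
print(max(abs(a - b) for a, b in zip(exact[1500:], cheap[1500:])))
```

On a microcontroller with a few kilobytes of RAM, eliminating that buffer can be the difference between fitting on the target hardware and not fitting at all.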

The main challenge in embedded systems research is how to marry technological novelty to a visible and useful impact in the application. When I worked in missile technology, this challenge manifested itself in designing novel real-time target-tracking algorithms that can run on a DSP chip. In my sensing work in TCS for healthcare, this meant designing AIML/Signal Processing algorithms that consume as little power as possible so that they can work with wearables. Our Industry 4.0 intelligent IoT work involved designing systems that provide real-time or near-real-time response with deterministic latency.

The other challenge is at the platform level, where we have come a long way from tiny microcontrollers to DSP processors to AI chipset accelerators. But what has not changed is that an algorithm will always need more time, memory, and power than is available in the target embedded hardware – optimizing it to fit the target hardware is always a challenging task that requires embedded engineering expertise.

What resources or skills did you find most helpful when moving up in your career?​


Key skills are as follows:
  • A thorough understanding of hardware system features and limitations is essential for abstracting their implications for embedded applications.
  • When dealing with real-time systems, knowing how to make software optimally utilize the hardware – hardware-software co-design is the key.
  • Understanding how to map the impact of an application to the novelty of an embedded system in terms of system/technology, and how an application-level constraint translates into a system-level constraint in an embedded system.

What advice would you give to scientists/engineers in embedded systems?​


The first piece of advice will be to understand the beauty and the nuances of the hardware-software co-design in embedded systems, which is unique in terms of hardware capability and software features.

The second piece of advice will be to keep an open mind and be ready to adapt to new technologies/techniques as they come. Let's take an example: in today's world, AI is the hype word; however, AI on embedded systems is not really well understood yet. Embedded Edge Computing technology is coming up in a big way to address this.

The third piece of advice is to identify a problem first and then use technology to solve it, rather than going bottom-up: building a novel technology system first and then looking for a suitable application.

What do you see as the future of Embedded Systems?​


When will embedded intelligent systems become truly power-aware? Green computing is indispensable as we forge towards a sustainable future. Embedded systems engineers are inherently trained to make their algorithms work on low-power, latency-constrained embedded devices. The same principles need to be applied to transform over-parameterized, ultra-large and power-hungry AI models into power-efficient AI systems.
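One widely used instance of that principle is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats cuts model size and memory traffic, often the dominant energy cost at the edge, by roughly 4x, usually at a small accuracy penalty. A minimal, framework-independent sketch of symmetric int8 quantization:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of float32 weights."""
    scale = np.abs(w).max() / 127.0   # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, (64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.abs(dequantize(q, scale) - w).max())
print(w.nbytes, q.nbytes, err)  # 16384 bytes -> 4096 bytes, tiny worst-case error
```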

The human brain computes on only about 20 watts, while a typical GPU cluster may need tens of kilowatts: how do we design AI systems that consume power on the order of our brain's? In the area of low-power embedded systems, brain-inspired neuromorphic computing and spiking neural networks (SNNs) tailor-made for non-von-Neumann neuromorphic architectures will result in significant power savings. An SNN on a neuromorphic architecture is a great example of nature-inspired hardware-software co-design.
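For readers new to SNNs, the basic unit is the leaky integrate-and-fire (LIF) neuron, sketched below. The power argument is that the computation is event-driven: on neuromorphic hardware nothing needs to be computed at time steps with no incoming spikes, unlike a GPU multiplying dense matrices every step. (The parameters here are arbitrary toy values.)

```python
def lif_neuron(spike_times, weight=0.6, leak=0.9, threshold=1.0, steps=20):
    """Minimal leaky integrate-and-fire neuron.

    The membrane potential decays every step and jumps when an input
    spike arrives; crossing the threshold emits an output spike and
    resets the potential.
    """
    v, out = 0.0, []
    for t in range(steps):
        v *= leak                 # passive decay (the "leak")
        if t in spike_times:
            v += weight           # integrate the incoming spike
        if v >= threshold:
            out.append(t)         # fire ...
            v = 0.0               # ... and reset
    return out

# Two closely spaced spikes (t=2,3) push the potential over threshold,
# while a later isolated spike (t=12) just decays away.
print(lif_neuron({2, 3, 12}))  # [3]
```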

Learn More About Arpan Pal​


Infographic of Arpan Pal's career timeline.

Arpan Pal has more than 30 years of experience in the areas of Intelligent Sensing, Signal Processing & AI, Edge Computing and Affective Computing. Currently, as Distinguished Chief Scientist and Research Area Head, Embedded Devices and Intelligent Systems, TCS Research, he is working in the areas of Connected Health, Smart Manufacturing, Smart Retail and Remote Sensing.

Arpan has been on the editorial boards of notable journals like ACM Transactions on Embedded Systems and the Springer Nature Journal on Computer Science. Additionally, he is on the TPC of notable conferences like IEEE Sensors, ICASSP, and EUSIPCO. He has filed 180+ patents (of which 95+ have been granted in different geographies) and has published 160+ papers and book chapters in reputed conferences and journals. He has also written three complete books, on IoT, Digital Twins in Manufacturing, and AI applications in cardiac screening. He is on the governing/review/advisory boards of Indian Government organizations like CSIR and MeitY, as well as of educational institutions like IIT, IIIT, and Technology Innovation Hubs. Arpan is a two-time winner of the Tata Group top innovation award in Tata InnoVista under the Piloted Technology category.

Prior to joining Tata Consultancy Services (TCS), Arpan worked for DRDO, India, as a Scientist on missile seeker systems, and at Rebeca Technologies as its Head of Real-time Systems. He has B.Tech and M.Tech degrees from IIT Kharagpur, India, and a PhD from Aalborg University, Denmark.
 
  • Like
  • Fire
  • Love
Reactions: 29 users
[Quoted: the Visionary.ai AEB article posted above]
"The automotive electronics development cycle is considerably longer than that of complex consumer electronics like smartphones, often compared to dog years."

That was reassuring :)
 
  • Like
  • Haha
Reactions: 4 users
Will they ever sign anyone??
 
  • Like
  • Wow
  • Thinking
Reactions: 5 users

MegaportX

Regular
  • Like
Reactions: 2 users
[image attachment]
 
Last edited:
  • Like
  • Thinking
  • Love
Reactions: 11 users
It is very difficult to understand, from a layman's perspective, why BRN wasn't chosen if we are three years ahead of everyone else. I am struggling to understand what the problem is, as clearly something isn't right. Or am I missing something?
 
  • Like
  • Fire
  • Thinking
Reactions: 14 users

Yoda

Regular
[Quoted: the post above]
I agree that something isn't working. We have revolutionary technology, but after several years of availability no one since MegaChips has signed on the dotted line for it. This certainly isn't what the company was expecting either (remember the explosive sales and exponential growth presentation at the AGM circa 2020 or 2021?). I'm not sure what the problem is. Is it that the IP-only business model isn't working? I don't know what the answer is, but I don't think anyone back in 2021 expected that halfway through 2024 we'd be sitting in the 20c range. I'm still expecting the company to get there, but I hope that at board level some serious discussions are going on about why things have not worked out the way we thought they would to this point, and what the road to success now looks like. We need to get traction or our three-year advantage will be squandered and we risk being left behind.
 
  • Like
  • Fire
  • Thinking
Reactions: 22 users

Dougie54

Regular
[Quoted: Yoda's post above]
I too expected to be living a little better by now than when I started this BRN journey, at least to have the deposit on my island paid in full. I am afraid it looks like we are going to be left behind instead of being the leader. "Come on BRAINCHIP!"
 
  • Like
  • Haha
Reactions: 6 users

Guzzi62

Regular
[Quoted: the earlier post asking why BRN wasn't chosen]
Maybe Akida isn't that much better than what different companies can do themselves?

If they can save money on an IP contract deal and still sell their own product, I am pretty sure they will go in that direction.

The TENN technology is still in its infancy, but I have some hope there.

If no deal is signed this year, it's starting to look very bleak IMO, but we've been promised, so…!
 
  • Like
Reactions: 9 users

Iseki

Regular
[Quoted: Guzzi62's post above]
Maybe companies don't want to take the risk of developing a product with a small company that has no industry investors, e.g. Tata, SoftBank or Valeo.
 
  • Like
Reactions: 5 users

MrNick

Regular
[Quoted: the earlier post asking why BRN wasn't chosen]
But it opens the door to a huge player grabbing BRN for themselves. Apple is always talking to hundreds of suitors about new implementations, and those conversations will have been happening for years. What also happens with Apple is that its licences preclude suppliers from selling their tech to anyone else; it claims certain rights over them, and confidentiality is just one. So, do you fulfil your company's potential with one single huge client, or with hundreds and a potentially far larger ultimate audience creating greater ROI? Potato, potahto.
 
  • Like
Reactions: 5 users

Diogenese

Top 20
[Quoted: Yoda's earlier post]
There are a few reasons.

1. As you say the change of business model set back commercialization by a year or two and put the hockey sticks on hold.

2. Covid set the world back a year or two.

3. Then just when the green shoots of Akida 1 were beginning to show, we ploughed that back to plant Akida 2.

On the plus side:

A. Akida 2 in software simulation has been available to EAPs for a couple of years (see 3 above).

B. TeNNs is a whole new kettle of fish.

C. There is a rumour that some EAPs are moving to using Akida 2 simulation software commercially while Akida 2 silicon is finalized (Mercedes, Valeo, ...).

D. Edge Box is a powerful self-contained plug-and-play commercial/industrial system monitor demonstrating the prowess of Akida 1 which should generate some income this quarter.

E. The general AI zeitgeist.

F. The newly discovered enthusiasm for edge AI.

G. BRN's burgeoning ecosystem.

H. The pressure on the von Neumann processor makers to become truly AI-capable and save the planet.

...
 
  • Like
  • Love
  • Fire
Reactions: 72 users