BRN Discussion Ongoing

Bravo

If ARM were an arm, BRN would be its biceps💪!
Just noticed this Tata Elxsi publication dated 3 September 2024 on multimodal AI.

I've also included a slide from Tata Elxsi's Q2 FY24 Report as a reminder of how serious they are about our partnership in terms of driving our technology into medical and industrial applications.


Screenshot 2024-10-08 at 4.25.52 pm.png



Screenshot 2024-10-08 at 4.37.40 pm.png






Publication Name: TimesTech.in
Date: September 03, 2024

Multimodal AI to Enhance Media & Communication Experiences

Multimodal AI is transforming media and communication by integrating various data types like text, images, and videos to enhance content creation, audience engagement, and advanced search capabilities. In an interview with TimesTech, Deewakar Thakyal, Senior Technology Lead at Tata Elxsi, explains how this groundbreaking technology is shaping the future of personalized and immersive content experiences across industries.

TimesTech: What is multimodal AI? How is it different from AI and GenAI?

Deewakar: Multimodal AI is a type of artificial intelligence that can process and understand multiple types of data simultaneously, such as text, images, audio, and video, using various AI techniques such as Natural Language Processing (NLP), Computer Vision, Speech Recognition, Machine Learning, and Large Language Models (LLMs). Unlike traditional AI, which is often limited to a single modality, multimodal AI can integrate information from different sources to provide a more comprehensive understanding of the world.
GenAI, or generative AI, is a subset of AI that can create new content, such as text, images, or code, based on patterns it learns from existing data. While GenAI can be multimodal, it is primarily focused on generating new content. In this context, multimodal AI is focused on understanding, while GenAI is about creating. Multimodal AI can analyse a complex scene, such as a street intersection, and understand the interactions between pedestrians, vehicles, and traffic signals. On the other hand, GenAI can create a realistic image of a person based on a textual description.
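(Illustrative aside, not part of the interview: a minimal "late fusion" sketch of the "integrate information from different sources" idea. The stub encoders, dimensions and fusion-by-concatenation are my own assumptions; real systems use trained encoders and often cross-attention.)

```python
# Minimal late-fusion illustration: each modality is embedded separately,
# then the embeddings are combined into one representation that a
# downstream classifier/regressor could consume.
import numpy as np

rng = np.random.default_rng(0)

def embed_text(tokens, dim=64):
    """Stand-in text encoder: average of per-token random vectors."""
    vecs = [rng.standard_normal(dim) for _ in tokens]
    return np.mean(vecs, axis=0)

def embed_image(pixels, dim=64):
    """Stand-in image encoder: random projection of flattened pixels."""
    flat = pixels.reshape(-1)
    proj = rng.standard_normal((dim, flat.size))
    return proj @ flat

def fuse(text_vec, image_vec):
    """Late fusion by concatenation; real systems may use cross-attention."""
    return np.concatenate([text_vec, image_vec])

caption = "pedestrian crossing at an intersection".split()
frame = rng.random((8, 8, 3))          # toy 8x8 RGB frame
joint = fuse(embed_text(caption), embed_image(frame))
print(joint.shape)                     # (128,) -> fed to a downstream model
```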

TimesTech: How is multimodal AI enhancing content creation?

Deewakar: Multimodal AI is revolutionizing content creation by allowing for more dynamic, engaging, and personalized experiences. It enhances understanding by processing various forms of content simultaneously, tailors content to individual needs, assists human creators, enables new content formats, and improves accessibility. For example, multimodal AI can analyse user preferences and behaviour to create personalized recommendations, suggesting products or articles that align with their interests. It can also assist human creators by generating ideas, suggesting different angles, or providing feedback on drafts.
Multimodal AI can transform content production, advertising, and creative industries. By generating cohesive and contextually relevant content across different formats, such as text, images, and audio, these models can cater to diverse needs and preferences, enhancing both reach and impact.
Additionally, multimodal AI can enable the creation of novel content formats, such as interactive storytelling or personalized product recommendations, making content more engaging and immersive. By incorporating features like speech-to-text and text-to-speech, multimodal AI can make content more accessible to a wider audience, including those with disabilities. This helps to create a more inclusive and equitable content ecosystem.

TimesTech: What is the role of Multimodal AI in improving audience engagement?

Deewakar: Multimodal AI brings the integration of various types of data, such as text, images and videos. With such varied content, AI makes it easy to ascertain user preferences by processing multiple sensory inputs simultaneously. With consumers looking for more personalization in content and digital platforms struggling to keep up with the demand, employing multimodal AI helps enhance audience engagement by directly making use of audience insights.
Tata Elxsi’s AIVA platform, for example, utilizes AI to create highlights of video content based on user preferences, which enables consumers to get more insights into specific parts of the video content. The use of AI-powered chatbots provides an interactive avenue for users to receive content recommendations based on their interests. Chatbots are also important support systems that answer user queries and provide content support. Keeping in mind audience demographics, multimodal AI also helps with content localisation through translation and subtitling, giving specific consumers a more nuanced understanding of the content.

TimesTech: Does multimodal AI help in advanced search and analysis? How?

Deewakar: Multimodal AI can be extended to provide video insights such as facial expressions and situational sentiment, as well as identify actions and objects, by integrating and analysing data from multiple sources such as images, audio and text, which helps consumers get a better understanding of the content. Multimodal AI is extensively utilized by advertisers and media companies to deliver personalized ads that fit user behaviour and are optimized for different platforms like websites, mobile apps, and social media.
This can be seen in Tata Elxsi’s content discovery and recommendation engine, uLike, which is powered by multimodal AI. The program helps users search for videos based on tags, keywords and text within videos, which makes the content more visible. Through such mechanisms, it becomes easier to curate content that fits consumer preferences while also detecting and removing harmful or inappropriate content from platforms, based on analysis of user behaviour and feedback.
At the same time, it opens the scope for monetization and ethical use through licensing agreements; multimodal AI becomes important in driving innovation in this regard.
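(Illustrative aside, not part of the interview and not uLike itself: a minimal sketch of searching videos by tags, title keywords and in-video text. The field names and the naive term-count scoring are my own assumptions.)

```python
# Index each video by its tags, title keywords and text extracted from the
# video (e.g. subtitles/OCR), then match a query against all three fields.
from collections import defaultdict

videos = {
    "v1": {"tags": ["cooking", "pasta"], "title": "Quick pasta dinner",
           "in_video_text": "boil water add salt"},
    "v2": {"tags": ["travel"], "title": "Rome city walk",
           "in_video_text": "colosseum opening hours"},
}

def build_index(videos):
    index = defaultdict(set)
    for vid, meta in videos.items():
        words = meta["tags"] + meta["title"].lower().split() + meta["in_video_text"].split()
        for w in words:
            index[w].add(vid)
    return index

def search(index, query):
    hits = defaultdict(int)
    for w in query.lower().split():
        for vid in index.get(w, ()):
            hits[vid] += 1          # naive score: number of matched terms
    return sorted(hits, key=hits.get, reverse=True)

print(search(build_index(videos), "pasta dinner"))   # -> ['v1']
```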

TimesTech: What is the futuristic scope of multimodal AI?

Deewakar: With major digital transformation firms inching toward multimodal AI, it only goes to show that this will be a major breakthrough in content generation and personalisation across the media and entertainment industry. However, this technology can be extended to other industries as well, such as e-commerce, healthcare, education etc. Due to the significance of technologies like NLP, which can better analyse context and sentiment, there is a higher scope for multimodal AI to enhance the human-machine experience. However, it also becomes necessary to pay attention to ethical concerns and privacy issues with its use, as this involves analysing user data to provide insights. With the proper measures, multimodal AI will be transformational for the industry and can bring in the much-needed innovation, as promised.



Screenshot 2024-10-08 at 4.28.50 pm.png


 
  • Like
  • Fire
  • Love
Reactions: 42 users

Diogenese

Top 20
I don't recall the exact date Valeo and BRN got together, but I think this Valeo patent application pre-dates that.

The bit that interests me is that, while they propose using a CNN, they have modified an algorithm so that it only works on the lane markings. The purpose of this is to reduce the processing load.
WO2023046617A1 ROAD LANE DELIMITER CLASSIFICATION 20210921

In general, the environmental sensor data may be evaluated by means of a known algorithm to identify the plurality of points, which represent the road lane delimiter, for example by edge recognition and/or pattern recognition algorithms.

The classification algorithm is, in particular, a classification algorithm with an architecture, which is trainable by means of machine learning and, in particular, has been trained by means of machine learning before the computer-implemented method for road lane delimiter classification is carried out. The classification algorithm may for example be based on a support vector machine or an artificial neural network, in particular a convolutional neural network, CNN. Due to the two-dimensional nature of the two-dimensional histogram, it may essentially be considered as an image and therefore be particularly suitable to be classified by means of a CNN.

On the other hand, the method allows for an individual classification of each road lane delimiter in a scene and, consequently, to an accurate classification of the road lane delimiters. In particular, the two-dimensional histogram is formed for a particular road lane delimiter such that each road lane delimiter represented by the corresponding two-dimensional histogram may be classified individually by means of the classification algorithm. This also means that the classification algorithm does not have to handle the complete point cloud of the lidar system or a complete camera image, but only the relevant fraction corresponding to the road lane delimiter. This also reduces the complexity of the classification algorithm and the respective memory requirements.
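A rough toy sketch of that idea, assuming you already have the points belonging to a single delimiter (my own illustration, not Valeo's code; the bin counts, the dashed/solid classes and the simple threshold stand in for the trained CNN/SVM):

```python
# The points of ONE lane delimiter are binned into a small 2D histogram,
# which is then treated as a tiny "image" and handed to a classifier.
import numpy as np

def delimiter_histogram(points_xy, bins=(16, 16)):
    """points_xy: (N, 2) points already associated with one lane delimiter.
    Returns a normalized 2D occupancy histogram."""
    hist, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1], bins=bins)
    total = hist.sum()
    return hist / total if total > 0 else hist

def classify(hist):
    """Placeholder for the trained classifier. A real system would run a
    small CNN over `hist`; here we just measure how 'gappy' the marking is
    along the driving direction to separate dashed from solid lines."""
    occupied_rows = (hist.sum(axis=1) > 0).mean()
    return "solid" if occupied_rows > 0.8 else "dashed"

# Toy example: a dashed marking leaves empty longitudinal gaps.
xs = np.concatenate([np.linspace(0, 3, 50), np.linspace(6, 9, 50)])
ys = 0.05 * np.random.default_rng(1).standard_normal(xs.size)
print(classify(delimiter_histogram(np.stack([xs, ys], axis=1))))  # -> "dashed"
```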

This was filed in September 2021, so they were working on it for some time before that. In other words, this was developed before their introduction to Akida, when they were struggling to compress the conventional CNN processing load so it would run on the available processors in a timely fashion (real time) and without draining the battery.

We can all remember Luca's excitement at the processing capability of Akida 1, so I imagine Akida 2/TeNNs blew his socks off.
 
  • Like
  • Fire
  • Love
Reactions: 38 users

jtardif999

Regular
Some prodigious research there Frangipani, as always 👍

What I like about what you brought to the surface is the fact that, like TENNs, AKIDA Pico has "already" been known to the "customers" we are dealing with for some time, which shortens the "lead time" for product developments.

Some of which, will hopefully break the surface soon.
I think Akida Pico is simply just Akida-E packaged with TENNs.
 
  • Thinking
  • Like
Reactions: 5 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 30 users

IloveLamp

Top 20
(Translated from Arabic; it talks about combining Neuralink tech with Akida Pico)

1000018863.jpg


1000018857.jpg
1000018860.jpg
 
  • Like
  • Fire
  • Love
Reactions: 36 users

7für7

Top 20
  • Fire
Reactions: 1 users

7für7

Top 20
  • Like
  • Love
Reactions: 5 users

Diogenese

Top 20
I think Akida Pico is simply just Akida-E packaged with TENNs.
Hi jt,

The brochure for Pico says it has a single neural processing engine.

1728373018940.png


Now, terms like NPE and NPU have been used inconsistently, but often they are used interchangeably.

In any case, it gives me the opportunity to post an image of the Mona Lisa of ICs - again:

US11468299B2 Spiking neural network 20181101

1728373188693.png
 
  • Like
  • Love
  • Fire
Reactions: 26 users
Last edited:
  • Like
  • Fire
Reactions: 10 users
I just stumbled across some interesting news: researchers at a fruit-related company have published an ML model for creating a depth map from a two-dimensional image (without relying on the availability of metadata such as camera intrinsics).

Most obvious use case: blurring image regions dependent on (calculated) depth (e.g. for small image sensors in smartphones)
Depth mapping is handy for everything from robotic vision to blurring the background of images post-capture. Typically, it relies on being able to capture the scene from two slightly different angles — as with smartphones that have multiple rear-facing cameras, where the differences between the images on two sensors are used to calculate depth and separate the foreground from the background — or the use of a distance-measuring technology such as lidar. Depth Pro, though, requires neither of these, yet Apple claims it can turn a single two-dimensional image into an accurate depth map in well under a second.

Now the interesting parts:
"The key idea of our architecture," the researchers explain, "is to apply plain ViT [Vision Transformer] encoders on patches extracted at multiple scales and fuse the patch predictions into a single high-resolution dense prediction in an end-to-end trainable model. For predicting depth, we employ two ViT encoders, a patch encoder and an image encoder."
Ok, so it's about vision transformers ... hmm ...
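For anyone curious what "patches at multiple scales, fused into one dense prediction" looks like mechanically, here is a very rough toy sketch (my own, not Apple's Depth Pro code; the brightness heuristic stands in for the shared ViT encoder, and the averaging stands in for the learned fusion):

```python
# Run the same encoder over tiles cut out at several scales, then merge the
# per-tile predictions back into one dense map at full resolution.
import numpy as np

def toy_patch_encoder(patch):
    """Stand-in for the shared ViT patch encoder: predicts one 'depth' value
    per pixel of the patch (here just a brightness heuristic)."""
    return 1.0 / (patch + 0.1)          # brighter -> "closer", purely illustrative

def dense_prediction(image, scales=(1, 2, 4), patch=32):
    """Tile the image at each scale, encode each tile, resize the tile
    predictions back to full resolution and average them."""
    h, w = image.shape
    fused = np.zeros((h, w))
    for s in scales:
        small = image[::s, ::s]                       # crude downscale
        pred = np.zeros_like(small)
        for y in range(0, small.shape[0], patch):
            for x in range(0, small.shape[1], patch):
                tile = small[y:y + patch, x:x + patch]
                pred[y:y + patch, x:x + patch] = toy_patch_encoder(tile)
        fused += np.kron(pred, np.ones((s, s)))[:h, :w]   # crude upscale
    return fused / len(scales)

img = np.random.default_rng(2).random((128, 128))
depth = dense_prediction(img)
print(depth.shape)   # (128, 128): one "depth" value per input pixel
```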

Ah, and it's fast also:
It's also fast: in testing, Depth Pro delivers its results in just 0.3 seconds per image — though this, admittedly, is based on running the model on one of NVIDIA's high-end Tesla V100 GPUs.
What the ...?


Dear Brainchip, please
Thanks for your attention 😉


Edit - some addition

P.S.:
If this actually also works reliably for video, please consider adding companies/solutions related to the film and VFX hardware/software industry to your list of potential use cases. Just for inspiration, imagine filming and keying (by depth data) in real time without green screens (even if only used for pre-visualization).
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 14 users

Diogenese

Top 20
I just stumbled across some interesting news: researchers at a fruit-related company have published an ML model for creating a depth map from a two-dimensional image (without relying on the availability of metadata such as camera intrinsics).
Apparently it makes a guess as to the depth of the object:

US2024331174A1 One Shot PIFu Enrollment 20230331
1728381294831.png


Generating a 3D representation of a subject includes obtaining an image of a physical subject. Front depth data is obtained for a front portion of the physical subject. Back depth data is obtained for the physical subject based on the image and the front depth data. A set of joint locations is determined for the physical subject from the image, the front depth data, and the back depth data.

1 . A method comprising:

obtaining an image of a physical subject;

obtaining front depth data for a front portion of the physical subject;

generating back depth data for a back portion of the physical subject based on the image of the physical subject and the front depth data;

determining a set of joint locations for the physical subject from the image of the physical subject, the front depth data, and the back depth data.

2 . The method of claim 1, wherein determining the set of joint locations comprises:

generating, by a trained network, a feature set corresponding to the physical subject based on the image of the physical subject, the front depth data, and the back depth data.

3 . The method of claim 2, wherein the feature set corresponds to sample points for the subject, the method further comprising:

obtaining, for each of the sample points, a classifier value, wherein the classifier value indicates a relationship of the sample point to a volume corresponding to the physical subject.

4 . The method of claim 3, wherein the back depth data is obtained based on the classifier value for the sample points.
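To make the claimed pipeline easier to follow, here is a toy sketch of the data flow (my own stubs, not Apple's model; the constant body-thickness guess and the naive joint sampler stand in for the trained networks):

```python
# Image + measured front depth go in; a placeholder "trained network" guesses
# the back depth; joints are then located in the combined volume.
import numpy as np

def estimate_back_depth(image, front_depth, body_thickness=0.25):
    """Placeholder for the trained back-depth network: assume the subject is
    roughly body_thickness metres deep behind the visible front surface."""
    return front_depth + body_thickness * (image > 0)   # only where subject pixels exist

def joint_locations(image, front_depth, back_depth, n_joints=17):
    """Placeholder joint regressor: returns n_joints (u, v, z) points sampled
    from subject pixels, with z midway between front and back surfaces."""
    vs, us = np.nonzero(image > 0)
    idx = np.linspace(0, len(us) - 1, n_joints).astype(int)
    z = (front_depth[vs[idx], us[idx]] + back_depth[vs[idx], us[idx]]) / 2
    return np.stack([us[idx], vs[idx], z], axis=1)

rng = np.random.default_rng(3)
mask = np.zeros((64, 64)); mask[10:60, 20:44] = 1.0     # crude "subject" silhouette
front = 2.0 + 0.05 * rng.standard_normal((64, 64))      # ~2 m away
joints = joint_locations(mask, front, estimate_back_depth(mask, front))
print(joints.shape)   # (17, 3)
```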
 
  • Like
  • Thinking
  • Fire
Reactions: 8 users
I'm not sure if I interpreted your answer correctly. Did you basically say that the depth "information" obtained from this model is only an approximation, not an exact determination of the distances?

Yes, that should be correct. I don't expect it to be a compute alternative to Time-of-Flight/Lidar/Radar.

But I assume it could be an interesting solution where “good enough” is sufficient. So for fast (and easy) masking of parts of/subjects in an image, or applying a blur dependent on the "calculated" depth of field for aesthetic reasons (faking large sensor images on a smartphone), etc.

Edit:
Just as an example, having kind of an accelerator for masking in Photoshop/VFX software (on a device with limited resources, e.g. phone, tablet, glasses?)
 
  • Like
  • Fire
Reactions: 6 users
Today, 33 million shares have been sold at or under 28c.

Was this wise?
Well I "wish" I had of sold some at 28 cents today and bought back at 26...

But there were 22 million on the buy side and only 6 on the sell..

I'm still smarting from trying to do a quick trade, with 100000 shares, at 29 cents (around 4 years ago) just to make a quick 500 bucks..

(I "have" done the occasional trade since then, more often than not, more pain than gain, but this particular price point, holds significance to me, as it was particularly bad..)

The share price immediately started going up and never looked back, for a long, long time.. (eventually trading consistently around 50 cents).

I'm fairly sure the red dot on the chart is where it was (prices are different, due to extra share issues).

20241008_212426.jpg


All it would take is an IP deal to drop, the one we are all expecting, of significant size/scope, and I think we will easily punch through 60 cents in short order.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 14 users

KMuzza

Mad Scientist
 
  • Like
  • Fire
  • Haha
Reactions: 6 users

Tothemoon24

Top 20
IMG_9710.jpeg



Explore the results of our latest R&D project - here's how #neuromorphic chips can change the #ML landscape 👩‍💻

Recently, the R&D team of Data Science UA, led by the Head of AI Consulting, Vasyl Chumachenko, PhD, got a chance to work with #Akida neuromorphic chips.

👉 For our team, it's important to continuously gain unique experience with cutting-edge technology - that's how we implement innovative solutions for our clients while other companies have just begun to explore them!

So, what did we find out?

🔹 Brainchip Akida uses 10-20 times less energy than traditional options. While we still need to test them further, they have great potential for saving energy, which is important as more people look for sustainable technology.

🔹 Neuromorphic chips do have some advantages compared to the widely used NVIDIA Jetson chips, which we chose as our benchmark.

🔹 Based on our research, Akida is a great fit for low-power devices. Its ability to consume less energy and quickly "learn" to recognize new objects in videos and images makes it perfect for applications that need real-time responses.

Our commitment to innovation ensures we will be at the forefront of implementing new technologies effectively.
🌐 Let’s connect to explore how we can help you implement this kind of groundbreaking advancement for your business!

IMG_9711.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 53 users

Tothemoon24

Top 20
Brilliant 🍻

The work with the University of Waterloo complements a series of existing Mercedes‑Benz research collaborations on neuromorphic computing. One focus is on neuromorphic end-to-end learning for autonomous driving. To realize the full potential of neuromorphic computing, Mercedes‑Benz is building up a network of universities and research partnerships. The company is, for example, consortium leader in the NAOMI4Radar project funded by the German Federal Ministry for Economic Affairs and Climate Action. Here, the company is working with partners to assess how neuromorphic computing can be used to optimize the processing of radar data in automated driving systems. In addition, Mercedes‑Benz has been cooperating with Karlsruhe University of Applied Sciences. This work centres on neuromorphic cameras, also known as event-based cameras.



IMG_9712.jpeg




October 8, 2024 – Stuttgart/Toronto
  • Mercedes-Benz and the Ontario government, through the Ontario Vehicle Innovation Network (OVIN), establish incubators to foster startup creation, startup scouting and automotive innovation in Ontario, Canada
  • OVIN Incubators join growing international Mercedes-Benz STARTUP AUTOBAHN network
  • Initiative aims to drive transfer to industrialization, leveraging the region's strong foundation in advanced automotive technology and smart mobility
  • Research collaboration with University of Waterloo complements existing academic research into neuromorphic computing
Mercedes-Benz is partnering with the Ontario Vehicle Innovation Network (OVIN), the Government of Ontario's flagship initiative for the automotive and mobility sector. The purpose is to expand startup creation and scouting activities in North America and to promote the commercialization of automotive innovation. The OVIN Incubators Program will focus on identifying and fostering innovation in future software & AI, future vehicle components and future electric drive. Working with startups, and in partnership with OVIN, Mercedes-Benz will help progress promising projects through the provision of its specialist expertise and use cases. Selected projects will also benefit from the international Mercedes-Benz STARTUP AUTOBAHN network. Separately, the company intends to start a research collaboration with the University of Waterloo, Ontario with a focus on neuromorphic computing for automated driving applications. The move complements a range of ongoing Mercedes-Benz R&D activities in Canada.
"Innovation is part of Mercedes-Benz DNA. In our global R&D strategy, open innovation gives us rapid and direct access to the latest ideas and developments around the world. We are therefore delighted to further expand our activities in Canada as a founding partner of the OVIN Incubators. In a fast-paced environment, it is another important channel for developing exciting future products and elevating our customer experience through new technologies."
Markus Schäfer, Member of the Board of Management of Mercedes-Benz Group AG, Chief Technology Officer, Development & Procurement​
The academic research collaboration and participation in the OVIN Incubators Program are the latest in a series of initiatives underpinned by the company's Memorandum of Understanding (MoU) with the government of Canada, signed in 2022. The aim of the MoU is to strengthen cooperation across the electric vehicle value chain. Through the partnership with the Ontario government through OVIN, Mercedes-Benz is accelerating and expanding its presence by tapping into Ontario's international acclaim as a centre for tech development, recognizing the province's significance for Mercedes-Benz's global innovation network.
Open innovation draws in ideas, inspiration and technologies from a wide variety of external sources and partners. This approach is a long-established part of Mercedes-Benz R&D strategy, enriching and complementing the company's internal R&D work worldwide.
"This new partnership between the Ontario Vehicle Innovation Network (OVIN) and Mercedes‑Benz is going to be a significant boost for our province's automotive and mobility sectors. By bringing together the best of industry, research, and entrepreneurial talent, we're fostering innovation that will strengthen our economy, create good jobs and position Ontario as a leader in the auto and electric vehicle technologies of the future."
Doug Ford, Premier of Ontario
"Ontario continues to build its reputation as a world leader in manufacturing the cars of the future, with $44 billion in new investments by automakers, EV battery manufacturers and parts suppliers coming into the province over the last four years. The launch of OVIN Incubators represents another link in our growing end-to-end, fully integrated, EV supply chain. With a new platform for our world-class tech ecosystem to develop homegrown mobility innovations, Ontario talent will continue to be on the forefront of creating the technologies that will power vehicles all over the world through the Mercedes-Benz STARTUP AUTOBAHN network."
Vic Fedeli, Ontario Minister of Economic Development, Job Creation and Trade
"As Ontario sets its sights on the next decade of growth of its automotive and mobility sector, it is vital that we continue to foster the talent, technical expertise and capacity for innovation to achieve this future. The OVIN Incubators build a robust foundation for nurturing the next generation of innovators by providing a clear pathway from research and development to commercialization and industrialization, in partnership with Ontario's leading postsecondary institutions and major industry players. This platform will further cement the foundation for sustainable economic growth within the sector and beyond, across the entire province."
Raed Kadri, Head of OVIN​
Mercedes-Benz partners in OVIN Incubators to accelerate startup scouting and support commercialization
In its pilot phase, the OVIN Incubators Program will conduct startup scouting to identify opportunities in Ontario relevant to Mercedes-Benz fields of research. The aim is to empower startups to engage with industry and establish a robust pipeline of companies whose growth can be catalyzed. Together, OVIN and Mercedes‑Benz will narrow down an initial longlist through a process of evaluation, ultimately arriving at individual projects that will progress to proof-of-concept based on Mercedes‑Benz use cases. The OVIN Incubators join a growing international network of regional programmes benefitting from the Mercedes‑Benz STARTUP AUTOBAHN platform for open innovation. This globally networked and locally executed approach seeks to maximize the pool of ideas, innovations and technologies that can flow into future Mercedes‑Benz products. Looking to the future, the next phase of the OVIN Incubators will seek to expand its scope through the addition of further partners from industry and academia.
Collaboration with the University of Waterloo to help seed, grow and harvest research in the field of neuromorphic computing
Mercedes-Benz and the University of Waterloo have signed a Memorandum of Understanding to collaborate on research led by Prof. Chris Eliasmith in the field of neuromorphic computing. The focus is on the development of algorithms for advanced driving assistance systems. By mimicking the functionality of the human brain, neuromorphic computing could significantly improve AI computation, making it faster and more energy efficient. While preserving vehicle range, safety systems could, for example, detect traffic signs, lanes and objects much better, even in poor visibility, and react faster. Neuromorphic computing has the potential to reduce the energy required to process data for autonomous driving by 90 percent compared to current systems.
"Industry collaboration is at the heart of our success as Canada's largest engineering school. We recognize that research partnerships with companies such as Mercedes-Benz bring opportunities to directly apply and test our work, while introducing our students to the highest standards in industry."
Mary Wells, Dean, Faculty of Engineering at the University of Waterloo​
The work with the University of Waterloo complements a series of existing Mercedes‑Benz research collaborations on neuromorphic computing. One focus is on neuromorphic end-to-end learning for autonomous driving. To realize the full potential of neuromorphic computing, Mercedes‑Benz is building up a network of universities and research partnerships. The company is, for example, consortium leader in the NAOMI4Radar project funded by the German Federal Ministry for Economic Affairs and Climate Action. Here, the company is working with partners to assess how neuromorphic computing can be used to optimize the processing of radar data in automated driving systems. In addition, Mercedes‑Benz has been cooperating with Karlsruhe University of Applied Sciences. This work centres on neuromorphic cameras, also known as event-based cameras.
# # #
About the Ontario Vehicle Innovation Network OVIN
OVIN is an initiative of the Government of Ontario, led by the Ontario Centre of Innovation (OCI), designed to reinforce Ontario's position as a North American leader in automotive and mobility technology and solutions such as connected vehicles, autonomous vehicles, and electric and low-carbon vehicle technologies. Through resources such as research and development (R&D) support, talent and skills development, technology acceleration, business and technical supports, and demonstration grounds, OVIN provides a competitive advantage to Ontario-made automotive and mobility technology companies.
About STARTUP AUTOBAHN
STARTUP AUTOBAHN is an open innovation platform for startups in the field of mobility. The innovation driver was founded in 2016 by Mercedes‑Benz, formerly Daimler, in cooperation with the innovation platform Plug and Play, the research factory ARENA2036 and the University of Stuttgart. This has resulted in an entire innovation network around the globe - with programmes in the United States, China, India, South Korea and now also in Canada. Since its foundation, a growing number of industrial partners and startups from all over the world have benefited from the STARTUP AUTOBAHN. Several technologies from the network have already been integrated into Mercedes-Benz series-production vehicles.
 
  • Like
  • Fire
  • Love
Reactions: 31 users

BrainShit

Regular
Love lamps
I am looking at your post and can somebody correct me? We haven't committed to making Pico as a chip or released anything to market about making this chip, so has this guy let something out of the bag, or is this person stating what could occur going forward based on publicly known information and some assumptions? As per the line:

Akida Pico: a very low power NPU to bring AI to any device with a battery or batteries. This chip is manufactured by GlobalFoundries at a 22 nm FDSOI (22FDX) manufacturing process.

This is in past tense and suggests this has occurred???

I guess he didn't get the following statements right and crossed the word "When" out, or he's from the future:

"When implemented using GlobalFoundries 22FDX (22nm-class, FD SOI) process technology, the Akida Pico NPU consumes less than a milliwatt and occupies only 0.12 mm^2. When enhanced with 50KB of SRAM, its die size grows to 0.18 mm^2. The company says that the IP can be synthesized for any production node, including TSMC's low-cost 12nm-class nodes and ultra-high-end 3nm-class process technologies if needed."

Source: trib.al/LMeM18e
Source was posted by BrainChip on X
 
  • Like
  • Love
Reactions: 9 users

Cand2it

Member
How can there be this much difference in the OTC market?
 

Attachments

  • IMG_8763.jpeg
    IMG_8763.jpeg
    459.9 KB · Views: 55
  • IMG_8762.jpeg
    IMG_8762.jpeg
    487.1 KB · Views: 54
  • Like
Reactions: 3 users

Tothemoon24

Top 20
IMG_9713.jpeg
IMG_9714.jpeg
IMG_9715.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 52 users