BRN Discussion Ongoing

alwaysgreen

Top 20
Err ........... .. no.

Swing and a miss champ. Bow out gracefully.
Err fuck off.
 
  • Like
  • Haha
Reactions: 11 users

cassip

Regular
  • Like
Reactions: 3 users
Hi Fmf,

Following on from the link you posted, a little belated hindsight in the light of the Mercedes announcement 4 months later ( https://brainchip.com/mercedes-benz-vision-eqxx-concept/ ):

Akida spiking neural processor could head to FDSOI​

Technology News | August 2, 2021
By Peter Clarke
...
One of the application areas of interest is automotive, where Artificial Intelligence and Machine Learning (AI/ML) are used to train an increasing number of sensors, components, and image and video processors in each vehicle. Autonomous vehicles and near-autonomous vehicles are predicted to generate between 12 and 15 terabytes of data for every two hours of driving.

Latency, power consumption and privacy are the key reasons not to send this data to the cloud for processing.

One advantage of spiking neural network architectures is the ability to perform real-time incremental learning, sometimes called one-shot learning, within a fraction of a second. The ability to add voice commands, accept individuals as drivers by facial recognition and to flag events as significant or not in sensors is improved when using Akida, said BrainChip executives. “We are being benchmarked against deep learning accelerators and a GPU vendor and it is coming back favourably to us.”
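For anyone wondering how adding a class from a single example ("one-shot learning") can work in principle, here is a minimal nearest-prototype sketch in Python. This is purely illustrative, not Akida's actual on-chip mechanism; the feature vectors and driver labels are invented. Enrolling a new class is a single vector store, which is why it can happen in a fraction of a second:

```python
import math

prototypes = {}  # class label -> stored feature vector

def enroll(label, feature_vec):
    """Add a class from a single example -- the 'one-shot' step."""
    prototypes[label] = list(feature_vec)

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(feature_vec):
    """Return the label of the nearest stored prototype."""
    return min(prototypes, key=lambda lbl: dist(prototypes[lbl], feature_vec))

# Hypothetical face-recognition enrolment: one example per driver.
enroll("driver_alice", [0.9, 0.1, 0.0])
enroll("driver_bob", [0.1, 0.8, 0.2])
print(classify([0.85, 0.15, 0.05]))  # nearest to Alice's prototype
```

In a real system the feature vector would come from a trained feature extractor; only the final prototype layer learns on the fly.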

Note the present tense (August 2021): "We are being benchmarked ... ", i.e., the benchmarking was going on in August 2021. Five months later we got the Mercedes announcement.

We know about the use of Akida for voice commands in Mercedes.

We also know that Mercedes compared (benchmarked) Akida favourably with other voice recognition systems, providing 5 to 10 times better power efficiency than deep learning accelerators and a GPU vendor (Nvidia?).

We have speculated about other applications of Akida in Mercedes, so if we take a lead from the above-cited abstract, can we infer that facial recognition and significant event detection are included in EQXX, EQS, ... ?

... and don't mention Valeo ...

Now I wonder if the Sony/Prophesee deblurring camera could be useful in ADAS ...
Hi @Diogenese

I had thought that the benchmarking against GPUs and accelerators may have been carried out by Edge Impulse or MegaChips, but your argument pointing to Mercedes-Benz makes greater sense. We know Mercedes-Benz had been working with Intel in early 2021, and of course with Nvidia and also Qualcomm, so there were plenty of GPUs and accelerators available for testing/benchmarking against AKIDA.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 16 users

Sirod69

bavarian girl ;-)
something from Mercedes again:

Keeping a safe following distance, maintaining a certain speed or lane, and warning of possible side collisions or crossing pedestrians – our driving assistance systems can help you master a wide variety of situations.

As an Active Safety Engineer, Katharina Kupferschmid works on the development of all these intelligent systems. In doing so, it is important for her to understand how accidents occur and what protective systems could have prevented them. In this way, the analysis of real traffic accidents forms the basis for developing innovative #safety technologies and ever more efficient systems.

Because of dedicated employees like Katharina, #MercedesBenz earned a name for itself as a safety pioneer − a reputation it still enjoys today and will retain in the future!

Which driver assistance system would you never want to go without again?
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Proga

Regular
Hi Fmf,

Following on from the link you posted, a little belated hindsight in the light of the Mercedes announcement 4 months later ( https://brainchip.com/mercedes-benz-vision-eqxx-concept/ ):

Akida spiking neural processor could head to FDSOI​

Technology News | August 2, 2021
By Peter Clarke
...
One of the application areas of interest is automotive, where Artificial Intelligence and Machine Learning (AI/ML) are used to train an increasing number of sensors, components, and image and video processors in each vehicle. Autonomous vehicles and near-autonomous vehicles are predicted to generate between 12 and 15 terabytes of data for every two hours of driving.

Latency, power consumption and privacy are the key reasons not to send this data to the cloud for processing.

One advantage of spiking neural network architectures is the ability to perform real-time incremental learning, sometimes called one-shot learning, within a fraction of a second. The ability to add voice commands, accept individuals as drivers by facial recognition and to flag events as significant or not in sensors is improved when using Akida, said BrainChip executives. “We are being benchmarked against deep learning accelerators and a GPU vendor and it is coming back favourably to us.”

Note the present tense (August 2021): "We are being benchmarked ... ", i.e., the benchmarking was going on in August 2021. Five months later we got the Mercedes announcement.

We know about the use of Akida for voice commands in Mercedes.

We also know that Mercedes compared (benchmarked) Akida favourably with other voice recognition systems, providing 5 to 10 times better power efficiency than deep learning accelerators and a GPU vendor (Nvidia?).

We have speculated about other applications of Akida in Mercedes, so if we take a lead from the above-cited abstract, can we infer that facial recognition and significant event detection are included in EQXX, EQS, ... ?

... and don't mention Valeo ...

Now I wonder if the Sony/Prophesee deblurring camera could be useful in ADAS ...
In the Edge Impulse presentation, they start presenting about BrainChip at the 25:30 mark and say the same thing, Dio: Akida is being compared to GPUs and outperforming them. They put up a comparison chart if anyone hasn't seen it yet.

 
  • Like
  • Fire
  • Love
Reactions: 28 users

Sirod69

bavarian girl ;-)

Tesla says it is adding radar in its cars next month amid self-driving suite concerns​

Tesla told the FCC that it plans to market a new radar starting next month. The move raises even more concerns about potentially needed updates to its hardware suite to achieve the promised self-driving capability.

Since 2016, Tesla has claimed that all its vehicles produced going forward have “all the needed hardware” to become self-driving with future software updates.

It turned out not to be true.
... bla bla bla

That’s why it was surprising earlier this year when we reported on Tesla filing with the FCC to use a new radar in its vehicles. The FCC had granted a confidential treatment to Tesla in order not to release the details of the new radar.

Those confidential treatments are generally good for six months, and it was coming up tomorrow, but Tesla has filed an extension:



 
  • Like
  • Fire
  • Love
Reactions: 19 users

White Horse

Regular
My final thought on the podcast crystallised when she was given the opportunity to promote the EE Times conference: her affect stood in stark contrast to the articles she has written about BrainChip.

What she was doing in a rather clumsy fashion was playing to the sensitivities of the conference participants by appearing to be independent of Brainchip.

She and EE Times are like all organisers of these conferences chasing advertising and sponsorship dollars.

I think Rob Telson was aware of what she was doing and this is why he fed us the nugget about Transformers and what his gut was telling him. Rob Telson knows AKIDA 2.0 with Transformers and LSTM is close to being revealed and he wanted to give a heads up to his loyal listeners and shareholders.

My opinion only DYOR
FF

AKIDA BALLISTA

Reminds me of one of those introspective songs from the golden years.

 
  • Haha
  • Like
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Not sure what to say about Sally, other than I expected more enthusiasm for Neuromorphic tech. I guess she got her plug and is happy about that.
Agreed Funky. I found the podcast very underwhelming, unfortunately. I thought it was confusing because Sally wasn't enthusiastic at all, even going so far as to suggest that analogue is more power efficient. Sorry to say this, but I found it very disappointing.
 
  • Like
  • Haha
Reactions: 28 users

Sirod69

bavarian girl ;-)
Rene Haas
Chief Executive Officer, Arm • 17 min.

An amazing day in Arizona attending the ceremony around the first tool-in for the new TSMC fab. This facility will be producing some of the most advanced chips in the industry on leading edge nodes, creating jobs for scores of people in the Phoenix area. This could not be done without the amazing work of our incredible ecosystem, a who's who of which attended the event, led by President Biden and other tech luminaries (including Synopsys' own Deirdre Hanford and our Drew Henry!).

Cannot wait to see the first products roll off the production line in a year or so, the first ones most certainly #onarm!
 
  • Like
  • Love
  • Fire
Reactions: 22 users

Sirod69

bavarian girl ;-)
PROPHESEE, a company specializing in industrial vision, has just made the headlines with a fundraising of €50 million. Prophesee is the inventor of the world's most advanced neuromorphic vision systems and is now "Europe's best-funded fabless semiconductor startup".
We met its co-founder and CEO, Luca Verre, Lyon Centralien from the class of 2005.

November 04, 2022
Meeting with Luca Verre (ECL 2005), Co-founder and CEO of Prophesee
The career of a very international Centralian
Published by Luca VERRE (ECL 2005) and François RAMAGET (ECL 1979) | Entrepreneur Centralians


Can you summarize your academic background for us?

I am Italian by origin, and in 2005 I obtained a double degree: a Master of Science in physics, electronics and industrial engineering from the Politecnico di Milano, and an engineering degree from the École Centrale de Lyon, with a specialization in electronics. After starting my career in industry, I completed this education in 2013-2014 with an MBA from INSEAD.

What memories do you have of your studies at the École Centrale de Lyon?

I have very fond memories of it. First, because the classes were much smaller than at the Politecnico di Milano, which now has more than 35,000 students. And I appreciated the generalist approach of the School, with courses in economics or mechanical engineering for example, which opened me up to other specialities. I remember very well that mechanics class where we were asked to disassemble and reassemble an engine, an exercise far removed from electronics... And then the extraordinary social life on campus, those Thursday evenings when I was the cook on my floor! And the enriching exchanges with foreign students, from China, Brazil or elsewhere. These are experiences of openness to professions and cultures that have served me throughout the rest of my career.

What positions did you hold before creating Prophesee in 2014?

I first worked at Schneider Electric, in Sophia Antipolis, then in Japan, where I spent 4 years in marketing positions. I owe that position in Japan in part to the ECL, because the School had allowed me to take Japanese lessons with a high-quality teacher, Ms. Shimamori, author of reference books. And it was in Japan that I completed my 2nd-year internship. After the Far East, I returned to Europe, first to Germany to take care of business development, and then to Paris. In 2013, I decided to take a sabbatical year to take courses at INSEAD, and it was there that I met my future partner, Director of Research at the Institut de la Vision.

What were the company's goals when it was founded?

We established the company in February 2014 to design an artificial model of the biological retina and thereby advance retinal implant technology. We made a first fundraising in order to develop innovative neuromorphic sensors, which are based on capturing changes in the image rather than on acquiring the entire image. This is what we call Event-Based Vision.

How did Prophesee develop?

We started with collaborations with medical companies like Pixium Vision or GenSight Biologics to pursue our initial ambition of restoring vision to the visually impaired. But we quickly perceived the potential of our solutions in other areas: first, industry, where our technology supports measurement, counting or inspection applications in sectors such as agri-food, pharmaceuticals, manufacturing, etc. But also the autonomous car, for driving assistance systems, and mobile telephony, where our solutions should make it possible to improve the shots of conventional sensors for moving subjects ("motion blur"). Finally, virtual reality and augmented reality, where our products are used for eye tracking. With Prophesee, we have raised more than €100m to date and we have just welcomed Xiaomi among our investors. We are very confident in the prospects of the company, experts in metavision for machines.

What advice would you give to a young engineer today?

Technologies are constantly evolving and you have to stay attentive. The autonomous car and the metaverse are areas that will experience exciting developments. The situation has changed a lot since the days when junior engineers dreamed of joining a large group. Young engineers need to be open to risk. Start-ups can offer them great opportunities: responsibility comes more quickly, and these young companies build in criteria of impact on daily life. These are essential elements to take into account in guiding your career.
 
  • Like
  • Love
  • Fire
Reactions: 16 users

Diogenese

Top 20
Podcast out


I don't agree with the criticism of the SW-F podcast.

One bit which seems to have drawn a lot of angst starts just before 13 minutes, and my Pitmans is non-existent, but this is the gist:

"There is a debate about how close should we copy the brain – limits of Silicon

Should we directly copy what happens in the brain?

I think there are several companies including Brainchip that are making a good go of it

Products that work and advantages to be had by using spiking architecture

Not everyone is as far along as Brainchip there are a few other in spiking space

Different architectures ASIC? Like BrainChip or Analog even more power can be saved if you can do it that way. Some of these are unanswered questions.

Training challenges to be ironed out

Event based processors
..."

This seems like a fair summation to me. Analog can be more power efficient because it only needs a couple of components per synapse. However, the tech has other problems, such as repeatability of manufacture, which causes variations in spike amplitude (not a problem with digital); the need for an ADC (analog-to-digital converter) and possibly a DAC; and a lack of the versatility of Akida.
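A toy numerical illustration of the ADC point (invented spread and resolution figures, not real hardware behaviour): analog spike amplitudes vary from device to device and must be quantized before digital logic can use them, whereas a digital spike is exactly reproducible by construction.

```python
import random

random.seed(0)

def analog_spike(nominal=1.0, spread=0.05):
    """Analog spike amplitude with (invented) 5% device-to-device variation."""
    return nominal * random.gauss(1.0, spread)

def adc(value, bits=4, full_scale=2.0):
    """Quantize an analog value to an integer code, clamped to range."""
    levels = 2 ** bits
    code = round(value / full_scale * (levels - 1))
    return max(0, min(levels - 1, code))

digital_spike = 1  # a digital spike is always exactly 1 -- no ADC needed
codes = {adc(analog_spike()) for _ in range(1000)}
print(len(codes) > 1)  # True: nominally identical spikes land on several codes
```

The spread of quantized codes from nominally identical spikes is the manufacturing-variation problem in miniature; digital implementations sidestep it entirely.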

Another angst-generating bit was the reference to "niche" when discussing the edge which S W-F characterized as a spectrum and fragmented.

Sally was responding to Rob's lead-in:
"Let's talk about what you believe to be some of the key drivers in this space (the edge) and some of the problems you see as needing to be addressed over the next few years", so Sally's remarks need to be considered in this context.

This podcast is a conversation, and Rob Telson said "You're spot on there .. very few are flexible enough to handle voice, vision .. ". Rob asked Sally to discuss the key drivers and problems to be overcome; he did not ask her to endorse Akida.
 
  • Like
  • Fire
  • Love
Reactions: 50 users

Sirod69

bavarian girl ;-)
Last edited:
  • Like
  • Love
  • Fire
Reactions: 48 users

Sirod69

bavarian girl ;-)
like every day


 
  • Like
  • Haha
  • Love
Reactions: 10 users

TopCat

Regular
Hi to all,

in connection with DARPA, a company named BigBear.ai is mentioned in the following article. The search function gave no result; has this been a topic here? Does anyone know this company?

Regards
cassip

Hi cassip, I haven't looked into BigBear.ai, but I found a link between Redwire and Thales/the European Space Agency.


Rome, November 30, 2022 - Thales Alenia Space, joint venture between Thales (67%) and Leonardo (33%), and the European Space Agency (ESA) have signed a letter of intent to collaborate in supporting the creation of future disruptive space-based solutions in the Earth Observation domain.

Thales Alenia Space will cooperate with the ESA Φ-lab to explore innovative technologies based on Artificial Intelligence (AI) and their applications to use cases of significant interest to both entities.

Artificial Intelligence and new computing paradigms like neuromorphic, quantum, and edge computing, applied to both optical and radar Earth Observation data, are a strategic area of interest for both Φ-lab and Thales Alenia Space. Key topics of the collaboration include end-to-end learning for Synthetic-Aperture Radar (SAR) data, physically-based Artificial Intelligence to extract information from SAR data and enable object detection, recognition and classification, collective intelligence and federated learning at the edge, and the use of AI and Earth Observation in immersive-reality scenarios such as Augmented and Virtual reality for satellite and mission data management.


This contract award is incredibly exciting. The adaption of processes and tools to the space environment, many of which we take for granted on Earth, will be critical in many areas of our future. I am proud that our dedicated teams will be leading this study together with our valued partners AVS, Metalysis, Open University and Redwire Space Europe to solve the complex challenge of creating oxygen to sustain life on the lunar surface.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Sirod69

bavarian girl ;-)
Hi everyone,

In connection with DARPA, a company named BigBear.ai is mentioned in the following article. The search function gave no result; has this been a topic here, does anyone know this company?

Regards
cassip

Info​

BigBear.ai delivers AI-powered analytics and cyber engineering solutions to support mission-critical operations and decision-making in complex, real-world environments. BigBear.ai’s customers, which include the US Intelligence Community, Department of Defense, the US Federal Government, as well as customers in manufacturing, healthcare, commercial space, and other sectors, rely on BigBear.ai’s solutions to see and shape their world through reliable, predictive insights and goal-oriented advice. Headquartered in Columbia, Maryland, BigBear.ai is a global, public company traded on the NYSE under the symbol BBAI. For more information, please visit: http://bigbear.ai/

have not found anything important for us
 
  • Like
  • Love
Reactions: 13 users
  • Like
Reactions: 14 users

alwaysgreen

Top 20
Agreed Funky. I found the podcast very underwhelming unfortunately. i thought it was confusing because Sally wasn’t enthusiastic at all, even going so far as to suggest that analogue is more power efficient. Sorry to say this but I found it very disappointing.
I agree. I was also disappointed by the choice of guest. How good were the early podcasts with the likes of Dell and Arm? Essentially, they were announcements that we were working with them in some capacity. In the case of ARM, it has been proven to be correct. The transformer Easter egg was nice, but a hint of a potential partnership is what I would like to see.
 
  • Like
Reactions: 5 users

cassip

Regular
  • Like
Reactions: 5 users

FJ-215

Regular
I think it's nice to hear from him today, it's good! I love it!!! 😘🥰
Edge Impulse

Edge Impulse • 29,113 followers • 42 min.

Hear what makes BrainChip excited about teaming up with us to build out an ecosystem of ML solutions and deliver value to customers via the company's neuromorphic technology and Edge Impulse tools.

cc: Rob Telson

AND on Twitter:


Hi Sirod69,

I've always been impressed by Rob; he is a born salesman and a very good communicator. He always displays enthusiasm and confidence, but to me there is something a little "next level" about him in the clip you posted. I get the sense you can add belief to that list. Saying you have the best product is one thing; knowing it is something else.
 
  • Like
  • Love
Reactions: 19 users

Learning

Learning to the Top 🕵‍♂️
Not sure if this been posted.

Tired or distracted? These sensors are watching​

12/01/2022
How do interior surveillance systems work? See also the video ∙ Image: © Bosch, Video: © ADAC eV
A technology that could save lives: Interior sensors and cameras recognize whether drivers are tired, alert or distracted and warn in good time. Still a vision of the future or soon to be reality? The ADAC has tested four prototypes of interior detection systems, so-called "In-Cabin Sensing Systems".
  • Common causes of accidents: tiredness or distraction
  • Cameras analyze the driver and warn
  • Data protection must be guaranteed
Already five hours on the road without a break and dog-tired. But the one hour home – I can still manage that. I'll quickly text my wife that it will be later. And until then, a cup of coffee from the Thermos will keep me awake. Where was it again? Oh, in the backpack on the back seat. No problem, I'll get to it somehow...
Tired, distracted, busy with something while driving: this is dangerous not only for the occupants of the vehicle but also for all other road users. Even checking an email for three seconds at a speed of 100 km/h means driving blind for almost 100 meters, during which it is impossible to react to other vehicles, pedestrians or obstacles on the road.
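The "almost 100 meters" figure is easy to check with a quick unit conversion:

```python
# Distance covered during a 3-second glance at an email while driving
# at 100 km/h (the scenario described in the article).
speed_kmh = 100
glance_s = 3
speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s (~27.8 m/s)
distance_m = speed_ms * glance_s
print(round(distance_m, 1))  # 83.3 -- roughly "almost 100 meters" driven blind
```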
Fatigue warning: Mandatory since July 2022

Fatigue warning systems already warn of this © Daimler
In fact, according to the ADAC accident database (2009-2019), around every tenth serious traffic accident outside built-up areas is due to a distracted, tired or physically impaired driver. If you add the so-called extended effective range, i.e. accidents in which distraction and tiredness were at least one of the contributing factors, the figure is as high as 25 percent of non-urban accidents.
These accidents on motorways, federal roads and country roads often end tragically, with serious or fatal injuries: in 2021, 71 percent of road deaths and 48 percent of serious injuries in Germany occurred on non-urban roads.

The European legislator therefore reacted at the beginning of 2020 with the General Safety Regulation 2 (GSR 2), which regulates the mandatory safety-system equipment required for vehicle type approval.

In order to reduce the high number of accidents caused by distracted and tired drivers, the GSR prescribes a drowsiness warning as a first step: since July 6, 2022, all new vehicle models in classes M and N, i.e. cars and trucks, must have a warning system that assesses driver fatigue in order to obtain type approval. This applies to all newly registered vehicles from July 2024.

In a second step, from July 2024 or 2026, vehicles must be equipped with a system that can also detect a distracted driver.

In addition to the legal requirements, the consumer protection programme Euro NCAP will also contain a test catalogue for in-cabin sensing systems from 2023. In order to score two points, the manufacturer must demonstrate that the cabin sensors can detect a distracted, tired and unresponsive driver in various test scenarios.

As a reaction to detection, it is also required that the sensitivity of the forward collision warning and lane departure warning be increased, that slight braking interventions take place and, if necessary, that a "minimum risk manoeuvre" be carried out.

Difference: Indirect and direct measuring systems

Already in series production: the directly measuring infrared camera in the Wey Coffee 01 © Wey
Many automobile manufacturers have been equipping their cars with drowsiness warning systems for over ten years. These widely available drowsiness warnings are indirect systems: depending on steering behaviour, speed, time and other parameters, they estimate how tired the driver is. Very simply designed drowsiness warnings are based solely on a timer that warns the driver after a certain time has elapsed.
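The simplest timer-only variant described above amounts to a few lines; the threshold here is invented for illustration and is not any manufacturer's actual value:

```python
# Illustrative sketch of the simplest indirect drowsiness warning:
# warn purely on elapsed continuous driving time (threshold invented).
WARN_AFTER_S = 2 * 60 * 60  # e.g. warn after two hours of continuous driving

def drowsiness_warning(driving_time_s: float) -> bool:
    """True once the driver has exceeded the continuous-driving threshold."""
    return driving_time_s >= WARN_AFTER_S

print(drowsiness_warning(90 * 60))   # after 1.5 h: no warning yet
print(drowsiness_warning(150 * 60))  # after 2.5 h: warn the driver
```

Real indirect systems blend this timer with steering behaviour, speed and time of day rather than relying on elapsed time alone.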

But recognizing a distracted driver is much more complicated. Directly measuring systems are necessary here, which use sensors in the vehicle compartment to detect and assess head movement, line of sight or hand movement.

This also requires an infrared camera that can be mounted at different locations in the vehicle interior and also works in the dark. If the focus is solely on the driver (driver monitoring system), it is often installed on the A-pillar or above the steering column.

In order to be able to cover not only the driver's line of sight but also his posture and other occupants (occupant status monitoring), the sensor is attached in the area of the roof module/lighting module, on the rear-view mirror or on the dashboard.

To determine the source of the distraction, some systems can also detect objects as such. These objects include mobile phones, for example, but also coffee mugs, drinking bottles and muesli bars. In combination with a certain movement (e.g. raising a coffee cup to the mouth), the system can recognize the type of distraction, evaluate its criticality for driving safety and decide whether the activity is relevant to the driving task or not.

The occupant camera was the focus of the ADAC investigation. In order to detect a distracted driver, the vehicle interior is divided into zones. If the gaze is focused on a zone that is not relevant to the driving task for a specific period of time (e.g. the front-passenger footwell), the driver is considered to be distracted. The Bosch system can also classify objects (mobile phone, coffee mug) as such and is therefore able to recognize which distraction or activity is involved.
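The zone-dwell logic described here can be sketched as follows; the zone names, dwell threshold and sample rate are invented for illustration and are not Bosch's actual parameters:

```python
# Illustrative gaze-zone distraction check: flag the driver as distracted
# when their gaze dwells on a non-driving zone for longer than a threshold.
DRIVING_ZONES = {"road_ahead", "mirrors", "instrument_cluster"}
DWELL_LIMIT_S = 2.0

def is_distracted(gaze_samples, sample_period_s=0.1):
    """gaze_samples: chronological list of zone names, one per camera frame."""
    off_road_s = 0.0
    for zone in gaze_samples:
        # Reset the dwell timer whenever the gaze returns to a driving zone.
        off_road_s = 0.0 if zone in DRIVING_ZONES else off_road_s + sample_period_s
        if off_road_s > DWELL_LIMIT_S:
            return True
    return False

print(is_distracted(["road_ahead"] * 10 + ["passenger_footwell"] * 25))  # True
print(is_distracted(["road_ahead", "mirrors"] * 20))                     # False
```

A production system would feed this from a gaze-estimation model and fuse it with object classification, but the dwell-timer core is the same idea.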

With the radar sensors, people and children can be recognized as such based on their breathing. In this way, further areas of application, particularly with regard to health, can be realized in the future. Even forgotten children can be detected in parked vehicles and the parents warned. With the help of the camera information, comfort functions such as presetting the seating position and the appropriate playlist can also be enabled.

Result: The prototypes are already working well
The three "active" systems from Ford, DTS/XPERI and Bosch already meet large parts of the Euro NCAP protocol that will apply from 2023. Some systems showed weaknesses only when the face was specifically occluded (e.g. long facial hair) or the object causing the distraction was outside the sensor's coverage area.

The "fatigue" test scenario could not be represented in a representative way and was therefore not checked for any of the systems. Since increasing tiredness manifests itself very differently from person to person, it is quite difficult to detect clearly. In order to improve detection of a tired driver, manufacturers often take into account other information such as driving time, time of day or steering behaviour over the duration of the journey, in addition to eyelid opening and body posture.

But Sony's "passive" system also makes sense. With its depth image sensor, it can generate information to detect the volume and angle of the torso, the distance between the headrest and the head, or an "out of position" seating posture. This information can then be used by the car manufacturer to adapt the restraint systems to the specific characteristics of the occupants and their seating position.

The system from Bosch also has a lot of potential. By using the radar sensor, children left behind in a parked vehicle can be detected (a Euro NCAP requirement from 2025). In addition, further functions in the field of health can be realized through the fusion of the radar sensor and camera.

Conclusion: What else needs to be considered
In-cabin sensing systems can address a large number of traffic accidents, especially on non-urban roads (freeways, federal highways, country roads), which often result in serious or fatal injuries.

Three of the four in-cabin sensing systems assessed as part of the demonstrations can meet a large part of the Euro NCAP protocol that will apply from 2023.

Optimal utilization of the potential of in-cabin sensing systems can be achieved if these systems can address all areas of vehicle safety - before, during and after the crash.

The in-cabin sensing systems represent an important building block towards automated driving at SAE level 3. They can detect whether the driver is ready to take over the driving task again as soon as the automated driving function has reached its limits.

In-cabin sensing systems only warn the driver that they are tired, distracted or physically incapacitated. If the driver does not intervene in time, accidents cannot be prevented. For this reason, it is recommended to link the ICS systems with driver assistance systems.

In order to increase the driver's acceptance and confidence in the systems, it is important to keep the rate of false alarms as low as possible. In particular, the functionality of the systems must not be limited by the physical characteristics of the occupants (wearing a face mask, skin color, seating position in the vehicle, etc.).

Comfort functions (pre-setting the seat position, automatic opening of the garage door) can increase the acceptance of the interior sensors.

User data should not be stored in the vehicle without the occupants' consent and should only be used to implement safety-related system functions. If data is nevertheless processed and stored, consumers should be informed about this. More on this: This vehicle data is collected by a modern car.

Full article here, my apologies for the cut and paste.


Learning.
 
  • Like
  • Fire
  • Love
Reactions: 64 users
Top Bottom