BRN Discussion Ongoing

Yak52

Regular
Mornin FF.

Always watchin the B0Ts and trades across all my holdings. Have over 20 computer screens to track so got to do something!

I am also calm, but annoyed by the pattern. BRN is not the only fish out there, so... change bait and lines as necessary.

My own personal survey shows that more Retail investors are aware of how these B0Ts operate and who runs them, so awareness is increasing with time. GOOD!

As for the results next week? Hmm, not counting on anything worth talking about is the safest position. After all, we have had no indication of direction or outcome from management recently.

A NEW Top 20 Holders list from the company would be very helpful to all. Can you rustle up one for us at all, FF?

Yak52
 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Shadow59

Regular
Yak52 said: (post quoted in full above)
What results next week? Surely there is nothing due until the end of April?
 
  • Like
  • Thinking
Reactions: 7 users

Boab

I wish I could paint like Vincent
More demand for autonomous vehicles in unfortunate situations.
 

Attachments

  • Screen Shot 2022-03-25 at 7.55.31 am.png (113.4 KB · Views: 48)
  • Like
Reactions: 1 users
Yak52 said: (post quoted in full above)
The company has instituted a policy of providing a Top 20 with every 4C, so I would not be prepared to ask for an additional list.

Having had the experience of surprise outcomes each time health-wise, I always recommend taking care till you have the answers. Glad you are staying calm.

I personally never have and never will use stop losses. The number of times I have had connection or platform issues coinciding with lost opportunities means that having a stop loss I could not guarantee being able to change when needed just does not appeal.

Have a great day Yak. FF
 
  • Like
Reactions: 19 users

Dang Son

Regular
Today again, page after page of single manipulator Bot trades selling the share price down again and again. Why? 😱
 
  • Like
Reactions: 4 users

JK200SX

Regular
To my fellow Brainchippers,

One day I hope to call you all "Sextillionaires" :)
Being just a millionaire is so out of fashion these days.....

[two images attached]
 
  • Like
  • Fire
  • Wow
Reactions: 21 users

Bravo

If ARM were an arm, BRN would be its biceps 💪!
  • Haha
  • Like
  • Love
Reactions: 17 users

chapman89

Founding Member
[image attachment]
 
  • Like
  • Fire
  • Love
Reactions: 82 users

Townyj

Ermahgerd
Loving the title under our name on the Backdrop... "Leading Neuromorphic Computing" 🔥🔥🔥
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Diogenese

Top 20
FF said:
In the spider web that is visible from time to time, I currently have two crossing points aligning that may be real or imagined.

In 2019 Peter van der Made stated that 100 AKD1000 could undertake all of the computing for an unconnected autonomous vehicle.

In early 2020 the then CEO, Mr. Dinardo, went to great lengths in a webinar to hose down the idea that AKD1000 would be used in this way. He stated that Peter van der Made gets carried away, and that while 100 AKD1000 could do this, it was not the focus; the point was for us to stop talking about it and asking questions.

At the time I thought, well, that is fair enough, he wants to focus on the Edge, but the idea that 100 AKD1000 could power an unconnected AV is a pretty good statement of just how powerful and revolutionary it is; after all, you're making it so 64 AKD1000 chips can be ganged together.

Anyway, as I said, points of connection, real or imagined, that might fit with Mercedes.

My opinion only DYOR
FF

AKIDA BALLISTA

There was some discussion about the 64 Akida chips because, before that, it was mooted that a lot more could be combined. The company (PvdM, LdN?) said something to the effect that 64 was all anyone would need.
FF said:
On the theme of Renesas and implementing AKIDA technology in MCUs, I have been excited by this prospect since Peter van der Made and Anil Mankar separately disclosed that this was where Renesas was heading. I have lacked the depth of technical knowledge to give a proper explanation of why this is so exciting. Just now on the Renesas thread I found the following.

This article is not about AKIDA technology, but it identifies the opportunity to be had in making trillions of MCUs smart, and offers a clever but convoluted old-school solution.

If you had a calculator that could handle trillions you could work out what 1 trillion times a 2-cent royalty would be for IP that makes an MCU smart, with every cent being profit to the owner of the IP:
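(No calculator needed: 1,000,000,000,000 × $0.02 = $20,000,000,000, i.e. $20 billion.)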

Date 02/18/20

What is tinyML?

What is tinyML technology?

“We see a new world with trillions of intelligent devices enabled by tinyML technologies that sense, analyze, and autonomously act together to create a healthier and more sustainable environment for all.”

This is a quote by Evgeni Gousev, senior director at Qualcomm and co-chair of the tinyML Foundation, in his opening remarks at a recent conference.

Machine learning is a subset of artificial intelligence. tinyML (also written "tiny ML") is an abbreviation for tiny machine learning and means that machine learning algorithms are processed locally on embedded devices.

TinyML is very similar to Edge AI, but tinyML takes Edge AI one step further, making it possible to run machine learning models on the smallest microcontrollers (MCUs).

What is a microcontroller?

[Image: an STM32 Arm-based microcontroller]

Embedded systems normally include a microcontroller. A microcontroller is a compact integrated circuit designed to govern a specific operation in an embedded system. A typical microcontroller includes a processor, memory and input/output (I/O) peripherals on a single chip.
Microcontrollers are cheap, with average sales prices reaching under $0.50, and they’re everywhere, embedded in consumer and industrial devices. At the same time, they don’t have the resources found in generic computing devices. Most of them don’t have an operating system.
They have a small CPU, are limited to a few hundred kilobytes of low-power memory (SRAM) and a few megabytes of storage, and don’t have any networking gear. They mostly don’t have a mains electricity source and must run on cell and coin batteries for years.

What are the advantages of tinyML?

There are a number of advantages to running tinyML machine learning models on embedded devices:

  • Low latency: since the machine learning model runs on the edge, the data doesn't have to be sent to a server to run inference. This reduces the latency of the output.
  • Low power consumption: microcontrollers are ultra-low-power, which means they consume very little power. This enables them to run without being charged for a very long time.
  • Low bandwidth: as the data doesn't have to be sent to a server constantly, less internet bandwidth is used.
  • Privacy: since the machine learning model is running on the edge, the data is not stored in the cloud.

What are the challenges with tinyML?

Deep learning models require a lot of memory and consume a lot of power. There have been multiple efforts to shrink deep machine learning models to a size that fits on tiny devices and embedded systems. Most of these efforts focus on reducing the number of parameters in the deep learning model. For example, "pruning," a popular class of optimization algorithms, compresses neural networks by removing the parameters that contribute least to the model's output.
The problem with pruning methods is that they don't address the memory bottleneck of the neural networks. Standard implementations of embedded machine learning require an entire network layer and its activation maps to be loaded into memory. Unfortunately, classic optimization methods don't make any big changes to the early layers of the network, especially in convolutional networks.
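To make the pruning idea concrete, here is a toy magnitude-pruning sketch in Python/NumPy (illustrative only; magnitude_prune is a made-up helper, not any library's API):

import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until roughly `sparsity`
    fraction of the entries are zero (a toy version of pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)

# Prune 90% of a random 128x128 layer.
layer = np.random.randn(128, 128).astype(np.float32)
pruned = magnitude_prune(layer, sparsity=0.9)
print(f"zeros: {np.mean(pruned == 0):.0%}")  # ~90%

Note how this illustrates the article's complaint: the pruned matrix is the same shape and, stored naively, occupies the same memory; the zeros only save anything if the storage format or hardware exploits them.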

What is quantization?

Quantization is a branch of computer science and data science, and has to do with the actual implementation of the neural network on a digital computer, e.g. tiny devices. Conceptually we can think of it as approximating a continuous function with a discrete one.
Depending on the bit width of the chosen integers (64, 32, 16 or 8 bits) and the implementation, this can be done with larger or smaller quantization error. All modern PCs, smartphones, tablets, and the more powerful microcontrollers (MCUs) for embedded systems have a so-called floating-point unit, or FPU.
This is a piece of hardware next to, or integrated with, the main processor, whose purpose is to make floating-point operations fast. But the tiniest MCUs have no FPU, and instead one must resort to one of two options: transform the data to integer numbers and perform all the calculations with integer arithmetic, or perform the floating-point calculations in software.
The latter approach takes considerably more processor time, so to get good performance on these chips (e.g., Arm's M0-M3 cores), the former approach is preferred. If memory is scarce, which is usually the case on these devices, one can specify the integers to be 16- or 8-bit, thereby reducing the RAM and Flash memory used by the program to a half or a quarter.
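A worked example of the integer route, assuming nothing beyond NumPy (the helper names are made up for illustration): map floats onto int8 with a scale and zero point, then map back and inspect the error.

import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization: q = round(x / scale) + zero_point, clipped to int8.
    Assumes x has a non-zero range."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0                # 256 int8 levels
    zero_point = int(round(-128 - x_min / scale))  # align x_min with -128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-1.0, 1.0, 11, dtype=np.float32)
q, s, z = quantize_int8(x)
print(q)
print("max error:", np.abs(dequantize(q, s, z) - x).max())  # about scale/2

Eight bits means the worst-case rounding error is about half of one step (scale/2), which is why int8 works well when a layer's values sit in a known, narrow range.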

What is tinyML software?

tinyML software typically consists of ultra-low-power applications that run on a physical hardware device. A large number of machine learning algorithms are used, but the trend is that deep learning is becoming more and more popular. People who develop tinyML applications are normally data scientists, machine learning engineers or embedded developers. Normally the tinyML models are developed and trained in a cloud system. When the tinyML application runs on the hardware device, the model can then recognize what it was trained on. This is called inference.

What is tinyML hardware?

tinyML can run on a range of hardware platforms, from Arm Cortex-M series processors to advanced neural processing devices. Deep learning normally requires more powerful hardware than other types of machine learning algorithms. IoT devices are an example of tinyML hardware. Many people use an Arduino board to demonstrate machine learning applications.

What is a tinyML framework?

A tinyML framework is a software platform that makes it easier for developers to build tinyML applications. TensorFlow Lite for Microcontrollers is one of the most popular embedded machine learning frameworks.
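As a concrete taste of that workflow, a minimal sketch of TensorFlow Lite's full-integer quantization path (the toy two-layer model and random calibration data are stand-ins for a real tinyML project):

import numpy as np
import tensorflow as tf

# A toy model standing in for a real tinyML network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def rep_data():
    # Representative samples let the converter calibrate int8 ranges.
    for _ in range(100):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable quantization
converter.representative_dataset = rep_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())  # flatbuffer to ship to the MCU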

What is a tinyML development platform?

A tinyML development platform is an end-to-end software platform where users can develop tinyML applications. They start by collecting data, can use AutoML to develop the machine learning models, and eventually deploy the model on a tiny device. The main benefit of a tinyML platform is that it reduces time-to-market and lowers costs for the developer.

What are some examples of tinyML use cases?

The benefits of tinyML are huge, and the number of use cases for the technology is almost unlimited. Popular use cases include computer vision, visual wake words, keyword spotting, predictive maintenance of industrial machines, gesture recognition and many more.

Imagimob in tinyML

Imagimob offers Imagimob AI, a tinyML development platform that covers the end-to-end process from data collection to deploying AI models on a microcontroller board.”

My opinion only DYOR
FF

AKIDA BALLISTA
Hi FF,

(quoting the "What are the challenges with tinyML?", "What is quantization?" and "What is tinyML software?" sections above)

Had a stroll down memory lane earlier:

https://brainchipinc.com/brainchip-announces-unsupervised-visual-learning-achieved/

BRAINCHIP ANNOUNCES: UNSUPERVISED VISUAL LEARNING ACHIEVED

ALISO VIEJO, CA — (Marketwired) — 02/23/16 —
BrainChip Holdings Limited (ASX: BRN), developer of a revolutionary new Spiking Neuron Adaptive Processor (SNAP) technology that has the ability to learn autonomously, evolve and associate information just like the human brain, is pleased to report that it has achieved a further significant advancement of its artificial intelligence technology.

The R&D team in Southern California has completed the development of an Autonomous Visual Feature Extraction system (AVFE), an advancement of the recently achieved and announced Autonomous Feature Extraction (AFE) system. The AVFE system was developed and interfaced with the DAVIS artificial retina purchased from its developer, Inilabs of Switzerland. DAVIS has been developed to represent data streams in the same way as BrainChip’s neural processor, SNAP.

Highlights


  • Capable of processing 100 million visual events per second
  • Learns and identifies patterns in the image stream within seconds — (Unsupervised Feature Learning)
  • Potential applications include security cameras, collision avoidance systems in road vehicles and Unmanned Aerial Vehicles (UAVs), anomaly detection, and medical imaging
  • AVFE is now commercially available
  • Discussions with potential licensees for AVFE are progressing
AVFE is the process of extracting informative characteristics from an image. The system initially has no knowledge of the contents of an input stream. The system learns autonomously by repetition and intensity, and starts to find patterns in the image stream. BrainChip’s SNAP learns to recognize features within a few seconds, just like a human would when looking at a scene. This image stream can originate from any source, such as an image sensor like the DAVIS artificial retina, but also from other sources that are outside of human perception such as radar or ultrasound images.

In traditional systems, a computer program loads a single frame from a video camera and searches that frame for identifying features, predefined by a programmer. Each section of the image is compared to a template until a match is found and a percentage of the match is returned, along with its location. This is a cumbersome operation.

An AVFE test sequence was conducted on a highway in Pasadena, California for 78.5 seconds. An average event rate of 66,100 events per second was recorded. The SNAP spiking neural network learned the features of vehicles passing by the sensor within seconds (see Figure 1). It detected and started counting cars in real time. The results of this hardware demonstration show that SNAP can process events emitted by the DAVIS camera in real time and perform unsupervised learning of temporally correlated features.
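(For scale: 66,100 events/s × 78.5 s is roughly 5.2 million events processed over the run.)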

AVFE can be configured for a large number of uses including surveillance and security cameras, collision avoidance systems in road vehicles and Unmanned Aerial Vehicles (UAVs), anomaly detection, medical imaging, audio processing and many other applications.

Peter van der Made, CEO and inventor of the SNAP neural processor, said, “We are very excited about this significant advancement. It shows that BrainChip’s neural processor SNAP acquires information and learns without human supervision from visual input. The AVFE is remarkable, capable of high-speed visual perception and learning, and has widespread commercial applicability.”

#########################################################


https://brainchipinc.com/brainchip-...h-developer-of-the-dynamic-vision-sensor-dvs/

ALISO VIEJO, CA — (Marketwired) — 04/15/16 —
BrainChip, Inc., a wholly owned subsidiary of BrainChip Holdings Ltd (ASX: BRN), a developer of a revolutionary new Spiking Neuron Adaptive Processor (SNAP) technology that has the ability to learn autonomously and associate information just like the human brain, has signed a strategic, joint development and marketing agreement with Inilabs GmbH, the Swiss developer and manufacturer of a revolutionary new vision sensor that works like the human retina.

The dynamic vision sensor (DVS) works like the retina, only sending information when something is changing or moving. This is an exciting technology that responds much faster than a normal camera, has an extremely high intra-scene dynamic range allowing it to simultaneously observe objects in bright sunlight and dark shadow, and outputs events from individual pixels, similarly to the code that our retinae use; it is therefore a perfect match for BrainChip’s Neuro-Computing SNAP technology. BrainChip used the DAVIS vision sensor, which is an enhancement of the DVS, in the Autonomous Visual Feature Detection demonstration. The DVS and DAVIS vision sensors use the same AER (Address Event Representation) interface bus as BrainChip SNAP devices, which has become a neuromorphic-industry standard.

The Inilabs web site (http://Inilabs.com) describes this device as follows: “Only the local pixel-level changes caused by movement in a scene are transmitted — at exactly the time they occur. The result is a stream of events at microsecond time resolution, equivalent to or better than conventional high-speed vision sensors running at thousands of frames per second.” The DVS communicates in ‘spikes’, modelling the short electrical bursts that our nervous systems use; the BrainChip SNAP technology directly processes such spikes, and has the same low energy requirements. Inilabs GmbH is participating as a team member in the US DARPA Fast Light Autonomy UAV program.

Under the terms of the agreement, BrainChip and Inilabs GmbH will jointly, non-exclusively, promote each other’s products and services to create a referral relationship, under which each will refer potential customers to the other party in exchange for the opportunity to enhance each other’s selling proposition by offering the customers an integrated hardware offering, and increasing the chances of sales success in the emerging neuromorphic technologies markets.

“We are very excited to undertake this strategic partnership with Inilabs, whose DVS is well suited to detect fast moving objects. They are a partner with extensive industry experience that will help us accelerate our sales strategy. This cooperative agreement is a testimony to BrainChip’s ongoing efforts to expand marketing; it enables the marketing of visual feature-detection solutions used in drones, robotics and self-driving cars,” said Peter van der Made, Founder and CEO of BrainChip.

Sim Bamford, CTO of Inilabs, said: “We look forward to this collaboration, thereby expanding our business opportunities in neuromorphic technologies markets.”

About Inilabs GmbH

Inilabs GmbH is a spin-off from the Institute of Neuroinformatics (INI) at the University of Zurich and ETH Zurich, Switzerland. It develops and sells neuromorphic technologies such as the DVS and a dynamic audio sensor, which have been created by researchers at INI. Its founders were pioneers who established some of the foundations of this field, and continue to lead the world in applications of neuromorphic engineering. The DVS is now in use at over 100 laboratories and companies around the world, and is under active research in industries that include aerospace, automotive, consumer electronics, industrial vision and security. Additional information is available by visiting
http://inilabs.com/

These two press releases evidence BrainChip's depth of expertise with DVS and machine learning.

We have several years of cooperation with Inilabs and their DVS camera, which LdN described as a natural fit (paraphrase). The DVS emits "spikes" only when the amount of light falling on a pixel changes. Given that, in most scenes, large areas have uniform illumination/colour, the DVS inherently produces a sparse output data stream.
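To see where the sparsity comes from, here is a toy model of DVS-style event generation (my own illustrative sketch, not Inilabs' implementation): compare successive intensity samples per pixel and emit an event only where the log-intensity change crosses a threshold.

import numpy as np

def dvs_events(prev_frame, frame, threshold=0.2):
    """Emit (y, x, polarity) events where per-pixel log-intensity
    changed by more than `threshold` (toy DVS model)."""
    eps = 1e-6
    delta = np.log(frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[ys, xs]).astype(np.int8)  # +1 brighter, -1 darker
    return list(zip(ys, xs, polarity))

# A static 64x64 scene produces no events; a small brightening
# patch produces a handful - the stream is inherently sparse.
prev = np.full((64, 64), 0.5, dtype=np.float32)
curr = prev.copy()
curr[30:34, 40:44] = 0.9
events = dvs_events(prev, curr)
print(f"{len(events)} events out of {curr.size} pixels")  # 16 of 4096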


The use of "spikes" is the ultimate quantization.

BrainChip had mastered on-chip learning in 2016 - the rest of the world only began to wake up in 2020.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 35 users

Fox151

Regular
Just might be worthwhile going to jail in ACT for a stretch:

Canberra chef and former owner of Courgette Restaurant James Mussillon pleads guilty to perjury and money laundering


https://www.msn.com/en-au/news/aust...and-money-laundering/ar-AAVqRdi?ocid=msedgntp
Seems to be the way to do things in Canberra.

When I was there in 2010, The London burger and beers place did the best burgers n beers for $20 in Canberra. They had a massive, probably roid-munching, dude behind the counter and a bunch of shady characters who would sit in one of the booths. One day they had promo girls there and someone got a little inappropriate, so they got kicked out. They argued at the door and about 5 dudes who looked like they were cooks - but not of the hamburger variety - came bursting out of the kitchen to 'politely ask him to move along'. A couple of weeks later, they got raided by the AFP for the same thing.
 
  • Like
  • Fire
  • Haha
Reactions: 7 users

hotty4040

Regular
Likewise Chappy, it stands out imo. Haven't heard of Untether before - a competitor???

Anyway chippers, I'm off to Darwin this arvo for a bit of a break, with the one in charge (she who must be obeyed) by my side. I'm hoping BRN will continue the charge into the unknown and deliver something special in my time away. It is so very exciting, I have to admit, what is around the corner. Thanks to all you special researchers who constantly deliver what we need to hear, on a daily basis.

I'll see what's exciting in the "top end" and report on anything extraordinary, whilst supping the occasional "Great Northern" to keep cool.

Coming back on "the Ghan" so that'll be interesting too...

Take care fellow "chippers" and please keep the S/P moving up over the next few weeks; this trip has to be paid for whilst I'm away, hopefully.



Akida Ballista >>>>>>>>>> Cheers to all and please keep the faith, cos it will happen IMO <<<<<<<<


hotty
 
  • Like
  • Fire
Reactions: 23 users

VictorG

Member
I've always said that I will never sell my BRN shares. Sorry, but there's been a change of plans.
I will now sell just one share when the share price reaches US$500, buy a bottle of champagne for the same value, drink the bottle on my own, then share it over the grave of a BRN shorter.
 
  • Like
  • Haha
Reactions: 31 users

Mccabe84

Regular
I wasn’t going to buy any more BRN shares this week, but couldn’t help myself at these prices. I wonder what the share price will be when revenue starts coming in during the 2nd half of the year; surely almost double what it is now.
 
  • Like
  • Fire
Reactions: 25 users

HopalongPetrovski

I'm Spartacus!
Mccabe84 said: (post quoted above)
Someone had to... :)

 
  • Like
  • Haha
Reactions: 8 users
THIS IS HUGE NEWS - WHY WAS I NOT TOLD - BECAUSE I AM JUST A RETAIL SHAREHOLDER - THIS IS ANOTHER PILLAR SUPPORTING
MY FUTURE GENERATIONAL WEALTH TRAJECTORY.

"AKIDA NET
Recently, we have developed a replacement for the popular MobileNet v1
model used as a backbone in many applications that we call AkidaNet. AkidaNet’s architecture utilizes the Akida hardware more efficiently. Some of our preliminary results are shown below for object classification, face recognition, and face detection. In many cases, switching from MobileNet v1 to AkidaNet results in a slight increase in speed and accuracy accompanied by a 15% to 30% decrease in power usage."

How mind-blowingly important is a 30% decrease in already ludicrously low power usage at the Edge?

I think we need to start a list of those who want to purchase an EQXX.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 73 users
FF said: (post quoted in full above)
Sorry, in my excitement I forgot that you might not have read Fullmoonfever's post under Breaking News, but this is the link he posted:
 
  • Like
  • Fire
Reactions: 22 users

Foxdog

Regular
Mccabe84 said: (post quoted above)
Same, just grabbed another 5k worth at $0.98. I maintain that anything under $1.00 will look super cheap in 6 months' time. 🤞 IMO
 
  • Like
Reactions: 15 users
FF said:
That explains EVs, but it does not explain the EQXX leaping from nowhere onto the drawing board, from a position where Mercedes were saying "we will keep doing our ICE and build a few token EV models".

My opinion only DYOR
FF

AKIDA BALLISTA
There was probably a large readjustment in strategic planning when large countries and blocs, i.e. China, the Euro zone etc., started saying that they were moving to a high percentage of EVs by 2030, which is a very short timeframe for a full changeover. The mass dollar signs start appearing for the mass changeover that is required... and of course Akida allowed them to efficiently enact the strategic change of thought 😉
 
  • Like
  • Love
Reactions: 8 users