BRN Discussion Ongoing

Krustor

Regular
Totally agree Dell is a badass. Jensen made a big deal by calling Michael Dell out, then Synopsys, Cadence, and Ansys (which Synopsys is buying).

But we still need to talk about Bitcoin! It is the 15-year winner and the 10-year winner. Nvidia won the last five years, love them. Bitcoin is international, unique. The ETF approval is freaky.

No, we don't need to talk about Bitcoin here. This is the BRN forum - if you need to talk about Bitcoin please feel free to spam in any Bitcoin forum of your choice.

Slowly but surely your non-BRN-related spamming is getting out of hand here.
 
  • Like
  • Love
  • Fire
Reactions: 38 users
No, we don't need to talk about Bitcoin here. This is the BRN forum - if you need to talk about Bitcoin please feel free to spam in any Bitcoin forum of your choice.

Slowly but surely your non-BRN-related spamming is getting out of hand here.
[GIF attachment]
 
  • Haha
Reactions: 8 users

CHIPS

Regular
Brainchip and Friends. Seems like this forum is for Bullish Brainchip, and Friends. We all have other stocks.

Why would a Bearish person even show up here, as they should be focused on what they are bullish on.

Brainchip and Friends is all things AI moving the eco-system forward.

Hopefully Brainchip grabs a chunk, but no matter what we are also buying all the Friends (AVGO, ANET, CDNS, GOOGL, ARM, etc.).

We love friends! Maybe Brainchip will shine, but we know the group will.

What the hell are you taking, drinking or consuming??? Can you please stop this mass posting?
Sorry, but I don't even read your posts anymore because they do not give me any BrainChip information and there are too many.
 
  • Like
  • Haha
  • Love
Reactions: 28 users

CHIPS

Regular
Totally agree Dell is a badass. Jensen made a big deal by calling Michael Dell out, then Synopsys, Cadence, and Ansys (which Synopsys is buying).

But we still need to talk about Bitcoin! It is the 15-year winner and the 10-year winner. Nvidia won the last five years, love them. Bitcoin is international, unique. The ETF approval is freaky.


NO, we do NOT have to talk about Bitcoin or any other stocks because this here is only about BrainChip and its connection to other companies!!
 
  • Like
Reactions: 17 users
Has anyone noticed the new job advertisement for a HR Manager at Brainchip?
The fact that Sheila is the one doing the hiring suggests it is a new role that would assist with the increased level of hiring/recruitment of staff as the company continues to grow. I don't believe they would go and hire a second HR manager if there were minimal growth on the horizon :)



[screenshot of the job advertisement]
 
  • Like
  • Fire
  • Love
Reactions: 48 users
What the hell are you taking, drinking or consuming??? Can you please stop this mass posting?
Sorry, but I don't even read your posts anymore because they do not give me any BrainChip information and there are too many.
Yep. I grew tired of the never-ending nonsensical rubbish posts unrelated to Brainchip. Not sure what planet Stuart is on, but his ADHD posts have become too much to try to comprehend. He is a born-in-the-USA personality and therefore in his world we all must live the dream. Hey, there is another alternative. The Aussie way. She'll be right, mate! Yeh nah, fck it. Whatever.

Relax Stuart. Going to be okey-dokey. 😜
 
  • Like
  • Haha
  • Love
Reactions: 21 users

ndefries

Regular
Has anyone noticed the new job advertisement for a HR Manager at Brainchip?
The fact that Sheila is the one doing the hiring suggests it is a new role that would assist with the increased level of hiring/recruitment of staff as the company continues to grow. I don't believe they would go and hire a second HR manager if there were minimal growth on the horizon :)



[screenshot of the job advertisement]
She must be leaving and recruiting for her replacement. No way we need another HR person at this senior level for a headcount under 100. If the role grew, you would start with a more junior burger to assist.

Could be wrong but time will tell.
 
  • Like
Reactions: 11 users

CHIPS

Regular
She must be leaving and recruiting for her replacement. No way we need another HR person at this senior level for a headcount under 100. If the role grew, you would start with a more junior burger to assist.

Could be wrong but time will tell.
She never stayed long in her previous positions so I guess you are right.
 

IloveLamp

Top 20
[image attachment]
 
  • Like
  • Fire
Reactions: 4 users

charles2

Regular

SiMa.ai secures $70M funding to introduce a multimodal GenAI chip​

Jagmeet Singh (@jagmeets13) · 11:00 PM GMT+11, April 4, 2024
[Image: SiMa.ai founder Krishna Rangasayee. Image Credits: SiMa.ai]
SiMa.ai, a Silicon Valley–based startup producing embedded machine learning (ML) system-on-chip (SoC) platforms, today announced that it has raised a $70 million extension funding round as it plans to bring its second-generation chipset, specifically built for multimodal generative AI processing, to market.
According to Gartner, the market for AI-supporting chips globally is forecast to more than double by 2027 to $119.4 billion compared to 2023. However, only a few players have started producing dedicated semiconductors for AI applications. Most of the prominent contenders initially focused on supporting AI in the cloud. Nonetheless, various reports predict significant growth in the market for AI on the edge, which means the hardware processing AI computations sits closer to the data-gathering source than a centralized cloud. SiMa.ai, named after “seema,” the Hindi word for “boundary,” strives to leverage this shift by offering its edge AI SoC to organizations across the industrial manufacturing, retail, aerospace, defense, agriculture and healthcare sectors.
The San Jose–headquartered startup, which targets the market segment between 5W and 25W of energy usage, launched its first ML SoC to bring AI and ML through an integrated software-hardware combination. This includes its proprietary chipset and no-code software called Palette. The combination has already been used by over 50 companies globally, Krishna Rangasayee, the founder and CEO of SiMa.ai, told TechCrunch.

The startup touts that its current generation of the ML SoC delivered the highest FPS/W results on the MLPerf benchmark across the MLPerf Inference 4.0 closed, edge and power division categories. However, the first-generation chipset was focused on classic computer vision.
As the demand for GenAI is growing, SiMa.ai is set to introduce its second-generation ML SoC in the first quarter of 2025 with an emphasis on providing its customers with multimodal GenAI capability. The new SoC will be an “evolutionary change” over its predecessor with “a few architectural tunings” over the existing ML chipset, Rangasayee said. He added that the fundamental concepts would remain the same.
The new GenAI SoC would adapt to any framework, network, model and sensor — similar to the company’s existing ML platform — and will also be compatible with any modality, including audio, speech, text and image. It would work as a single-edge platform for all AI across computer vision, transformers and multimodal GenAI, the startup said.
“You cannot predict the future, but you can pick the vector and say, hey, that’s the vector I want to bet on. And I want to continue evolving around my vector. That’s kind of the approach that we took architecturally,” said Rangasayee. “But fundamentally, we really haven’t walked away or had to drastically change our architecture. This is also the benefit of us taking a software-centric architecture that allows more flexibility and nimbleness.”

SiMa.ai has Taiwan’s TSMC as the manufacturing partner for both its first- and second-generation AI chipsets and Arm Holdings as the provider for its compute subsystem. The second-generation chipset will be based on TSMC’s 6nm process technology and include Synopsys EV74 embedded vision processors for pre- and post-processing in computer vision applications.

The startup considers incumbents like NXP, Texas Instruments, STMicro, Renesas and Microchip Technology, and Nvidia, as well as AI chip startups like Hailo, among the competition. However, it considers Nvidia as the primary competitor — just like other AI chip startups.
Rangasayee told TechCrunch that while Nvidia is “fantastic in the cloud,” it has not built a platform for the edge. He believes that Nvidia lacks adequate power efficiency and software for edge AI. Similarly, he asserted that other startups building AI chipsets do not solve system problems and are just offering ML acceleration.
“Amongst all of our peers, Hailo has done a really good job. And it’s not us being better than them. But from our perspective, our value proposition is quite different,” he said.
The founder continued that SiMa.ai delivers higher performance and better power efficiency than Hailo. He also said SiMa.ai’s system software is quite different and effective for GenAI.

“As long as we’re solving customer problems, and we are better at doing that than anybody else, we are in a good place,” he said.
SiMa.ai’s fresh all-equity funding, led by Maverick Capital and with participation from Point72 and Jericho, extends the startup’s $30 million Series B round, initially announced in May 2022. Existing investors, including Amplify Partners, Dell Technologies Capital, Fidelity Management and Lip-Bu Tan also participated in the additional investment. With this fundraising, the five-year-old startup has raised a total of $270 million.
The company currently has 160 employees, 65 of whom are at its R&D center in Bengaluru, India. SiMa.ai plans to grow that headcount by adding new roles and extending its R&D capability. It also wants to develop a go-to-market team for Indian customers. Further, the startup plans to scale its customer-facing teams globally, starting with Korea and Japan, and then in Europe and the U.S.
“The computational intensity of generative AI has precipitated a paradigm shift in data center architecture. The next phase in this evolution will be widespread adoption of AI at the edge. Just as the data center has been revolutionized, the edge computing landscape is poised for a complete transformation. SiMa.ai possesses the essential trifecta of a best-in-class team, cutting-edge technology, and forward momentum, positioning it as a key player for customers traversing this tectonic shift. We’re excited to join forces with SiMa.ai to seize this once-in-a-generation opportunity,” said Andrew Homan, senior managing director at Maverick Capital, in a statement.
Reading this makes me even more sure that NVDA or a heavyweight competitor will offer an outrageous amount (for us Brainchip shareholders, that is) which for the buyer will remain a pittance.

Things are happening at light speed in the edge/ml/neuromorphic space and money seems abundant.

How does that saying go...."You snooze you........."

Oh to live to see that day!
 
  • Like
  • Fire
Reactions: 14 users

Tothemoon24

Top 20



What do early 20th-century tractors and AI and ML at the edge have in common? 🤔 More than you might think!

Just like tractors once revolutionized agriculture but faced implementation hurdles at the time, today’s AI and ML technologies are facing challenges on the journey to widespread adoption.

From hardware fragmentation to model lifecycle hurdles and performance optimization, new challenges are holding developers back from scaling the AI opportunity. But overcoming these challenges is possible by building #onArm.

In our latest blog, Paul Williamson shares valuable insights on deploying AI and ML at the edge on Arm to unlock a world of innovation and possibility. 👉 https://okt.to/GYBolQ

📅 If you're heading to #EmbeddedWorld next week, join us in Hall 4, Stand 504 as we explore the practical solutions and showcase the advantages of building #onArm. #EW24

From Possibility to Reality: Enabling AI and ML at the Edge with Arm​

The transformative opportunities and challenges from deploying AI and ML at the edge across IoT markets.
By Paul Williamson, SVP and GM of the IoT LoB, Arm

When you think of artificial intelligence (AI) and machine learning (ML), you always think of tractors, right? Of course not, but this comparison in the Economist can be helpful.
Here’s why. Despite tractors’ heralded launch in the early 20th century, farmers were slow to embrace the technology. Only 23% of U.S. farms used them by 1940. Why the slow uptake? Limited functionality, reliability issues, maintenance challenges, and prohibitive costs for the most part. But despite the challenges, most farmers could see the transformation these machines would bring once the bugs were ironed out and they became more economically attractive.
The pace of technological adoption today far outstrips that of the 20th-century agriculture sector, but the lessons learned from the evolution of the tractor are relevant to the early adoption of AI and ML at the edge. Put another way, the competition to invest in AI systems must move from amazing possibilities (1940s farmers gazing admiringly at tractors) to realistic implementation plans – for example, increased farming efficiency, diversification and intensification of agriculture, and development of specialized attachments and services for tractors.
To get to AI and ML at the edge at scale, however, several obstacles currently stand in the way of widespread adoption.

A fragmented landscape​

One of the challenges of deploying AI and ML at the edge is the diversity of hardware available for different applications and use cases. Often, the variety of hardware options means that developers must tailor their models and code for the specific hardware they are targeting, which adds complexity and overhead to the development process.
In reality – just as in mobile and high performance IoT – the majority of ML models run on CPUs. The common denominator in IoT is the Arm architecture. In 2020, Arm launched Helium as a seamless extension to the Cortex-M instruction set, enabling ML acceleration on ultra-low-power devices. With Helium, developers can achieve up to 15x more performance and 5x more energy efficiency for ML applications compared to previous Cortex-M generations. More than 35 partners are already shipping devices with Helium technology, including NXP, Renesas, Ambiq, and Alif. Embedded World 2024 will see even more devices built on Helium, as we enter a decade of AI innovation in embedded systems.
The natural progression in this performance journey is the Arm family of Ethos NPUs, designed to deliver the highest performance and efficiency for ML workloads at the edge. Ethos NPUs are scalable and configurable, offering different levels of performance and power consumption for different applications, such as computer vision, natural language processing, speech recognition, and recommendation systems. Ethos NPUs can be integrated with any Arm-based system-on-chip (SoC), providing a seamless solution for ML acceleration on devices ranging from smart speakers to security cameras.
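For a sense of what targeting an Ethos-U NPU looks like in practice, here is a minimal sketch that drives Arm's Vela compiler (the ethos-u-vela pip package) from Python. The model file name, output directory and accelerator configuration are placeholder assumptions; the int8 model could come from a post-training quantization flow like the one sketched under "AI model lifecycle" below.

```python
# Minimal sketch: compile an already-quantized int8 .tflite model for an
# Ethos-U55 NPU with Arm's Vela compiler (pip install ethos-u-vela).
# File names and the accelerator configuration are illustrative assumptions.
import subprocess

subprocess.run(
    [
        "vela",
        "model_int8.tflite",                      # int8 TensorFlow Lite model
        "--accelerator-config", "ethos-u55-128",  # target NPU configuration
        "--output-dir", "vela_out",               # optimized model + reports
    ],
    check=True,
)
```

Operators the compiler cannot map onto the NPU are left to run on the host Cortex-M CPU, which is where Helium-accelerated kernels come into play.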

AI model lifecycle​

Another challenge is the lifecycle of AI models, which includes training, tuning, and deployment. To deploy AI models at the edge, developers need to consider how to optimize the models for the specific hardware they are targeting. This involves choosing the right model architecture, data format, quantization scheme and inference engine that can run efficiently on the embedded device. Moreover, developers need to select an inference engine that can leverage the hardware features of the device, such as an Ethos NPU or Helium technology, to accelerate the execution of the model.
Arm makes it easy to use popular ML frameworks, such as PyTorch and ExecuTorch, on embedded devices. For example, Arm Keil MDK, the integrated development environment (IDE) that simplifies the development and debugging of embedded applications, supports CMSIS Packs, which provide a common abstraction layer for device capabilities and ML models. Simplified development flows are bringing AI within reach on a single toolchain and single proven architecture, with more than 100 billion Cortex-M devices shipped to date amid a global ecosystem of more than 100 ML partners.
By using Arm solutions, developers can reduce the time and cost of developing ML applications for embedded devices and achieve better performance and efficiency.
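As a concrete illustration of the quantization step mentioned above, here is a minimal post-training full-integer quantization sketch using the TensorFlow Lite converter. The SavedModel path, input shape and random calibration data are placeholders, not anything Arm-specific.

```python
# Minimal post-training full-integer quantization sketch (TensorFlow Lite).
# "saved_model_dir" and the calibration samples are placeholders; any small
# trained Keras/TensorFlow model for your task would slot in here.
import numpy as np
import tensorflow as tf

def representative_data():
    # Calibration normally uses a few hundred real input samples; random data
    # here only keeps the sketch self-contained.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force int8 weights and activations so CMSIS-NN (Helium) or an Ethos-U NPU
# can execute the kernels without falling back to float.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```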

Working with embedded devices​

One of the main challenges of embedded development is to optimize the performance and efficiency of ML applications on resource-constrained devices. Unlike cloud-based solutions, which can leverage the abundant computing power and memory of servers, embedded devices have to run ML models locally and often under strict power and latency constraints. To achieve desired ML performance developers often have to compromise on price or power consumption in the first iteration of the product.
Arm Virtual Hardware, which offers cloud-based simulations of Arm-based systems, is an innovative solution that allows developers to create and test ML applications without having to rely on physical hardware. It integrates seamlessly with MLOps solutions, such as AWS SageMaker and Google Cloud AI Platform, to streamline the deployment and management of ML models across devices. These platforms provide tools and services for automating the entire ML lifecycle, from data management and model training to deployment and monitoring. By combining Arm Virtual Hardware and MLOps solutions, developers can achieve faster time to market, lower costs and better scalability for their embedded ML applications.
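Tooling aside, the "validate before committing to hardware" loop described here can be approximated on a development host by running the quantized model through the TensorFlow Lite interpreter. A rough sketch, reusing the placeholder model name from the quantization example above:

```python
# Host-side sanity check of a quantized model before targeting real or
# virtual hardware. "model_int8.tflite" is the placeholder model from the
# quantization sketch above; real validation would replay a labelled test set.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

sample = np.random.randint(-128, 128, size=inp["shape"], dtype=np.int8)  # dummy int8 input
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()
print("output shape:", interpreter.get_tensor(out["index"]).shape)
```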

Deploying and securing intellectual property​

Deploying and securing valuable intellectual property across millions of endpoints is a major challenge. This stems from the fact that ML models are essentially mathematical functions that can be extracted and replicated by anyone who has access to the device or the data stream. It exposes the devices and the data to potential tampering, manipulation, or malicious attacks that could compromise their functionality and reliability. Developers, therefore, need to ensure that their ML models are protected and cannot be easily reverse engineered.
One of the ways that Arm helps developers deploy and secure their ML models on edge devices is by working within the framework provided by PSA Certified. Based on the Platform Security Architecture (PSA) – best practices and specifications developed by Arm and its partners to help secure IoT devices – PSA Certified enables users to verify and trust the security of IoT products, and comply with regulations and standards.

AI at the embedded edge​

The emergence of AI and ML is reshaping the landscape of embedded systems, and this will be on full display next week at Embedded World in Nuremberg, an event that’s quickly evolving into what you might call “Edge AI World.”
Last year, we and our partners talked about the myriad ways some familiar challenges of embedded development were being tackled – whether it was the rise of development solutions such as Arm Virtual Hardware, the emergence of new industry standards, or the adoption of the Arm architecture to enable flexibility, efficiency and minimize security risk.
At this year’s Embedded World, we confront the dizzying pace of innovation of AI and ML at the edge and the consequences for the Arm developer ecosystem. Consider that with the rise of interconnected devices at the IoT edge, there’s an exponential surge in data, providing ample opportunity for AI algorithms to process and derive real-time insights. And while the spotlight often shines on generative AI and large language models (LLMs), smaller models are making their mark by being deployed on edge IoT devices, such as Raspberry Pi. Transformer network models are also making waves at the edge, setting themselves apart from conventional convolutional neural networks (CNNs) by their inherent flexibility.
The accelerated pace of change is breathtaking. We at Arm are excited to play a vital role in enabling AI in high-performance IoT devices and systems. Our vision is to deliver intelligent and secure devices and systems that can empower innovation and transform lives. Arm remains committed to assisting developers in tackling challenges by offering:
  • Optimized hardware and software for AI in high-performance IoT that carefully balance performance, power consumption, cost-effectiveness, security and scalability.
  • Streamlined tools and platforms that democratize the development and deployment of AI in high-performance IoT, empowering developers and system builders from diverse backgrounds to create and tailor solutions according to their needs.
  • Robust ecosystem support and strategic partnerships that drive the adoption and maximize the impact of AI in high-performance IoT, encouraging collaboration and co-creation across various stakeholders and industries.
These are the pillars of our vision for AI at the IoT edge, which we believe – in the same way the tractor revolutionized farming and the food chain – will transform the way we interact with the physical world and unlock new possibilities for human creativity and innovation.
Join us at Embedded World at the Nuremberg Messe, Hall 4, Stand 504.
 
  • Like
  • Fire
  • Love
Reactions: 18 users

Frangipani

Regular

Hi Pmel,

my guess is that to quite a few forum readers your post was suggestive of a Samsung employee with some sort of inside knowledge regarding current or future implementation of our disruptive tech, such as a hardware or software engineer busy developing the next generation of Samsung Galaxy phones?

To put it into perspective:

Michael Novak works as an Inside Sales Manager (so he is not in R&D) for Samsung SDI Europe and not for Samsung Electronics, which is a separate - and the most famous and lucrative - company within the Samsung Group, South Korea’s largest chaebol (business conglomerate run by an individual or family). The Samsung Group consists of about 25 affiliated companies:


[attached screenshots]




Fair chance he’s a shareholder..

I second that.
 

  • Like
  • Fire
  • Love
Reactions: 12 users
  • Like
  • Fire
  • Love
Reactions: 8 users

TheFunkMachine

seeds have the potential to become trees.
[attached screenshot]

Anil Mankar personally congratulates and asks to sync up with the Vice President of Engineering at Infineon Technologies. Sounds like an electric date to me ;)
 
  • Like
  • Love
  • Fire
Reactions: 34 users

equanimous

Norse clairvoyant shapeshifter goddess
NO, we do NOT have to talk about Bitcoin or any other stocks because this here is only about BrainChip and its connection to other companies!!
Actually, there are a lot of use cases which BRN can help with.

BrainChip's Akida neuromorphic processor can contribute significantly to blockchain technology and its applications, particularly in the areas of security, efficiency, and innovation. By leveraging the Akida processor's ultra-low power, fully digital, event-based, neuromorphic AI capabilities, it can enhance various aspects of blockchain technology:

Security: The Akida processor's event-based processing can be used to improve the security of blockchain networks. For example, it can be used to detect anomalies and potential threats in real-time, which is crucial for maintaining the integrity of a blockchain.
Efficiency: The ultra-low power consumption of the Akida processor makes it ideal for running blockchain nodes and mining operations. This can help reduce the energy costs associated with maintaining a blockchain network, making it more sustainable and efficient.
Innovation: BrainChip's partnership with MYWAI aims to deliver next-generation Edge AI solutions leveraging neuromorphic compute. This collaboration can lead to innovative applications of blockchain technology, such as integrating AI with blockchain for enhanced security, transparency, and automation in various industries.
Scalability: The Akida processor's ability to handle complex computations at the Edge can help address the scalability challenges faced by blockchain networks. By distributing the computational load across the network, it can help increase the transaction throughput and overall efficiency of the blockchain.
Smart Contracts: The Akida processor can potentially be used to develop and execute more complex and efficient smart contracts on blockchain platforms. This can enable a broader range of decentralized applications (DApps) and improve the functionality of existing ones.
Data Integrity: The Akida processor's capabilities in processing and analyzing data can be used to ensure the integrity of data stored on a blockchain. This is particularly important in industries where data tampering or manipulation can have serious consequences, such as healthcare, finance, and supply chain management.
Privacy: The Akida processor's event-based processing can also be used to enhance the privacy features of blockchain networks. By processing and analyzing data locally, it can help reduce the amount of sensitive information that needs to be stored or transmitted on the blockchain, thereby improving privacy and security.
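To make the "Security" point above a little more concrete, below is a purely speculative sketch of reconstruction-based anomaly detection with a tiny autoencoder, the kind of small workload the post imagines offloading to Akida. The feature set, model size and threshold are invented, and the Akida-specific step (MetaTF's quantize/convert flow) is only noted in a comment because its exact API differs between releases.

```python
# Speculative sketch of the "Security" point above: a tiny autoencoder flags
# unusual per-node traffic by reconstruction error. Feature set, model size and
# threshold are invented. To run such a network on Akida hardware, it would then
# be quantized and converted with BrainChip's MetaTF tooling (cnn2snn /
# quantizeml), whose exact API varies by release and is not shown here.
import numpy as np
import tensorflow as tf

N_FEATURES = 16  # e.g. tx rate, peer count, mean message size, ... (placeholders)

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FEATURES,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(N_FEATURES),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train on traffic considered "normal" (random stand-in data here).
normal_traffic = np.random.rand(1000, N_FEATURES).astype(np.float32)
autoencoder.fit(normal_traffic, normal_traffic, epochs=5, verbose=0)

def is_anomalous(sample, threshold=0.1):
    """Flag a node whose traffic reconstructs poorly (placeholder threshold)."""
    recon = autoencoder.predict(sample[None, :], verbose=0)[0]
    return float(np.mean((recon - sample) ** 2)) > threshold
```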
 
  • Like
  • Fire
  • Love
Reactions: 19 users

equanimous

Norse clairvoyant shapeshifter goddess
Actually, there are a lot of use cases which BRN can help with.

BrainChip's Akida neuromorphic processor can contribute significantly to blockchain technology and its applications, particularly in the areas of security, efficiency, and innovation. By leveraging the Akida processor's ultra-low power, fully digital, event-based, neuromorphic AI capabilities, it can enhance various aspects of blockchain technology:

Security: The Akida processor's event-based processing can be used to improve the security of blockchain networks. For example, it can be used to detect anomalies and potential threats in real-time, which is crucial for maintaining the integrity of a blockchain.
Efficiency: The ultra-low power consumption of the Akida processor makes it ideal for running blockchain nodes and mining operations. This can help reduce the energy costs associated with maintaining a blockchain network, making it more sustainable and efficient.
Innovation: BrainChip's partnership with MYWAI aims to deliver next-generation Edge AI solutions leveraging neuromorphic compute. This collaboration can lead to innovative applications of blockchain technology, such as integrating AI with blockchain for enhanced security, transparency, and automation in various industries.
Scalability: The Akida processor's ability to handle complex computations at the Edge can help address the scalability challenges faced by blockchain networks. By distributing the computational load across the network, it can help increase the transaction throughput and overall efficiency of the blockchain.
Smart Contracts: The Akida processor can potentially be used to develop and execute more complex and efficient smart contracts on blockchain platforms. This can enable a broader range of decentralized applications (DApps) and improve the functionality of existing ones.
Data Integrity: The Akida processor's capabilities in processing and analyzing data can be used to ensure the integrity of data stored on a blockchain. This is particularly important in industries where data tampering or manipulation can have serious consequences, such as healthcare, finance, and supply chain management.
Privacy: The Akida processor's event-based processing can also be used to enhance the privacy features of blockchain networks. By processing and analyzing data locally, it can help reduce the amount of sensitive information that needs to be stored or transmitted on the blockchain, thereby improving privacy and security.
I strongly believe BRN will play a crucial role with blockchain, as stated above. Remember when Lehman Brothers went bust and no one knew who owned what? Well, there is a need for a new ledger system.

The banks and corporations are definitely going to want the best security and privacy, and BRN is the best solution for this.

BlackRock CEO Larry Fink said that "the next generation for markets, the next generation for securities, will be tokenization of securities."

In the world of blockchain, tokenization refers to a process where a digital representation of an asset is created on a blockchain, authenticating its transaction and ownership history.

This approach enables a different way to trade assets like stocks, bonds, real estate, or even alternative assets like land, wine, or art, allowing the transfers to be visible on a public ledger.
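For readers unfamiliar with the mechanics, here is a toy, purely illustrative sketch of a hash-chained ledger that records the ownership history of a single tokenized asset. Real tokenization platforms add consensus, digital signatures and smart contracts, none of which are modelled here; the asset and party names are made up.

```python
# Toy illustration only: an append-only, hash-chained ledger recording who owns
# a tokenized asset. Each entry commits to the previous entry's hash, so the
# ownership history cannot be silently rewritten without breaking the chain.
import hashlib
import json
import time

class TokenLedger:
    def __init__(self, asset_id, issuer):
        self.entries = []
        self._append({"asset": asset_id, "owner": issuer, "note": "issued"})

    def _append(self, record):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"record": record, "prev": prev_hash, "ts": time.time()}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def transfer(self, new_owner):
        last = self.entries[-1]["record"]
        self._append({"asset": last["asset"], "owner": new_owner, "note": "transfer"})

    def current_owner(self):
        return self.entries[-1]["record"]["owner"]

ledger = TokenLedger("BOND-2024-001", issuer="AliceBank")  # made-up asset/parties
ledger.transfer("BobFund")
print(ledger.current_owner())  # BobFund; the full history stays in ledger.entries
```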

Speaking at a New York Times DealBook event, Fink argued that tokenization will provide “instantaneous settlement” and “reduced fees.”
 
  • Like
  • Love
  • Fire
Reactions: 14 users

7für7

Regular
No, we don't need to talk about Bitcoin here. This is the BRN forum - if you need to talk about Bitcoin please feel free to spam in any Bitcoin forum of your choice.

Slowly but surely your non-BRN-related spamming is getting out of hand here.
How funny 🤣 he was one of the guys who complained about me a while ago because I posted Jesus… even though my post was related to BRN.
 
  • Haha
Reactions: 4 users

Frangipani

Regular
How funny 🤣 he was one of the guys who complained about me a while ago because I posted Jesus… even though my post was related to BRN.

Hi 7für7,

nope, that’s a clear case of mistaken identity…
Only the initial s and the middle 8 are a match.
The poster you have in mind recently decided to leave TSE altogether and confine himself to HC, shortly before the one you confuse him with returned to TSE from a one-year sabbatical.

Speaking of HC:
… but obviously someone is manipulating posts from me without my permission! This is really almost criminal
Did you ever find out who the culprit was?! 🤭

If not, here is your answer:

[attached screenshot]


Schönen Sonntag
Frangipani
 
  • Like
Reactions: 1 users