BRN Discussion Ongoing

charles2

Regular
AI chip startup to go public...focus on the edge

Blaize

 
  • Like
  • Wow
Reactions: 7 users

Bravo

If ARM were an arm, BRN would be its biceps💪!


Can Arm’s Mobile Lead Translate to AI? Chip Designer Bets on Efficiency

BY PYMNTS | JANUARY 13, 2025


British chip designer Arm Holdings is eyeing robust opportunities in AI as it strives to maintain its near-total monopoly in mobile devices. According to Arm Chief Commercial Officer Will Abbey, the company is going full steam ahead into powering artificial intelligence devices — and sees its power-efficient design as a competitive advantage, especially since AI is an infamous power guzzler.

In an interview with PYMNTS, Abbey said Arm chip designs focus on power-efficient, high-performance central processing units (CPUs). This has been Arm’s hallmark since its inception, leading to its dominance in the mobile device market. Arm’s chip designs are used by Apple, Nvidia, Google, Microsoft, Amazon, Samsung, Intel, Qualcomm and others.
However, the company’s reach extends far beyond smartphones. Abbey outlined Arm’s four main business lines: client devices (anything with a display), infrastructure (data centers), IoT (Internet of Things) and autonomy (including electric vehicles and robotics). Across these sectors, Arm’s designs have shipped in nearly 300 billion devices, with the company delivering about 35 billion devices each quarter.
In 2020, Nvidia offered to acquire Arm for $40 billion from Japanese tech giant SoftBank in a cash and stock deal to create a chip powerhouse. But British regulators scuttled the deal due to antitrust concerns and the transaction was terminated two years later. In September 2023, Arm went public, with SoftBank retaining a 90% stake.
As the AI revolution heated up, Arm executives said in a 2024 earnings call that they see “strong momentum and tailwinds from all things AI” across devices ranging from complex Nvidia Superchips, which combine a GPU and CPU, to edge devices like Samsung smartphones.
Arm’s planned expansion into AI comes at a time when consumers and businesses are increasingly comfortable using the technology. In fact, PYMNTS data shows most business leaders believe generative AI will have positive impacts on workplaces.

Unique Business Model​

Arm’s business model is unique in the semiconductor industry. Rather than manufacturing chips, the company licenses its designs to partners who then produce the actual silicon. Clients can further customize chips with Arm designs. This flexibility lets Arm broadly influence the semiconductor ecosystem, with partnerships spanning over 1,000 companies including major foundries like TSMC, Samsung and Intel.
This approach has allowed Arm to achieve remarkable market penetration. In the mobile sector, Arm boasts a staggering 99% market share. The company is also making significant inroads in other areas, including PCs, automotive applications and data centers.
The company’s market position is growing in other areas as well. Beyond mobile devices, Arm is gaining ground in the PC market, particularly with AI-enabled Windows PCs. In the data center space, where Nvidia leads in AI training, Arm’s CPUs also play a critical role alongside GPUs. The Grace Hopper Superchip exemplifies this partnership, combining Nvidia’s GPU technology with Arm’s CPU designs.
The automotive sector presents another growth opportunity as vehicles become more electronically sophisticated, with Arm’s designs found in applications ranging from body sensors to advanced driver assistance systems, according to Abbey.

Arm to Become a Chip Manufacturer?​

Asked about rumors that Arm is interested in becoming a chip manufacturer, Abbey declined to comment. Last May, Reuters reported that the SoftBank-controlled company plans to develop its own AI chips in 2025, with a prototype ready by spring. If Arm does indeed become a chipmaker, it would directly compete with many of its licensees.

When it comes to AI, Arm is leveraging its strengths in CPU design to address the growing demand for AI-capable devices. Abbey said that AI fundamentally relies on matrix multiplications, which CPUs have always been adept at handling. With the latest version of its architecture, Arm has introduced special instructions to make these operations even more efficient, delivering better performance while using less power.
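To make the matrix-multiplication point concrete, here is a minimal sketch of my own (not Arm's code): every dense neural-network layer reduces to repeated multiply-accumulate (MAC) operations, which is exactly the inner loop that CPU vector/matrix instruction extensions are designed to speed up.

```python
# Naive matrix multiply: the core AI workload Abbey refers to.
# Each inner-loop step is one multiply-accumulate (MAC) -- the
# operation that special CPU instructions accelerate in hardware.

def matmul(a, b):
    """Multiply an m x k matrix by a k x n matrix (lists of lists)."""
    m, k, n = len(a), len(b), len(b[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):
                acc += a[i][p] * b[p][j]  # one MAC
            out[i][j] = acc
    return out

# A 2x2 example: [[1,2],[3,4]] @ [[5,6],[7,8]] -> [[19,22],[43,50]]
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

A hardware MAC unit fuses the multiply and add of that inner loop into a single low-power step, which is where the efficiency gains come from.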

This focus on power efficiency is crucial as AI workloads become more prevalent and energy-intensive. Abbey emphasized the importance of balancing performance with power consumption: “We as a society are going to have to make informed choices of, ‘do we want to keep our lights on, or do we want to keep compute taking place for AI.’” He said Arm’s approach, which combines high performance with power efficiency, positions the company well to address these challenges.


Arm’s strategy also relies heavily on its robust software ecosystem. With a community of 20 million developers creating content for Arm-based devices, the company has built a ‘flywheel effect’ that drives adoption across various markets, Abbey said. This ecosystem is particularly important as Arm expands its presence in areas like data centers, where it’s competing with established players like Intel and AMD.
When asked about the recent formation of an x86 advisory group by Intel, AMD and other tech giants, which would compete with Arm’s architecture in AI, Abbey said he viewed the move as an endorsement of Arm’s long-standing approach to providing choice and flexibility in the market. “We’re a big believer in standards. We’re a big believer in choice,” Abbey said, adding that “competition is healthy for the whole of the ecosystem.”
The x86 chip architecture has been the foundation of modern computing for over four decades. Last October, Intel, AMD, Dell, Meta, Lenovo, Google, HP, Microsoft, Oracle, Red Hat, Broadcom and others came together to form this advisory group to ensure interoperability across hardware and software. Broadcom CEO Hock Tan said the computing industry is at a “crossroads” and x86 architectural decisions made today will affect systems for decades.

Looking ahead, Abbey identified the shift of AI processing from cloud to edge devices as a key trend, emphasizing the need to balance performance with power efficiency and security. This transition presents new challenges in protecting personal data while delivering AI capabilities directly on consumer devices.

Arm is focused on continuing to improve the power efficiency and performance of its designs while expanding its software development community. Abbey sees these three elements — software development, power efficiency and performance — as critical to “bringing AI to the masses.”
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 29 users

Guzzi62

Regular
AI chip startup to go public...focus on the edge

Blaize

Wow, another AI edge start-up coming on-line with afterburners blazing.

Well it's a big market, I just hope BRN's products are better.

https://www.blaize.com/
 
  • Like
  • Haha
Reactions: 7 users

Diogenese

Top 20
View attachment 75946





Sorry, I don’t have an update on the NASA rover project because my monthly meeting at NASA HQ has been postponed until next week.

However, I have it on very good authority, that “the powers that be” are considering naming the next mission “Bravo Mars“.

Whilst I’m truly humbled by this honour, I won’t be disappointed if it doesn’t come to fruition. I know that the top brass have expressed concerns about keeping my identity a secret, so it wouldn’t surprise me should they try to keep me under wraps, so to speak.

Whilst I can’t shed any further light on the rover project per se, I can provide an update on the status of my ignore button, which I will be launching in …3…2….1…! 🚀
It seems from
Solicitation: SBIR_21_P1

Topic Number: H6.22

that the Boeing High Performance Spaceflight Computing (HPSC) processor, due for delivery in December 2022, was conceived before NASA tumbled to the benefits of neuromorphics:

https://legacy.www.sbir.gov/node/1836297

The current state of the art (SOA) for in-space processing is the High Performance Spaceflight Computing (HPSC) processor being developed by Boeing for NASA Goddard Space Flight Center (GSFC). The HPSC, called the Chiplet, contains 8 general purpose processing cores in a dual quad-core configuration. Delivery is expected by December 2022. In a submission to the Space Technology Mission Directorate (STMD) Game Changing Development (GCD) program, the highest computational capability required by a typical space mission is 35 to 70 GFLOPS (billion floating-point operations per second).

The current SOA does not address the capabilities required for artificial intelligence and machine learning applications in the space environment. These applications require significant amounts of multiply and accumulate operations, in addition to a substantial amount of memory to store data and retain intermediate states in a neural network computation. Terrestrially, these operations require general-purpose graphics processing units (GP-GPUs), which are capable of teraflops (TFLOPS) each—approximately 3 orders of magnitude above the anticipated capabilities of the HPSC.

Neuromorphic processing offers the potential to bridge this gap through a novel hardware approach. Existing research in the area shows neuromorphic processors to be up to 1,000 times more energy efficient than GP-GPUs in artificial intelligence applications. Obviously, the true performance depends on the application, but nevertheless the architecture has demonstrated characteristics that make it well-adapted to the space environment.
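To put the solicitation's figures in perspective, here is a back-of-envelope sketch (my own illustrative layer dimensions, not from the solicitation): even one modest convolutional layer, run at video frame rates, eats most of the quoted 35-70 GFLOPS budget, which is why the gap to TFLOPS-class GP-GPUs matters.

```python
# Back-of-envelope: MAC demand of one small conv layer versus the
# HPSC budget quoted in the solicitation (35-70 GFLOPS).
# Layer dimensions below are illustrative, not from any real network.

out_h, out_w, filters = 224, 224, 64   # hypothetical output feature map
kernel_macs = 3 * 3 * 32               # 3x3 kernel over 32 input channels

macs_per_frame = out_h * out_w * filters * kernel_macs
flops_per_frame = 2 * macs_per_frame   # 1 MAC = 1 multiply + 1 add

fps = 30                               # e.g. a rover camera stream
required_gflops = flops_per_frame * fps / 1e9

print(f"{macs_per_frame / 1e6:.0f} M MACs per frame")
print(f"~{required_gflops:.1f} GFLOPS needed at {fps} fps")
```

A single layer of a single network already lands around 55 GFLOPS at 30 fps; a full modern network is tens of layers, which is roughly the "3 orders of magnitude" gap the solicitation describes.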

Phase 1 of the project had extraordinarily short deadlines over a holiday period:

Release Date: November 09, 2020

Open Date: November 09, 2020

Application Due Date: January 08, 2021


Close Date: January 08, 2021

... not that we could draw any inferences from that. After all, doesn't everybody have "concept of operations of the research topic, simulations, and preliminary results. Early development and delivery of prototype hardware/software is encouraged" for a SWaP-compliant neuromorphic processor in their back pocket, ready to be produced at a moment's notice?

Phase II will emphasize hardware and/or software development with delivery of specific hardware and/or software products for NASA, targeting demonstration operations on a low-SWaP platform. Phase II deliverables include a working prototype of the proposed product and/or software, along with documentation and tools necessary for NASA to use the product and/or modify and use the software. In order to enable mission deployment, proposed prototypes should include a path, preferably demonstrated, for fault and mission tolerances. Phase II deliverables should include hardware/software necessary to show how the advances made in the development can be applied to a CubeSat, SmallSat, and rover flight demonstration.

I don't know if Phase 2 of any NASA SBIR has ever gone under the radar, but, in retrospect, ANT61 does spring to mind as a CubeSat implementation, and one which offered little prospect of near-term commercial viability while absorbing valuable BRN engineering time. Not that that's a dot, but it is a coincidence that the Akida engineering samples, the feature-enhanced 4-bit Akida 1000 and a CubeSat implementation all occurred within a fairly compressed time period.

A contra-indication is NASA's references to the inherent rad-hardness of memristors, which point to a leaning towards an analog implementation.

Still, we know that NASA has been playing with Akida for some time. The short submission period suggests that there had been significant pre-match discussions between NASA and their prospective SBIR applicants. One factor to take into consideration is that the "S" in SBIR would exclude the big boys.

So now we come to the recent and not so recent announcements linking Akida to NASA or space applications. Some which spring to mind:

The MOU with EdgX with links to ESA.

There's Frontgrade which links to ESA.

There's Intellisense Neuromorphic Enhanced Cognitive Radio (NERC) which links to a NASA Phase 2 SBIR.

We dabbled in rad-hard processes with Vorago in 2020 for a Phase 1 NASA project.

RTX/Raytheon as the putative sub-contractor for the recent Phase 2 NASA project.

We know some of the big boys, such as IBM, have been dabbling with analog NNs/memristors for some years, so it is an open question as to whether this is at the behest of NASA, but it is very clear that a great deal of the SBIR requirements fit Akida like a glove.
 
  • Like
  • Fire
  • Love
Reactions: 26 users
No doubt already known, but I couldn't be bothered trying to search whether it's been posted... I know, a bit lazy. But anyway.

We linked up with MegaChips in late 2021, and this R&D update was sometime in 2022.

Might have to make time for a wander through the Nara Institute of Science & Tech research info and see if we pop up anywhere :unsure:

 
  • Like
Reactions: 19 users

charles2

Regular

View attachment 75954

(quoting the PYMNTS article “Can Arm’s Mobile Lead Translate to AI? Chip Designer Bets on Efficiency” posted above)
If ARM were to leverage AKIDA technology, the sky would be the limit for BrainChip.

My question is whether, or how, it would hinder ARM.
 
  • Like
  • Thinking
Reactions: 8 users

7für7

Top 20
Podcast came out an hour ago

 
  • Like
  • Fire
  • Love
Reactions: 28 users
AI chip startup to go public...focus on the edge

Blaize

Pretty sure we've looked at Blaize before..

Will be interesting to watch how they go on the NASDAQ, compared to their commercial progress..
 
  • Like
Reactions: 4 users

IloveLamp

Top 20
(quoting Diogenese’s NASA SBIR post above)
ANT61 Signs First Japanese Commercial Agreement with SOMPO


 
  • Like
  • Fire
  • Love
Reactions: 39 users
ANT61 Signs First Japanese Commercial Agreement with SOMPO


View attachment 75957
I don't think AKIDA is in the Beacon, though?
 
  • Like
  • Thinking
Reactions: 3 users

Terroni2105

Founding Member
  • Like
Reactions: 6 users

Mt09

Regular
Yes it is
Akida is in the Ant61 Brain, not the beacon as far as we know.

 
  • Like
Reactions: 6 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 55 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 15 users

IloveLamp

Top 20

(screenshots attached)
 
  • Like
  • Fire
  • Love
Reactions: 39 users

Terroni2105

Founding Member
Akida is in the Ant61 Brain, not the beacon as far as we know.

I have written down that Ant61 confirmed it is in the Beacon, but I haven't kept a link (from memory it was in a LinkedIn comment they responded to). Perhaps someone else here has it. (I do know I wouldn't have written it down for myself if I hadn't seen it with my own eyes.)
 
  • Like
Reactions: 10 users
  • Like
  • Fire
  • Love
Reactions: 17 users

Puck

Emerged
I have written down that Ant61 confirmed it is in Beacon but I haven’t kept a link (from memory it was on a LinkedIn comment they responded to), perhaps someone else here has it (however I do know I wouldn't have written it down for myself if I didn't see it with my own eyes)
I believe the Brain will be employed across multiple products.
 

Frangipani

Regular
Interesting like of a BrainChip post on LinkedIn by a research scientist at Fraunhofer ITWM (Institut für Techno*- und Wirtschaftsmathematik / Institute for Industrial Mathematics). This Fraunhofer Institute's High Performance Computing division is co-coordinating a project called STANCE (Strategic Alliance For Neuromorphic Computing and Engineering) alongside Fraunhofer IIS (Institut für Integrierte Schaltungen / Institute for Integrated Circuits), which aims to push for the adoption of spiking and neuromorphic technologies in industrial production by bringing together users and solution providers. The STANCE project got underway five months ago.

*I honestly had to look twice, but no, they don't actually do research on raves… 🤣
@cosors, without the hyphen it really would have been the perfect workplace for you, wouldn't it?! 😉


View attachment 69152


View attachment 69147

View attachment 69148

View attachment 69149

In September, I posted about a Fraunhofer Society-backed project in Germany called STANCE (Strategic Alliance For Neuromorphic Computing and Engineering), which kicked off in April and is co-coordinated by two Fraunhofer Institutes: Fraunhofer ITWM (Institut für Techno-und Wirtschaftsmathematik / Institute for Industrial Mathematics) and Fraunhofer IIS (Institut für Integrierte Schaltungen / Institute for Integrated Circuits).

Eleven other Fraunhofer Institutes are also part of the project (although surprisingly not the Berlin-based Fraunhofer HHI, whose researchers from the Wireless Communications and Networks Department utilised Akida for their PoC implementation of neuromorphic wireless cognition) that aims to build a Neuromorphic Knowledge Hub in Europe and also seeks “to establish a long term industrial and academic alliance to push for the adoption of spiking and neuromorphic technologies by the broader industry.”






Earlier this month, the two co-coordinating Fraunhofer Institutes published a whitepaper that I found a worthwhile read:



(screenshots of the whitepaper attached)
 
  • Like
  • Love
  • Fire
Reactions: 26 users