BRN Discussion Ongoing

Come on Brainchip, partner, or even a new partner: drop something massive in the next 24 hours and destroy some of these 70 million shorts outstanding.

 
  • Fire
  • Like
Reactions: 10 users

buena suerte :-)

BOB Bank of Brainchip
  • Like
  • Haha
Reactions: 7 users

KKFoo

Regular
Good afternoon Chippers, are there any videos available from the Brainchip demos?
 
  • Like
Reactions: 1 user

DK6161

Regular
  • Like
  • Thinking
  • Love
Reactions: 5 users
I haven't posted in a while as the forum gets rather toxic at times.

These podcasts at CES however do clearly tell me one thing - The team at Brainchip have listened to feedback regarding communication. These have been great little snippets of updates with a few partners. My favourite so far being Onsemi and Infineon.

I sincerely hope Nandan will be bringing Farshad's (Infineon) comments to Sean Hehir's attention ASAP. I emailed Tony asking him to also follow up with Sean, so we can hopefully benefit from Farshad's valuable critique.
"I have been involved with Brainchip for less than a year, or maybe a year so far...I've been interacting with the company in terms of learning their capabilities"
"it is very promising, I also think their are certain areas that you're not advertising as best as you could"
"Kind of underestimating yourselves"


I really hope the dot joiners can calm down a little and not dismiss posters just because someone doesn't jump to the premature conclusions that some draw here without facts or announcements. I like this place and hate blatant downramping as much as the next well-intentioned shareholder. But from the sounds of it we are still VERY early on in partnerships with some companies. And we clearly aren't advertising our full capabilities well enough just yet (one partner's opinion, but they are in a better position to comment on this than any of us). Hopefully this is a very fruitful year for us, and it sounds like a little traction may be headed our way.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 57 users

skutza

Regular
I know you want a reply to further bury the value content from today's CES, but even Blind Freddie will tell you that if you read my whole post you would have read the words "I have informed the company and it is being attended to". Obviously implicit in that statement is a level of disapproval.

As you claim to have read my other posts, you will note that I suggested the appropriate course is to note the error on here if you must, but to send an email bringing the error to the company's attention for correction.

What I am not doing, unlike you, is assuming that the company, which reacted immediately to my notification, is just laughing it off and taking no action to improve the performance of the employee, cadet or, as I would suspect, the IR contractor.

What I would like to know from you, though, as a person who runs a successful business, is why you have so readily leapt to the conclusion that Brainchip's executive staff are not successful business people who work to improve staff performance and counsel the staff member, contractor or even the cadet from one of the university programs responsible for the error. Have you communicated with Brainchip at an executive level and been informed that they find these errors acceptable and encourage staff to make them?

I must thank you though for two things:

1. Confirming, as I have said many times, that typos will not destroy the company, nor put partnership relations at risk or frighten away real investors.

2. I have heard it screamed by some, and by the majority on HC, that the CEO should be drawn and quartered for letting these typos go out. Now, thanks to you, if they scream out such nonsense again we can refer them to your post, because you have made clear that successful business people like you allow staff to post things without checking every word beforehand, then take corrective action.

My opinion only DYOR
Fact Finder
Lol, yes that is my main goal here, you caught me out. You are like the Collingwood Football Club: you either love them or hate them. (I am a Cats supporter, in case you missed the point.) Although I am aware you like the last word and try your best to belittle people, I will also respond because you asked.

"What I would like to know from you though as a person who runs a successful business why you have so readily leapt to the conclusion that Brainchip's executive staff are not successful business people who work to improve staff performance and counsel the staff member, contractor or even the cadet from one of the University programs responsible for the error?" (Jesus I hope not)

I did not imply the majority of what you have written here, but you love to play the wordsmith and bend the meaning behind the truth. The truth is, I have not once before commented on the typos that happened previously. My question was how many errors are acceptable? You call anyone who isn't happy with the situation a troll and go off on a rage defending the company, telling everyone to ignore the errors and look at the content.

The errors are small, but for my liking becoming too frequent, so I spoke up. If someone lost their job over this or was hung, drawn and quartered then fine, but that is the company's business; if they told anyone what happened, that would again be very unprofessional.

So is this the case? Is the company running in an unprofessional manner? Typos, the wrong photo, pretty basic errors can happen, and what about inviting a few SH to have a private chat? In the eyes of some (especially the invitees) this was all above board and a non-event, but not everyone believes this and many voiced their concerns. Me, well, I have met and spoken with many execs in companies I hold, so for me it wasn't a huge deal, but I respected their view. The company seems great, that's why I have invested, but like some, when do we start to question whether the problem is not the tech, but the people? Here's a quote from a professional in the field just recently: "There are areas I feel you are not explaining or advertising as well as you could".

So what is my point? People have other points of view, and you make it difficult to question anything, because you label them as trolls. TBH this place was much better without you, as your research, while seemingly wonderful, has proven to be worthless. Your timelines have been awful and your predictions for revenue and share price have been abysmal.

These are the true FACTS, but alas you still have this overpowering sense of superiority over all. If I had $1 for every time you've told us you're a retired lawyer, along with some shitty story about it, I would likely have had more revenue than Brainchip.

Why am I letting it out today? Because unlike you, I will leave this forum and no longer post. It is clear people cannot question management or the company here without ramifications from the BRN almighty retired lawyer!!! I'm thinking you were more likely a Catholic priest, in all respects.

So good luck with your investment here, Mr Retired Lawyer. I do note you use this title frequently to try and keep your minions in awe, while someone who is proud of what they have achieved in the small business world is considered worthy of ridicule.

To the rest of the holders, good luck. Remember FF has been wrong in all of his predictions to this point (but only because of outside influences, remember, ROFL!), so he's not your messiah, he's just a naughty little boy. Look on the bright side people: only invest what you can afford, and as good as a stock seems, there's a reason people don't put all their eggs in one basket.

Adios.
 
  • Like
  • Fire
  • Love
Reactions: 29 users
I haven't posted in a while as the forum gets rather toxic at times.
Yes, wouldn't this place be so much better if those from the other side never spread their poison.

 
  • Like
  • Haha
Reactions: 5 users

Rodney

Regular
Afternoon Boab,

Presently, I'm baffled to be honest.

Think the algos suppressing us are waiting for the slightest whiff of something... anything with $ attached.

Sorry, not much help.



Regards,
Esq.
I thought it might have been the 1.4 mil shares in the queue at 17 cents that slowed the selling-to-themselves down.
 
  • Like
Reactions: 5 users

Mt09

Regular
I haven't posted in a while as the forum gets rather toxic at times.

These podcasts at CES however do clearly tell me one thing - The team at Brainchip have listened to feedback regarding communication. These have been great little snippets of updates with a few partners. My favourite so far being Onsemi and Inivation.

I sincerely hope Nandan will be bringing Farshad's (Inivation) comments to Sean Hehir's attention ASAP. I emailed Tony asking him to also follow up with Sean, so we can hopefully benefit from Farshad's valuable critique.
"I have been involved with Brainchip for less than a year, or maybe a year so far...I've been interacting with the company in terms of learning their capabilities"
"it is very promising, I also think their are certain areas that you're not advertising as best as you could"
"Kind of underestimating yourselves"


I really hope the dot joiners can calm down a little and not dismiss posters just because someone doesn't jump to the premature conclusions that some draw here without facts or announcements. I like this place and hate blatant downramping as much as the next well-intentioned shareholder. But from the sounds of it we are still VERY early on in partnerships with some companies. And we clearly aren't advertising our full capabilities well enough just yet (one partner's opinion, but they are in a better position to comment on this than any of us). Hopefully this is a very fruitful year for us, and it sounds like a little traction may be headed our way.
Infineon** Good post, well said.
 
Last edited:
  • Like
Reactions: 2 users
Infineon** Good post, well said.
Thanks mt, edited. It has been a hell of a first week back at work.
 
  • Like
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Got anything more up to date? We all know about the EQXX from 2022! ... and it was a great endorsement from MB.

The new CLA concept is going to be using NVIDIA, which is the reason for the 'water-cooled' chip... and the chip-to-cloud processing.

Now unless you can find any evidence that Akida will be used for processing data at the edge of the sensors in the CLA concept... then all you have is 'Hope'.

Akida will be used extensively in future MB models... just not in this one... it's too early for us.

Hit me with your condescending and patronising memes... I'm ready
Happy to oblige.


#60,508
#74,270 (4 months ago, Gerrit Ecke from Mercedes)

To be honest I just plucked a couple of random things close at hand.

I could continue but that would just be embarrassing for you.
 
  • Like
  • Haha
  • Sad
Reactions: 13 users

Tothemoon24

Top 20

Apple Plans to Integrate EDGE AI into Upcoming iOS Release​

themarketinfo.com

Apple is set to elevate the iPhone experience to new heights with the integration of Edge AI (Artificial Intelligence) directly into its operating system. This strategic decision marks a significant shift in how iPhones handle AI-driven tasks, promising a range of benefits that could reshape the landscape of mobile technology.
  • Offline Empowerment:
    The integration of Edge AI empowers iPhones with the ability to perform AI tasks locally, even without an internet connection. This translates to an unprecedented level of accessibility, allowing users to seamlessly engage with AI features irrespective of their online status.
  • Reduced Latency, Enhanced Performance:
    One of the standout advantages of Edge AI is its capability to minimize latency. By processing tasks directly on the device, iPhone users can expect faster responses and smoother interactions, eliminating the delays associated with traditional chatbots that rely on external servers for computation.
  • Privacy Takes Center Stage:
    Apple’s commitment to user privacy gets a significant boost with Edge AI. The local processing of sensitive data ensures that personal information stays on the device, reinforcing Apple’s stance on safeguarding user privacy. This move aligns with the growing demand for privacy-centric technologies.
  • Customized User Experiences:
    The integration of Edge AI into the iPhone OS opens the door to highly personalized and customized user experiences. By harnessing on-device processing capabilities, Apple can deliver AI-driven features that adapt to individual preferences without compromising data security.
  • Resource Efficiency and Battery Optimization:
    iPhones, known for their optimization of hardware resources, will now leverage Edge AI to enhance resource efficiency. On-device processing not only contributes to better performance but also ensures that AI tasks are executed with minimal impact on battery life.
  • Seamless Hardware Integration:
    Apple’s seamless integration of Edge AI extends beyond software. iPhones often incorporate specialized hardware for AI-related tasks, providing a harmonious marriage of software and hardware capabilities. This synergy is poised to elevate the overall AI performance on iPhones.
  • Diverse Use Cases Unleashed:
    With Edge AI at the core, iPhones can explore a myriad of new use cases and applications. The local processing power unlocks the potential for handling complex tasks without reliance on external services, making iPhones versatile devices catering to a broader range of user needs.
  • Challenges and Future Prospects:
    While the integration of Edge AI brings forth a myriad of advantages, challenges may arise in terms of the diversity and complexity of tasks compared to more sophisticated chatbot systems. The success of this endeavor hinges on Apple’s ability to deploy rich, adaptive models that align with user expectations.
As Apple pioneers the integration of Edge AI into the iPhone operating system, the tech giant is poised to redefine the smartphone landscape. This strategic move not only aligns with the company’s commitment to privacy and user-centric design but also positions iPhones as powerful, self-sufficient devices capable of delivering cutting-edge AI experiences. The future of mobile technology has just taken a significant leap forward with Apple’s visionary approach to Edge AI integration.

How does Edge AI work?​


  1. On-Device Processing:
    • Data Processing Locally: In Edge AI, the AI algorithms and models are deployed directly on the edge devices (smartphones, IoT devices, etc.). This allows data to be processed locally without the need for constant communication with a centralized server.
    • Reduced Latency: By processing data on the device itself, Edge AI significantly reduces the latency associated with sending data to a remote server for processing. This is crucial for applications requiring real-time decision-making.
  2. Edge AI Workflow:
    • Data Collection: Edge devices collect data from various sensors or inputs, depending on the application (e.g., images from a camera, sensor data from IoT devices).
    • Preprocessing: Raw data may undergo preprocessing on the edge device to enhance its quality and prepare it for input into the AI model.
    • Inference: The pre-trained AI model runs inference directly on the device, making predictions or decisions based on the processed data.
    • Output: The inference results, such as classifications, recommendations, or actions, are generated locally on the device.
  3. Types of Edge AI Models:
    • Compact Models: Edge AI often involves deploying lightweight and efficient machine learning models optimized for on-device processing. These models are designed to run with minimal resource requirements.
    • Offline Models: Some Edge AI applications are designed to function even without an internet connection. This is achieved by deploying models capable of running offline, making decisions independently.
  4. Optimizations for Edge Devices:
    • Quantization: This involves reducing the precision of numerical values in the model, making it more suitable for deployment on devices with limited computational resources.
    • Model Compression: Techniques like pruning and quantization help reduce the size of the AI model, optimizing storage and memory usage on the device.
    • Hardware Acceleration: Edge devices may leverage specialized hardware, such as GPUs or TPUs, to accelerate AI computations, enhancing overall performance.
  5. Security Considerations:
    • Secure Model Deployment: Ensuring the security of on-device AI models is crucial. Techniques like model encryption and secure deployment mechanisms help protect against potential threats.
    • Privacy Preservation: Edge AI contributes to privacy by processing sensitive data locally, reducing the need to transmit personal information to external servers.
  6. Use Cases:
    • Image and Video Analysis: Edge AI is commonly used for tasks like object detection, facial recognition, and image classification directly on cameras or smartphones.
    • Predictive Maintenance: IoT devices equipped with Edge AI can predict equipment failures by analyzing sensor data locally.
    • Voice Recognition: On-device voice assistants leverage Edge AI for natural language processing without continuous internet connectivity.
In summary, Edge AI empowers devices to perform AI tasks locally, leading to faster response times, improved privacy, and efficient use of resources. The technical implementation involves deploying optimized models, preprocessing data, and leveraging hardware acceleration on edge devices.
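
Since the article stays high level, here is roughly what that collect → preprocess → infer → output loop looks like in code. This is just a minimal sketch of the general pattern using TensorFlow Lite, a common on-device runtime (my choice for illustration, not something the article names); the model file and camera frame are placeholders:

import numpy as np
import tensorflow as tf

# Load a compact, quantized model shipped with the device/app.
# "mobilenet_v2_quant.tflite" is a placeholder file name.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Steps 1-2, data collection and preprocessing: here a dummy 224x224
# "camera frame", batched and cast to the model's expected input type.
frame = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
input_data = np.expand_dims(frame, axis=0).astype(input_details[0]["dtype"])

# Step 3, inference: runs entirely on the device, no server round-trip.
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# Step 4, output: classification scores are produced locally.
scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Top class:", int(np.argmax(scores)))

Nothing in that loop touches the network, which is the whole point: latency is bounded by the device, and the frame never leaves it.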

Apple has introduced the Vision Pro augmented reality headset, ushering in a new era of immersive experiences. Paired with the integration of Edge AI into iOS, this dynamic duo promises to redefine how users engage with technology, seamlessly blending the real and virtual realms.

Vision Pro: A Glimpse into Tomorrow

The Vision Pro headset, a result of years of meticulous development, stands as a testament to Apple’s commitment to pushing technological boundaries. Priced at $3,499 and slated for an early release next year, the headset boasts 4K displays, infrared cameras, LED illuminators, and a unique mixed-reality dial allowing users to effortlessly transition between the real and virtual worlds.
Unlike traditional VR headsets, Vision Pro acknowledges the importance of the physical environment. With a floating “Home View” visible upon wearing the headset, users can navigate their surroundings while interacting with large virtual screens seamlessly integrated into their physical space. It’s a shift from merely looking at a display to immersing oneself in a world where digital content coexists with reality.

Edge AI: The Intelligent Backbone of iOS

Complementing Vision Pro, Apple has strategically embedded Edge AI into iOS, creating a powerful foundation for intelligent interactions. Edge AI processes data locally on the device, enabling swift decision-making and reducing dependency on cloud-based services. Siri, Apple’s voice-activated assistant, plays a central role, providing users with an intuitive interface to control apps and media through voice commands.

The Synergy Unveiled

The true marvel lies in the seamless connection between Vision Pro and Edge AI. As users traverse augmented realities, Edge AI quietly analyzes their interactions, preferences, and behaviors. This data, processed locally on iOS devices, contributes to a collective intelligence that enhances the overall AR experience for users.
Imagine a scenario where Vision Pro wearers, connected through Edge AI, collaboratively engage in augmented meetings. The devices intelligently synchronize information, creating a shared workspace that transcends the limitations of traditional digital communication. It’s a paradigm shift from solitary experiences to a collaborative ecosystem where Vision Pro and Edge AI redefine productivity.
As Apple CEO Tim Cook aptly puts it, “Apple Vision Pro will introduce spatial computing” similar to how the iPhone revolutionized mobile computing. The integration of Edge AI amplifies this spatial computing by infusing intelligence into every interaction.

Looking Ahead

While Vision Pro and Edge AI currently stand at the forefront of innovation, their true potential and impact on Apple’s ecosystem will unfold in the coming years. As users eagerly anticipate the early release of Vision Pro, they find themselves on the cusp of a new era, where the boundaries between the real and virtual worlds blur, and technology becomes an integral part of their everyday experiences. The journey has just begun, and Apple enthusiasts can undoubtedly expect more groundbreaking revelations in the ever-evolving landscape of augmented reality and intelligent computing.



AAPL Share :

Apple shares experienced a period of weakness recently, influenced by reports of sluggish iPhone and MacBook sales attributed to perceived shortcomings in AI features compared to competitors like Microsoft. However, the tide seems poised to turn in 2024, fueled by Apple’s foray into Edge AI. This move holds the promise of elevating Apple’s offerings and potentially reshaping the landscape for the company.
The introduction of Edge AI technology not only addresses previous concerns but positions Apple at the forefront of innovation. As the market increasingly values AI capabilities, 2024 could emerge as a pivotal year for Apple shares, marked by renewed investor confidence and a positive trajectory. The key lies in monitoring how consumers respond to these advancements and how effectively Apple leverages Edge AI to enhance its product ecosystem, setting the stage for a potentially robust performance in the stock market.
 
  • Like
  • Fire
  • Thinking
Reactions: 35 users

Mazewolf

Regular
  • Thinking
  • Like
Reactions: 3 users

Easytiger

Regular
IMG_3434.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Diogenese

Top 20
I haven't posted in a while as the forum gets rather toxic at times.

These podcasts at CES however do clearly tell me one thing - The team at Brainchip have listened to feedback regarding communication. These have been great little snippets of updates with a few partners. My favourite so far being Onsemi and Inivation.

I sincerely hope Nandan will be bringing Farshad's (Inivation) comments to Sean Hehirs attention ASAP. I emailed Tony to also follow up with Sean to hopefully benefit from valuable critique from Farshad.
"I have been involved with Brainchip for less than a year, or maybe a year so far...I've been interacting with the company in terms of learning their capabilities"
"it is very promising, I also think their are certain areas that you're not advertising as best as you could"
"Kind of underestimating yourselves"


I really hope the dot joiners can calm down a little and not dismiss posters where someone doesn't jump to the premature conclusions that some draw here without fact or announcements. I like this place and hate blatant downramping as much as the next well intentioned shareholder. But we are still VERY early on in partnerships with some companies from the sounds of it. And we clearly aren't advertising our full capabilities well enough just yet (one partners' opinion but they are in a better position to comment on this than any of us). Hopefully this is a very fruitful year for us and it sounds like a little traction may be headed our way.
Hi D&S,

Good to hear from you.

As usual, that introduction presages a tongue lashing/ear bashing.

This technology is evolving at such a rapid rate that we've met ourselves coming back.

Akida 1 is brilliant groundbreaking technology - digital spiking neural network system on a chip with special sauce (N-of-M coding), using spikes (originally 1-bit) and only activating when an "event" (a change in the input data) occurred.

Previously, the only practicable way of identifying objects in a field of view was with a software program implementing convolutional neural networks (CNNs), either on a CPU (quite slow and power hungry) or a GPU (faster but proportionally more power hungry).

Mostly in academic circles, attempts have been made to implement analog (ReRAM/memristor) CNNs in silicon, but the manufacturing processes and temperature variability have limited the accuracy of such devices.

PvdM's genius was in realizing that digital NNs could avoid these inherent inaccuracies, and in recognizing the genius of Simon Thorpe's N-of-M coding and, just as importantly, how it could be applied in silicon.
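
For anyone who wants to see the gist of N-of-M coding, here is a toy sketch in Python (my own illustration only, not how the silicon actually does it): of M candidate neurons, only the N that respond strongest (equivalently, spike earliest) propagate, and the rest stay silent. That enforced sparsity is where the speed and power savings come from.

import numpy as np

def n_of_m_code(activations: np.ndarray, n: int) -> np.ndarray:
    """Keep the n strongest of M activations, silence the rest."""
    out = np.zeros_like(activations)
    top_n = np.argsort(activations)[-n:]  # the n "earliest spikes"
    out[top_n] = activations[top_n]
    return out

m_activations = np.array([0.1, 0.9, 0.0, 0.4, 0.7, 0.2, 0.05, 0.6])
print(n_of_m_code(m_activations, n=3))
# -> [0.  0.9 0.  0.  0.7 0.  0.  0.6]  only 3 of 8 neurons "fire"

Everything downstream then only has to deal with those 3 events instead of all 8 values.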

This gave rise to Akida 1, a technology years ahead of the competition. The design is highly flexible, allowing for a few nodes (4*NPEs per node), up to a couple of hundred nodes. Akida 1 was applicable to any sensor (video, audio, taste, smell, vibration ... ) so that, with the appropriate model library, it could classify any input signal. Of course, you don't find model libraries just lying around, but there are open source versions available, and Akida 1 has the ability to learn new classes to add to the model on chip. In addition, BRN has developed its own in-house model library "zoo".

Akida 1 went through a couple of iterations based on customer feedback. Initially it had 1-bit weights and activations - lightning fast and anorexically power-sipping.

Customers were prepared to forego portions of these advantages for greater accuracy and somewhat higher power consumption.

So Akida 1 switched to 4-bit weights/activations.

In addition, customers required the ability to use their existing CNN models, so Akida 1 includes CNN2SNN conversion capability.
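
For the curious, the developer-facing side of that flow looks roughly like this. A minimal sketch assuming BrainChip's publicly documented cnn2snn Python package (this is the legacy 4-bit quantize-then-convert flow; exact function signatures have changed between releases, so treat it as indicative and check the current MetaTF docs):

from tensorflow.keras.models import load_model
from cnn2snn import quantize, convert

# An existing customer CNN, trained with ordinary Keras tooling.
# "my_existing_cnn.h5" is a placeholder file name.
keras_model = load_model("my_existing_cnn.h5")

# Quantize weights and activations to 4 bits -- the accuracy/power
# trade-off described above.
quantized = quantize(keras_model,
                     weight_quantization=4,
                     activ_quantization=4)

# Convert the quantized Keras model into an Akida event-based model.
akida_model = convert(quantized)
akida_model.summary()

The point being: customers keep their existing Keras/CNN workflow, and the conversion step is the adapter - which is exactly why that capability mattered to them.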

BRN has been involved with a few leading edge sensor makers for some years - eg, Valeo for lidar, Prophesee for DVS event cameras, both of which are a natural fit for Akida's SNN capabilities. But there is an infinite number of applications for which Akida 1 is the best solution, except for Akida 2.

Our switch to IP only made a rod for our own back, excluding all but those who had the odd $50 million lying around to invest in developing new chips. Not that this is an unworkable business model - ARM does nicely out of it, although it would like a larger slice of the pie.

So now we come to the stage where we are prepared to sell devices including Akida 1, not so much as an income generating enterprise as a capability demonstration of Akida 1 ...

... and, to top it off, Akida 2 blows the socks off Akida 1.

How many EAPs are primed to explode? Well, in all the excitement I've kinda forgotten myself, so, go ahead punk, make my day!
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 71 users

7für7

Top 20
In the German forum there is a discussion about a possible reverse split if it continues like this. What is your opinion?
 
  • Haha
Reactions: 1 users
Nope, you failed again... you did not find what I requested below:

"Now unless you can find any evidence that Akida will be used for processing data at the edge of the sensors in the CLA concept... then all you have is 'Hope'."

... and the Award for the most "Hope Akida Inside 🐂💩" goes to BRAVO!!!

Connect the dots below.

View attachment 54087
We are done... or at least I am!
 
  • Haha
  • Like
  • Fire
Reactions: 8 users
Hi D&S,

Good to hear from you.

As usual, that introduction presages a tongue lashing.

This technology is evolving at such a rapid rate that we've met ourselves coming back.

Akida 1 is brilliant groundbreaking technology - digital spiking neural network system on a chip with special sauce (N-of-M coding), using spikes (originally 1-bit) and only activating when an "event" (a change in the input data) occurred.

Previously, the only practicable way of identifying objects in a field of view was with a software program implementing convolutional neural networks (CNNs), either on a CPU (quite slow and power hungry) or a GPU (faster but proportionally more power hungry).

Mostly in academic circles, attempts have been made to implement analog (ReRAM/memristor) CNNs in silicon, but the manufacturing processes and temperature variability have limited the accuracy of such devices.

PvdM's genius was in realizing that digital NNs could avoid these inherent inaccuracies, and in recognizing the genius of Simon Thorpe's N-of-M coding and, just as importantly, how it could be applied in silicon.

This gave rise to Akida 1, a technology years ahead of the competition. The design is highly flexible, allowing for a few nodes (4*NPEs per node), up to a couple of hundred nodes. Akida 1 was applicable to any sensor (video, audio, taste, smell, vibration ... ) so that, with the appropriate model library, it could classify any input signal. Of course, you don't find model libraries just lying around, but there are open source versions available, and Akida 1 has the ability to learn new classes to add to the model on chip. In addition, BRN has developed its own in-house model library "zoo".

Akida 1 went through a couple of iterations based on customer feedback. Initially it had 1-bit weights and activations - lightning fast and anorexically power-sipping.

Customers were prepared to forego portions of these advantages for greater accuracy and somewhat higher power consumption.

So Akida 1 switched to 4-bit weights/activations.

In addition, customers required the ability to use their existing CNN models, so Akida 1 includes CNN2SNN conversion capability.

BRN has been involved with a few leading edge sensor makers for some years - eg, Valeo for lidar, Prophesee for DVS event cameras, both of which are a natural fit for Akida's SNN capabilities. But there is an infinite number of applications for which Akida 1 is the best solution, except for Akida 2.

Our switch to IP only made a rod for our own back, excluding all but those who had the odd $50 million lying around to invest in developing new chips. Not that this is an unworkable business model - ARM does nicely out of it, although it would like a larger slice of the pie.

So now we come to the stage where we are prepared to sell devices including Akida 1, not so much as an income generating enterprise as a capability demonstration of Akida 1 ...

... and, to top it off, Akida 2 blows the socks off Akida 1.

How many EAPs are primed to explode? Well, in all the excitement I've kinda forgotten myself, so, go ahead punk, make my day!
I don't know what I wrote that got such a rise out of you (or tongue lashing, as you so eloquently put it), or what the relevance to my post really is. I wouldn't be invested in a tech company that I didn't believe had fantastic technology.

My post was about a partner of ours critiquing how we advertise our capabilities, and how I hope Brainchip take this feedback on board so we get more sales. Something I also sent to Tony, because it's great feedback from Farshad. How is improving a function of the business that a partner has given feedback on an inherently bad thing? We want more sales, yes? Then we are on the same team.

Beyond that, dot joining can be fun, but it should not be taken as gospel, and anyone who doesn't agree with a dot join shouldn't be berated for having an opposing view. It's the very reason this forum becomes toxic at times.

It seems like you just wanted to chew someone out, or I'm on your bad side for whatever reason.

Hope this punk has made your day.
 
  • Like
  • Fire
  • Thinking
Reactions: 14 users

Deadpool

hyper-efficient Ai
In the German forum there is a discussion about a possible reverse split if it continues like this. What is your opinion?
REVERSE SPLIT? PIG'S *UCKING ARSE



 
  • Haha
  • Like
Reactions: 4 users
Top Bottom