LexLuther77
A bitter pill to swallow... sheesh
Some perspective. It has literally become available in the last few weeks. 4C is for cash receipts so obviously no revenue is going to appear in the 4C. It is also yet to go through final qualification so buyers will likely wait until this process has been carried out.
We have been trying to sell akida for 2 years and we have had 2 absolute dud quarters in a row.
We have been told too many lies. False promises by management. Something has to change within the company. Lucky that Manny H is potentially getting his 8 mill "free" shares. IMO, any legal action on this issue could potentially eat into existing company funds held. What can you possibly say at this point to clear the image of the company?
Directors will still most likely get their bonuses/shares/options after the AGM votes.
We have funds available for only a further 3 quarters.
Potential risk of a future CR.
Just to name a few ..............
Someone has imo dropped the ball big time !!!
With nearly 10% up today, I'd like to know how many new shorts were taken today.
We should have that info by tomorrow.
Alternative theory is someone covering their shorts, as we saw with UBS a day prior to Renesas deal.
My intuition is with the first possibility. Opinion only.
Well I took advantage of it and bought some more.
Looks like the first scenario, where they pumped it to dump it for a quick ~20% gain both ways. Not a bad day's work.
Hopefully not many fomo'd.
That also leaves the question of whether someone knew the report beforehand to make such a play.
Or was it just a punter taking punts?
"the world’s first commercial producer of neuromorphic artificial intelligence chips"
This has been the statement from BRN for how many years/announcements/press releases? I remain long on BRN, but this announcement is the biggest test of patience after 7 years of holding. The amount of sales staff hired is not trivial and possibly very premature, plus the bonuses/options etc. that have been a large topic of discussion here recently. It's nearly impossible to vote for bonuses; 2.0 is exciting but still only words on a PDF from a shareholder's perspective.
"Only thing that can save us is proof that Valeo is using us for Scala 3"
Not sure I agree with that. Renesas is (allegedly) producing a product with our IP, and MegaChips have obviously sold the concept on - to who knows who.
Not sure about you guys, but Akida 3.0, set to be released in 2030, might be the turning point for us now.
Hi Frangipani,
Has anyone else stumbled upon this 3-year EU-funded research project called NimbleAI, kick-started in November 2022, that "aims to unlock the potential of neuromorphic vision"? Couldn't find anything here on TSE with the help of the search function except a reference to US-based company Nimble Robotics, but they seem totally unrelated.
The 19 project partners include imec in Leuven (Belgium) as well as Paris-based GrAI Matter Labs, highly likely Brainchip’s most serious competitor, according to other posters.
An article about Nimble AI’s ambitious project was published today:
What do you make of the consortium’s claim that their 3D neuromorphic vision chip will have more than an edge over Akida once it is ready to hit the market?
NimbleAI – www.hipeac.net
NimbleAI: Ultra-Energy Efficient and Secure Neuromorphic Sensing and Processing at the Endpoint
“Today only very light AI processing tasks are executed in ubiquitous IoT endpoint devices, where sensor data are generated and access to energy is usually constrained. However, this approach is not scalable and results in high penalties in terms of security, privacy, cost, energy consumption, and latency as data need to travel from endpoint devices to remote processing systems such as data centres. Inefficiencies are especially evident in energy consumption.
To keep up pace with the exponentially growing amount of data (e.g. video) and allow more advanced, accurate, safe and timely interactions with the surrounding environment, next-generation endpoint devices will need to run AI algorithms (e.g. computer vision) and other compute intense tasks with very low latency (i.e. units of ms or less) and energy envelops (i.e. tens of mW or less).
NimbleAI will harness the latest advances in microelectronics and integrated circuit technology to create an integral neuromorphic sensing-processing solution to efficiently run accurate and diverse computer vision algorithms in resource- and area-constrained chips destined to endpoint devices. Biology will be a major source of inspiration in NimbleAI, especially with a focus to reproduce adaptivity and experience-induced plasticity that allow biological structures to continuously become more efficient in processing dynamic visual stimuli.
NimbleAI is expected to allow significant improvements compared to state-of-the-art (e.g. commercially available neuromorphic chips), and at least 100x improvement in energy efficiency and 50x shorter latency compared to state-of-the-practice (e.g. CPU/GPU/NPU/TPUs processing frame-based video). NimbleAI will also take a holistic approach for ensuring safety and security at different architecture levels, including silicon level.”
What I find a little odd, though, is that this claim of expected superiority over "state-of-the-art (e.g. commercially available neuromorphic chips)" doesn’t get any mention on the official NimbleAI website (https://www.nimbleai.eu/), in contrast to the expectation of "at least 100x improvement in energy efficiency and 50x shorter latency compared to state-of-the-practice (e.g. CPU/GPU/NPU/TPUs processing frame-based video)."