Lakshmi Varshika Mirtinti is currently a PhD student at Drexel University (graduating in 2025) who - as we find out from her comment underneath our CTO’s post - was working with Akida for her PhD thesis and is now keen on applying for a job with our company. Nice!
But there is something else about their little dialogue that caught my eye:
View attachment 74032
View attachment 74033
View attachment 74041
View attachment 74038
Why was she expressing her interest in joining BrainChip’s Santa Clara team? Does that signify there is one?
Tony Lewis responded somewhat cryptically by saying “I am afraid the R&D team is located down south.”
He could have said “We don’t have a team in Santa Clara”, but he didn’t.
Not sure whether I am reading too much into his reply, but given that one of our former Perth Research Institute staff members - Vi Nguyen Thanh Le - has been based in Santa Clara for months now without ever updating her LinkedIn profile with a new employer, could BrainChip either have placed a number of staff there with another company on a contract basis, or possibly even have opened a small office in Silicon Valley?
Nothing but a wild idea so far - in case there is more to it regarding the latter, it will surely be featured in the upcoming podcast (even though I personally would expect them to announce such news through their social media channels when it eventuates).
I just find the concrete reference to a BrainChip Santa Clara team odd, and our CTO’s reply struck me as somewhat ambiguous… Time will tell.
View attachment 74040
Nice find and Info thanks.
Have we got our very own Skunkworks?
We are currently at just under 2 million shares. However, trading will continue for another 5.5 hours.
Afternoon Chippers,
Great to see an ASX contract. Hopefully the start of something VASTLY larger.
She's gaining pace.... I'd be thinking the American market should fully light the fuse on this tonight when their market comes online.
RELEASE THE HANDBRAKE
Regards,
Esq.
Thank you very much for explaining and making us investors who are not so tech savvy but want to be in the league.
He specifically mentions "CNN models".
CNN models are used in Software NNs and in competing NN accelerator hardware.
CNN models are usually associated with MAC (multiply/accumulate) processing in fully digital processors using 16 or 32 bits per value, although the pressure to reduce energy and latency has seen 8 bits adopted as standard recently. While some multiplications by zero can be skipped to reduce the number of multiplication operations, basically all the input bytes are processed.
MAC operations are carried out in a matrix array (rows and columns) where all the multiplication results are added column by column.
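To put some rough numbers on that, here is a toy Python sketch of a dense MAC layer (my own illustration, not anyone's actual kernel): every input value is multiplied by a weight and the products are accumulated column by column, so the operation count is fixed by the layer size regardless of what the data looks like.

    import numpy as np

    def mac_layer(inputs, weights):
        """Toy dense MAC layer: every input is multiplied by every weight
        and the products are accumulated column by column."""
        # inputs: shape (M,) -- one value per input channel
        # weights: shape (M, K) -- one column of weights per output neuron
        outputs = np.zeros(weights.shape[1])
        for col in range(weights.shape[1]):          # one column per output
            for row in range(weights.shape[0]):      # every input is processed
                outputs[col] += inputs[row] * weights[row, col]  # multiply + accumulate
        return outputs

    # Example: 8 inputs, 4 outputs -> 8 x 4 = 32 MAC operations, regardless of content
    x = np.random.rand(8).astype(np.float32)
    W = np.random.rand(8, 4).astype(np.float32)
    print(mac_layer(x, W))

With 8 inputs and 4 outputs that is 8 x 4 = 32 multiply-accumulates; real CNN layers typically run millions of them per inference.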
Akida, on the other hand, only processes bytes (1-, 2-, or 4-bit events or spikes) which change.
In addition, Akida uses N-of-M coding, processing only the largest N of M input bytes as these N bytes carry the most relevant information, meaning that M-N events are dropped from the calculation.
Furthermore, data in Akida models is 1, 2, or 4 bits per byte and can also use N-of-M coding, greatly reducing the size of each model, so, in fact, you get a double whammy of compression.
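As a back-of-the-envelope illustration of that double whammy (my own simplification, loosely following the description above, not Akida's actual implementation), the sketch below keeps only the largest N of M input values, quantises the survivors to 4-bit events and drops everything else before any downstream processing happens:

    import numpy as np

    def n_of_m_events(values, n, bits=4):
        """Toy N-of-M coding: keep only the n largest of m inputs,
        quantise them to 'bits' bits, and drop the rest entirely."""
        keep = np.argsort(values)[-n:]               # indices of the n largest inputs
        levels = (1 << bits) - 1                     # e.g. 15 levels for 4-bit events
        scale = values.max() if values.max() > 0 else 1.0
        events = []
        for idx in keep:
            q = int(round(values[idx] / scale * levels))   # 4-bit event magnitude
            if q > 0:                                      # zero events are never sent
                events.append((int(idx), q))
        # Only len(events) <= n values are processed downstream; the other m - n are skipped.
        return events

    x = np.random.rand(64).astype(np.float32)        # m = 64 candidate inputs
    print(n_of_m_events(x, n=8))                     # at most 8 events survive

With M = 64 candidates, N = 8 survivors and 4-bit magnitudes, only a fraction of the original events (at half the bits per event compared with 8-bit data) ever reach the next layer - the double whammy described above.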
Thus Akida 1 is much more power efficient as it performs far fewer operations in less time to process the same input information as conventional CNNs.
Now TENNs brings additional efficiencies, which I haven't got to the bottom of yet (it took me a long time to get some sort of grasp of N-of-M coding). This makes TENNs more power efficient and further reduces latency.
So I think Tony is saying that CNNs are yesterday's tech, so make way for tomorrow's.
ISL say that they were the other awardee
Don’t know how IRAD are but Tony definitely works for RTX?
Looks as if RTX (formerly Raytheon Technologies Corporation) could be the mystery multinational aerospace and defence customer?
View attachment 74054
View attachment 74057
RTX Corporation - Wikipedia (en.m.wikipedia.org)
View attachment 74056
Courtesy of Chat GPT (I just asked what company IRAD would refer to, no prompting as it relates to defense):
Don’t know how IRAD are but Tony definitely works for RTX?
Tom M. on LinkedIn: BrainChip Awarded Air Force Research Laboratory Radar Development Contract
Result of a company IRAD that myself and others worked on prior to this (www.linkedin.com)
Yes, me, my Mrs and my children are going to be really rich come the end of 2025. Especially after I win the lottery.
Finally someone with a working
Any other suggestions? Your success rate seems reliable.
I'm not so sure that is confirmation of anything other than that they were the first to use Akida in this space through the SBIR grant a couple of years ago. If it was ISL, Phase 3 would have finished that, although TENNs wasn't available then.
ISL say that they were the other awardee
Those were my thoughts, considering there have been a few job ads over the years needing DOD clearance.
Nice find and Info thanks.
Have we got our very own Skunkworks?
I will ask today ... Cheers
I agree with Dodgy Knees on the announcement.
I believe we'll be paying the subcontractor 800K, leaving us 1 million USD.
Maybe the mob going out tomorrow night in Perth can get some clarification from Tony D or someone else.
The post on LinkedIn from Tony Lewis was pretty exciting. He's definitely pumped.
Good day !!
Anthony Lewis
"Yes to Edge AI!
BrainChip has racked up an impressive number of beyond-state-of-the-art models for edge inference. These models are based on our own TENNs family of models. #TENNs are a kind of State-Space Model recently popularized by #Mamba. We have been working on our variant for a number of years and have already created hardware to run our models.
I am constantly shocked and amazed to see so many incredible results across a broad array of use-cases.
We are at a defining moment in AI and I am proud to be working with an incredible team making it happen at #BrainChip.
Usually, when companies talk about pushing the boundaries of PPA (Performance, Power and Area, or cost), they can nudge one of these metrics at the cost of another. We are seemingly riding along a Pareto Frontier and "no company shall pass" that frontier... until now.
With TENNs+Akida we are seeing remarkable performance, power and model-size reductions when compared to conventional CNN models. I think we might be able to schedule a funeral for CNN models for temporal processing.
Why does this matter to anyone? AI everywhere is a solution to climbing energy usage of cloud computing, latency, and expensive back-end inference.
I am guessing 80% of LLM tasks can be pushed to the edge right now, and with TENNs+Akida, LLMs, audio signal processing and more can be pushed into inexpensive chips.
Incredible levels of intelligence at the edge are now possible at very low cost of ownership.
If you are highly skilled in Edge AI, understand building networks from scratch, have a great positive attitude and like working in a highly collaborative atmosphere, consider working for us in R&D here in sunny Southern California. Let's make this happen together!"
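For anyone wondering what "State-Space Model" means here: the sketch below is a generic discrete linear state-space recurrence of the kind the SSM family (Mamba and, from the sound of it, TENNs) builds on. The matrices and sizes are made up purely for illustration; this is not BrainChip's TENNs implementation.

    import numpy as np

    def ssm_step(x_t, h, A, B, C, D):
        """One step of a generic discrete linear state-space model:
           h_t = A @ h_{t-1} + B @ x_t   (state update)
           y_t = C @ h_t     + D @ x_t   (output)
        Each new sample updates a small fixed-size hidden state, so the
        work per time step stays constant however long the stream gets."""
        h = A @ h + B @ x_t
        y = C @ h + D @ x_t
        return y, h

    # Made-up sizes: 1 input channel, 4-dimensional hidden state, 1 output channel
    rng = np.random.default_rng(0)
    A = 0.9 * np.eye(4)                 # simple stable state transition
    B = rng.normal(size=(4, 1))
    C = rng.normal(size=(1, 4))
    D = np.zeros((1, 1))

    h = np.zeros((4, 1))
    for x_t in rng.normal(size=(10, 1, 1)):   # stream of 10 input samples
        y, h = ssm_step(x_t, h, A, B, C, D)
    print(y)

The appeal for temporal processing at the edge is exactly that constant per-sample cost: there is no attention or convolution re-run over the whole history, just a small recurrent state update.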