Fact Finder
Top 20
If you look like Jennifer Lopez I could say I do. I must say FF, your mention of being a retired lawyer (which most of us happen to know) made me think: if you look anything like Jennifer Robinson, I want to meet up!

I'm kinda thunderstruck..........
OK, not great, but give it a chance. It gets better around the 2-minute mark.
Make that the 4-minute mark.
No Akida. I was looking into Renesas again to see if any other info is around.
I found their DRP-AI uses EdgeCortix in one component. Maybe @Diogenese can elaborate on any overlap with Akida functions, or whether this DRP-AI shows it isn't using Akida?
Or is the TVM just a compiler/converter, as I read it?
TIA.
Energy-Efficient AI Processors and Acceleration | EdgeCortix
Next Wave of Generative AI at the Edge. Introducing SAKURA-II, the world’s most flexible and energy-efficient AI accelerator.
www.edgecortix.com
Features of DRP-AI
DRP-AI consists of an AI-MAC (multiply-accumulate processor) and a DRP (reconfigurable processor). AI processing can be executed at high speed by assigning the AI-MAC to operations on the convolution and fully connected layers, and the DRP to other complex processing such as preprocessing and the pooling layer.
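The split described above (AI-MAC for convolution and fully connected layers, DRP for everything else) comes down to where the multiply-accumulates live. A minimal back-of-envelope sketch, using hypothetical layer shapes that are not from any Renesas material, shows why the conv layers dominate the arithmetic:

```python
# Rough operation counts illustrating why DRP-AI hands the convolution and
# fully connected layers to the AI-MAC unit: they carry almost all the
# multiply-accumulates. Layer shapes are hypothetical, for illustration only.

def conv2d_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a stride-1, 'same'-padded 2D convolution."""
    return h * w * c_in * c_out * k * k

def maxpool_ops(h, w, c, k):
    """Comparisons for a k x k max-pooling layer -- no multiplies at all."""
    return (h // k) * (w // k) * c * (k * k - 1)

conv = conv2d_macs(56, 56, 64, 128, 3)   # one mid-network conv layer
pool = maxpool_ops(56, 56, 128, 2)       # the pooling layer that follows it
print(f"conv MACs: {conv:,}")            # hundreds of millions of MACs
print(f"pool ops:  {pool:,}")            # a few hundred thousand comparisons
```

With these shapes the conv layer needs roughly 231 million MACs while the pooling layer needs about 300 thousand comparisons, so routing the MAC-heavy layers to dedicated MAC hardware and the irregular glue work to the reconfigurable DRP is the natural split.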
Tool: DRP-AI Translator / DRP-AI TVM※1
DRP-AI Translator and DRP-AI TVM are tools available to convert trained AI models into a format that can run on DRP-AI. This section describes the features of these two tools.
The DRP-AI Translator is a tool that is tuned to maximize DRP-AI performance. DRP-AI achieves high-speed performance, low power consumption and reduced CPU load by enabling DRP-AI to perform all the operations of an AI model.
DRP-AI TVM applies the DRP-AI accelerator to the proven ML compiler framework Apache TVM※2. This enables support for multiple AI frameworks (ONNX, PyTorch, TensorFlow, etc.). In addition, it enables operation in conjunction with the CPU, allowing more AI models to be run.
These two tools can be selected according to the customer's product application.
View attachment 19344
View attachment 19345
※1 DRP-AI TVM is powered by the EdgeCortix MERA™ Compiler Framework
※2 For more information on Apache TVM, please refer to https://tvm.apache.org
If you look like Jennifer Lopez I could say I do.
You musta know'd you waz in trubbl when you seen dat BUTT!
Actually, this one is much better (and half as long).
Probably would be useful if it could detect if you are having a seizure, heart attack or stroke, so that the car can automatically stop safely and call emergency services.
I like the driver identification and driver authentication side of this technology.
As well as supplying input to all the personalised adjustments: seat, steering wheel and mirror positions, to name a few, then also air-conditioning settings, music choice, driver-assist preferences, etc.
This potentially could do away with auto theft and car-jacking. I.e. if the car doesn‘t recognise you, then you can‘t use it!
I don‘t see a use for my car telling me what emotions it thinks I am expressing. I don‘t even like it checking my attention, which should become less important anyway once automation levels mature to useful levels.
Or another option, it can detect when there’s "movement at the station" (so to speak) and it texts your wife / partner that she should open a bottle of wine, draw the shades and put on some Barry White.
Probably would be useful if it could detect if you are having a seizure, heart attack or stroke, so that the car can automatically stop safely and call emergency services.
Or run for the hills!
Or another option, it can detect when there’s "movement at the station" (so to speak) and it texts your wife / partner that she should open a bottle of wine, draw the shades and put on some Barry White.
Institution Name | Shares Held (% Change) | % Outstanding |
---|---|---|
Vanguard Investments Australia Ltd. | 22,140,950 (+0.17%) | 1.29 |
BlackRock Institutional Trust Company, N.A. | 18,225,041 (+0.03%) | 1.06 |
BlackRock Advisors (UK) Limited | 13,016,223 (-0.00%) | 0.76 |
The Vanguard Group, Inc. | 11,749,260 (+0.64%) | 0.68 |
LDA Capital Limited | 10,000,000 (-0.07%) | 0.58 |
Irish Life Investment Managers Ltd. | 9,187,138 (+0.04%) | 0.53 |
FV Frankfurter Vermögen AG | 7,200,000 (-0.01%) | 0.42 |
BetaShares Capital Ltd. | 5,270,349 (+0.03%) | 0.31 |
BlackRock Investment Management (Australia) Ltd. | 3,077,154 (-0.02%) | 0.18 |
State Street Global Advisors Australia Ltd. | 3,065,409 (+0.00%) | 0.18 |
State Street Global Advisors (US) | 2,475,052 (+0.01%) | 0.14 |
First Trust Advisors L.P. | 2,086,443 (+0.02%) | 0.12 |
Nuveen LLC | 1,773,407 (+0.00%) | 0.10 |
Charles Schwab Investment Management, Inc. | 1,546,196 (+0.09%) | 0.09 |
California State Teachers Retirement System | 1,324,606 (+0.08%) | 0.08 |
Classic!
Or another option, it can detect when there’s "movement at the station" (so to speak) and it texts your wife / partner that she should open a bottle of wine, draw the shades and put on some Barry White.
Look at the note to the receivables. Realistically closer to $2-2.5m.
This post is a good reminder from the half-yearlies, which showed $4.8m revenue and receivables at $3.4m, so my money is on around $3m cash receipts for the 4C.
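The arithmetic behind an estimate like this can be sketched as: cash receipts ≈ revenue minus the growth in receivables over the period. Only the $4.8m revenue and $3.4m closing-receivables figures come from the post; the opening-receivables figure below is a hypothetical input chosen purely to show the mechanics:

```python
# Hedged back-of-envelope for a ~$3m cash-receipts estimate.
# revenue and closing_receivables are from the post; opening_receivables
# is a hypothetical assumption, not a reported figure.

revenue = 4.8               # A$m, half-year revenue (from the post)
closing_receivables = 3.4   # A$m, receivables at period end (from the post)
opening_receivables = 1.6   # A$m -- hypothetical assumption

# Cash actually collected = revenue minus the growth in uncollected invoices.
cash_receipts = revenue - (closing_receivables - opening_receivables)
print(f"estimated cash receipts: ${cash_receipts:.1f}m")  # → $3.0m
```

The point of the sketch is only the direction of the adjustment: the bigger the build-up in receivables, the further the 4C cash receipts will land below reported revenue.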
Not sure if this has been shared as yet?
For those not born in the Stone Age:
Hi @Bobbygrant
I am pretty sure everyone will have seen this interview before, but that does not diminish its value.
Those who have read my posts over a few years know that I am fond of doing retrospectives, as when I revisit past interviews, presentations and releases, hindsight comes into play and the significance of things I previously missed stands out like a neon sign.
In this interview, towards the end, the CEO Sean Hehir speaks about understanding customer requirements, and how, when power is not an issue, they can tailor their offering and go toe to toe with other solutions, taking advantage of AKIDA's flexibility to scale.
This statement put me in mind of Rob Lincourt of Dell Technologies saying in the Brainchip podcast that what they were doing with AKIDA was seeing just how far it could scale.
Which put me in mind of Anil Mankar's statement that AKIDA was being benchmarked against a GPU and the comparison was coming up favourable to AKIDA.
Which put me in mind of Tim Llewellyn from Nviso, who said that with Anil Mankar's advice they had tweaked AKIDA to run at up to 1,670 fps.
Which put me in mind of Peter van der Made's statements, going back to 2015, that AKIDA technology could do all the compute from data centres to the far edge.
Which finally brings me to the former CEO Mr Dinardo's comment, when hosing down shareholder excitement around Peter van der Made's statement that 100 AKD1000s could do all the compute for full autonomy, that Peter gets carried away and, while what he said is true, the focus of Brainchip was the Edge and Far Edge.
So, going back to CEO Sean Hehir's statement regarding where power is not an issue, I think I can now safely say that Brainchip has been widening its horizons, and that when we go looking for AKIDA technology our focus should no longer be simply on energy-constrained Edge applications.
Ubiquitous now means just that: anywhere you need AI semiconductors, from the cloud data centre to the box on a remote heavy-rail line in the Kimberley where only battery power exists, AKIDA will be found, as it is Essential AI.
Why? Because, just like the song, "AKIDA can do anything better than you" ever imagined was possible, with a hugely reduced power footprint, in real time, and with one-shot, few-shot, incremental learning.
It can do maths and regression analysis better than you. Yes it can, yes it can, yes it cannnnnn.
My opinion only DYOR
FF
AKIDA BALLISTA
A.I.-driven robots are cooking your dinner
Your next Friday night pizza may be made by robot hands.
fortune.com
“What they built is a robot that not only learns intellectually as a human would, but can pull from all five “senses.” The robot chef can touch, smell, see, hear, and even taste, thanks to a tasting bar that mimics the human tongue. The senses send feedback to the OS, creating a learning loop similar to a human’s, which logs all information for future use. It’s as if the robot is constantly in a culinary school class, but remembers every detail from the homework”
“Multiple senses all provide data in a way that we can make an intellectual decision,” Sunkara says. “That’s the A.I. process as well.”
Sounds familiar... or I could just be hungry!