Good morning! Oh, I somehow overlooked the "Aomori" part. There is still snow on the beach in Aomori, so the sakura are not blooming yet. Even next week may be too early.
This condensed article is from Sally Ward-Foxton of EE Times.
Embedded World 2023
Also on the STMicro booth were a couple more fun demos, including a washing machine that could tell how much laundry was in the machine in order to optimize the amount of water added. The system is sensorless: it is based on AI analysis of the current required to drive the motor, and it predicted the weight of the 800-g laundry load to within 30 g. A robot vacuum cleaner equipped with a time-of-flight sensor also used AI to tell what type of floor surface it was cleaning, allowing it to select the appropriate cleaning method.
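For a rough feel of how a sensorless load estimate like this can work, here is a minimal sketch: extract a few summary features from the motor-current waveform during a fixed spin-up and fit a small regression model to traces with known weights. This is purely illustrative and assumes nothing about ST's actual implementation; the features, model choice and synthetic data are my own.

import numpy as np
from sklearn.linear_model import Ridge

def current_features(i_samples):
    # Simple summary statistics of the drive-current waveform during a fixed spin-up profile.
    i = np.asarray(i_samples, dtype=float)
    return [i.mean(), i.std(), i.max(), i.sum()]

# Synthetic stand-in data: heavier loads draw more current (purely illustrative).
rng = np.random.default_rng(0)
weights_g = rng.uniform(200.0, 2000.0, size=50)                  # known laundry weights for training
traces = [w / 1000.0 * (1 + 0.1 * rng.standard_normal(500)) for w in weights_g]

X = np.array([current_features(t) for t in traces])
model = Ridge(alpha=1.0).fit(X, weights_g)                       # tiny regression, no load-cell sensor needed

new_trace = 0.8 * (1 + 0.1 * rng.standard_normal(500))           # unseen trace for a ~800 g load
print(f"estimated load: {model.predict([current_features(new_trace)])[0]:.0f} g")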
Renesas
Next stop was the Renesas booth, to see the Arm Cortex-M85 up and running in a not-yet-announced product (due to launch in June). This is the first time EE Times has seen AI running on a Cortex-M85 core, which was announced by Arm a year ago.
The M85 is a larger core than the Cortex-M55, but both are equipped with Helium—Arm’s vector extensions for the Cortex-M series—ideal for accelerating ML applications. Renesas’ figures had the M85 running inference 5.3× faster than a Renesas M7-based design, though the M85 was also running faster (480 MHz compared with 280).
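As a quick back-of-envelope check (my arithmetic, not a Renesas figure), part of that 5.3× comes simply from the higher clock; normalizing the clock out suggests roughly a 3× per-clock gain, presumably largely down to Helium:

speedup_total = 5.3            # Renesas figure: M85 design vs. M7-based design
clock_ratio = 480 / 280        # 480 MHz (M85) vs. 280 MHz (M7)
per_clock_gain = speedup_total / clock_ratio
print(f"clock ratio ~{clock_ratio:.2f}x, per-clock gain ~{per_clock_gain:.1f}x")   # ~1.71x and ~3.1x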
Renesas’ demo had Plumerai’s person-detection model up and running in 77 ms per inference.
Renesas’ not-yet-announced Cortex-M85 device is the first we’ve seen running AI on the M85. Shown here running Plumerai’s person-detection model. (Source: EE Times/Sally Ward-Foxton)
Renesas field application engineer Stefan Ungerechts also gave EE Times an overview of the DRP-AI (dynamically reconfigurable processor for AI), Renesas’ IP for AI acceleration. A demo of the RZ/V2L device, equipped with a 0.5 TOPS @ FP16 (576 MACs) DRP-AI engine, was running tinyYOLOv2 in 27 ms at 500 mW (1 TOPS/W). This level of power efficiency means no heat sink is required, Ungerechts said.
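Those headline numbers hang together on a back-of-envelope basis. Assuming the usual 2 ops per MAC and the RZ/V2L's 400 MHz DRP-AI clock (mentioned in the RZ/V2MA/V2M comparison below), the quoted 0.5 TOPS and ~1 TOPS/W drop out directly; this is my own sanity check, not Renesas' math:

macs = 576
clock_hz = 400e6                      # RZ/V2L DRP-AI clock
ops_per_s = macs * 2 * clock_hz       # one MAC = multiply + add = 2 ops
tops = ops_per_s / 1e12               # ~0.46 TOPS, quoted as "0.5 TOPS @ FP16"
power_w = 0.5                         # 500 mW measured while running tinyYOLOv2
print(f"{tops:.2f} TOPS, {tops / power_w:.1f} TOPS/W")   # ~0.46 TOPS, ~0.9 TOPS/W (about 1 TOPS/W)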
The DRP-AI is, in fact, a two-part accelerator: the dynamically reconfigurable processor handles acceleration of non-linear functions, and there is a MAC array alongside it. Non-linear functions in this case might be image preprocessing functions or the pooling layers of a neural network. While the DRP is reconfigurable hardware, it is not an FPGA, Ungerechts said. The combination is optimized for feed-forward networks such as the convolutional neural networks commonly found in computer vision, and Renesas’ software stack allows either the whole AI workload to be passed to the DRP-AI or a combination of the DRP-AI and the CPU to be used.
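Conceptually, that last point works like any heterogeneous runtime: operators the accelerator supports are dispatched to the DRP-AI, and everything else falls back to the CPU. Here is a minimal, self-contained sketch of the idea; all names and the supported-operator list are hypothetical, and this is not Renesas' actual software stack:

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    op_type: str

class Backend:
    def __init__(self, label):
        self.label = label
    def execute(self, layer, x):
        print(f"{layer.name} ({layer.op_type}) -> {self.label}")
        return x                      # placeholder: no real compute in this sketch

SUPPORTED_ON_DRP_AI = {"conv2d", "relu", "max_pool"}   # assumed operator coverage, for illustration

def run_model(layers, drp_ai, cpu):
    # Dispatch each layer to the accelerator when possible, otherwise fall back to the CPU.
    x = None
    for layer in layers:
        backend = drp_ai if layer.op_type in SUPPORTED_ON_DRP_AI else cpu
        x = backend.execute(layer, x)
    return x

run_model([Layer("conv1", "conv2d"), Layer("act1", "relu"),
           Layer("pool1", "max_pool"), Layer("postproc_nms", "nms")],
          drp_ai=Backend("DRP-AI"), cpu=Backend("CPU"))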
Also available with a DRP-AI engine are the RZ/V2MA and RZ/V2M, which offer 0.7 TOPS @ FP16 (they run faster than the RZ/V2L, at 630 MHz compared with 400 MHz, and have higher memory bandwidth).
A next-generation version of the DRP-AI that supports INT8 for greater throughput, and is scaled up to 4K MACs, will be available next year, Ungerechts said.
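The same 2-ops-per-MAC arithmetic explains the RZ/V2MA/V2M figure and gives a feel for the scaled-up part; note that the next-generation number below is my extrapolation under an assumed clock, not a Renesas claim:

macs, clock_hz = 576, 630e6
print(f"RZ/V2MA/V2M: {macs * 2 * clock_hz / 1e12:.2f} TOPS")    # ~0.73 TOPS, quoted as 0.7 TOPS @ FP16

# Next generation: ~4K MACs (taken here as 4,096) with INT8 support.
# Assuming, say, the same 630 MHz clock (assumed, not announced):
print(f"next gen: ~{4096 * 2 * 630e6 / 1e12:.1f} TOPS")         # ~5.2 TOPS, before any INT8-related gains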
Squint
Squint, an AI company launched earlier this year, is taking on the challenge of explainable AI.
Squint CEO Kenneth Wenger told EE Times that the company wants to increase trust in AI decision making for applications like autonomous vehicles (AVs), healthcare and fintech. The company takes pre-production models and tests them for weaknesses—identifying in what situations they are more likely to make a mistake.
This information can be used to set up mitigating factors, which might include a human in the loop (perhaps flagging a medical image to a doctor) or triggering a second, more specialized model that has been specifically trained for that situation. Squint’s techniques can also be used to tackle “data drift,” helping maintain models over longer periods of time.
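As a toy illustration of that kind of mitigation (a generic sketch of my own, not Squint's method), a deployment can route low-confidence predictions to a specialist fallback model or to a human reviewer:

def classify_with_mitigation(image, primary, specialist, review_queue, threshold=0.9):
    # Route uncertain predictions to a specialist model, and failing that, to a human.
    label, confidence = primary(image)
    if confidence >= threshold:
        return label                         # trust the primary model
    label, confidence = specialist(image)    # narrower model trained for the known-hard cases
    if confidence >= threshold:
        return label
    review_queue.append(image)               # human in the loop, e.g. flag the scan to a doctor
    return "needs_review"

# Toy usage with stand-in models:
primary = lambda img: ("cat", 0.62)
specialist = lambda img: ("lynx", 0.95)
queue = []
print(classify_with_mitigation("scan_001.png", primary, specialist, queue))   # -> lynx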
Embedl
Swedish AI company Embedl is working on retraining models to optimize them for specific hardware targets. The company has a Python SDK that fits into the training pipeline. Techniques include replacing operators with alternatives that may run more efficiently on the particular target hardware, as well as quantization-aware retraining. The company’s customers so far have included automotive OEMs and tier 1s, but it is expanding into Internet of Things (IoT) applications.
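Quantization-aware retraining itself is a standard technique; a minimal PyTorch-flavoured sketch (generic eager-mode QAT with a throwaway model and random data, not Embedl's SDK) looks roughly like this:

import torch
import torch.nn as nn

model = nn.Sequential(
    torch.ao.quantization.QuantStub(),
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 30 * 30, 10),
    torch.ao.quantization.DeQuantStub(),
)
model.train()
model.qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm")
torch.ao.quantization.prepare_qat(model, inplace=True)      # insert fake-quantization observers

# Fine-tune briefly so the weights adapt to quantized arithmetic (random data as a stand-in).
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(3):
    x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

int8_model = torch.ao.quantization.convert(model.eval())    # produce the INT8 model for deployment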
Embedl has also been a part of the VEDL-IoT project, an EU-funded project in collaboration with Bielefeld University that aims to develop an IoT platform, which distributes AI across a heterogeneous cluster.
Their demo showed managing AI workloads across different hardware: an Nvidia AGX Xavier GPU in a 5G basestation and an NXP i.MX8 application processor in a car. With sufficient 5G bandwidth available, “difficult” layers of the neural network could be computed remotely in the basestation, and the rest in the car, for optimum latency. Reduce the 5G bandwidth available, and more or all of the workload goes to the i.MX8. Embedl had optimized the same model for both hardware types.
The VEDL-IoT project demo shows splitting AI workloads across 5G infrastructure and embedded hardware. (Source: EE Times/Sally Ward-Foxton)
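The split-point decision can be framed as choosing the layer boundary that minimizes end-to-end latency for the uplink bandwidth currently available; when the link degrades, the best choice collapses to running everything locally. The sketch below is only illustrative (per-layer timings, activation sizes and function names are made up, and this is not the VEDL-IoT implementation):

def best_split(layer_ms_device, layer_ms_remote, activation_bytes, bandwidth_bps, rtt_s=0.01):
    # Return (k, latency): layers [0, k) run on the device, layers [k, n) run remotely.
    # k == n means everything stays on the device (e.g. when bandwidth drops).
    n = len(layer_ms_device)
    best_k, best_latency = n, sum(layer_ms_device) / 1000.0         # all-local baseline
    for k in range(n):
        transfer = rtt_s + activation_bytes[k] * 8 / bandwidth_bps  # ship the activation over 5G
        latency = (sum(layer_ms_device[:k]) + sum(layer_ms_remote[k:])) / 1000.0 + transfer
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency

# Toy example: 6 layers, i.MX8-like local times (ms), Xavier-like remote times (ms),
# and the activation size (bytes) that would cross the link at each candidate split point.
dev   = [4, 6, 12, 20, 18, 8]
rem   = [1, 1, 2, 3, 3, 1]
act_b = [600_000, 300_000, 150_000, 80_000, 40_000, 20_000]

for bw in (200e6, 5e6):                                             # ample 5G uplink vs. a congested one
    k, t = best_split(dev, rem, act_b, bw)
    print(f"{bw / 1e6:.0f} Mbit/s -> split at layer {k}, ~{t * 1000:.0f} ms")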
Silicon Labs
Silicon Labs had several xG24 dev kits running AI applications. One paired a simple SparkFun camera with the xG24 to count people and calculate their direction and speed of movement.
A separate wake word demo ran in 50 ms on the xG24’s accelerator, and a third board was running a gesture recognition algorithm.
BrainChip
BrainChip had demos running on a number of partner booths, including Arm and Edge Impulse. Edge Impulse’s demo showed the company’s FOMO (faster objects, more objects) object detection network running on a BrainChip Akida AKD1000 at under 1 mW.
That looks big
One TOPS (operations per second, millisecond, hydro second)? That sent me to Google. Of course I know Akida is a TOPS champion: per-operation smarts, less big math, and fast, concise material events. No fake events (BrainChip).

Yes, I read this the other day and the difference in TOPS pulled me up. It is of course possible that, as has been mentioned before, just as Renesas bought two nodes of AKIDA IP because that was sufficient for their target market, Texas Instruments used fewer nodes, since 32 TOPS would be more than adequate for their target market, is of course cheaper, and also leaves room for new, improved models (i.e. 40 TOPS, 45 TOPS, 50 TOPS) for later upselling of customers.
My opinion only DYOR
FF
AKIDA BALLISTA
I received a BrainChip March 2023 newsletter today. I tried to read it from the point of view of a potential manufacturer.
My opinion is that actual product releases are being held back, partly because the tech is hard to understand and a good example of a product is not out there for manufacturers to see.
I understand why we want to go down the "I.P. license" path, but what if we design a "killer" product and get someone to make it for us? Then we release and sell it for the world to see.
Who better than ourselves to do it, to get the ball rolling? Sean H. could make clear to his contacts why we have taken this step, and that it is a once-only thing, i.e. we are not going into competition.
Could be because our clear profit margin on our IP is approximately 97 percent, and we don’t have to worry about the manufacturing.
On another note, I checked our current market cap in good ol’ Yankee dollars and we are a pissant US$525,000,000. Sooo… looks great for a run, probably sooner rather than later.
I meant to include that another reason producers may be holding back is that our ongoing development causes them to wait, because they think "we will wait until things are sorted, because someone could leapfrog our product." They need reassurance that we have been bold enough to produce something right now and put our money where our mouth is, to show it can be done.
You greedy F7%ker!

Yep, too true… BUT I, for one, won’t be selling my soul/barns to those takeover parasites under 40 AU dollars. Vlad
Not happening. Insiders own >>50% and have not indicated any desire to sell. And their dreams and goals are nowhere near being accomplished. Not even close.

I know we don't like talking takeovers, but 1000% someone has to be looking at this price.
100% Arabica in Gion is the place in Tokyo.
thanks for the correction
We only have 7 trading days to go to the end of this quarterly reporting period. With no new IP deals, it's almost a certainty that our next quarterly will be poor. I have no idea how shorters think but it seems safe to say that they are hoping to lower the share price further post release of the quarterly.
Hopefully we can announce a few IP deals prior to the quarterly release to burn them.
Let's hope it's a goody!! … Reported by the end of April… And then the AGM!!! 23rd May (8 weeks away!) Hoping we get heaps of positive news before then!!!! EDIT: we are getting heaps of positive news, absolutely we are!!… just need something to get the SP heading…
True, but if there is a will there is a way, and the likely suitors are some of the least poor.