alwaysgreen
Top 20
Intellisense looks to be a great partner. A decent-sized company (124 employees according to LinkedIn), so not a little start-up.
Innatera has been around for a while now. The last time it came up, which was reasonably recent, we dug into it again for the umpteenth time, and tucked away in their latest release was a line to the effect that they still had to iron out production issues. @Diogenese has often written that the main issue with analogue is the inability to produce the chips without errors (my words), and in my words again, a tiny error/defect in analogue will multiply when used for spiking neuromorphic computing. This production issue is why Peter van der Made and Anil Mankar went with digital: it is reliable, cheap and capable of mass production across a range of different processes and foundries.

There is also another future BRN competitor, Innatera. They appear to still be in the R&D phase.
Innatera’s ultra-efficient neuromorphic processors mimic the brain’s mechanisms for processing sensory data. Based on a proprietary analog-mixed signal computing architecture, Innatera’s processors leverage the computing capabilities of spiking neural networks to deliver ground-breaking cognition performance within a narrow power envelope. With an unprecedented combination of ultra-low power consumption and short response latency, these devices enable high-performance always-on pattern recognition capabilities in applications at the sensor-edge.
Ultra low power intelligence for the sensor edge. | Innatera
Our processors help sensors recognise patterns with more energy efficiency, lower latency and greater privacy.
www.innatera.com
Innatera tech is analog-mixed signal whereas Akida is digital.
The Texas Instruments AM62A, AM68A & AM69A vision processors must be using Akida.
At 1 min 18 sec the Texas Instruments rep mentions that over time it improves: "Where it starts & where it finishes only gets better".
So it must be learning in order to improve.
Yes, I read this the other day and the difference in TOPS pulled me up. It is of course possible that, as has been mentioned before, Texas Instruments used fewer nodes, just as Renesas bought two nodes of AKIDA IP because that was sufficient for their target market. At 32 TOPS it would be more than adequate for TI's target market, and of course it is cheaper, and it also leaves room for new improved models (i.e. 40 TOPS, 45 TOPS, 50 TOPS) for later upselling of customers.
The cheapest AM62A3 is 1 TOPS, similar to Akida-S, and sells for US$12 in 1,000+ volume.
The mid-range AM68A is up to 8 TOPS and sells for US$20 in 1,000+ volume.
The top-of-the-range AM69A is up to 32 TOPS and sells for US$150 in 1,000+ volume.
The only thing that doesn't fit is that their top-of-the-range AM69A is up to 32 TOPS instead of Akida-P's up to 50 TOPS. You would think they would offer the maximum TOPS for the top of the range.
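Just to put those price points side by side, a rough throughput-per-dollar calculation (using only the TOPS and price figures quoted above):

```python
# Rough TOPS-per-dollar comparison using the 1,000+ volume prices quoted above.
parts = {
    "AM62A3": {"tops": 1,  "price_usd": 12},
    "AM68A":  {"tops": 8,  "price_usd": 20},
    "AM69A":  {"tops": 32, "price_usd": 150},
}

for name, p in parts.items():
    ratio = p["tops"] / p["price_usd"]
    print(f"{name}: {p['tops']} TOPS at ${p['price_usd']} -> {ratio:.2f} TOPS per dollar")

# Output:
# AM62A3: 1 TOPS at $12 -> 0.08 TOPS per dollar
# AM68A: 8 TOPS at $20 -> 0.40 TOPS per dollar
# AM69A: 32 TOPS at $150 -> 0.21 TOPS per dollar
```

So on a pure TOPS-per-dollar basis the mid-range part is actually the sweet spot, which fits the upselling argument above.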
Texas Instruments AM62A, AM68A, and AM69A Arm Cortex-A53 or Cortex-A72 Vision processors come with 2 to 8 CPU cores and deep learning accelerators delivering from 1 TOPS to 32 TOPS for low-power vision and artificial intelligence (AI) processing in applications such as video doorbells, machine vision, and autonomous mobile robots. Three families and a total of 6 parts are available: AM62A3, AM62A3-Q1, AM62A7, and AM62A7-Q1 single to quad-core Cortex-A53 processors support one to two cameras at less than 2W in applications such as video doorbells and smart retail systems. Equipped with a 1 TOPS vision processor, the AM62A3 is the cheapest model of the family going for US$12 in 1,000-unit quantities. AM68A dual-core Cortex-A72 processor can handle one to eight cameras in applications like machine vision, with up to 8 TOPS of AI processing for video analytics. AM69A octa-core Cortex-A72 SoC supports up to 12 cameras and achieves up to 32 [...]
[CNX-Software] - Texas Instruments unveils AM62A, AM68A and AM69A Arm Cortex Vision processors and devkits
Good morning! Oh, I somehow overlooked the "Aomori"; I will be in Tokyo. Thanks for the correction.

There is still snow on the beach in Aomori, so the sakura are not blooming yet. Even next week may be too early.
This condensed article is from Sally Ward-Foxton at EE Times.
Embedded World 2023
Also on the STMicro booth were another couple of fun demos, including a washing machine that could tell how much laundry was in the machine in order to optimize the amount of water added. This system is sensorless; it is based on AI analysis of the current required to drive the motor, and predicted the weight of the 800g laundry load to within 30g. A robot vacuum cleaner equipped with a time-of-flight sensor also used AI to tell what type of floor surface it was cleaning, to allow it to select the appropriate cleaning method.
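For what it's worth, here is a minimal sketch of the general idea behind a sensorless load estimate like that (purely illustrative with made-up numbers; ST's actual demo and model are not described in detail in the article): features taken from the motor-current waveform are mapped to a load weight by a small regression model.

```python
# Minimal sketch of sensorless load estimation from motor current.
# Hypothetical data and features; ST's actual demo uses its own AI model.
import numpy as np

# Toy training data: [mean current (A), peak current (A), spin-up time (s)] -> load (g)
X = np.array([
    [1.10, 2.0, 3.1],
    [1.45, 2.6, 3.9],
    [1.80, 3.1, 4.6],
    [2.20, 3.8, 5.4],
])
y = np.array([200, 500, 800, 1100])  # laundry weight in grams

# Fit a simple linear model: weight ~ w.features + b
A = np.hstack([X, np.ones((len(X), 1))])   # add bias column
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Estimate the load for a new wash cycle
new_cycle = np.array([1.82, 3.15, 4.65, 1.0])  # features + bias term
print(f"Estimated load: {new_cycle @ coeffs:.0f} g")
```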
Renesas
Next stop was the Renesas booth, to see the Arm Cortex-M85 up and running in a not-yet-announced product (due to launch in June). This is the first time EE Times has seen AI running on a Cortex-M85 core, which was announced by Arm a year ago.
The M85 is a larger core than the Cortex-M55, but both are equipped with Helium—Arm’s vector extensions for the Cortex-M series—ideal for accelerating ML applications. Renesas’ figures had the M85 running inference 5.3× faster than a Renesas M7-based design, though the M85 was also running faster (480 MHz compared with 280).
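A quick back-of-the-envelope split of that 5.3x figure, since part of it comes simply from the higher clock (480 MHz vs 280 MHz, as quoted above):

```python
# Separate the clock-frequency gain from the architectural (Helium) gain,
# using only the figures quoted in the article.
speedup_total = 5.3          # M85 vs Renesas M7-based design
f_m85, f_m7 = 480e6, 280e6   # clock frequencies in Hz

clock_gain = f_m85 / f_m7                    # ~1.71x from frequency alone
per_clock_gain = speedup_total / clock_gain  # ~3.09x at equal clock

print(f"Clock contribution:      {clock_gain:.2f}x")
print(f"Per-clock (IPC) speedup: {per_clock_gain:.2f}x")
```

In other words, roughly a 3x gain per clock cycle is attributable to the core and its vector extensions, with the rest coming from the faster clock.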
Renesas’ demo had Plumerai’s person-detection model up and running in 77 ms per inference.
Renesas’ not-yet-announced Cortex-M85 device is the first we’ve seen running AI on the M85. Shown here running Plumerai people-detection model. (Source: EE Times/Sally Ward-Foxton)
Renesas field application engineer Stefan Ungerechts also gave EE Times an overview of the DRP-AI (dynamically reconfigurable processor for AI), Renesas’ IP for AI acceleration. A demo of the RZ/V2L device, equipped with a 0.5 TOPS @ FP16 (576 MACs) DRP-AI engine, was running tinyYOLOv2 in 27 ms at 500 mW (1 TOPS/W). This level of power efficiency means no heat sink is required, Ungerechts said.
The DRP-AI is, in fact, a two-part accelerator; the dynamically reconfigurable processor handles acceleration of non-linear functions, then there is a MAC array alongside it. Non-linear functions in this case might be image preprocessing functions or the pooling layers of a neural network model. While the DRP is reconfigurable hardware, it is not an FPGA, Ungerechts said. The combination is optimized for feed-forward networks like convolutional neural networks commonly found in computer vision, and Renesas’ software stack allows either the whole AI workload to be passed to the DRP-AI or use of a combination of the DRP-AI and the CPU.
Also available with a DRP-AI engine are the RZ/V2MA and RZ/V2M, which offer 0.7 TOPS @ FP16 (they run faster than the -V2L at 630 MHz compared to 400, and have higher memory bandwidth).
A next-generation version of the DRP-AI that supports INT8 for greater throughput, and is scaled up to 4K MACs, will be available next year, Ungerechts said.
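Those TOPS figures are roughly consistent with the MAC count and clock speeds if you assume the usual convention of 2 operations (one multiply plus one accumulate) per MAC per cycle; a quick sanity check based only on the numbers quoted above:

```python
# Sanity check: peak TOPS ~= 2 ops (multiply + accumulate) per MAC per cycle.
# MAC count and clocks are from the article; the 2-ops/MAC convention is assumed.
macs = 576

for name, clock_hz, quoted_tops in [
    ("RZ/V2L",      400e6, 0.5),
    ("RZ/V2MA/V2M", 630e6, 0.7),
]:
    tops = 2 * macs * clock_hz / 1e12
    print(f"{name}: 2 * {macs} MACs * {clock_hz/1e6:.0f} MHz = {tops:.2f} TOPS "
          f"(quoted: {quoted_tops})")

# Energy per inference for the RZ/V2L tinyYOLOv2 demo: 27 ms at 500 mW
print(f"Energy per inference: {0.027 * 0.5 * 1000:.1f} mJ")
```

That works out to about 0.46 and 0.73 TOPS respectively, and roughly 13.5 mJ per tinyYOLOv2 inference on the RZ/V2L.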
Squint
Squint, an AI company launched earlier this year, is taking on the challenge of explainable AI.
Squint CEO Kenneth Wenger told EE Times that the company wants to increase trust in AI decision making for applications like autonomous vehicles (AVs), healthcare and fintech. The company takes pre-production models and tests them for weaknesses—identifying in what situations they are more likely to make a mistake.
This information can be used to set up mitigating factors, which might include a human in the loop (perhaps flagging a medical image to a doctor) or triggering a second, more specialized model that has been specifically trained for that situation. Squint’s techniques can also be used to tackle “data drift”, helping maintain models over longer periods of time.
Embedl
Swedish AI company Embedl is working on retraining models to optimize them for specific hardware targets. The company has a Python SDK that fits into the training pipeline. Techniques include replacing operators with alternatives that may run more efficiently on the particular target hardware, as well as quantization-aware retraining. The company’s customers so far have included automotive OEMs and tier 1s, but they are expanding to Internet of Things (IoT) applications.
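For readers unfamiliar with the technique, here is a minimal, generic sketch of what quantization-aware retraining does (this is not Embedl's SDK, whose internals are not described in the article): the forward pass simulates the quantized arithmetic of the target hardware while the optimizer keeps updating full-precision weights.

```python
# Minimal sketch of quantization-aware training (generic idea only;
# Embedl's actual SDK and techniques are not public in this article).
import torch
import torch.nn as nn

def fake_quant(x, bits=8):
    """Simulate INT-N quantization in the forward pass, pass gradients straight through."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    # Straight-through estimator: forward uses q, backward treats it as identity.
    return x + (q - x).detach()

class QATLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(fake_quant(x), fake_quant(self.weight), self.bias)

# Drop-in replacement during retraining: the optimizer still updates FP32 weights,
# but the loss "sees" the quantized behaviour the target hardware will have.
model = nn.Sequential(QATLinear(16, 32), nn.ReLU(), QATLinear(32, 4))
out = model(torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 4])
```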
Embedl has also been a part of the VEDL-IoT project, an EU-funded project in collaboration with Bielefeld University that aims to develop an IoT platform, which distributes AI across a heterogeneous cluster.
Their demo showed managing AI workloads across different hardware: an Nvidia AGX Xavier GPU in a 5G basestation and an NXP i.MX8 application processor in a car. With sufficient 5G bandwidth available, “difficult” layers of the neural network could be computed remotely in the basestation, and the rest in the car, for optimum latency. Reduce the 5G bandwidth available, and more or all of the workload goes to the i.MX8. Embedl had optimized the same model for both hardware types.
The VEDL-IoT project demo shows splitting AI workloads across 5G infrastructure and embedded hardware. (Source: EE Times/Sally Ward-Foxton)
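A toy sketch of the kind of split decision being described (all layer timings, activation sizes and bandwidths below are hypothetical, purely to illustrate the trade-off): pick the layer at which to hand off to the basestation so that compute time plus transfer time over the 5G link is minimised.

```python
# Toy sketch of choosing a neural-network split point between an in-car processor
# and a 5G basestation. All numbers are hypothetical, for illustration only.
def best_split(layer_ms_edge, layer_ms_remote, activation_mb, bandwidth_mbps):
    """Return (split_index, latency_ms): layers [0, split) run on the edge device,
    the rest remotely. split == n means everything stays on the edge."""
    n = len(layer_ms_edge)
    best = (n, sum(layer_ms_edge))  # default: run everything locally
    for split in range(n):
        transfer_ms = activation_mb[split] * 8 / bandwidth_mbps * 1000
        latency = sum(layer_ms_edge[:split]) + transfer_ms + sum(layer_ms_remote[split:])
        if latency < best[1]:
            best = (split, latency)
    return best

# Hypothetical per-layer times (ms) on an i.MX8-class device vs a basestation GPU,
# and the size (MB) of the activation that would have to cross the 5G link.
edge   = [12, 30, 45, 45, 20]
remote = [ 2,  5,  8,  8,  3]
acts   = [3.0, 1.5, 0.8, 0.4, 0.1]

for bw in (200, 20, 2):  # Mbps
    split, lat = best_split(edge, remote, acts, bw)
    print(f"{bw:>4} Mbps -> run {split} layer(s) locally, total {lat:.0f} ms")
```

With plenty of bandwidth the "difficult" later layers get offloaded; as bandwidth drops, the whole workload stays on the embedded processor, which is exactly the behaviour the demo showed.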
Silicon Labs
Silicon Labs had several xG24 dev kits running AI applications. One had a simple Sparkfun camera with the xG24 running people counting, and calculating the direction and speed of movement.
A separate wake word demo ran in 50 ms on the xG24’s accelerator, and a third board was running a gesture recognition algorithm.
BrainChip
BrainChip had demos running on a number of partner booths, including Arm and Edge Impulse. Edge Impulse’s demo showed the company’s FOMO (faster objects, more objects) object detection network running on a BrainChip Akida AKD1000 at under 1 mW.
That looks big
One TOPS (tera operations per second)? That sent me to Google. Of course I know Akida is a TOPS champion. Right, per-operation smarts: less big math, and fast, concise, material events. No fake events (BrainChip).
My opinion only DYOR
FF
AKIDA BALLISTA
Could be because our clear profit margin on our IP is approximately 97 percent and we don’t have to worry about the manufacturing.

I received a Brainchip March 2023 Newsletter today. I tried to read it from the point of view of a potential manufacturer.
My opinion is that actual product releases are being held back, partly because the tech is hard to understand and a good example of a product is not out there for manufacturers to see.
I understand why we want to go down the "I.P. license" path, but what if we design a "killer" product and get someone to make it for us? Then we release and sell it for the world to see.
Who better than ourselves to do it to get the ball rolling? Sean H. could make clear the reason why we have taken this step to his contacts and that it is a once-only thing i.e. we are not going into competition.
On another note, I checked our current MC in the good ol’ Yankee dollar and we are a pissant US$525,000,000. Sooo… looks great for a run, probably sooner rather than later.
I meant to include that another reason producers may be holding back is that our ongoing development causes them to wait, because they think "we will wait until things are sorted because someone could leapfrog our product". They need reassurance that we have been bold enough to produce something right now and put our money where our mouth is, so they can see it can be done.
You greedy F7%ker!

Yep, too true… BUT, I for one won’t be selling my soul/barns to those takeover parasites under 40 AU dollars. Vlad
Not happening. Insiders own >>50% and have not indicated any desire to sell. And their dreams and goals are nowhere near being accomplished. Not even close.

I know we don’t like talking takeovers, but 1000% someone has to be looking at this price.