BRN Discussion Ongoing

Steve10

Regular
The Texas Instruments AM62A, AM68A & AM69A vision processors must be using Akida.

At 1 min 18 sec the Texas Instruments rep mentions that over time it improves: "Where it starts & where it finishes only gets better".

So it must be learning in order to improve.

 
  • Like
  • Fire
  • Thinking
Reactions: 42 users

alwaysgreen

Top 20
Intellisense looks to be a great partner. Decent company (124 employees according to LinkedIn), so not a little start-up.

 
  • Like
  • Fire
  • Love
Reactions: 36 users
There is also another future BRN competitor, Innatera. They appear to still be in the R&D phase.

Innatera’s ultra-efficient neuromorphic processors mimic the brain’s mechanisms for processing sensory data. Based on a proprietary analog-mixed signal computing architecture, Innatera’s processors leverage the computing capabilities of spiking neural networks to deliver ground-breaking cognition performance within a narrow power envelope. With an unprecedented combination of ultra-low power consumption and short response latency, these devices enable high-performance always-on pattern recognition capabilities in applications at the sensor-edge.


Innatera tech is analog-mixed signal whereas Akida is digital.
Innatera has been around for a while now. The last time it came up, which was reasonably recent, we dug into it again for the umpteenth time, and tucked away in their latest release was a line to the effect that they still had to iron out production issues. @Diogenese has often written that the main issue with analogue is the inability to produce the chips without errors (my words), and in my words again, a tiny error/defect in analogue will multiply when being used for spiking neuromorphic computing. This production issue is why Peter van der Made and Anil Mankar went with digital, so it is reliable, cheap and capable of mass production across a range of different processes and foundries.

I suspect that when they work out how to do it, and many have tried before them, it will be a more expensive process and they will be at a different price point. A lot of water has to flow under their bridge before they are actually a competitor to low-cost AKIDA at the mass end of the market.
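To put numbers on that compounding point, here is a toy illustration only (my own sketch; the error rates and depths are invented, not Innatera's or BrainChip's figures). A small per-layer analogue gain error grows quickly with network depth, whereas a digital pipeline reproduces the same result every time:

# Toy sketch: how a small per-layer analogue gain error compounds
# through a deep spiking pipeline. All numbers are illustrative.
def compounded_error(per_layer_error: float, layers: int) -> float:
    """Worst-case multiplicative deviation after `layers` stages."""
    return (1 + per_layer_error) ** layers - 1

for eps in (0.001, 0.005, 0.01):   # 0.1%, 0.5%, 1% device mismatch
    for depth in (10, 50):         # shallow vs deep network
        print(f"{eps:.1%} error x {depth} layers -> "
              f"{compounded_error(eps, depth):.1%} total deviation")

Even a 0.5% per-layer mismatch compounds to roughly 28% over 50 layers, which is the kind of multiplication being described.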

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 29 users

Steve10

Regular
The Texas Instruments AM62A, AM68A & AM69A vision processors must be using Akida.

At 1 min 18 sec the Texas Instruments rep mentions that over time it improves: "Where it starts & where it finishes only gets better".

So it must be learning in order to improve.



The cheapest AM62A3 is 1 TOPS, similar to Akida-S, & sells for US$12 in 1,000+ volume.

The top of the range AM69A is up to 32 TOPS & sells for US$150 in 1,000+ volume.

The mid-range AM68A is up to 8 TOPS & sells for US$20 in 1,000+ volume.

Only thing that doesn't fit is their top of the range AM69A is up to 32 TOPS instead of Akida-P's up to 50 TOPS. You would think they would offer the maximum TOPS for the top of the range.
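As a quick back-of-envelope on those prices (my own arithmetic, using the quoted 1,000+ volume figures, not anything from TI):

# TOPS per dollar for the three TI parts, using the volume
# prices quoted above.
parts = {
    "AM62A3": (1, 12),     # (TOPS, US$ @ 1,000+ volume)
    "AM68A": (8, 20),
    "AM69A": (32, 150),
}
for name, (tops, usd) in parts.items():
    print(f"{name}: {tops / usd:.3f} TOPS per US$")

On those numbers the mid-range AM68A is the best value per TOPS (0.40), with the flagship AM69A at 0.21, i.e. the top part carries the usual flagship premium.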

Texas Instruments AM62A, AM68A, and AM69A Arm Cortex-A53 or Cortex-A72 vision processors come with 2 to 8 CPU cores and deep learning accelerators delivering from 1 TOPS to 32 TOPS for low-power vision and artificial intelligence (AI) processing in applications such as video doorbells, machine vision, and autonomous mobile robots. Three families and a total of 6 parts are available: AM62A3, AM62A3-Q1, AM62A7, and AM62A7-Q1 single to quad-core Cortex-A53 processors support one to two cameras at less than 2W in applications such as video doorbells and smart retail systems. Equipped with a 1 TOPS vision processor, the AM62A3 is the cheapest model of the family going for US$12 in 1,000-unit quantities. AM68A dual-core Cortex-A72 processor can handle one to eight cameras in applications like machine vision, with up to 8 TOPS of AI processing for video analytics. AM69A octa-core Cortex-A72 SoC supports up to 12 cameras and achieves up to 32 [...]

 
  • Like
  • Thinking
  • Love
Reactions: 26 users

Murphy

Life is not a dress rehearsal!
The Texas Instruments AM62A, AM68A & AM69A vision processors must be using Akida.

At 1 min 18 sec the Texas Instruments rep mentions that over time it improves: "Where it starts & where it finishes only gets better".

So it must be learning in order to improve.



Sounds like he is describing AKIDA to a T, with the on-chip learning AND ultra-low power. Thanks again Steve.👍


If you don't have dreams, you can't have dreams come true!
 
  • Like
  • Fire
  • Love
Reactions: 19 users

The cheapest AM62A3 is 1 TOPS, similar to Akida-S, & sells for US$12 in 1,000+ volume.

The top of the range AM69A is up to 32 TOPS & sells for US$150 in 1,000+ volume.

The mid-range AM68A is up to 8 TOPS & sells for US$20 in 1,000+ volume.

Only thing that doesn't fit is their top of the range AM69A is up to 32 TOPS instead of Akida-P's up to 50 TOPS. You would think they would offer the maximum TOPS for the top of the range.

Texas Instruments AM62A, AM68A, and AM69A Arm Cortex-A53 or Cortex-A72 vision processors come with 2 to 8 CPU cores and deep learning accelerators delivering from 1 TOPS to 32 TOPS for low-power vision and artificial intelligence (AI) processing in applications such as video doorbells, machine vision, and autonomous mobile robots. Three families and a total of 6 parts are available: AM62A3, AM62A3-Q1, AM62A7, and AM62A7-Q1 single to quad-core Cortex-A53 processors support one to two cameras at less than 2W in applications such as video doorbells and smart retail systems. Equipped with a 1 TOPS vision processor, the AM62A3 is the cheapest model of the family going for US$12 in 1,000-unit quantities. AM68A dual-core Cortex-A72 processor can handle one to eight cameras in applications like machine vision, with up to 8 TOPS of AI processing for video analytics. AM69A octa-core Cortex-A72 SoC supports up to 12 cameras and achieves up to 32 [...]

Yes, I read this the other day and the difference in TOPS pulled me up. It is of course possible that, as has been mentioned before with Renesas, which bought two nodes of AKIDA IP because that was sufficient for their target market, Texas Instruments used fewer nodes: 32 TOPS would be more than adequate for their target market, is cheaper, and also leaves room for new improved models, i.e. 40 TOPS, 45 TOPS, 50 TOPS, for later upselling of customers.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Thinking
Reactions: 18 users

stuart888

Regular
Fidelity just sent me this!

 
  • Like
  • Fire
  • Love
Reactions: 69 users

Deleted member 2799

Guest
There is still snow on the beach in Aomori, so the sakura are not blooming yet. Even next week may be too early.
Good morning! Oh, I somehow overlooked the "Aomori" 😵‍💫 I will be in Tokyo 🧐😂 Thanks for the correction
 
  • Like
Reactions: 2 users

stuart888

Regular
I am not a LinkedIn person, but this author is a neural nut. They point out BrainChip's smarts, and some of the big groups behind the SNN thrust.

https://www.linkedin.com/pulse/mind-like-machines-exploring-fascinating-world-computing-jatin-khera


Research Institutes working in the domain of Neuromorphic Computing

There are many research institutes around the world that are working in the field of neuromorphic computing. Some examples include:

  • Neural Information Processing Systems (NIPS) Foundation: The NIPS Foundation is a nonprofit organization that promotes the development of artificial intelligence and machine learning technologies, including neuromorphic computing. The organization hosts an annual conference on neural information processing systems, which is a major forum for the presentation of research in the field.
  • Institute of Neuroscience and Medicine (INM-4): The INM-4 is a research institute based in Germany that is focused on developing new technologies and applications for neuromorphic computing. The institute is part of the German Aerospace Center (DLR), and is involved in a wide range of research activities related to neuromorphic computing.
  • Neuromorphic Engineering Research Group (NERG): The NERG is a research group based at the University of Edinburgh that is focused on developing new technologies and applications for neuromorphic computing. The group is involved in a wide range of research activities related to neuromorphic computing, including the development of new chip architectures and the exploration of new materials and fabrication techniques.
  • Institute of Neuromorphic Computing (INM-9): The INM-9 is a research institute based in Germany that is focused on developing new technologies and applications for neuromorphic computing. The institute is part of the German Aerospace Center (DLR), and is involved in a wide range of research activities related to neuromorphic computing, including the development of new chip architectures and the exploration of new materials and fabrication techniques.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Steve10

Regular
This condensed article is from Sally Ward-Foxton at EE Times.


Embedded World 2023




Also on the STMicro booth were another couple of fun demos, including a washing machine that could tell how much laundry was in the machine in order to optimize the amount of water added. This system is sensorless; it is based on AI analysis of the current required to drive the motor, and predicted the weight of the 800g laundry load to within 30g. A robot vacuum cleaner equipped with a time-of-flight sensor also used AI to tell what type of floor surface it was cleaning, to allow it to select the appropriate cleaning method.

Renesas

Next stop was the Renesas booth, to see the Arm Cortex-M85 up and running in a not-yet-announced product (due to launch in June). This is the first time EE Times has seen AI running on a Cortex-M85 core, which was announced by Arm a year ago.


The M85 is a larger core than the Cortex-M55, but both are equipped with Helium—Arm’s vector extensions for the Cortex-M series—ideal for accelerating ML applications. Renesas’ figures had the M85 running inference 5.3× faster than a Renesas M7-based design, though the M85 was also running at a faster clock (480 MHz compared with 280 MHz).
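A quick sanity check on those numbers (my arithmetic, not the article's): part of that 5.3× comes straight from the clock difference, so the per-clock gain is closer to 3×.

# Split the quoted 5.3x speedup into clock-frequency gain vs
# architectural (core + Helium) gain. My arithmetic, not Renesas'.
speedup_total = 5.3
clock_ratio = 480 / 280                  # M85 clock vs M7 clock
arch_gain = speedup_total / clock_ratio
print(f"clock contributes {clock_ratio:.2f}x, "
      f"architecture contributes ~{arch_gain:.1f}x")
# -> roughly 1.71x from clock, ~3.1x from the core and vector extensions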

Renesas’ demo had Plumerai’s person-detection model up and running in 77 ms per inference.

[Photo: Renesas’ not-yet-announced Cortex-M85 device, the first EE Times has seen running AI on the M85, shown running Plumerai’s people-detection model. (Source: EE Times/Sally Ward-Foxton)]
Renesas field application engineer Stefan Ungerechts also gave EE Times an overview of the DRP-AI (dynamically reconfigurable processor for AI), Renesas’ IP for AI acceleration. A demo of the RZ/V2L device, equipped with a 0.5 TOPS @ FP16 (576 MACs) DRP-AI engine, was running tinyYOLOv2 in 27 ms at 500 mW (1 TOPS/W). This level of power efficiency means no heat sink is required, Ungerechts said.

The DRP-AI is, in fact, a two-part accelerator; the dynamically reconfigurable processor handles acceleration of non-linear functions, then there is a MAC array alongside it. Non-linear functions in this case might be image preprocessing functions or the pooling layers of a neural network. While the DRP is reconfigurable hardware, it is not an FPGA, Ungerechts said. The combination is optimized for feed-forward networks like the convolutional neural networks commonly found in computer vision, and Renesas’ software stack allows either the whole AI workload to be passed to the DRP-AI or use of a combination of the DRP-AI and the CPU.

Also available with a DRP-AI engine are the RZ/V2MA and RZ/V2M, which offer 0.7 TOPS @ FP16 (they run faster than the -V2L at 630 MHz compared to 400 MHz, and have higher memory bandwidth).
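Those TOPS figures line up with the MAC count and clocks; a back-of-envelope check (mine, not Renesas' official derivation, assuming one MAC counts as two operations):

# Back-of-envelope: 576 MACs x 2 ops x clock, using the clocks quoted above.
macs = 576
for name, clock_ghz in (("RZ/V2L", 0.40), ("RZ/V2MA & RZ/V2M", 0.63)):
    tops = macs * 2 * clock_ghz / 1000    # GOPS -> TOPS
    print(f"{name}: ~{tops:.2f} TOPS @ FP16")
# -> ~0.46 TOPS (quoted 0.5) and ~0.73 TOPS (quoted 0.7)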

A next-generation version of the DRP-AI that supports INT8 for greater throughput, and is scaled up to 4K MACs, will be available next year, Ungerechts said.

Squint

Squint, an AI company launched earlier this year, is taking on the challenge of explainable AI.

Squint CEO Kenneth Wenger told EE Times that the company wants to increase trust in AI decision making for applications like autonomous vehicles (AVs), healthcare and fintech. The company takes pre-production models and tests them for weaknesses—identifying in what situations they are more likely to make a mistake.

This information can be used to set up mitigating factors, which might include human-in-the-loop—perhaps flagging a medical image to a doctor—or triggering a second, more specialized model that has been specifically trained for that situation. Squint’s techniques can also be used to tackle “data drift”—for maintaining models over longer periods of time.

Embedl

Swedish AI company Embedl is working on retraining models to optimize them for specific hardware targets. The company has a Python SDK that fits into the training pipeline. Techniques include replacing operators with alternatives that may run more efficiently on the particular target hardware, as well as quantization-aware retraining. The company’s customers so far have included automotive OEMs and tier 1s, but they are expanding to Internet of Things (IoT) applications.

Embedl has also been a part of the VEDL-IoT project, an EU-funded project in collaboration with Bielefeld University that aims to develop an IoT platform, which distributes AI across a heterogeneous cluster.

Their demo showed managing AI workloads across different hardware: an Nvidia AGX Xavier GPU in a 5G basestation and an NXP i.MX8 application processor in a car. With sufficient 5G bandwidth available, “difficult” layers of the neural network could be computed remotely in the basestation, and the rest in the car, for optimum latency. Reduce the 5G bandwidth available, and more or all of the workload goes to the i.MX8. Embedl had optimized the same model for both hardware types.

[Photo: The VEDL-IoT project demo, showing AI workloads split across 5G infrastructure and embedded hardware. (Source: EE Times/Sally Ward-Foxton)]
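For the curious, a minimal sketch of that bandwidth-driven split (my own illustration; the function, layer costs and threshold are invented, not from the VEDL-IoT project):

# Toy scheduler: offload the heaviest ("difficult") layers to the 5G
# basestation while bandwidth allows; run the rest on the car's i.MX8.
def place_layers(layer_costs, bandwidth_mbps, min_mbps_per_layer=50):
    budget = bandwidth_mbps // min_mbps_per_layer       # layers we can offload
    ranked = sorted(range(len(layer_costs)),
                    key=lambda i: layer_costs[i], reverse=True)
    remote = set(ranked[:budget])
    return ["basestation" if i in remote else "i.MX8"
            for i in range(len(layer_costs))]

costs = [5, 40, 35, 8, 3]                        # relative compute per layer
print(place_layers(costs, bandwidth_mbps=120))   # good 5G: 2 layers offloaded
print(place_layers(costs, bandwidth_mbps=0))     # no 5G: all on the i.MX8

With plenty of bandwidth the heaviest layers go to the basestation; with none, everything runs on the car, matching the behaviour described above.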

Silicon Labs

Silicon Labs had several xG24 dev kits running AI applications. One had a simple Sparkfun camera with the xG24 running people counting, and calculating the direction and speed of movement.

A separate wake word demo ran in 50 ms on the xG24’s accelerator, and a third board was running a gesture recognition algorithm.

BrainChip

BrainChip had demos running on a number of partner booths, including Arm and Edge Impulse. Edge Impulse’s demo showed the company’s FOMO (faster objects, more objects) object detection network running on a BrainChip Akida AKD1000 in under 1 mW.

Renesas' Arm Cortex-M85 AI chip, due to launch in June, requires inventory for the launch. They would have produced a few chips already for the Embedded World demos & for select clients such as Plumerai to trial.

The inventory of chips should be in production very soon & be finished by the end of May or possibly earlier to allow for some batch testing.

Revenue is now expected next quarter. If they do a small run of 1M chips x 30c BRN IP royalty = $300k. It could be higher depending on the percentage royalty fee, which could be anywhere between 2-15%.
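That arithmetic, sketched out (the 30c/chip figure and the 2-15% band are the post's assumptions, and the US$20 selling price is my own hypothetical, not a disclosed number):

# Royalty arithmetic from the post. 30c/chip and the 2-15% band are
# assumptions above; the US$20 chip price is a hypothetical of mine.
chips = 1_000_000
print(f"flat 30c/chip: ${chips * 0.30:,.0f}")    # -> $300,000

asp = 20.0                                       # hypothetical chip price (US$)
for pct in (0.02, 0.15):
    print(f"{pct:.0%} royalty on ${asp:.0f}: ${chips * asp * pct:,.0f}")
# -> $400,000 at 2%, up to $3,000,000 at 15%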
 
  • Like
  • Fire
  • Love
Reactions: 40 users

wasMADX

Regular
I received a Brainchip March 2023 Newsletter today. I tried to read it from the point of view of a potential manufacturer.

My opinion is that actual product releases are being held back, partly because the tech is hard to understand and a good example of a product is not out there for manufacturers to see.

I understand why we want to go down the "I.P. license" path, but what if we design a "killer" product and get someone to make it for us? Then we release and sell it for the world to see.

Who better than ourselves to do it to get the ball rolling? Sean H. could make clear the reason why we have taken this step to his contacts and that it is a once-only thing i.e. we are not going into competition.
 
  • Like
  • Thinking
  • Fire
Reactions: 12 users

stuart888

Regular
Yes, I read this the other day and the difference in TOPS pulled me up. It is of course possible that, as has been mentioned before with Renesas, which bought two nodes of AKIDA IP because that was sufficient for their target market, Texas Instruments used fewer nodes: 32 TOPS would be more than adequate for their target market, is cheaper, and also leaves room for new improved models, i.e. 40 TOPS, 45 TOPS, 50 TOPS, for later upselling of customers.

My opinion only DYOR
FF

AKIDA BALLISTA
One TOPS (tera-operations per second)? That sent me to Google. Of course I know Akida is a TOPS champion. Right, per-operation smarts: less big math, and fast, concise, material events. No fake events (BrainChip).

Interesting for sure. Just trying to learn. Seems like the Akida framework is 100% focused on winning here. That ultra-low-energy AI/ML smarts is the key differentiator. I am all in. Is 16-bit MAC math only for expensive von Neumann solutions?

https://medium.com/@aron.kirschen/w...t-to-benchmark-next-gen-ai-chips-573b9152f9ae

 
  • Like
  • Love
Reactions: 12 users

Vladsblood

Regular
I received a Brainchip March 2023 Newsletter today. I tried to read it from the point of view of a potential manufacturer.

My opinion is that actual product releases are being held back, partly because the tech is hard to understand and a good example of a product is not out there for manufacturers to see.

I understand why we want to go down the "I.P. license" path, but what if we design a "killer" product and get someone to make it for us? Then we release and sell it for the world to see.

Who better than ourselves to do it to get the ball rolling? Sean H. could make clear the reason why we have taken this step to his contacts and that it is a once-only thing i.e. we are not going into competition.
Could be because our clear profit margin on our IP is approximately 97 percent and we don't have to worry about the manufacturing.

On another note, I checked our current MC in the good ol' Yankee dollar and we are a pissant US$525,000,000. Sooo… looks great for a run, probably sooner rather than later.
 
  • Like
  • Fire
Reactions: 14 users

ndefries

Regular
Could be because our clear profit margin on our IP is approximately 97 percent and we don't have to worry about the manufacturing.

On another note, I checked our current MC in the good ol' Yankee dollar and we are a pissant US$525,000,000. Sooo… looks great for a run, probably sooner rather than later.

I know we don't like talking takeovers, but 1000% someone has to be looking at this price.
 
  • Like
  • Thinking
  • Fire
Reactions: 13 users

ndefries

Regular
Our new partner is recruiting with a need for 'Optimization of cutting-edge neural network architectures for deployment on neuromorphic processors'.

Interpretation - working on Akida for our customer products.


Principal Software Engineer


Intellisense Systems Inc
Torrance, CA
  • Posted: 12 days ago
  • Full-Time
Job Description
Intellisense Systems innovates what seemed impossible. We are a fast-growing Southern California technology innovator that solves tough, mission-critical challenges for our customers in advanced military and commercial markets. We design, develop, and manufacture novel technology solutions for ground, vehicle, maritime, and airborne applications. Our products have been deployed in every extreme environment on Earth!
We are looking for an exceptional Principal Software Engineer skilled in development, Algorithms, SQL, Python, and C/C++ to join our Artificial Intelligence (AI) and Radio Frequency (RF) Systems team. This person will code everything in Python, convert python into C/C++ for optimization to utilize in final production. The team works on cutting-edge technologies for government customers and DoD applications.
As part of the team, you will work alongside other experienced scientists and engineers to develop novel cutting-edge solutions to several challenging problems. From creating experiments and prototyping implementations to designing new machine learning algorithms, you will contribute to algorithmic and system modeling and simulation, transition your developments to software and hardware implementations, and test your integrated solutions in accordance with project objectives, requirements, and schedules.
Projects You May Work On:
  • Real-time object detection, classification, and tracking
  • RF signal detection, classification, tracking, and identification
  • Fully integrated object detection systems featuring edge processing of modern deep learning algorithms
  • Optimization of cutting-edge neural network architectures for deployment on neuromorphic processors
 
  • Like
  • Fire
  • Love
Reactions: 41 users

wasMADX

Regular
I received a Brainchip March 2023 Newsletter today. I tried to read it from the point of view of a potential manufacturer.

My opinion is that actual product releases are being held back, partly because the tech is hard to understand and a good example of a product is not out there for manufacturers to see.

I understand why we want to go down the "I.P. license" path, but what if we design a "killer" product and get someone to make it for us? Then we release and sell it for the world to see.

Who better than ourselves to do it to get the ball rolling? Sean H. could make clear the reason why we have taken this step to his contacts and that it is a once-only thing i.e. we are not going into competition.
I meant to include that another reason producers may be holding back is that our ongoing development causes them to wait, because they think "we will wait until things are sorted because someone could leapfrog our product". They need reassurance that we have been bold enough to produce something right now, to put our money where our mouth is, so it can be done.

Yes, I'm in a manic mood today and brainstorming.
 
  • Like
Reactions: 7 users

Vladsblood

Regular
Yep, too true… BUT I for one won't be selling my soul/barns to those takeover parasites under 40 AU dollars. Vlad
 
  • Like
  • Fire
  • Love
Reactions: 25 users

ndefries

Regular
Yep, too true… BUT I for one won't be selling my soul/barns to those takeover parasites under 40 AU dollars. Vlad
you greedy F7%ker! :)
 
  • Haha
Reactions: 12 users

charles2

Regular
I know we don't like talking takeovers, but 1000% someone has to be looking at this price.
Not happening. Insiders own >>50% and have not indicated any desire to sell. And their dreams and goals are nowhere near being accomplished. Not even close.
 
  • Like
  • Love
  • Fire
Reactions: 18 users