BRN Discussion Ongoing

7für7

Top 20
:cry:

View attachment 88799



EXTRACT

View attachment 88800





ChatGPT 4 says:

Hailo and BrainChip are competing in several overlapping markets, particularly in the edge AI space. Here's a breakdown of how and where they compete—and where they differ:




🔁 Overlapping Markets


Both BrainChip and Hailo target ultra-low-power, high-efficiency AI inference at the edge. Their core focus areas include:


  1. Industrial Automation / Robotics
    • BrainChip’s Akida: Optimised for spiking neural networks and event-based processing, ideal for ultra-low-power robotics and sensor fusion.
    • Hailo-8: Geared for high-performance real-time processing in industrial machines, robotics, and automation tasks.
  2. Smart Cameras / Surveillance
    • Both provide hardware for object detection, facial recognition, and anomaly detection at the edge.
    • BrainChip supports neuromorphic processing for efficiency and low bandwidth needs.
    • Hailo focuses on high-frame-rate, dense neural networks (e.g., YOLOv5) built on traditional CNNs.
  3. Automotive (ADAS & In-Cabin Monitoring)
    • Hailo has certified partnerships with major Tier 1 automotive suppliers (e.g., Renesas, Foresight).
    • BrainChip is developing neuromorphic vision and audio sensing with ultra-low latency, especially suited for sensor fusion and V2X potential.
  4. Medical and Wearables
    • Both are eyeing AI health diagnostics, especially for on-device inference (e.g., EEG, ECG).
    • BrainChip's strength lies in spiking neural networks mimicking brain activity—a possible edge in neurology applications.



🧠 Key Differentiators


Feature                | BrainChip                                  | Hailo
Core Architecture      | Spiking Neural Network (SNN) (Akida)       | Deep Learning Accelerator (Hailo-8)
Processing Style       | Event-based, asynchronous                  | Frame-based, synchronous
Power Consumption      | Extremely low (<1 mW for some ops)         | Low (but higher than Akida)
Data Efficiency        | Excellent for sparse, time-series data     | Designed for dense CNN workloads
Training Compatibility | Requires SNN conversion/training tools     | Standard TensorFlow / PyTorch models
Ecosystem Integration  | Early-stage but expanding (Renesas, NVISO) | Strong integrations with NVIDIA, Renesas, ABB
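The event-based vs frame-based distinction above can be made concrete with a toy compute count. All numbers here (resolution, event rate, ops per pixel) are illustrative assumptions, not measured Akida or Hailo figures:

```python
# Toy comparison: dense frame-based inference vs sparse event-based inference.
# Every figure below is an illustrative assumption, not a vendor benchmark.

def dense_macs(width, height, channels, ops_per_pixel):
    """A frame-based accelerator spends compute on every pixel, every frame."""
    return width * height * channels * ops_per_pixel

def event_macs(events_per_frame, ops_per_event):
    """An event-based processor spends compute only on pixels that changed."""
    return events_per_frame * ops_per_event

frame_cost = dense_macs(640, 480, 3, 100)   # full 640x480 RGB frame
sparse_cost = event_macs(5_000, 100)        # assume ~0.5% of pixels fire events

print(f"dense:  {frame_cost:,} MACs/frame")
print(f"sparse: {sparse_cost:,} MACs/frame")
print(f"ratio:  {frame_cost / sparse_cost:.0f}x fewer ops when activity is sparse")
```

The gap collapses when the scene is busy, which is why dense accelerators can still win for high-frame-rate, dense workloads.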




🚀 Summary


  • Competing Areas: Edge AI for vision, audio, industrial, and automotive.
  • BrainChip Advantage: Ultra-low-power, neuromorphic design, asynchronous event-based inference.
  • Hailo Advantage: Higher throughput for traditional CNNs and more mature ecosystem for frame-based processing.

They are not direct substitutes, but they are converging in markets such as automotive and smart surveillance. The choice often comes down to use case characteristics (e.g., sparse vs dense data, power constraints, real-time latency).

Basically competition is good for business right?? RIGHT????!


kermit worry GIF
 
Last edited:
  • Haha
  • Like
Reactions: 5 users

7für7

Top 20
But seriously….Can someone please explain why anyone who’s serious about this stock should be buying right now?

Every time we see the slightest sign of upward movement, the shorters crawl out of the woodwork … drooling, scheming, like a pack of hyenas just waiting to pounce.

Or like cockroaches, scurrying in at the first sign of a crumb.

It’s become a complete circus.

The stock can’t rise in a healthy, sustainable way because these miserable players keep crushing any momentum.

Honestly, I can’t find another explanation at this point.

I’ve always bought the dips, reinvested, doubled down – and by now I hold a position that’s many times larger than what I originally intended to invest.
Out of pure conviction. And to be clear: I’m still convinced about the tech and the long-term potential.

But… the current relationship between share price, progress, and partnerships just doesn’t make any sense anymore.

We see solid developments. Real partnerships. A growing ecosystem.

And yet… the stock behaves like none of that matters.
Something’s off – and it’s getting harder and harder to justify with logic alone.
 
  • Like
  • Love
  • Fire
Reactions: 10 users
Hi @Fullmoonfever,

I strongly doubt it.

First of all, Bascom Hunter developed the 3U VPX SNAP (Spiking Neuromorphic Advanced Processor) Card with SBIR funding from the Navy, not the Air Force.

The publication you shared, however, is the AFRL (Air Force Research Laboratory) Facilities Book FY25.

NAVAIR, the Navy Air Systems Command, would likely be the first DoD entity to get their hands on the Bascom Hunter 3U VPX SNAP card, given they awarded the SBIR funding for the N202-099 project “Implementing Neural Network Algorithms on Neuromorphic Processors”.


View attachment 88772



View attachment 88770 View attachment 88771

However, as you can see, Bascom Hunter’s SBIR II phase is still ongoing - the SBIR award end date is 18 August 2025, which as of today is still four weeks away. Bascom Hunter as the awardee will then be required to submit a Phase II Final Report.

Which in turn means the 3U VPX SNAP Card is highly likely not a commercially ready product, yet (although one could be forgiven for thinking so when checking out the BH website https://bascomhunter.com/bh-tech/di...c-processors/asic-solutions/3u-vpx-snap-card/), but rather still a prototype, which Bascom Hunter will subsequently aim to commercialise in the ensuing Phase III (which will need to happen without further SBIR funding, though):


View attachment 88773

I believe that also explains why we had never heard anything “ASX-worthy” about a connection with Bascom Hunter prior to the Appendix 4C and Quarterly Activities Report for the Period Ended 31 December 2024, dated 28 January 2025:


View attachment 88774

Those two sentences about BH even suggested to me at the time they may have changed their prototype plan and may have decided on using AKD1500 rather than AKD1000 for their commercialisation efforts, possibly having already secured an interested Phase III commercialisation partner/customer.

Of course I could be wrong and the AKD1500 chips are actually slated for a different Bascom Hunter product-in-the-making. But in that case we’d still have to see another major purchase of AKD1000 chips before BH will be able to take their SNAP Card to market, and so far we have neither had an ASX announcement about it nor have we yet seen any evidence of such a deal in the financials. (Even assuming for a moment that a top-secret NDA could have been the reason for a non-announcement, the financials wouldn’t lie. But I don’t buy the NDA “excuse” anyway, as BH has been openly (i.e. on their website) promoting their 3U VPX SNAP Card as having a total of 5 AKD1000 chips for months, and our company also let the cat out of the bag about their connection to BH with the January ASX announcement, hence there is no [more] secrecy required.)

So unless it were a custom made-to-order design and BH were excitedly waiting for their first customer(s) to sign a deal before placing an order with BrainChip, it seems unlikely to me that the 3U VPX SNAP Card is a commercially available product yet.

Happy to be corrected, though…




Rugged VPX chassis systems are commonly used for mission-critical defense and aerospace applications, and VPX cards are available in 3U VPX and 6U VPX form factors (cf. https://militaryembedded.com/avionics/computers/vpx-and-openvpx).

Here are two alternative suggestions as to what the mention on page 66 of the AFRL Facilities Book FY25 could possibly refer to:


The Embedded Edge Computing Laboratory is one of 7 labs housed by the AFRL Extreme Computing Facility (ECF), see page 65:

“EXTREME COMPUTING FACILITY
Research and development of unconventional computing and communications architectures and paradigms, trusted systems and edge processing.
A 7,100 Sq. foot multi discipline lab housing 7 Laboratories: Embedded Edge Computing, Nanocomputing, Trusted Systems, Agile Software Development, and 3 Quantum Labs along with
a Video Wall for demonstrations focused on research and development of unconventional computing architectures, networks, and processing that is secure, trusted, and can be done at the tactical edge.
$6.5 Million laboratory possesses world class capabilities in Neuromorphic based hardware characterization and testing, secure processing and Quantum based Communication, Networking, and Computing.

Chief, Mr. Greg Zagar
Deputy Chief, Mr. Michael Hartnett”


View attachment 88779

Under “Examples”, two AFRL programs are referred to: 6.3 SE3PO and 6.2 NICS+ (Neuromorphic Computing for Space, in collaboration with Intel: https://afresearchlab.com/technology/nics).

On page 72, which covers the “ECF AGILE SOFTWARE DEVELOPMENT LAB”, led by Pete Lamonica (named as Primary Alternate POC [point of contact] of the Embedded Edge Computing Laboratory on page 66), SE3PO is spelled out as the “Secure Extreme Embedded Exploitation and Processing On-board (SE3PO) program”.

The Secure Processor and the adjacent lab referred to in the second sentence underlined in green could possibly be this one:

View attachment 88780

As you can see, AFRL has been developing a military-grade secure processor with built-in cyber-defensive capabilities, dubbed T-CORE, testing it on a 3U VPX Board and is currently working on version 2 of T-CORE.

So that’s one of many options for what the mention of 3U VPX heterogeneous computing (systems that use multiple types of processors, such as CPUs, GPUs, ASICs, FPGAs and NPUs) could possibly refer to.



As for neuromorphic research, we know that AFRL has been collaborating with both IBM and Intel for years.

While the AFRL Facilities Book FY25 doesn’t specify whether or not the “3UVPX heterogeneous computing” in their equipment list includes a 3U VPX board with a neuromorphic processor, it could theoretically even refer to IBM’s NorthPole in a 3U VPX form factor (aka NP-VPX):




View attachment 88776
(…)

View attachment 88777
The following October 2023 exchange on LinkedIn between IBM’s Dharmendra Modha and AFRL Principal Computer Scientist Mark Barnell (who is also named as the Embedded Edge Computing Lab’s Primary POC on page 66 of the AFRL Facilities Book FY25) on the release of NorthPole is evidence that the collaboration between AFRL and IBM did not end with the TrueNorth era.





View attachment 88778

@Frangipani

Great research and logic as usual.

When we go back to the BH & NAVAIR transition program paper and roadmap, we can see that part of the commercialisation project will be a crossover to other DoD areas, including AFRL.

Given the web interconnecting all the players including BRN, there is always the possibility imo that the card is far enough along that AFRL could currently be looking at the BH card as well.



IMG_20250723_090709.jpg
 
  • Like
  • Love
  • Fire
Reactions: 12 users

HopalongPetrovski

I'm Spartacus!
But seriously….Can someone please explain why anyone who’s serious about this stock should be buying right now?

Every time we see the slightest sign of upward movement, the shorters crawl out of the woodwork … drooling, scheming, like a pack of hyenas just waiting to pounce.

Or like cockroaches, scurrying in at the first sign of a crumb.

It’s become a complete circus.

The stock can’t rise in a healthy, sustainable way because these miserable players keep crushing any momentum.

Honestly, I can’t find another explanation at this point.

I’ve always bought the dips, reinvested, doubled down – and by now I hold a position that’s many times larger than what I originally intended to invest.
Out of pure conviction. And to be clear: I’m still convinced about the tech and the long-term potential.

But… the current relationship between share price, progress, and partnerships just doesn’t make any sense anymore.

We see solid developments. Real partnerships. A growing ecosystem.

And yet… the stock behaves like none of that matters.
Something’s off – and it’s getting harder and harder to justify with logic alone.
Welcome to the club.
Seems you have just arrived where many of us found ourselves 4 or 5 years ago. 🤣
It's been a bit of a slog.
And unfortunately become something of a shorter's wet dream.
So now it's dig in, double down or rack off. 🤣
Time crawls in tech land.......until it doesn't. 🤣
After having cried wolf, many a time, only a significant deal with dollars attached will be enough now to shake the parasites off, for a while.
Bring It, BrainChip!
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Guzzi62

Regular
But seriously….Can someone please explain why anyone who’s serious about this stock should be buying right now?

Every time we see the slightest sign of upward movement, the shorters crawl out of the woodwork … drooling, scheming, like a pack of hyenas just waiting to pounce.

Or like cockroaches, scurrying in at the first sign of a crumb.

It’s become a complete circus.

The stock can’t rise in a healthy, sustainable way because these miserable players keep crushing any momentum.

Honestly, I can’t find another explanation at this point.

I’ve always bought the dips, reinvested, doubled down – and by now I hold a position that’s many times larger than what I originally intended to invest.
Out of pure conviction. And to be clear: I’m still convinced about the tech and the long-term potential.

But… the current relationship between share price, progress, and partnerships just doesn’t make any sense anymore.

We see solid developments. Real partnerships. A growing ecosystem.

And yet… the stock behaves like none of that matters.
Something’s off – and it’s getting harder and harder to justify with logic alone.
Yes agreed, but I think people are expecting some good IP deals bringing serious money to BRN, and then the SP could quickly go over $1 IMO.

That would force the shorts to cover, helping fuel the rocket.

With the growing ecosystem and many partners/connections, hopefully it's only a matter of when an IP deal is signed, preferably with a real big player.

Ohh just saw HopalongPxxx's post, well he agrees, so…
 
  • Like
  • Love
  • Fire
Reactions: 10 users

Yoda

Regular
But seriously….Can someone please explain why anyone who’s serious about this stock should be buying right now?

Every time we see the slightest sign of upward movement, the shorters crawl out of the woodwork … drooling, scheming, like a pack of hyenas just waiting to pounce.

Or like cockroaches, scurrying in at the first sign of a crumb.

It’s become a complete circus.

The stock can’t rise in a healthy, sustainable way because these miserable players keep crushing any momentum.

Honestly, I can’t find another explanation at this point.

I’ve always bought the dips, reinvested, doubled down – and by now I hold a position that’s many times larger than what I originally intended to invest.
Out of pure conviction. And to be clear: I’m still convinced about the tech and the long-term potential.

But… the current relationship between share price, progress, and partnerships just doesn’t make any sense anymore.

We see solid developments. Real partnerships. A growing ecosystem.

And yet… the stock behaves like none of that matters.
Something’s off – and it’s getting harder and harder to justify with logic alone.
I think part of the problem is the company's irrational obsession with avoiding any kind of ASX announcement involving the partners whether it be price sensitive or otherwise. That is not helping.
 
  • Like
  • Fire
Reactions: 18 users

7für7

Top 20
Welcome to the club.
Seems you have just arrived where many of us found ourselves 4 or 5 years ago. 🤣
It's been a bit of a slog.
And unfortunately become something of a shorter's wet dream.
So now it's dig in, double down or rack off. 🤣
Time crawls in tech land.......until it doesn't. 🤣
After having cried wolf, many a time, only a significant deal with dollars attached will be enough now to shake the parasites off, for a while.
Bring It, BrainChip!
No, that’s not quite right… I’ve expressed my concerns regarding these cockroaches etc. many times before. But for some reason, I was unjustly insulted and immediately shoved into the “basher” corner. I never understood why legitimate criticism… which is clearly visible to everyone… gets shot down so harshly.

It’s not like I’m saying BrainChip is fundamentally some kind of joke or a carnival stand…
 
  • Like
Reactions: 2 users

7für7

Top 20
I think part of the problem is the company's irrational obsession with avoiding any kind of ASX announcement involving the partners whether it be price sensitive or otherwise. That is not helping.
I can’t really explain why they’re not doing it either…

But one possible reason could be that they want to avoid situations like the one with Mercedes happening again.

Things like that attract the attention of regulatory authorities, who might take action that could further damage BrainChip’s reputation.

Extreme ups and downs are a red flag for regulators…

We all know how fast they put a speeding ticket out of the 🎩 after a nice rise! 🙄
 
  • Like
Reactions: 4 users

jrp173

Regular
But… the current relationship between share price, progress, and partnerships just doesn’t make any sense anymore.

We see solid developments. Real partnerships. A growing ecosystem.

Yes, but where is the revenue...

All the developments and partnerships in the world will not move the share price, particularly when BrainChip refuse to engage in making any ASX announcements...

Until there is real revenue (and I'm not talking the small contracts we saw late last year), nothing will change....
 
  • Fire
Reactions: 3 users

AusEire

Founding Member.
1000003232.png
 
  • Haha
  • Like
  • Fire
Reactions: 18 users

jrp173

Regular
I can’t really explain why they’re not doing it either…

But one possible reason could be that they want to avoid situations like the one with Mercedes happening again.

Things like that attract the attention of regulatory authorities, who might take action that could further damage BrainChip’s reputation.

I never understand why people use Mercedes as an example or excuse as to why BrainChip don't use the ASX to make announcements.

The fact is BrainChip never made any ASX announcement regarding Mercedes Benz, it was Mercedes Benz who let the cat out of the bag....
 
Last edited:
  • Like
  • Fire
Reactions: 8 users

7für7

Top 20
Yes, but where is the revenue...

All the developments and partnerships in the world will not move the share price, particularly when BrainChip refuse to engage in making any ASX announcements...

Until there is real revenue (and I'm not talking the small contracts we saw late last year), nothing will change....
Fair point …revenue is the key driver in the end. We all are aware of that…

But I’d still argue that partnerships and developments are the foundation for future revenue. It’s just that the market currently gives them zero credit – which creates the disconnect.

But regarding the small revenue… We’ve seen it before with other companies… the early revenue was symbolic… until it wasn’t.

Of course, I’d also love to see clearer communication via the ASX. No argument there.
 
  • Like
Reactions: 2 users

From the other site

BrainChip’s Runway: Longer Haul, Not Instant Failure

BrainChip’s near-term revenue lag isn’t an omen of doom but a hallmark of deep-tech ventures: cutting-edge R&D takes time to commercialize. While SynSense and Innatera captured early inference-only wins, BrainChip bet on the harder path—on-chip learning—which demands more silicon complexity, partner integration and validation before scale sales kick in.

Why It’s Likely to Succeed—Eventually

  • Strategic Pilot Programs Military (AFRL), aerospace (Airbus, European Space Agency) and automotive partners are already evaluating Akida’s on-chip learning. Converting these pilots into production licences can unlock substantial royalties over time.
  • Unique Technological Differentiator Incremental and one-shot learning directly on device remains rare. As edge AI markets grow, on-chip learning will command premium adoption in privacy-sensitive, low-latency systems (wearables, smart cameras, industrial sensors).
  • Strong Cash Runway and Funding BrainChip’s recent capital raises, combined with modest burn rates relative to its R&D outlay, give it the financial cushion to refine Akida IP and expand silicon tape-outs with foundry partners (e.g., GlobalFoundries, Intel Foundry Services).
 
  • Like
Reactions: 6 users

7für7

Top 20
I never understand why people use Mercedes as an example or excuse as to why BrainChip don't use the ASX to make announcements.

The fact is BrainChip never made any ASX announcement regarding Mercedes Benz, it was Mercedes Benz who let the cat out of the bag....


I never claimed that BrainChip made an official ASX announcement regarding Mercedes.

What I meant was that the market’s reaction to the Mercedes news (which originally came from Mercedes themselves) might have served as a wake-up call for management.

Such unexpected disclosures ..whether intentional or not…can draw the attention of regulatory bodies or trigger exaggerated price movements. That might be why BrainChip has become particularly cautious when it comes to public statements, even regarding new partners or technical progress.

Whether that’s always the best communication strategy is debatable. But considering the number of NDAs involved in this high-tech space, I can understand why the company prefers a defensive approach.

Still, there should be a middle ground… staying completely silent only fuels uncertainty among investors.


Ah, I don’t know maaaan, to be honest…

asks mr bean GIF
 
  • Like
Reactions: 2 users

jrp173

Regular

From the other site

BrainChip’s Runway: Longer Haul, Not Instant Failure

BrainChip’s near-term revenue lag isn’t an omen of doom but a hallmark of deep-tech ventures: cutting-edge R&D takes time to commercialize. While SynSense and Innatera captured early inference-only wins, BrainChip bet on the harder path—on-chip learning—which demands more silicon complexity, partner integration and validation before scale sales kick in.

Why It’s Likely to Succeed—Eventually

  • Strategic Pilot Programs Military (AFRL), aerospace (Airbus, European Space Agency) and automotive partners are already evaluating Akida’s on-chip learning. Converting these pilots into production licences can unlock substantial royalties over time.
  • Unique Technological Differentiator Incremental and one-shot learning directly on device remains rare. As edge AI markets grow, on-chip learning will command premium adoption in privacy-sensitive, low-latency systems (wearables, smart cameras, industrial sensors).
  • Strong Cash Runway and Funding BrainChip’s recent capital raises, combined with modest burn rates relative to its R&D outlay, give it the financial cushion to refine Akida IP and expand silicon tape-outs with foundry partners (e.g., GlobalFoundries, Intel Foundry Services).

This is just guff from ChatGPT that the poster on the other forum is posting....

It would be responsible of the other poster on HC to actually indicate that's where he sourced the information from.

Not directed at you smoothsailing18, but wouldn't it be great if we were kept informed by BrainChip, rather than the bollocks that people get from ChatGPT.
 
Last edited:
  • Like
  • Fire
Reactions: 6 users

Getupthere

Regular
I never claimed that BrainChip made an official ASX announcement regarding Mercedes.

What I meant was that the market’s reaction to the Mercedes news (which originally came from Mercedes themselves) might have served as a wake-up call for management.

Such unexpected disclosures ..whether intentional or not…can draw the attention of regulatory bodies or trigger exaggerated price movements. That might be why BrainChip has become particularly cautious when it comes to public statements, even regarding new partners or technical progress.

Whether that’s always the best communication strategy is debatable. But considering the number of NDAs involved in this high-tech space, I can understand why the company prefers a defensive approach.

Still, there should be a middle ground… staying completely silent only fuels uncertainty among investors.


Ah, I don’t know maaaan, to be honest…

asks mr bean GIF
I think it’s important to remember that BrainChip’s core business model is to license IP, not to partner in the traditional sense. When a company partners with BrainChip, they’re essentially embedding Akida into their own broader product roadmap and they’ll likely be working with several IP vendors, not just BrainChip. So a partnership announcement doesn’t always equate to immediate or exclusive commercial wins.

What does concern me, though, is that Akida 2.0 has been out for quite a while now, and we’ve yet to see any IP licensing deals materialize. That’s a long time in this space, especially considering the supposed advantages of the tech. The silence around that isn’t just about strategy; it risks sending the wrong message to investors who are eager to see tangible progress.

We can’t keep on blaming the shorters.

There has to be a better balance between protecting IP and keeping the market informed.

August 1st will be one to watch.
It marks the end of the 12-month escrow period, meaning investors from last year’s capital raise will finally be free to sell their shares.
 
  • Like
  • Love
  • Thinking
Reactions: 7 users

7für7

Top 20
What I find kind of funny.. both here and on the HC forum … is how some people who regularly throw around wild accusations against the management or the company itself are always the first to demand “sources” from anyone who dares to post something positive… Yet they never provide any sources for their own claims.
 
  • Like
  • Love
Reactions: 3 users

Diogenese

Top 20
Many of the long term holders have been here for going on 10 years, so it's not surprising that patience occasionally wears thin. However, it is only in the last 5 years that Akida has been available. Before that there was Brainchip Studio (software) and subsequently Brainchip Accelerator (hardware adapted to improve the function of Studio). Akida has gone through a number of iterations since then, and the business model has changed from IP plus ASIC to IP only, and now back to IP plus a somewhat limited hardware embodiment plus "software" in the form of models and MetaTF.

One of the significant developments in Akida 1 was on-chip learning which mitigates the need for retraining when new items need to be included in the model, a task which required the retraining of the entire model in the cloud. Cloud training is a major cost in terms of power and energy for the major AI engines.
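The "no full retraining" point can be illustrated with a toy nearest-prototype classifier: adding a new class means storing a single feature vector, with no gradient passes over the old training set. This is a generic sketch of edge few-shot learning, not BrainChip's actual on-chip algorithm; the labels and feature vectors are invented.

```python
# Nearest-prototype classifier: adding a class is a single store, no retraining.
# Generic illustration only; not BrainChip's actual on-chip learning mechanism.

def add_class(prototypes, label, feature_vec):
    """'Learn' a new class from one example by storing its feature vector."""
    prototypes[label] = feature_vec

def classify(prototypes, feature_vec):
    """Return the label whose stored prototype is nearest (squared L2)."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, feature_vec))
    return min(prototypes, key=lambda label: dist(prototypes[label]))

protos = {}
add_class(protos, "thumbs_up", [1.0, 0.1, 0.0])
add_class(protos, "wave", [0.0, 0.9, 0.8])
print(classify(protos, [0.9, 0.2, 0.1]))  # nearest to the thumbs_up prototype
```

Adding a third gesture later is one more `add_class` call; nothing already learned has to be recomputed, which is the contrast with cloud retraining of a whole model.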

Our tech has continued to evolve at a rapid rate under pressure of expanding market requirements. The commercial version of Akida 1 went from 1-bit weights and activations to 4-bits. Akida 1 has been incorporated in groundbreaking on-chip cybersecurity applications (QV, DoE), which is surely a precursor for Akida2/TENNs in microDoppler radar for AFRL/RTX. Akida 2 is embodied in FPGA for demonstration purposes.

Akida 2 went to 8-bits. It also introduced new features like skip, TENNs and state-space models.
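For anyone unfamiliar with the state-space models mentioned here: the core mechanic is a linear recurrence over a hidden state, x[t] = A·x[t-1] + B·u[t], read out as y[t] = C·x[t]. A scalar toy version with made-up coefficients (generic SSM form, not the TENNs formulation specifically):

```python
# Minimal discrete linear state-space model (scalar coefficients for clarity).
# x[t] = A*x[t-1] + B*u[t]; y[t] = C*x[t]. Real SSM layers use learned matrices.

def ssm(inputs, A=0.9, B=0.5, C=1.0):
    x = 0.0
    outputs = []
    for u in inputs:
        x = A * x + B * u       # state update: a decaying memory of past inputs
        outputs.append(C * x)   # linear readout
    return outputs

# An impulse decays geometrically: the state "remembers" the input over time,
# which is what makes this family a fit for streaming, time-series data.
print(ssm([1.0, 0.0, 0.0, 0.0]))
```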

The Roadmap also included Akida GenAI with INT16, also in FPGA, and Akida 3 (INT16, FP32), to be incorporated in FPGA within a year.

There is a clear trend to increased accuracy (INT16/FP32) in Akida 3. This indicates an improved ability to distinguish smaller features, although at the expense of greater power consumption. To at least partially compensate for the increased power consumption, Jonathan Tapson did mention a couple of under-wraps patent applications to improve the efficiency of data movement in memory. So I see Akida 3 as a highly specialized chip adapted for applications needing extremely high accuracy, while I think Akida2/TENNs will be the basis for our general business IP/SoC.
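To make the bit-width trade-off concrete: under generic uniform symmetric quantization (an illustration, not Akida's actual quantization scheme), the worst-case rounding error roughly halves with each extra bit, while wider datapaths and more memory traffic cost power:

```python
# Uniform symmetric quantization of a weight in [-1, 1] to n-bit signed integers.
# More bits -> finer steps -> lower error, at the cost of wider, hungrier datapaths.

def quantize(w, bits):
    levels = 2 ** (bits - 1) - 1   # 7 for INT4, 127 for INT8, 32767 for INT16
    return round(w * levels) / levels

def max_step_error(bits):
    """Worst-case rounding error is half of one quantization step."""
    return 0.5 / (2 ** (bits - 1) - 1)

for bits in (4, 8, 16):
    w = 0.3
    err = abs(quantize(w, bits) - w)
    print(f"INT{bits}: error on 0.3 = {err:.6f} (bound {max_step_error(bits):.6f})")
```

The shrinking error bound is the "distinguish smaller features" effect; the power cost of the wider representation is what the in-memory data-movement patents would be trying to claw back.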

The fact that Akida has evolved so rapidly in such large steps is a tribute to the flexibility of its basic design.

https://brainchip.com/brainchip-sho...essing-ip-and-device-at-tinyml-summit-2020-2/

BrainChip Showcases Vision and Learning Capabilities of its Akida Neural Processing IP and Device at tinyML Summit 07.02.2020

SAN FRANCISCO–(BUSINESS WIRE)–
BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high-performance edge AI technology, today announced that it will present its revolutionary new breed of neuromorphic processing IP and Device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California February 12-13.

In the Poster Session, “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” representatives from BrainChip will explain to attendees how the company’s Akida™ Neuromorphic System-on-Chip processes standard vision CNNs using industry standard flows and distinguishes itself from traditional deep-learning accelerators through key design choices and a bio-inspired learning algorithm. These features allow Akida to require 40 to 60 percent fewer computations to process a given CNN when compared to a DLA, as well as allowing it to perform learning directly on the chip.

BrainChip will also demonstrate “On-Chip Learning with Akida” in a presentation by Senior Field Applications Engineer Chris Anastasi. The demonstration will involve capturing a few hand gestures and hand positions from the audience using a Dynamic Vision Sensor camera and performing live learning and classification using the Akida neuromorphic platform. This will showcase the fast and lightweight unsupervised live learning capability of the spiking neural network (SNN) and the Akida neuromorphic chip, which takes much less data than a traditional deep neural network (DNN) counterpart and consumes much less power during training.

“We look forward to having the opportunity to share the advancements we have made with our flexible neural processing technology in our Poster Session and Demonstration at the tinyML Summit,” said Louis DiNardo, CEO of BrainChip. “We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems. We believe that as a high-performance and ultra-low-power neural processor, Akida is ideally suited to be implemented at the Edge and IoT applications.”

Akida is available as a licensable IP technology that can be integrated into ASIC devices and will be available as an integrated SoC, both suitable for applications such as surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis, and Industrial Internet-of-Things (IoT). Akida performs neural processing on the edge, which vastly reduces the computing resources required of the system host CPU. This unprecedented efficiency not only delivers faster results, it consumes only a tiny fraction of the power resources of traditional AI processing while enabling customers to develop solutions with industry standard flows, such as Tensorflow/Keras. Functions like training, learning, and inferencing are orders of magnitude more efficient with Akida.

Tiny machine learning is broadly defined as a fast growing field of machine learning technologies and applications including hardware (dedicated integrated circuits), algorithms and software capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below, and hence enabling a variety of always-on use-cases and targeting battery operated devices. tinyML Summit 2020 will continue the tradition of high-quality invited talks, poster and demo presentations, open and stimulating discussions, and significant networking opportunities. It will cover the whole stack of technologies (Systems-Hardware-Algorithms-Software-Applications) at the deep technical levels, a unique feature of the tinyML Summits. Additional information about the event is available at
https://tinymlsummit.org/
 
  • Like
  • Love
  • Fire
Reactions: 30 users

7für7

Top 20
Many of the long-term holders have been here for going on 10 years, so it's not surprising that patience occasionally wears thin. However, Akida has only been available for the last 5 years. Before that there was BrainChip Studio (software) and subsequently BrainChip Accelerator (hardware adapted to accelerate Studio). Akida has gone through a number of iterations since then, and the business model has shifted from IP plus ASIC, to IP only, and now back to IP plus a somewhat limited hardware embodiment plus "software" in the form of models and MetaTF.

One of the significant developments in Akida 1 was on-chip learning, which mitigates the need for retraining when new items are added to the model; previously this meant retraining the entire model in the cloud. Cloud training is a major power and energy cost for the big AI engines.
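To make the on-chip learning point concrete: the principle is that adding a new class only needs a small, local update rather than a full retrain. Here's a toy nearest-prototype sketch of that idea in plain NumPy; it is only an illustration of the concept, not BrainChip's algorithm or the MetaTF API, and all names are made up.

```python
import numpy as np

class PrototypeClassifier:
    """Toy nearest-prototype classifier: a stand-in for on-chip
    few-shot learning. Adding a class stores one prototype vector;
    no existing weights are retrained."""

    def __init__(self):
        self.prototypes = {}  # label -> mean feature vector

    def learn(self, label, samples):
        # "On-chip" learning step: average a handful of feature
        # vectors into a single prototype for the new label.
        self.prototypes[label] = np.mean(samples, axis=0)

    def predict(self, x):
        # Classify by nearest prototype (Euclidean distance).
        return min(self.prototypes,
                   key=lambda l: np.linalg.norm(x - self.prototypes[l]))

clf = PrototypeClassifier()
clf.learn("thumbs_up", np.array([[1.0, 0.1], [0.9, 0.2]]))
clf.learn("wave", np.array([[0.1, 1.0], [0.2, 0.9]]))
print(clf.predict(np.array([0.95, 0.15])))  # → thumbs_up
```

The key property is the cost asymmetry: "learning" a third gesture here is one averaging step over a few samples, whereas a conventional DNN would need gradient descent over the whole training set.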

Our tech has continued to evolve at a rapid rate under the pressure of expanding market requirements. The commercial version of Akida 1 went from 1-bit weights and activations to 4-bit. Akida 1 has been incorporated in groundbreaking on-chip cybersecurity applications (QV, DoE), which is surely a precursor for Akida 2/TENNs in micro-Doppler radar for AFRL/RTX. Akida 2 is embodied in an FPGA for demonstration purposes.

Akida 2 went to 8-bit weights and activations. It also introduced new features such as skip connections, TENNs and state-space models.
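For anyone wondering what a state-space model actually does: it keeps a hidden state that is updated by a cheap linear recurrence at every time step, which is why it suits streaming time-series data. A minimal discrete-time sketch (matrix values are arbitrary and purely illustrative; this is the generic textbook form, not TENNs itself):

```python
import numpy as np

# Discrete-time linear state-space model:
#   x[t+1] = A @ x[t] + B * u[t]   (state update)
#   y[t]   = C @ x[t+1]            (readout)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])         # state transition (illustrative values)
B = np.array([[1.0],
              [0.5]])              # input projection
C = np.array([[1.0, 1.0]])        # output readout

def run_ssm(inputs):
    x = np.zeros((2, 1))           # hidden state starts at zero
    outputs = []
    for u in inputs:               # one fixed-cost update per time sample
        x = A @ x + B * u
        outputs.append((C @ x).item())
    return outputs

print(run_ssm([1.0, 0.0, 0.0]))   # impulse response decays step by step
```

The point for edge hardware is that each time step costs the same small, fixed amount of compute and memory, regardless of how long the input sequence gets.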

The Roadmap also included Akida GenAI with INT16, also in FPGA, and Akida 3 (INT16, FP32), to be implemented in FPGA within a year.

There is a clear trend toward increased numerical precision (INT16/FP32) in Akida 3. This indicates an improved ability to distinguish smaller features, although at the expense of greater power consumption. To at least partially compensate, Jonathan Tapson did mention a couple of still-under-wraps patent applications to improve the efficiency of data movement in memory. So I see Akida 3 as a highly specialised chip for applications needing extremely high accuracy, while I think Akida 2/TENNs will be the basis for our general IP/SoC business.
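On the precision point: each extra bit roughly halves the spacing between representable values, so the progression from 4-bit to 8-bit to INT16 shrinks the worst-case quantization error dramatically. A quick illustration using generic uniform symmetric quantization (plain NumPy, not BrainChip's actual scheme):

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of values in [-1, 1]
    (generic illustration, not BrainChip's scheme)."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 positive steps for INT8
    return np.round(x * levels) / levels  # quantize, then dequantize

x = np.linspace(-1.0, 1.0, 10001)         # dense sample of the input range
for bits in (4, 8, 16):
    err = np.max(np.abs(quantize(x, bits) - x))
    print(f"INT{bits}: worst-case error ~ {err:.1e}")
```

Going from 4 to 8 bits cuts the worst-case error by roughly a factor of 16, and 8 to 16 bits by roughly another factor of 256, which is where the "distinguish smaller features" claim comes from.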

The fact that Akida has evolved so rapidly in such large steps is a tribute to the flexibility of its basic design.

https://brainchip.com/brainchip-sho...essing-ip-and-device-at-tinyml-summit-2020-2/

BrainChip Showcases Vision and Learning Capabilities of its Akida Neural Processing IP and Device at tinyML Summit (07.02.2020)

SAN FRANCISCO–(BUSINESS WIRE)–
BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high-performance edge AI technology, today announced that it will present its revolutionary new breed of neuromorphic processing IP and Device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California February 12-13.

In the Poster Session, “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” representatives from BrainChip will explain to attendees how the company’s Akida™ Neuromorphic System-on-Chip processes standard vision CNNs using industry-standard flows and distinguishes itself from traditional deep-learning accelerators through key design choices and a bio-inspired learning algorithm. These features allow Akida to require 40 to 60 percent fewer computations than a DLA to process a given CNN, as well as allowing it to perform learning directly on the chip.

BrainChip will also demonstrate “On-Chip Learning with Akida” in a presentation by Senior Field Applications Engineer Chris Anastasi. The demonstration will involve capturing a few hand gestures and hand positions from the audience using a Dynamic Vision Sensor camera and performing live learning and classification using the Akida neuromorphic platform. This will showcase the fast and lightweight unsupervised live learning capability of the spiking neural network (SNN) and the Akida neuromorphic chip, which takes much less data than a traditional deep neural network (DNN) counterpart and consumes much less power during training.

“We look forward to having the opportunity to share the advancements we have made with our flexible neural processing technology in our Poster Session and Demonstration at the tinyML Summit,” said Louis DiNardo, CEO of BrainChip. “We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems. We believe that as a high-performance, ultra-low-power neural processor, Akida is ideally suited to Edge and IoT applications.”

Akida is available as a licensable IP technology that can be integrated into ASIC devices and will also be available as an integrated SoC, both suitable for applications such as surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision-guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis, and the Industrial Internet of Things (IIoT). Akida performs neural processing at the edge, which vastly reduces the computing resources required of the system host CPU. This unprecedented efficiency not only delivers faster results but also consumes only a tiny fraction of the power of traditional AI processing, while enabling customers to develop solutions with industry-standard flows such as TensorFlow/Keras. Functions like training, learning, and inferencing are orders of magnitude more efficient with Akida.

Tiny machine learning is broadly defined as a fast-growing field of machine learning technologies and applications, including hardware (dedicated integrated circuits), algorithms, and software capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below, thereby enabling a variety of always-on use cases in battery-operated devices. tinyML Summit 2020 will continue the tradition of high-quality invited talks, poster and demo presentations, open and stimulating discussions, and significant networking opportunities. It will cover the whole stack of technologies (Systems-Hardware-Algorithms-Software-Applications) at a deep technical level, a unique feature of the tinyML Summits. Additional information about the event is available at
https://tinymlsummit.org/
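Worth noting where that "40 to 60 percent fewer computations" figure plausibly comes from: in an event-based processor, a zero activation generates no event and therefore costs no multiply-accumulate operations at all, and ReLU layers typically zero out around half of their activations. A rough back-of-envelope illustration (numbers arbitrary, not a measurement of Akida):

```python
import numpy as np

rng = np.random.default_rng(0)

# A ReLU layer typically zeroes out roughly half of its activations.
activations = np.maximum(rng.normal(size=1024), 0.0)
fan_out = 256                     # outputs of a hypothetical next layer

# Frame-based accelerator: every input position is counted,
# zero or not, so MACs = inputs x outputs.
dense_macs = activations.size * fan_out

# Event-based processor: only nonzero activations fire events,
# so zeros contribute no MACs at all.
event_macs = np.count_nonzero(activations) * fan_out

print(f"dense MACs: {dense_macs}")
print(f"event MACs: {event_macs}")
print(f"savings:    {1 - event_macs / dense_macs:.0%}")
```

With roughly half the activations zeroed, the event-driven count lands in the same ballpark as the 40-60 percent savings the press release cites; real networks vary with layer type and input data.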

Yeah you know… but why don’t you talk about the real issues here??? SEAN IS NOT WEARING A NECKTIE!!!!!


Sad Season 3 GIF by The Lonely Island
 