Deleted member 3781
Guest
With so many positive things happening in the business at the moment it is hard to fathom why the share price continues to sink.
Everyone knows there will be cap raises until revenue is sufficient to self-sustain, so there is no surprise with that at all.
This is the reason why the price is dropping: shorts at over 100 million plus. Where are they getting the shares from??? Naked shorting, I bet ya, plus the fact that there will be another capital raise in the future to support management's high cash burn rate.
Agree, gross shorts alone conceal the true directional position (market bet) of a fund. Retail investors do not know what the funds are ultimately betting on. Funds have different strategies, e.g. they may hold a percentage of shorts to hedge against a long bet, but we only see the gross shorts, not the long or the net.
For those unaware:
The ASX recently changed the reporting of short positions to save the reporting participants time and money on what they claimed was information that no one used. This eliminated access to the net short positions, which were considered more accurate than the gross numbers used by Shortman, for example.
The below is an example of how retail investors are now exposed to false market information regarding short positions. Even over this small Brainchip example you can clearly see the current issue with accuracy. Due to this new level of abusive corruption, in my opinion, I am currently seeking with the ASX a return to the previous method of reporting and the reinstatement of access to the more accurate net positions.
Note: anyone thinking Brainchip currently has 100+ million shares being shorted and that professional shorts are going to be burnt may need to do some extra research, especially if you are an amateur retail investor with a current short position relying on the pros to protect you.
All in my opinion …..
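To see why gross figures alone can mislead, here is a toy calculation. The funds and numbers are entirely made up for illustration (not actual BrainChip short data): a fund that is short 30M shares but long 25M has almost no net bearish bet, yet the gross report shows the full 30M.

```python
# Hypothetical funds, illustrative numbers only.
funds = [
    {"name": "Fund A", "gross_short": 30_000_000, "long": 25_000_000},
    {"name": "Fund B", "gross_short": 50_000_000, "long": 0},
    {"name": "Fund C", "gross_short": 20_000_000, "long": 40_000_000},
]

# Gross shorts: what the new-style report shows.
gross_total = sum(f["gross_short"] for f in funds)

# Net directional position: shorts minus offsetting longs
# (negative per-fund values mean the fund is actually net long).
net_total = sum(f["gross_short"] - f["long"] for f in funds)

print(f"Gross shorts reported: {gross_total:,}")  # -> 100,000,000
print(f"Net directional short: {net_total:,}")    # -> 35,000,000
```

So a headline "100 million shorted" can coexist with a far smaller net bet against the stock, which is the information the old net-position reporting exposed.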
So the gamekeeper is giving the poacher a leg up over the fence?
One major announcement and I'm expecting us to hit 0.10 in no time.
Thanks. AKIDA will be favoured where on-chip learning is needed, e.g. some defence, health and robotics applications. I think in time we may find hybrid chips which pair AKIDA with another chip for its particular qualities.
Hi all.
I asked Grok to "contrast and compare Brainchip's Akida technology with Nanoveu's ECS-DoT AI SoC."
I have a finger in both pies and thought perhaps others may be interested as well.
For what it is worth, below is its answer.
Overview
BrainChip's Akida and Nanoveu's ECS-DoT (developed by its subsidiary EMASS) are both ultra-low-power System-on-Chip (SoC) solutions designed for edge AI applications, targeting always-on processing in power-constrained environments like wearables, IoT devices, drones, and industrial sensors. Akida emphasizes neuromorphic computing, mimicking the brain's sparse, event-based processing for efficiency in neural network tasks. In contrast, ECS-DoT focuses on traditional RISC-V-based AI acceleration with non-volatile memory optimizations, prioritizing multimodal sensor fusion and benchmark-proven energy savings. While both enable cloud-free inference, Akida excels in bio-inspired learning, whereas ECS-DoT offers superior raw efficiency in standardized tests.
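The sparsity claim in the overview is easier to see in code. The sketch below is plain Python and purely illustrative (it is not Akida's actual datapath): a dense pass visits every weight regardless of zeros, while an event-based pass only does multiply-accumulate work for the inputs that actually "fired".

```python
import random

random.seed(0)

N_IN, N_OUT = 64, 16
weights = [[random.gauss(0, 1) for _ in range(N_OUT)] for _ in range(N_IN)]

# Event-like input: most channels are silent (zero), a few carry activity.
x = [random.gauss(0, 1) if random.random() < 0.1 else 0.0 for _ in range(N_IN)]

# Dense pass: every weight is visited, zeros included.
dense_out = [sum(x[i] * weights[i][j] for i in range(N_IN)) for j in range(N_OUT)]
dense_macs = N_IN * N_OUT

# Event-based pass: only non-zero inputs trigger work.
active = [i for i in range(N_IN) if x[i] != 0.0]
event_out = [sum(x[i] * weights[i][j] for i in active) for j in range(N_OUT)]
event_macs = len(active) * N_OUT

# Same answer, far fewer operations when the input is sparse.
assert all(abs(a - b) < 1e-9 for a, b in zip(dense_out, event_out))
print(f"dense MACs: {dense_macs}, event MACs: {event_macs}")
```

With ~90% of inputs silent, the event path does roughly a tenth of the work for an identical result, which is the energy argument behind event-based designs.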
Key Comparison Table
| Aspect | BrainChip Akida | Nanoveu ECS-DoT (EMASS) |
|---|---|---|
| Architecture | Neuromorphic (Spiking Neural Networks - SNNs); event-based, digital neuron fabric with sparsity exploitation (neurons fire only on thresholds). Supports CNNs, DNNs, RNNs, ViTs. | RISC-V core with AI accelerators and non-volatile memory (e.g., ReRAM integration); optimized for multimodal fusion (vision, audio, sensors). Traditional ANN focus. |
| Power Consumption | Ultra-low: ~1 mW (Akida Pico variant); milliwatt-scale for inference. Leverages sparsity for energy savings. | Milliwatt-scale (0.1–10 mW); benchmarks show 90% less energy vs. competitors (e.g., 0.8 µJ/inference in anomaly detection, 20% lower overall vs. peers). |
| Performance | Up to 1.2M neurons, 10B synapses per chip; scalable to 1,024 chips (1.2B neurons total). 8-bit weights/activations; low latency via multi-pass processing. | Up to 30 GOPS/W; 93% faster execution vs. competitors (e.g., 1.22 ms in anomaly detection, 3.9 ms in keyword spotting). 4 MB on-board SRAM for efficient compute. |
| Memory | Configurable local scratchpads; supports LPDDR4 SDRAM (e.g., 256M x 16 bytes in dev kits). | 4 MB on-board SRAM; non-volatile tech reduces leakage and enables always-on modes. |
| Learning/Training | On-chip edge learning via reinforcement/inhibition; incremental learning supported. | Primarily inference-focused; on-device training not emphasized (relies on cloud/offline optimization). |
| Interfaces/Connectivity | PCIe 2.0, ARM Cortex-M4 (300 MHz), GPIO; multi-chip fabric for scaling. | Sensor-integrated (vision/audio); SDKs for IoT integration; partnerships for reference designs. |
| Applications | Edge vision (e.g., industrial inspection), voice, vibration; automotive, consumer electronics, IoT. Strong in pattern recognition. | Drones (extended flight time), wearables, healthcare (biometrics), smart cities; excels in real-time 2D-to-3D conversion, anomaly detection. |
| Process Node | 28 nm (AKD1000); considering 14 nm. | Not specified; modular for future scaling to 6 nm/4 nm. |
| Software Ecosystem | MetaTF framework (TensorFlow/Keras integration); Edge Impulse support; cloud dev tools. | Enhanced SDKs/reference designs via Arrow Electronics; RISC-V tools for custom AI. |
| Maturity/Availability | Commercial since 2022 (AKD1000); dev kits (PCIe/Raspberry Pi) available; partnerships (e.g., Edge Impulse). | Emerged from stealth in 2025; benchmarks completed, OEM integrations underway (e.g., drones); sales reps appointed. |
| Strengths | Bio-mimetic efficiency for sparse data; on-device adaptability; scalable for larger networks. | Benchmark dominance in speed/energy; multimodal versatility; thermal efficiency (no cooling needed). |
| Challenges | Higher power in dense workloads; neuromorphic requires model optimization. | Less emphasis on learning; newer market entry may limit ecosystem breadth. |
Similarities
- Ultra-Low-Power Edge Focus: Both target milliwatt-level operation for battery-powered, always-on AI, reducing cloud dependency, latency, and privacy risks. They enable real-time sensor processing (e.g., vision/audio) in constrained devices.
- Efficiency-Driven Design: Prioritize sparsity/exploitation of data patterns (Akida via neurons; ECS-DoT via memory optimizations) for 10–100x energy savings over general-purpose chips.
- Scalability and Integration: Modular for multi-chip or application-specific variants; support standard ML workflows (e.g., TensorFlow) and dev tools for rapid deployment.
- Target Markets: Overlap in IoT, wearables, drones, and industrial uses, addressing the growing edge AI market (projected >$20B by 2028).
Differences
- Core Philosophy: Akida's neuromorphic approach provides brain-like adaptability and on-chip learning, ideal for dynamic, sparse environments (e.g., event-driven sensors). ECS-DoT's RISC-V foundation delivers predictable, high-throughput performance in multimodal tasks, with superior benchmark results against incumbents like STMicroelectronics or Syntiant.
- Efficiency Metrics: ECS-DoT claims quantifiable edges (e.g., 287x faster in anomaly detection), positioning it for ultra-constrained apps like extended drone flights. Akida shines in qualitative sparsity gains but lacks direct head-to-head benchmarks here.
- Maturity and Ecosystem: Akida is more established with commercial products and broader partnerships, suiting immediate prototyping. ECS-DoT, as a 2025 newcomer, leverages Nanoveu's display tech synergies (e.g., 3D imaging) but focuses on rapid OEM adoption via distributors like Arrow.
- Innovation Trade-offs: Akida enables unique features like incremental learning for evolving models, while ECS-DoT prioritizes raw speed and thermal simplicity for "set-and-forget" deployments.
In summary, choose Akida for adaptive, brain-inspired AI in vision-heavy or learning-centric apps; opt for ECS-DoT for energy-critical, multimodal inference where benchmarks matter most. Both advance edge AI, but ECS-DoT's recent benchmarks suggest it could disrupt in power-sensitive niches.
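The incremental-learning trade-off can be made concrete. The sketch below is not BrainChip's algorithm, just a minimal nearest-prototype classifier (class names and feature vectors are invented) showing the general idea of learning a new class from a single example in the field, without retraining the classes already on the device:

```python
import math

class PrototypeClassifier:
    """Toy nearest-prototype classifier: one stored vector per class."""

    def __init__(self):
        self.prototypes = {}  # label -> feature vector

    def learn(self, label, features):
        # "One-shot" learning: store a single prototype for the class.
        # Existing classes are untouched, so nothing is retrained.
        self.prototypes[label] = list(features)

    def predict(self, features):
        # Classify by nearest stored prototype (Euclidean distance).
        return min(self.prototypes,
                   key=lambda lbl: math.dist(self.prototypes[lbl], features))

clf = PrototypeClassifier()
clf.learn("voice_on", [0.9, 0.1, 0.0])
clf.learn("voice_off", [0.1, 0.9, 0.0])
print(clf.predict([0.8, 0.2, 0.1]))   # -> voice_on

# A new class added in the field from one example:
clf.learn("glass_break", [0.0, 0.1, 0.9])
print(clf.predict([0.1, 0.0, 0.95]))  # -> glass_break
```

An inference-only chip would need the third class baked in at (offline) training time; a device with on-chip learning can pick it up after deployment, which is the distinction the bullet above is drawing.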
Another cap raise must be close, and I'm guessing the shorters are taking such a high position knowing that they will probably be able to cover it back from LDA. Let's just hope we get a decent $$$ announcement before the share price falls.
He must support the shorters and be secretly accumulating, as there can't be any other reason why our SP is where it is currently.
It will be an explosive joy to see the shorts burn... come on Sean, what's the deal here?
Fact Finder's response from over on the crapper is the Grok comparison quoted above.
I feel very comfortable with this arrangement. Oh, just one question: who approached whom, and the big question, why?
I will leave that for you to ponder over, Einsteins......
Hi Manny,
The question is: in what ways is the 'new' AKIDA 1500 superior to the 2000, if at all?
Hi Dio, I asked ChatGPT 5. The problem with the chatbots is that you pretty much have to know the 'guts' of the answer before you ask; the chatbot then gives a useful summary you can use.
Hi Manny,
My guess is that it is adapted to handle TENNs (using the MAC-Lite circuits*), and can do regression.
* DAM**