Pom down under
Hopefully we’re a part of their massive growth in sales over the next 3 years.
Please don't regard my ramblings as investment advice. I have no way to assess the commercial potential of competing applications, and I have an inherent bias in favour of BRN.

Thanks Diogenese … I hope you’re right.
The only thing that makes me think it’s more than just a pig in a poke is that the CEO recently put up $350,000 of his own money to buy shares in a capital raise, and he has very impressive credentials. That is something I’ve never seen Sean or anyone else from BRN do.
Also Weebit Nano partnered with EMASS to demo hand gesture recognition at
Hi GS,
Maybe we’re still integrated with Edge Impulse/Qualcomm (docs updated 3 days ago)
Frequently asked questions (FAQ) | Edge Impulse Documentation
docs.edgeimpulse.com
Appreciate your input Dio

Hi GS,
While I think that the EI association will continue for a while, I don't see Qualcomm as a near term customer.
Qualcomm is committed to its in-house DSP-based Hexagon NPU AI solution. They distribute the AI workload between CPU/GPU/NPU depending on the type of work. Their NPU design philosophy is different from BRN's.
https://www.qualcomm.com/content/da...I-with-an-NPU-and-heterogeneous-computing.pdf
A personal assistant that offers a natural voice user interface (UI) to improve productivity and enhance user experiences is expected to be a popular generative AI application. The speech recognition, LLM, and speech models must all run with some concurrency, so it is desirable to split the models between the NPU, GPU, CPU, and the sensor processor. For PCs, agents are expected to run pervasively (always-on), so as much of it as possible should run on the NPU for performance and power efficiency.
...
Our latest Hexagon NPU offers significant improvements for generative AI, delivering 98% faster performance and 40% improved performance per watt. It includes micro-architecture upgrades, enhanced micro-tile inferencing, reduced memory bandwidth, and a dedicated power rail for optimal performance and efficiency. These enhancements, along with INT4 hardware acceleration, make the Hexagon NPU the leading processor for on-device AI inferencing.
...
5. Building our NPU from a DSP architecture was the right choice for improved programmability and the ability to tightly control scalar, vector, and tensor operations that are inherent to AI processing.
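The quoted passage describes splitting concurrent AI models across the NPU, GPU, and CPU, with always-on work pushed to the NPU for power efficiency. A toy scheduler can illustrate that idea; everything below is a hypothetical sketch (the names, rules, and heuristics are mine, not a Qualcomm API):

```python
# Illustrative sketch of heterogeneous workload placement in the spirit of
# the quoted Qualcomm description. All names and rules here are hypothetical.

# Always-on workloads favour the NPU for performance per watt.
ALWAYS_ON = {"speech_recognition", "wake_word"}

def assign_processor(model: str, supports_npu: bool, npu_busy: bool) -> str:
    """Pick a processor for one model using simple heuristics."""
    if model in ALWAYS_ON and supports_npu:
        return "NPU"            # always-on work runs on the NPU when possible
    if supports_npu and not npu_busy:
        return "NPU"
    if model == "llm":
        return "GPU"            # large parallel workloads fall back to the GPU
    return "CPU"                # everything else runs on the CPU

def schedule(models):
    """Place a list of (name, supports_npu) workloads across processors."""
    npu_busy = False
    placement = {}
    for name, supports_npu in models:
        proc = assign_processor(name, supports_npu, npu_busy)
        if proc == "NPU":
            npu_busy = True     # model occupies the NPU for this session
        placement[name] = proc
    return placement
```

For example, scheduling a voice assistant's speech recogniser, an LLM, and a speech synthesis model concurrently would land the always-on recogniser on the NPU and spill the rest to GPU/CPU, mirroring the split described above.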
It appears they may be talking about still supporting "existing trained Edge Impulse projects", but you still can't train new models.

Maybe we’re still integrated with Edge Impulse/Qualcomm (docs updated 3 days ago)