BRN Discussion Ongoing

oh manny......

Steve Brightfield ( our marketing guy?????) No....Just......NO!!!!

Parsons..... we just raised a truck load of dollars to guarantee the supply of 74,000 chips :sick:



I say no to Antonio J. Viana in 2026
Antonio talks fluff; the share price will do what the share price will do.
To attract investors the share price needs to do something. What has the share price done from when Mr Hehir joined the company until now, over 4 years in?
2 AGMs ago he was talking to 3 big companies and landed none. He gets a seat at the table to do what exactly?
When is this company going to release game-changing deals?
But we're working on unveiling Akida 3...
TENNs, Pico - hello, where are the deals?
 
  • Like
Reactions: 2 users

manny100

Top 20
  • Like
  • Fire
  • Thinking
Reactions: 7 users

manny100

Top 20
Antonio talks fluff; the share price will do what the share price will do.
To attract investors the share price needs to do something. What has the share price done from when Mr Hehir joined the company until now, over 4 years in?
2 AGMs ago he was talking to 3 big companies and landed none. He gets a seat at the table to do what exactly?
When is this company going to release game-changing deals?
But we're working on unveiling Akida 3...
TENNs, Pico - hello, where are the deals?
The SP will not move until we get deals that put us into products. We have several clients with prototypes (including Parsons), but these need to take the next step.
We should see further news this year.
See the earlier post where ARM say it's just the very beginning of physical AI. Momentum is building and it's getting closer.
Better to be in early and patient than to get in late and pay a packet more.
 
  • Like
  • Thinking
Reactions: 5 users

Cardpro

Regular
For those not sure that ARM is serious about Brainchip check out the ARM partner brief on the ARM Website.
Also.
Yeah, we are one of hundreds and we've been a partner for a while now - no one questions that.

But I do want to question when it will materialise...
 
  • Like
  • Fire
Reactions: 3 users

Earlyrelease

Regular
Hey Dio, I am stealing your work and personalizing it:

This is Dio's spiderweb of value compared to Simply Wall Street - Not financial advice, just an opinion which, in part, underlies my investment decision.
[image: Simply Wall Street value web]



Value:
In my opinion, anything over what I paid is a bonus, and anything under $5 is excellent value. The boomerang above in orange is indicative of a market in which shorters have feasted on limited news from a start-up tech company. But being an Aussie, we all know that boomerangs return, so let's wait that ride out. (Now, before people get technical about the shape of a boomerang: our Western notion of the shape differs vastly from the actual shapes used by the desert Aboriginal people of Western Australia, where I spent a fair portion of my career, which vary depending on their purpose.)

Future

There will be competition, that is expected, but our moat of patents, hopefully ground-breaking tech and the 10-year head start we have had on some will ensure that we capture more than our fair share of the market.

Past

Yes, we could maybe have incorporated a hybrid model of chip-making and IP at the start to encourage adoption, but, as with one's own experience, one learns the most from the mistakes one makes oneself.


Dividend

To date nil, but I am hoping patience and the great Australian Government superannuation guarantee will enable me to hold long enough to take a bit of cream off the top to enjoy retirement and leave a future stream of revenue for the "Early Release" generational wealth fund. This will ensure that my already spoilt and cute little grandson turns out to be a shitty-nosed rich kid ;) (oops, I meant a well-educated and housed productive member of the future generation with the financial ability to make smart choices and live in a home, not a shed.)

Sorry to plagiarize Dio.
 
  • Like
  • Haha
  • Fire
Reactions: 7 users
The SP will not move until we get deals that put us into products. We have several clients with prototypes (including Parsons), but these need to take the next step.
We should see further news this year.
See the earlier post where ARM say it's just the very beginning of physical AI. Momentum is building and it's getting closer.
Better to be in early and patient than to get in late and pay a packet more.
First time posting on here, but have been observing from the sidelines.

Manny, getting in early, like most on here, hasn’t really played out well.

The opportunity cost, and the effect of inflation on our investments, has left many of us underwater, and breathing through a straw.

I invested in BRN back in 2021, and my average is 30.2 cents, after averaging down a number of times.

I’ve listened to the hype, both here and at the other place, and I’ve come to the conclusion that hot air doesn’t pay the bills.

If we’d waited five years, we could’ve got in way cheaper and had many more shares. Hard to believe that it’s 17.5 cents, after being told that the technology is a game changer and years ahead of the opposition.

Anyway, I believe in the technology, but don’t like being treated like a mushroom by the management.

Good luck to all.
 
  • Like
  • Fire
  • Sad
Reactions: 26 users

manny100

Top 20
First time posting on here, but have been observing from the sidelines.

Manny, getting in early, like most on here, hasn’t really played out well.

The opportunity cost, and the effect of inflation on our investments, has left many of us underwater, and breathing through a straw.

I invested in BRN back in 2021, and my average is 30.2 cents, after averaging down a number of times.

I’ve listened to the hype, both here and at the other place, and I’ve come to the conclusion that hot air doesn’t pay the bills.

If we’d waited five years, we could’ve got in way cheaper and had many more shares. Hard to believe that it’s 17.5 cents, after being told that the technology is a game changer and years ahead of the opposition.

Anyway, I believe in the technology, but don’t like being treated like a mushroom by the management.

Good luck to all.
Only hindsight can be the judge of whether we got in too early. It's the market, and "early" means different times and prices for different people. All times and all prices to date can only be judged in hindsight.
The difference between ARM (and others) and Brainchip explains the wait.
ARM is evolutionary — it builds on decades of established tech, so adoption is quick because the industry already understands it.
AKIDA is revolutionary — it introduces tech never seen or known before, so adoption is way slower and way more cautious.
 
  • Like
  • Thinking
Reactions: 9 users

Aretemis

Regular
Give it a rest manny
 
  • Like
  • Fire
Reactions: 5 users

jrp173

Regular
New post from the other place, under "Commercialisation Claims vs Share Price Reality"

A worthwhile read...


[four screenshots of the post]
 
  • Like
  • Fire
  • Haha
Reactions: 4 users
Douglas Fairbairn has shared and posted a lot just recently about Quadric and Morse Micro but nothing about Brainchip. What does that say?
 
  • Haha
Reactions: 3 users
* Guzzi62 now scrambling to work out who Douglas Fairbairn is 🙄
I’ve a post saved somewhere from late last year where Douglas mentioned BRN, but my phone crapped itself and I can’t find it again. 😢
 
  • Wow
Reactions: 2 users

manny100

Top 20
Interesting takeover of Celestial AI by Marvell at an unbelievable, mouth-watering premium in Dec '25.
Marvell to Acquire Celestial AI, Accelerating Scale-up Connectivity for Next-Generation Data Centers | Marvell Technology, Inc. (MRVL)
Marvell will pay $US3.35B - $US1 billion cash and 27.2 million Marvell shares - PLUS an additional $US2.25 billion in Marvell shares subject to revenue targets.
Obviously not the same as Brainchip, but Marvell bought Celestial AI to secure a breakthrough photonic-interconnect technology that removes the data-movement bottleneck in AI data centers and to take leadership in that field. Their tech is great.
It shows the premiums available for 'revolutionary' tech.
Celestial AI operate in an already mature data center industry. The Edge is fledgling but growing - ask ARM.
Once Brainchip gets a deal or two, say from any of Parsons, RTX, Bascom Hunter, MetaGuard-RT or ONSOR, it's game on value-wise...........
If anyone wants leadership they will have to pay a big premium.
Marvell’s $5.5B Celestial AI acquisition expands its role in AI data center hardware — firm now positioned to deliver next-gen optical interconnects | Tom's Hardware - Tom's says the acquisition was one of the most aggressive moves any mid-tier silicon vendor has made in this cycle (the $5.5B figure counts the earn-out on top of the upfront price).
This post is to highlight the premiums paid for leading tech. Celestial operates in an already mature industry whereas we are in a fledgling but exponentially growing industry - so our turn will come.
Evidently Celestial AI was formed in 2020 as a private company, so records are scarce.
 
  • Like
Reactions: 4 users

Frangipani

Top 20
Early access abstract only but some interesting authors using Akida and Raspberry Pi.



Fernando Sevilla Martínez
e-Health Center, Universitat Oberta de Catalunya UOC, Barcelona, Spain
Volkswagen AG, Wolfsburg, Germany

Jordi Casas-Roma
Computer Vision Center, Universitat Autònoma de Barcelona, Barcelona, Spain

Laia Subirats
e-Health Center, Universitat Oberta de Catalunya UOC, Barcelona, Spain

Raúl Parada
Centre Tecnològic de Telecomunicacions de Catalunya CTTC/CERCA, Barcelona, Spain


Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware

Abstract:​

This letter presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing on existing software and hardware components. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks, highlighting the novelty of providing a reproducible integration pipeline that brings together existing components into a practical, energy-efficient framework for real-world use.
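For anyone wanting to see what that pipeline looks like in code, here is a minimal sketch of the train → quantize → convert → map flow the abstract describes, based on BrainChip's publicly documented MetaTF tooling (the cnn2snn and akida Python packages). Function names and signatures vary between MetaTF releases, and the authors' own code will differ, so treat this purely as an illustration:

```python
# Illustrative sketch only: a train -> quantize -> convert -> map flow for an
# Akida PCIe accelerator, loosely following the abstract. Function names come
# from BrainChip's MetaTF packages (cnn2snn, akida); quantization moved to the
# separate quantizeml package in newer releases, so adjust to your version.

import numpy as np
from tensorflow import keras

import akida                            # Akida runtime; detects PCIe boards
from cnn2snn import quantize, convert   # Keras -> Akida conversion helpers

# 1. Build (and normally train) a small Keras CNN using Akida-compatible layers.
model = keras.Sequential([
    keras.layers.Conv2D(16, 3, strides=2, activation="relu",
                        input_shape=(28, 28, 1)),
    keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(10),
])
# model.fit(...)  # training omitted in this sketch

# 2. Quantize weights/activations to low bit widths so the model fits Akida.
quantized = quantize(model, weight_quantization=4, activ_quantization=4)

# 3. Convert the quantized Keras model into a spiking Akida model.
akida_model = convert(quantized)

# 4. Map the model onto the PCIe board if one is present; without hardware
#    the akida package runs the model in its software simulator.
devices = akida.devices()
if devices:
    akida_model.map(devices[0])

# 5. Inference: Akida expects unsigned 8-bit inputs.
frame = np.random.randint(0, 256, size=(1, 28, 28, 1), dtype=np.uint8)
potentials = akida_model.predict(frame)
print("output shape:", potentials.shape)
```

As far as I understand it, the same script runs unchanged on a Raspberry Pi 5: if no PCIe board is mapped, inference simply falls back to the software simulator, which is handy for checking the conversion before touching hardware.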

Great find, @Fullmoonfever!

It appears, though, that you haven’t yet made the connection between the paywalled IEEE Networking Letter titled “Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware” and the GitHub repository named “SevillaFe/SNN_Akida_RPI5” by first author Fernando Sevilla Martínez, which you had already discovered back in July.


View attachment 91440


At the time, the content freely accessible via GitHub enabled us to gather quite a bit of info on the use cases Fernando Sevilla Martínez and his fellow researchers (which as predicted includes Raúl Parada Medina) had had in mind, when they set out to “validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence”.

The GitHub repository concluded with the acknowledgment that “This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.” Nevertheless, one focus was evidently on V2X (= Vehicle-to-Everything) communication systems.

So I am reposting some of the July posts on this topic here to refresh our memory:

Please find below an article published today on the Universitat Oberta de Catalunya (UOC) website, referring to two papers co-authored by Fernando Sevilla Martínez, Jordi Casas-Roma, Laia Subirats & Raúl Parada Medina - familiar names that have come up in several 2025 posts here on TSE.

Back in September, @Fullmoonfever had spotted an early access abstract of the first paper referenced (“Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware”) 👆🏻, which “presents a practical pipeline for deploying Spiking Neural Networks (SNNs) on low-cost edge hardware, combining Raspberry Pi 5 (RPI5) with the BrainChip Akida PCIe accelerator”. I then linked the abstract to a GitHub repository named SevillaFe/SNN_Akida_RPI5👆🏻, which @Fullmoonfever had discovered two months earlier.

Today’s UOC publication provides a link, through which the full paper can now be accessed - cf. my screenshots below.

The second paper referenced in the article ("Energy-aware regression in spiking neural networks for autonomous driving: A comparative study with convolutional networks" https://onlinelibrary.wiley.com/doi/10.1155/int/4879993) does not mention BrainChip or Akida.




22/1/26 · TECNOLOGÍA

Investigadores de la UOC desarrollan un modelo de IA de bajo consumo y alto rendimiento​

El uso de redes neuronales de impulsos reduce el consumo energético de la IA y acerca la tecnología a grupos y comunidades con menos recursos

Una inteligencia artificial más eficiente no solo beneficia al planeta, sino que también permite mejorar la resiliencia en entornos con conectividad o energía limitadas

La eficiencia energética debe pasar a ser un parámetro central en el diseño de la IA (foto: Adobe)

Juan F. Samaniego / Roser Montserrat

No existe inteligencia artificial sin energía: un centro de datos dedicado en exclusiva a productos y servicios de IA consume hoy tanta electricidad como 100.000 hogares, según la Agencia Internacional de la Energía (AIE). En los últimos años, la inteligencia artificial ha dejado de pertenecer en exclusiva al mundo de la investigación para conquistar cada vez más espacios de nuestra vida y ello ha venido acompañado de un aumento importante de sus necesidades de energía. De acuerdo con la AIE, los centros de datos consumen en la actualidad un 1,5 % de toda la electricidad producida en el mundo y, si nada cambia, su demanda de energía se duplicará de aquí a finales de la década.

En el camino para que algo cambie y se reduzca la huella energética de la IA, dos trabajos de la Universitat Oberta de Catalunya (UOC), con la participación de los investigadores Fernando Sevilla Martínez y Laia Subirats Maté, del grupo NeuroADaS Lab (Cognitive Neuroscience and Applied Data Science Lab), proponen sendas alternativas hacia una IA más sostenible y eficiente y, también, más asequible. Los artículos han sido publicados en abierto en IEEE Networking Letters y en el International Journal of Intelligent Systems.

También tiene implicaciones desde el punto de vista social y ético, ya que permite que la IA esté al alcance de cualquier persona y refuerza la privacidad de los datos”

La eficiencia energética debe pasar a ser un parámetro central en el diseño de la IA. No se trata solo de hacer modelos más rápidos o con mejor rendimiento, sino de hacerlos sostenibles, éticos y accesibles", señala Subirats Maté, también profesora agregada de los Estudios de Informática, Multimedia y Telecomunicación de la UOC. "Diseñar IA energéticamente eficiente no solo beneficia al planeta, sino que también permite desplegar IA en dispositivos pequeños como robots y sensores, reducir los costes de operación de las empresas y los centros de datos y mejorar la resiliencia en entornos con conectividad o energía limitadas".



Redes neuronales más ecológicas​

El primero de los trabajos publicados, liderado desde la UOC por el doctorando Fernando Sevilla Martínez y con la participación de la Universitat Autònoma de Barcelona, el Centre de Visió per Computador (CVC/UAB), el Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) y el grupo Volkswagen, ha demostrado que es posible desarrollar redes neuronales de impulsos (un tipo de IA que imita el funcionamiento del cerebro humano) de bajo consumo y de alto rendimiento utilizando componentes económicos y accesibles, como Raspberry Pi 5 y el acelerador BrainChip Akida. Este estudio abre el camino hacia redes distribuidas de inteligencia artificial eficientes energéticamente, aplicables en campos como el transporte, la monitorización ambiental o el internet de las cosas (IoT) industrial.

"La metodología que proponemos permite entrenar, convertir y ejecutar estos modelos de redes neuronales de impulsos sin necesidad de una unidad de procesamiento gráfico ni de conexión a un centro de datos o a la nube, con un consumo de menos de diez vatios de energía", detallan los autores. "Además, gracias a otras tecnologías como Message Queuing Telemetry Transport, Secure Shell y comunicación Vehicle-to-Everything, varios dispositivos pueden colaborar entre sí en tiempo real, y compartir resultados en menos de un milisegundo y con un gasto energético de apenas diez a treinta microjulios por operación".

De acuerdo con los investigadores, esto no solo tiene implicaciones desde el punto de vista del consumo energético, sino también desde el punto de vista social y ético, ya que permite que la IA esté al alcance de cualquier persona y refuerza la privacidad de los datos. Esto hace que las escuelas o los hospitales, las zonas rurales con infraestructura limitada o los grupos de ciudadanos con pocos recursos puedan usar una inteligencia artificial eficiente, sostenible, accesible y distribuida.


Hacia una conducción autónoma eficiente​

El segundo de los trabajos, liderado también desde la UOC por Fernando Sevilla Martínez y con los mismos participantes que el anterior, analiza en detalle cómo las redes neuronales de impulsos pueden reducir el consumo energético de los sistemas de conducción autónoma, en comparación con las redes convolucionales, muy utilizadas en sistemas de visión artificial como los que llevan algunos vehículos autónomos. Para ello, comparan ambas tecnologías en tareas como la predicción de ángulos de giro del volante o la detección de obstáculos. La propuesta de los investigadores pasa también por introducir una nueva forma de medir la eficiencia real de los sistemas, para lograr así un mejor equilibrio entre precisión y consumo energético.

"Las pruebas que hemos llevado a cabo con diferentes arquitecturas muestran que las redes neuronales de impulsos con una determinada codificación logran un equilibrio óptimo entre rendimiento y bajo consumo, y utilizan entre diez y veinte veces menos energía que las redes convolucionales", explican los investigadores del grupo NeuroADaS Lab de la UOC, adscrito al eHealth Centre. "Esto demuestra que las redes neuronales pueden impulsar una IA más sostenible incluso sin la necesidad de hardwareespecializado, lo que marca un hito clave hacia una computación eficiente en el transporte inteligente y autónomo", añaden.

De acuerdo con los autores, ambos estudios aportan datos valiosos en la investigación para lograr sistemas de IA que consuman menos energía y, por lo tanto, sean también más asequibles y accesibles. "El primer trabajo aporta un flujo de trabajo práctico con menos demanda eléctrica, menor generación de calor y la posibilidad de desplegar la IA directamente sin centros de datos, el llamado edge computing", concluyen. "Y el segundo introduce una métrica que combina rendimiento y consumo energético, lo que nos permite impulsar el diseño de una IA más sostenible".



Artículos relacionados
Martínez, F.S., Casas-Roma, J., Subirats, L., & Parada, R. (2025) "Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware". IEEE Networking Letters https://doi.org/10.1109/LNET.2025.3611426.

Sevilla Martínez, F., Casas-Roma, J., Subirats, L., & Parada, R. (2025). "Energy-aware regression in spiking neural networks for autonomous driving: A comparative study with convolutional networks". International Journal of Intelligent Systems https://doi.org/10.1155/int/4879993.




English translation by DeepL:

22/1/26 - TECHNOLOGY


UOC researchers develop a low-consumption, high-performance AI model

The use of pulse neural networks reduces the energy consumption of AI and brings the technology closer to groups and communities with fewer resources.

More efficient artificial intelligence not only benefits the planet, but also improves resilience in environments with limited connectivity or energy.


Energy efficiency must become a central parameter in AI design (photo: Adobe).

Juan F. Samaniego / Roser Montserrat

There is no artificial intelligence without energy: a data centre dedicated exclusively to AI products and services now consumes as much electricity as 100,000 homes, according to the International Energy Agency (IEA). In recent years, artificial intelligence has moved from the world of research into more and more areas of our lives, and this has been accompanied by a significant increase in its energy needs. According to the IEA, data centres currently consume 1.5% of all electricity produced worldwide and, if nothing changes, their energy demand will double by the end of the decade.

On the road to making a difference and reducing the energy footprint of AI, two papers from the Universitat Oberta de Catalunya (UOC), with the participation of researchers Fernando Sevilla Martínez and Laia Subirats Maté, from the NeuroADaS Lab (Cognitive Neuroscience and Applied Data Science Lab) group, propose alternative paths towards a more sustainable and efficient AI that is also more affordable. The articles have been published openly in IEEE Networking Letters and the International Journal of Intelligent Systems.

"It also has implications from a social and ethical point of view, as it allows AI to be available to anyone and reinforces data privacy

"Energy efficiency must become a central parameter in AI design. It's not just about making models faster or with better performance, but about making them sustainable, ethical and accessible," says Subirats Maté, also an associate professor at the UOC's Faculty of Computer Science, Multimedia and Telecommunications. "Designing energy-efficient AI not only benefits the planet, but also makes it possible to deploy AI in small devices such as robots and sensors, reduce the operating costs of companies and data centres, and improve resilience in environments with limited connectivity or energy".



Greener neural networks

The first of the published studies, led at the UOC by PhD student Fernando Sevilla Martínez and with the participation of the Universitat Autònoma de Barcelona, the Centre de Visió per Computador (CVC/UAB), the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) and the Volkswagen Group, has shown that it is possible to develop low-power, high-performance impulse neural networks (a type of AI that mimics the functioning of the human brain) using inexpensive and accessible components, such as Raspberry Pi 5 and the BrainChip Akida accelerator. This study paves the way towards energy-efficient distributed artificial intelligence networks, applicable in fields such as transport, environmental monitoring and the industrial internet of things (IoT).

"The methodology we propose allows training, converting and executing these neural network models without the need for a graphics processing unit or connection to a data centre or the cloud, with a power consumption of less than ten watts,' the authors explain. "Furthermore, thanks to other technologies such as Message Queuing Telemetry Transport, Secure Shell and Vehicle-to-Everything communication, multiple devices can collaborate with each other in real time, sharing results in less than a millisecond and with an energy expenditure of only ten to thirty microjoules per operation.

According to the researchers, this not only has implications from an energy consumption point of view, but also from a social and ethical point of view, as it makes AI accessible to anyone and enhances data privacy. This makes it possible for schools or hospitals, rural areas with limited infrastructure or groups of citizens with few resources to use efficient, sustainable, accessible and distributed artificial intelligence.


Towards efficient autonomous driving

The second study, also led at the UOC by Fernando Sevilla Martínez and with the same participants as the previous one, analyses in detail how impulse neural networks can reduce the energy consumption of autonomous driving systems, compared with convolutional networks, which are widely used in artificial vision systems such as those used in some autonomous vehicles. To this end, they compare both technologies in tasks such as steering wheel steering angle prediction or obstacle detection. The researchers' proposal also involves introducing a new way of measuring the real efficiency of the systems, in order to achieve a better balance between precision and energy consumption.

"The tests we have carried out with different architectures show that pulse neural networks with a specific encoding achieve an optimal balance between performance and low power consumption, and use between ten and twenty times less energy than convolutional networks," explain the researchers from the UOC's NeuroADaS Lab group, which is part of the eHealth Centre. "This shows that neural networks can drive more sustainable AI even without the need for specialised hardware, marking a key milestone towards efficient computing in intelligent and autonomous transport," they add.

According to the authors, both studies provide valuable insights into research for AI systems that consume less energy and are therefore also more affordable and accessible. "The first paper provides a practical workflow with lower power demand, less heat generation and the possibility to deploy AI directly without data centres, so-called edge computing," they conclude. "And the second introduces a metric that combines performance and energy consumption, allowing us to drive the design of more sustainable AI.


Related articles

Martínez, F.S., Casas-Roma, J., Subirats, L., & Parada, R. (2025) "Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware". IEEE Networking Letters https://doi.org/10.1109/LNET.2025.3611426.

Sevilla Martínez, F., Casas-Roma, J., Subirats, L., & Parada, R. (2025). "Energy-aware regression in spiking neural networks for autonomous driving: A comparative study with convolutional networks". International Journal of Intelligent Systems https://doi.org/10.1155/int/4879993.





https://ieeexplore.ieee.org/ielx8/8...leHBsb3JlLmllZWUub3JnL2RvY3VtZW50LzExMTcxNjE3

[screenshots of the full paper]
 
  • Like
  • Fire
  • Love
Reactions: 14 users

IloveLamp

Top 20
  • Haha
  • Like
Reactions: 4 users

Diogenese

Top 20
Hey Dio, I am stealing your work and personalizing it:

This is Dio's spiderweb of value compared to Simply Wall Street - Not financial advice, just an opinion which, in part, underlies my investment decision.
[image: Simply Wall Street value web]



Value:
In my opinion, anything over what I paid is a bonus, and anything under $5 is excellent value. The boomerang above in orange is indicative of a market in which shorters have feasted on limited news from a start-up tech company. But being an Aussie, we all know that boomerangs return, so let's wait that ride out. (Now, before people get technical about the shape of a boomerang: our Western notion of the shape differs vastly from the actual shapes used by the desert Aboriginal people of Western Australia, where I spent a fair portion of my career, which vary depending on their purpose.)

Future

There will be competition, that is expected, but our moat of patents, hopefully ground-breaking tech and the 10-year head start we have had on some will ensure that we capture more than our fair share of the market.

Past

Yes, we could maybe have incorporated a hybrid model of chip-making and IP at the start to encourage adoption, but, as with one's own experience, one learns the most from the mistakes one makes oneself.


Dividend

To date nil, but I am hoping patience and the great Australian Government superannuation guarantee will enable me to hold long enough to take a bit of cream off the top to enjoy retirement and leave a future stream of revenue for the "Early Release" generational wealth fund. This will ensure that my already spoilt and cute little grandson turns out to be a shitty-nosed rich kid ;) (oops, I meant a well-educated and housed productive member of the future generation with the financial ability to make smart choices and live in a home, not a shed.)

Sorry to plagiarize Dio.
Hi Early,

Happy to see my post stimulated some discussion. My only gripe is that the value web you've shown is not mine. It's the Simply Wall Street one which, in my view, grossly underestimates future potential and value.

As far as dividends/revenue is concerned, as I've said before, I think the CyberNeuro-RT/Akida Edge Box is our shortest path to a commercial market. In fact, MetaGuard AI are marketing it now. While IP licences may take some time to mature, this is being commercialized now. This is the product of SBIRs from the US DoE and MDA (Missile Defence Agency).

https://brainchip.com/metaguard-ai-...de-access-and-brainchip-neuromorphic-support/
...
CyberNeuroRT Platform Capabilities:

Zeek and Corelight Integration: Adds multi-model ML inference to existing Zeek and Corelight deployments without infrastructure replacement.

Neuromorphic Optimization: Optimized for BrainChip Akida processors for ultra-low power edge deployment in industrial IoT and distributed environments.

Federal Validation: Competitive SBIR awards from the Department of Energy and Missile Defense Agency validate defense-grade capabilities.

Cybersecurity is essential for any device connected to the interweb. The threats are increasing, and AI will only exacerbate this.
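To picture what "adding ML inference to an existing Zeek deployment" can look like in the most generic terms (this is emphatically not the CyberNeuro-RT code, just a toy sketch): Zeek writes tab-separated connection records to conn.log, and a bolt-on component can score each record without touching the Zeek install itself. The feature choice, model file and alert threshold below are assumptions for the sketch; a neuromorphic back end would presumably swap the pickled classifier for an Akida-hosted model:

```python
# Toy sketch: score records from Zeek's standard tab-separated conn.log with a
# generic pre-trained classifier. Not the CyberNeuro-RT product; feature choice,
# model file and threshold are assumptions for illustration only.

import pickle

import numpy as np

LOG_PATH = "conn.log"               # default Zeek connection log (TSV format)
MODEL_PATH = "flow_classifier.pkl"  # hypothetical pre-trained scikit-learn model

def parse_conn_log(path):
    """Yield one dict per record, keyed by the names in the '#fields' header."""
    fields = None
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#fields"):
                fields = line.split("\t")[1:]
                continue
            if line.startswith("#") or not line or fields is None:
                continue
            yield dict(zip(fields, line.split("\t")))

def to_features(rec):
    """Tiny numeric feature vector; '-' marks unset fields in Zeek logs."""
    def num(key):
        val = rec.get(key, "-")
        return float(val) if val not in ("-", "") else 0.0
    return [num("duration"), num("orig_bytes"), num("resp_bytes"),
            num("orig_pkts"), num("resp_pkts")]

with open(MODEL_PATH, "rb") as fh:
    clf = pickle.load(fh)           # any model exposing predict_proba()

for rec in parse_conn_log(LOG_PATH):
    score = clf.predict_proba(np.array([to_features(rec)]))[0][1]
    if score > 0.9:                 # arbitrary alert threshold for the sketch
        print(f"suspicious connection {rec.get('uid')}: "
              f"{rec.get('id.orig_h')} -> {rec.get('id.resp_h')} (score {score:.2f})")
```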
 
  • Like
Reactions: 8 users
So, still fishing around GitHub, I found this person/team who operate a drone business / development outfit in California called Dronomy.

The snip below is from a discussion area within GitHub. Someone else posts about Openpilot every couple of years or so, and Dronomy believe it may assist them with their work on drones, in what appears to be a development using Akida.

Anyway, info below FWIW.




[screenshot of the GitHub discussion]




Kumar Robotics is part of the University of Pennsylvania's engineering school. I thought they did some work with Akida as well some time ago, but I'm not sure. I need to try to find it again.



 
  • Like
  • Love
Reactions: 6 users

Diogenese

Top 20
So, still fishing around GitHub, I found this person/team who operate a drone business / development outfit in California called Dronomy.

The snip below is from a discussion area within GitHub. Someone else posts about Openpilot every couple of years or so, and Dronomy believe it may assist them with their work on drones, in what appears to be a development using Akida.

Anyway, info below FWIW.




View attachment 94479



Kumar Robotics is part of the University of Pennsylvania's engineering school. I thought they did some work with Akida as well some time ago, but I'm not sure. I need to try to find it again.



If there were a camel milk producer that used drones to herd the camels, that would be a dronedairy.
 
  • Haha
Reactions: 3 users

Frangipani

Top 20
News from our friends at OHB Hellas: Meet Hummingbird, their new small demonstrator, for which they used Akida “directly on the drone to perform real-time vehicle detection from RGB images, without any ground connection”!

OHB Hellas are translating their space onboard computing research to the UAV world: “We just flew neuromorphic AI on a small UAV - onboard, in real time, with very low power.”

“… an important step to explore technologies that can serve UAVs today and also satellite missions tomorrow.”


[screenshot of the OHB Hellas LinkedIn post]





In his comment, OHB Hellas CTO and Managing Director Mathieu Bernou names the three researchers from the Orbital High Performance Computing (Orbital-HPC) team (https://www.ohb-hellas.gr/activities/orbital-hpc/) who worked on Hummingbird:

[screenshot of Mathieu Bernou’s comment]



Niki Doulou: https://www.linkedin.com/in/niki-doulou-168944254/



Giannis Panagiotopoulos: https://www.linkedin.com/in/giannispanagiotopoulos/



Evgenios Tsigkanos: https://www.linkedin.com/in/evgenios-tsigkanos/

 
Last edited:
  • Love
  • Like
  • Fire
Reactions: 5 users

Frangipani

Top 20






BrainChip



Digital Design Engineer (MPU)​



Laguna Hills, CA · 1 day ago · 92 applicants
Promoted by hirer ·
Company review time is typically 1 week
$100K/yr - $160K/yr
Hybrid
Full-time


About the job​


BrainChip is seeking a Digital Design Engineer to join a team working on cutting-edge and novel AI hardware. The primary job function is to work with team members to design and develop digital modules from concept stage to tape-out or FPGA release. This position will be part of our Hardware Development group. The Digital Design Engineer needs to be able to start from a Product Requirement Specification, develop a feasible microarchitecture, implement the function using RTL language, verify the functionality, and follow up until completion of the product.

This is a hybrid role requiring you in our Laguna Hills, CA office 3x a week.

Additional Requirement:
Candidates must have experience in embedded MPU design, preferably with RISC-V architectures and custom accelerators. This includes developing and integrating processor cores, designing instruction sets, and optimizing performance for AI workloads. Familiarity with pipeline design, cache systems, and hardware/software co-design is highly desirable.


ESSENTIAL JOB DUTIES AND RESPONSIBILITIES:

  • Understand product requirements, gather the relevant information, and develop a solution.
  • Specify and implement CPUs such as RISC-V in a co-processor or host role.
  • Use RTL language to design the digital functional modules.
  • Program, debug, and build the FPGA for system verification.
  • Use simulation tools to check the functionalities of the designs in RTL and gate level.
  • Collaborate with other team members to define a verification methodology and a test plan
  • Write clear documentation of the designs.

QUALIFICATIONS:
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Education/Experience/Qualifications:
  • BS/MS in Electrical Engineering or related degree or certification required
  • 3+ years of experience in digital logic design
  • Good understanding of modern computational architectures
  • Experience in embedded MPU design, preferably RISC-V
  • Fluent in Verilog and SystemVerilog
  • Knowledge of MPU and SRAM based SoC components and system busses (AXI, AHB, APB) is strongly desired
  • Knowledge of standard SoC interfaces (SPI, I2C, etc.…) and high-speed IO protocols (PCIe, USB, DDR) is a plus
  • Good skills in Python and shell scripting are desired
  • Good debugging skills, and well experienced with VCS/Verdi or similar toolsets


At BrainChip, we hire based on potential, performance, and alignment with our mission to shape the future of intelligent edge computing. We are committed to providing a fair and respectful workplace where individuals are evaluated on their qualifications and contributions. Employment decisions are made without regard to race, color, religion, sex, national origin, age, disability, or any other status protected by applicable law.

We value the diverse perspectives and experiences that help drive innovation in neuromorphic AI and welcome applicants from all backgrounds who share our passion for advancing technology that matters.
 
  • Like
Reactions: 1 users

Frangipani

Top 20
News from our friends at OHB Hellas: Meet Hummingbird, their new small demonstrator, for which they used Akida “directly on the drone to perform real-time vehicle detection from RGB images, without any ground connection”!

OHB Hellas are translating their space onboard computing research to the UAV world: “We just flew neuromorphic AI on a small UAV - onboard, in real time, with very low power.”

“… an important step to explore technologies that can serve UAVs today and also satellite missions tomorrow.”


View attachment 94480




In his comment, OHB Hellas CTO and Managing Director Mathieu Bernou names the three researchers from the Orbital High Performance Computing (Orbital-HPC) team (https://www.ohb-hellas.gr/activities/orbital-hpc/) who worked on Hummingbird:

View attachment 94490


Niki Doulou: https://www.linkedin.com/in/niki-doulou-168944254/

View attachment 94491

Giannis Panagiotopoulos: https://www.linkedin.com/in/giannispanagiotopoulos/

View attachment 94492 View attachment 94493

Evgenios Tsigkanos: https://www.linkedin.com/in/evgenios-tsigkanos/

View attachment 94494 View attachment 94495

Kenneth Östberg (Frontgrade Gaisler) commented:

[screenshot of Kenneth Östberg’s comment]
 
  • Like
Reactions: 1 users