BRN Discussion Ongoing

paxs

Emerged
Hi Paxs.

I too would love to see our tech shining and improving capabilities where there is both a current need and where we have obvious advantages over extant systems.
I have long thought we need to show the world something that knocks their socks off, some killer app that showcases our advantages in a clear and easily understandable fashion, so that we may both gain traction and establish our reputation broadly, both for rapid sustained growth and for brand recognition.
I had originally thought that would be some medical diagnostic kit, such as a cheap, reliable and data-protective Covid detector, which would have ridden the coattails of the unfortunate pandemic into global consciousness.

Therefore I like your idea of us being involved in the hearing aid sector, and I dream of us partnering with Cochlear or a similar entity, with both the existing good reputation and the commercial heft to drive us forward.

The other immediate area we could enhance is drones/robotics, and I would like to see us widely utilised here in some fashion that allows our characteristics to showcase just how our hardware/software can vastly improve efficiency, battery endurance and the autonomy of even existing systems. I would love us, again, to be involved with a player such as Boston Dynamics or Tesla, who can market what we bring to the table effectively. Maybe this then provides the entrée into a Samsung or Hyundai conglomerate, so we begin to experience both scale and ubiquity.

I guess this is really retail investor thinking, largely driven by the impetus of an increasing share price, whilst the reality may turn out very differently: a long, slow, quiet progression, largely hidden behind closed doors, defense procurement initiatives and NDAs, and enmeshed within others' trademarks and brands.
For personal reasons, I'm still hoping for an announcement this coming week, and have been waiting for such since October 2015, when I bought my first parcel of 10,000 shares at $0.325.
Oh baby, what a ride it's been. 🤣
Bring It BrainChip!
This is where I find management strange: they do very little self-promotion to instill any confidence in the actual company itself. I am under the impression that the company has the expertise to troubleshoot and collaborate on the incorporation of BrainChip's suite of technology into potential customer products.
 
  • Like
  • Fire
Reactions: 3 users

paxs

Emerged
Here my lack of technological smarts will show how much of a novice I am, but I would have thought that by now they would be able to demonstrate, in rudimentary ways, examples of what is achievable on their platform without naming or disclosing anything covered by NDA agreements.
 
  • Like
  • Love
  • Fire
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!
This is where I find management strange: they do very little self-promotion to instill any confidence in the actual company itself. I am under the impression that the company has the expertise to troubleshoot and collaborate on the incorporation of BrainChip's suite of technology into potential customer products.

I think the issue that many have and struggle with is that the company is very focused on just who it is communicating with.
It's not trying to sell hamburgers to the masses. The people they are trying to convince are a small, select group of influential individuals who have the decision-making ability to incorporate our technology into theirs, plus the upcoming generation who are currently studying at certain universities.
So, they target trade shows and publications attended and read by said people.
It's a long slog, unfortunately, and the luck of what happens in the wider world is also a big factor that is largely beyond their control.
My desire for our amalgamation within some killer application rests merely in the belief that some form of positive brand recognition would be helpful, for all the same reasons the Apples and Teslas of this world have pursued it in the past.
 
  • Like
  • Love
  • Fire
Reactions: 26 users

manny100

Regular
Have been following this stock for a fair few birthdays now, and it has been interesting watching the story unfold; I am hoping it is more than just a story. One thing I hope this technology does improve is the hearing aid sector, me being a deaf old coot and not wanting an implant just yet. It got me thinking: with all the people that bounce back and forth between the various sites, what would be the one item or product at the top of people's wish lists to gain improvement through SNN edge compute technology? I am not pushing any product or stock, just interested to see where everyday people are hoping neuromorphic tech can improve outcomes. As I say, just curious; shoot me down, blow me up, put me on ignore, or add a thought just for shits and giggles.
Have a look at the 2024 AGM video from circa the 34-minute mark. Sean and the BOD fully understand there is frustration among holders about deal delays.
Sean explains why there are delays and shares his reasons for being confident and optimistic about engagements and wins.
These issues are also addressed in Question time.
IMO PICO will further enhance the value of our patent portfolio which is likely many times our current value.
 
  • Like
  • Fire
  • Love
Reactions: 30 users

rgupta

Regular
  • Like
Reactions: 3 users

paxs

Emerged
Have a look at the 2024 AGM video from circa the 34-minute mark. Sean and the BOD fully understand there is frustration among holders about deal delays.
Sean explains why there are delays and shares his reasons for being confident and optimistic about engagements and wins.
These issues are also addressed in Question time.
IMO PICO will further enhance the value of our patent portfolio which is likely many times our current value.
I suppose my real concern is another Australian company being redomiciled or taken over just because the people not in the know are kept in the dark and manipulation is used to force an agenda. I think the board can and should do more to address this. A simple annual share distribution report would go a long way; it doesn't matter if it's good or bad, people have a right to make informed decisions. If anyone can post a share distribution report later than 2022, that would be an added bonus.
 
  • Like
Reactions: 2 users

Tezza

Regular
My concern is, if BrainChip keeps growing the product offerings without any deals and the SP doesn't grow, a takeover or buyout will look more and more appealing.
 
  • Like
  • Sad
Reactions: 6 users

TECH

Regular
Good morning all,

Our "gun researchers/posters" may have already posted this, but the link below contains Steve Brightfield's 18-slide presentation
at the recent AI Hardware & Edge Summit... if this hasn't already been posted, well then, enjoy reading these slides.

Once again, any shareholder who can't see the writing on the wall (as in, we are moving forward week by week) needs to research our company more and absorb what they are actually reading, including what other experts say about our technology and the future direction of AI at the "far edge"..... Love our company and all our brilliant staff..... Tech x

 
  • Like
  • Fire
  • Love
Reactions: 40 users

buena suerte :-)

BOB Bank of Brainchip
Hi Paxs.

I too would love to see our tech shining and improving capabilities where there is both a current need and where we have obvious advantages over extant systems.
I have long thought we need to show the world something that knocks their socks off, some killer app that showcases our advantages in a clear and easily understandable fashion, so that we may both gain traction and establish our reputation broadly, both for rapid sustained growth and for brand recognition.
I had originally thought that would be some medical diagnostic kit, such as a cheap, reliable and data-protective Covid detector, which would have ridden the coattails of the unfortunate pandemic into global consciousness.

Therefore I like your idea of us being involved in the hearing aid sector, and I dream of us partnering with Cochlear or a similar entity, with both the existing good reputation and the commercial heft to drive us forward.

The other immediate area we could enhance is drones/robotics, and I would like to see us widely utilised here in some fashion that allows our characteristics to showcase just how our hardware/software can vastly improve efficiency, battery endurance and the autonomy of even existing systems. I would love us, again, to be involved with a player such as Boston Dynamics or Tesla, who can market what we bring to the table effectively. Maybe this then provides the entrée into a Samsung or Hyundai conglomerate, so we begin to experience both scale and ubiquity.

I guess this is really retail investor thinking, largely driven by the impetus of an increasing share price, whilst the reality may turn out very differently: a long, slow, quiet progression, largely hidden behind closed doors, defense procurement initiatives and NDAs, and enmeshed within others' trademarks and brands.
For personal reasons, I'm still hoping for an announcement this coming week, and have been waiting for such since October 2015, when I bought my first parcel of 10,000 shares at $0.325.
Oh baby, what a ride it's been. 🤣
Bring It BrainChip!
Hey Hoppy,

Very similar story matey :)

1728173879775.png
1728174070417.png


And then, two weeks later, topped up at half the price :)

1728173953057.png
1728173967450.png


And topped up 'many' times when it hit below 10c

Hoping for a great few weeks ahead chippers
Mr Bean Waiting GIF by Bombay Softwares


Good luck oh patient ones :)


C'mon Sean .... WE are waiting !!!!!!!!!!!!!!! 🙏🙏🙏
 
  • Like
  • Love
  • Fire
Reactions: 36 users
I am loving the excitement and build-up to what will eventually become the best stock on the Australian market.
And it will truly explode when we get into the American market.
 
  • Like
  • Fire
  • Thinking
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Is it more than just coincidence that Raspberry Pi has a Pico Series, as @Humble Genius alluded to yesterday?

We know that our team showed off AKIDA Pico on applications that detect keywords in speech, and the example given in the article I posted yesterday was that this would be useful for voice assistants, which wait for keywords to activate. That seems to be exactly what the Raspberry Pi Pico requires.

Intentional or coincidental, only time will tell...

View attachment 70280

View attachment 70281



200w.gif




What's this then? Picovoice...

NASA is going to use Picovoice in its next-gen voice-controlled applications? Hmmmm...

And, Picovoice are working on something that will take PicoLLM to the next level? Hmmm...


max-0321-image-for-home-page-llms-on-the-edge.jpg

July 23, 2024

Want to Run LLMs on the Edge?​

by Max Maxfield
I’ve just heard something that left me flabbergasted. Seriously. I cannot recall the last time my flabber was quite this gasted. All I can say is that if you dare to read this column, your own flabber is in danger of joining mine, so this might be a good time for you to don clothing appropriate to the occasion.
Let’s start with the concept of generative AI (GenAI) models like ChatGPT and Stable Diffusion. These are known as large language models (LLMs). LLMs usually run in the cloud; that is, on honking big servers in honking big data centers. Well, suppose I were to tell you that I know of a company that has come up with a way of taking LLMs and running them on low-power processors located at the edge where the “internet rubber” meets the “real-world road”? Even better, suppose I were to tell you that the company in question is making this technology available for us all to use for free? How’s your flabber feeling now?
I was just chatting with Alireza Kenarsari-Anhari, who is the CEO of Picovoice. Based in Canada (there seems to be a heck of a lot of high-technology coming out of Canada these days), the company was founded in 2018. Although this seems like yesterday, it’s a lifetime away in the context of GenAI (remember ChatGPT wasn’t presented to the world until 30 November 2022).

Picovoice started life as a voice AI company with a mission to accelerate the transition of voice AI from running in the cloud to running on edge devices like the Arduino, STM32, and Raspberry Pi Zero.

It turns out that the folks at Picovoice are really, really good at what they do. They originally targeted their solutions at hardware companies, but they quickly discovered that a lot of software companies were also interested in building natural speech capabilities into things like security systems and web browsers. Even NASA is going to use Picovoice technology in its next generation of voice-controlled space applications like spacesuits.

Since the guys and gals at Picovoice wanted to squeeze their technology onto the smallest of processors, they spent a lot of effort figuring out how to implement artificial neural networks (ANNs) very, very efficiently. They also created their own ANN architecture, because even TensorFlow Lite (TFLite) was too big and hairy for what they were doing, and things like TFLite for Microcontrollers wasn’t available at that time (that little scamp didn’t see the light of day until 2019). Furthermore, they also created their own runtime for running neural networks on any processor known to humankind. This is known as XPU, which stands for MPU, MCU, GPU, NPU, etc.
Now, this is where things start to get very interesting indeed. It turns out that if you have a small neural network with only a couple of million parameters (weights), then almost every parameter contributes equally to the accuracy of the model, and it doesn’t much matter where the parameter is in the network.
By comparison, once you start working with neural networks like LLMs with hundreds of billions of parameters, then not all parameters are created equal (which makes me want to paraphrase George Orwell by saying: “all parameters are equal, but some are more equal than others”). In this case, we discover that there is a relatively small number of parameters that are extremely important. We can think of these as the “aristocracy” of parameters. If you perturb these parameters even a tiny bit, they have the ability to make your world go pear-shaped.
Then there’s a bigger group we might think of as “middle-class” parameters. Although they’re important, it’s not fatal if you ruffle their feathers a bit. Finally, we meet the largest group of all, the “working class” parameters, which are not particularly important on an individual basis, but they’re useful to have around—otherwise nothing ends up getting done. To put this another way, this last group of parameters are not individually important, but they contribute to the accuracy of the overall model by their sheer number.
But wait, there’s more, because in addition to having more parameters, LLMs also have more layers. The neural network models we use for things like machine vision have tens of neural layers. By comparison, LLMs have hundreds of layers, but not all layers are of equal significance, and their importance changes depending on the model you are using.
As Alireza told me, “All this got us thinking there should be an algorithm that tells us how to allocate our resources among all these parameters. Almost like a triage.”
After a lot of work, the result is picoLLM, which is an end-to-end local large language model (LLM) platform that enables enterprises to build AI assistants running on-device, on-premises, and in private clouds without sacrificing accuracy.

If you have a hardware platform with limited resources, like 1 gigabyte of RAM, for example, and you have an LLM with hundreds of layers and 10 billion parameters, for example, then picoLLM can analyze the LLM’s layers and the parameters, determine what’s most important, prune things down, and distribute what’s left across the available hardware resources. All this is extremely fine-grained. Some of the parameters become one bit, some become two bits, some become three bits, and so forth depending on how important they are. In a crunchy nutshell, picoLLM can take a humongous LLM and boil it down into something that will fit into your physical system.
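The triage idea just described can be sketched in a few lines of code. To be clear, this is a toy illustration under stated assumptions, not Picovoice's actual algorithm: here plain weight magnitude stands in for "importance", and the 10%/30%/60% tiers at 8/4/2 bits are arbitrary choices that mirror the aristocracy / middle-class / working-class picture above.

```python
import random

def quantize_mixed_precision(weights, tiers=((0.10, 8), (0.30, 4), (0.60, 2))):
    """Quantize each weight with a bit-width chosen by its 'importance'
    (here simply |w|): the top fraction gets 8 bits, the middle 4, the rest 2.
    A real system would use a learned or measured importance score instead."""
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    out = list(weights)
    start = 0
    for frac, bits in tiers:
        idx = order[start:start + round(frac * len(weights))]
        start += len(idx)
        if not idx:
            continue
        vals = [weights[i] for i in idx]
        lo, hi = min(vals), max(vals)
        levels = (1 << bits) - 1            # number of quantization steps
        step = (hi - lo) / levels if hi > lo else 1.0
        for i in idx:                       # snap each weight to its grid
            out[i] = lo + round((weights[i] - lo) / step) * step
    return out

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(4096)]
wq = quantize_mixed_precision(w)
```

The "aristocracy" (largest-magnitude weights) ends up on a fine 8-bit grid with small rounding error, while the "working class" lands on a coarse 2-bit grid, which is exactly the asymmetry the article attributes to picoLLM, minus all the cleverness about layers and per-parameter triage.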
As I mentioned earlier, the folks at Picovoice started as a voice AI company with a mission to accelerate the transition of voice AI from running in the cloud to running on edge devices like the Arduino, STM32, and Raspberry Pi Zero. Now they’ve expanded their mission to accelerate the transition from LLMs running in the cloud to running on the edge.
Obviously, Picovoice is a for-profit company, so why are the folks at Picovoice making their awesome picoLLM technology available for the rest of us to use for free?
Well, it must be acknowledged that Alireza sounded just a little smug when he told me that the guys and gals at Picovoice are in a lucky position in that their voice products are making money and the voice market is on the rise, so they don’t need to raise money and they don’t need investors.
When they started thinking about the next growth enabler, LLMs were the obvious choice. The chaps and chapesses at Picovoice were already good at making ANNs run efficiently with limited resources on the edge, and they realized that many LLMs need to run locally because of cost, privacy, latency, etc. issues.
As Alireza says: “Any cloud user we turn into an edge advocate is a win for us in the long term.” He also told me about the new technology they are working on—something that will take picoLLM to the next level—but my lips are sealed and that will be a topic for another day. In the meantime, do you have any thoughts you’d care to share on any of this?
 
  • Like
  • Love
  • Thinking
Reactions: 35 users

7für7

Top 20
  • Fire
  • Haha
  • Wow
Reactions: 7 users

7für7

Top 20
I am loving the excitement and build-up to what will eventually become the best stock on the Australian market.
And it will truly explode when we get into the American market.
American market ha?

1728191685472.gif
 
  • Haha
Reactions: 2 users

Diogenese

Top 20
View attachment 70405



What's this then? Picovoice...

NASA is going to use Picovoice in its next-gen voice-controlled applications? Hmmmm...

And, Picovoice are working on something that will take PicoLLM to the next level? Hmmm...


max-0321-image-for-home-page-llms-on-the-edge.jpg

July 23, 2024

Want to Run LLMs on the Edge?​

by Max Maxfield
As Alireza says: “Any cloud user we turn into an edge advocate is a win for us in the long term.” He also told me about the new technology they are working on—something that will take picoLLM to the next level—but my lips are sealed and that will be a topic for another day. In the meantime, do you have any thoughts you’d care to share on any of this?
OK,

So that last paragraph does leave the door open for a software implementation, but we do not have an exclusive licence for NDAs.

It would be great to have an established AI software vendor as a licensee.

The thing is, they would also either need to use BRN SLMs or adapt their in-house SLMs for Akida, or, most probably, both.

Money for jam (or money for old rope, as my EE lecturer used to say in his broad Midlands accent - I never figured out if that was his daily breakfast. At least he had the jam with it. In any event - luxury!)
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 12 users

jtardif999

Regular
  • Like
  • Fire
Reactions: 17 users
So many good articles out there at the mo, from all sorts of publications in various countries, espousing the Pico advancements. This is one I quite liked.

Given Pico would have been in development for some time, possibly in conjunction with or at the behest of a partner (or NDA), and is now fully capable prior to the recent BRN release, why do I get the feeling that there might just be something more tangible in the wings?

C'mon BRN :)

Send_em_Top_Gun_Maverick_Clip_Launching_Off_Carrier.gif




How Will BrainChip’s Akida Pico Revolutionize Wearable AI Tech?​

October 2, 2024
Russell Fairweather
BrainChip, an edge artificial intelligence (AI) specialist, has recently unveiled an innovative addition to its Akida family called the Akida Pico. This neuromorphic processor is being celebrated as the “lowest power acceleration coprocessor” ever produced, specifically geared towards wearable technology and sensor-integrated markets. Offering immense promise for the future of AI-driven applications, the Akida Pico pushes the boundaries of power efficiency and compactness. In a world where wearable devices and smart technology are increasingly ubiquitous, BrainChip’s latest offering could represent a significant leap forward in both design and functionality.

The Akida Pico is engineered to meet the burgeoning needs of the next generation of wearable devices. As consumers continue to demand more efficiency and compact designs from their technology, this processor promises to deliver on both fronts. Built on the second-generation Akida2 platform, Akida Pico is crafted to be exceptionally power-efficient and compact, making it an ideal candidate for integration in various high-efficiency systems. This advancement is a testament to BrainChip’s ongoing commitment to innovation and its strategic vision for the future of artificial intelligence.

Technological Breakthroughs in Akida Pico​

The second-generation Akida2 platform, upon which the Akida Pico is built, takes inspiration from the architecture of the human brain for its neuromorphic processing capabilities. This neuromorphic design enables the processor to handle machine learning (ML) and artificial intelligence (AI) workloads with remarkable efficiency.

One of the standout features of the Akida Pico is its ultra-low power consumption, drawing under 1 milliwatt (mW) even when under load. In addition to low power draw, the chip’s power island design ensures an extremely low standby power requirement, making it a highly efficient option for battery-operated devices.

Compactness and efficiency are at the core of the Akida Pico’s design philosophy. The chip is engineered to take up minimal physical space, aligning it perfectly with the trend towards smaller and more efficient electronic components. This makes it an ideal choice for integration into wearable devices, environmental sensors, and even smart home technologies. By providing continuous AI operations in such compact form factors, the Akida Pico facilitates a significant increase in efficiency for always-on systems. In effect, it supports the seamless operation of devices requiring continuous data collection and processing without compromising on power consumption or physical space.

A Paradigm Shift in Wearable Technology​

The Akida Pico ushers in a new era for wearable technology, particularly in battery-powered devices where efficiency and longevity are paramount. One of the chip’s standout functionalities is its ability to serve as a wake-up trigger for more powerful microcontrollers or application processors upon detecting specific conditions. This feature is particularly advantageous for fitness trackers, health-monitoring systems, and environmental sensors—devices that require constant data collection and processing. By acting as an intelligent gatekeeper, the Akida Pico ensures that larger, more power-intensive components only activate when necessary, conserving battery life and enhancing device functionality.

The shift towards always-on AI operations facilitated by the Akida Pico promises to revolutionize the way wearables interact with users. These devices can now maintain continuous interaction without frequent battery charges, delivering smarter and more engaging user experiences. By incorporating the Akida Pico as a wake-up gatekeeper, wearable devices can offer enhanced functionality while conserving energy. This balance of power efficiency and performance can significantly extend the lifecycle of wearables, making them more reliable and user-friendly. Consequently, consumers stand to gain immensely from longer-lasting, more efficient devices, propelling wearable tech into a new realm of possibilities.
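The "intelligent gatekeeper" pattern described above reduces to a small loop: a tiny always-on detector screens every sample, and the expensive main processor is woken only when something of interest appears. The sketch below is a minimal illustration of that pattern only; `tiny_detector`, the threshold, and the sample stream are invented stand-ins, not a BrainChip API or model.

```python
def tiny_detector(sample, threshold=0.8):
    """Ultra-cheap always-on check, a stand-in for an Akida Pico-class model
    that would run a keyword or event classifier at sub-milliwatt power."""
    return sample >= threshold

def gatekeeper(stream, wake_main_processor):
    """Run the cheap detector on every sample; invoke the power-hungry
    processor only for the rare samples that pass the screen."""
    wakeups = 0
    for sample in stream:
        if tiny_detector(sample):        # cheap path: every sample
            wake_main_processor(sample)  # expensive path: rarely
            wakeups += 1
    return wakeups

handled = []
sensor_stream = [0.10, 0.05, 0.92, 0.30, 0.85, 0.20]
count = gatekeeper(sensor_stream, handled.append)
```

Here only two of six samples reach the "main processor"; on real hardware that duty-cycling, rather than any single clever instruction, is where the battery savings come from.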

Bridging the Gap with MetaTF Software​

On the software side, BrainChip enhances the Akida Pico’s versatility through its proprietary MetaTF software. This development platform supports the compilation and optimization of Temporal-Enabled Neural Networks (TENNs), allowing developers to seamlessly import and optimize models crafted in popular frameworks like TensorFlow, Keras, and PyTorch. This capability mitigates the need for developers to learn new frameworks, streamlining the AI deployment process. The ease of integrating pre-existing models into the Akida Pico environment is a significant advantage, as it accelerates the development cycle and ensures that innovative AI solutions are brought to market faster.

MetaTF software not only makes the development process smoother but also brings a user-friendly interface to the table, promoting efficiency in model deployment. This software ecosystem ensures that developers can quickly adapt and iterate their AI models to leverage the ultra-low power and compact features of the Akida Pico. The ability to rapidly deploy and optimize models is crucial for creating scalable AI solutions that meet a variety of application requirements. By bridging the gap between cutting-edge technology and practical application, MetaTF empowers developers to maximize the potential of neuromorphic computing in real-world scenarios.

Strategic Approach and Market Positioning​

BrainChip’s decision to release the Akida Pico as Intellectual Property (IP) rather than standalone hardware represents a strategic and forward-thinking move. This approach allows other manufacturers to integrate the Akida Pico into their chip designs, fostering widespread adoption across multiple industries. By positioning the Akida Pico as an easily integrable IP, BrainChip is poised to become a central player in the semiconductor and edge AI development ecosystems. This move not only promotes greater industry collaboration but also accelerates the adoption of advanced AI technologies across a broad spectrum of applications.

Although BrainChip has not yet disclosed the exact pricing details for the Akida Pico IP, the company assures that further information is available on their official website. This strategic direction underscores BrainChip’s role as a pioneer in the AI on-chip solutions market. By making advanced, power-efficient AI technologies more accessible, BrainChip aims to drive innovation and set new industry standards. The release of the Akida Pico as an IP highlights BrainChip’s commitment to providing versatile and efficient AI solutions, positioning the company at the forefront of technological advancements in the edge AI space.

Key Innovations and Emerging Trends​

BrainChip, a leader in edge artificial intelligence (AI), has introduced a groundbreaking addition to its Akida lineup: the Akida Pico. Heralded as the “lowest power acceleration coprocessor” ever developed, the Akida Pico is purpose-built for wearable tech and sensor-rich markets. This neuromorphic processor promises a future brimming with AI-driven applications by pushing the limits of power efficiency and compact design. In an era where wearables and smart devices are becoming indispensable, BrainChip’s new innovation signifies a major step forward in both technology and functionality.

The Akida Pico is designed to satisfy the growing demands of next-generation wearable devices. As consumers seek more efficient and compact technological solutions, this processor aims to meet those expectations. Originating from the second-generation Akida2 platform, the Akida Pico combines exceptional power efficiency with a compact form, making it ideal for diverse high-efficiency systems. This development underscores BrainChip’s unwavering dedication to innovation and its strategic foresight, setting new benchmarks for the future of artificial intelligence.
 
Reactions: 34 users

Andy38

The hope of potential generational wealth is real
Not sure if this has been posted before (apologies if so), but there was some discussion about AWS and what it may have been referring to: ex-BrainChip employee Nikunj is now certified to work in the area associated with AWS 😉😉. Could be nothing, but I thought it was an interesting find; where there’s a little smoke, there could be a ginormous bonfire 😂🤪🤑🤑
IMG_4494.png
 

Reactions: 10 users

Frangipani

Regular

At least one forum member appears to have found @Berlinforever ’s post so ridiculous he/she/they felt the need to comment with a 🤣?! 🤔

View attachment 59313

View attachment 59306
View attachment 59307


While OHB may never become a household name with the general public, the emoji poster(s) seem(s) to totally underestimate the parent company’s reputation and significance within the aerospace sector: OHB SE (the multinational space and technology group that wholly owns OHB Hellas) is actually one of Europe’s leading space tech companies. It is headquartered in Bremen, Germany, a city not only well known for the fairy tale The Town Musicians of Bremen, but also one of Europe’s hubs for the aeronautics and space industry.



I recommend the Doubting Thomas(es) (and everyone else) have a look at OHB’s image brochure,

their latest corporate report,

as well as their website (https://www.ohb.de/en/corporate/milestones) to see how Christa Fuchs, her late husband Manfred Fuchs and their team “turned the small hydraulics company OHB into a global player in the international space industry.” The initialism OHB originally stood for “Otto Hydraulik Bremen” - in 1991 it was officially renamed “Orbital- und Hydrotechnologie Bremen-System GmbH”.

View attachment 59314

They are right up there with Europe’s aerospace giants Airbus Defence & Space (formerly Astrium) and Thales Alenia Space, fiercely bidding to secure contracts for satellites etc. (OHB lost out to both Airbus D&S and Thales Alenia Space on the contract for the now ongoing construction of the second generation (G2) of Galileo satellites, after having been the main supplier for the first generation: https://www.reuters.com/technology/...about-galileo-satellites-contract-2023-04-26/).
At other times, however, they also collaborate with their main competitors (https://www.ohb.de/en/news/esas-pla...stem-ag-readies-for-integration-of-26-cameras).

In 2018, the Hanseatic city of Bremen even renamed the square in front of the company’s headquarters in honour of Manfred Fuchs (1938-2014), to commemorate the engineer and entrepreneur’s significant contribution to the development of the aerospace industry in Bremen.

Being a household name is not necessarily part of what defines a successful global player. Just think of Arm (especially before the IPO).

We as BRN shareholders should always keep that in mind…

View attachment 59272

In mid-March, @chapman89 spotted a LinkedIn post by OHB Hellas (based in a suburb of Athens, Greece), in which the company revealed they had been experimenting with Akida. Their parent company OHB SE, based in Bremen, Germany, may not be a familiar name to the general public, but is in fact one of the major players in the European space tech sector (see my post above).


The other day, during a random Google search, I stumbled across the following website:


8496BA07-6EB4-4271-BBD6-CC86E8F06F9A.jpeg



Turns out that the acronym GIASaaS has to do with a space mission project led by OHB Hellas and originally stood for Greek Infrastructure for Satellite as a Service.
(I presume GIASaaS was picked to resemble the plural or formal Greek greeting Γεια σας - if that was indeed the case, it may even qualify as a backronym 😊.)


The first public mention of GIASaaS seems to have been in early 2021:


C63D39C1-28D9-4644-9074-A0A08A9CAC3E.jpeg



128FBE89-0D6B-48F5-8393-CEC5947D8326.jpeg



In addition, I found a July 2023 LinkedIn blog post relating to GIASaaS (“the world [sic] first platform for edge computing in Space”) by France-based Miklos Tomka, the CEO and co-founder of space software start-up Parsimoni, which is a spin-off from Paris-headquartered software development company Tarides.


“As a spin-off of Tarides, we are working at Parsimoni on the world's first satellite as a service marketplace (or "airbnb for satellites"). Finally all users can access high quality, timely earth observation data from satellites - on a pay as you go basis. And of course - for those who prefer on board AI processing - we make that possible too. Powered by SpaceOS, Parsimoni will play a key role in making the value created by space technologies accessible to all.”


D5075F68-3DD7-4247-9291-D666EF14C1A1.jpeg


Parsimoni (https://parsimoni.co) has meanwhile attracted funding from all the major players in the European space industry:

AE3D5F9B-66B0-486A-9FEE-3D15C422952F.jpeg



Next, I discovered https://giasaas.eu/, a concept website by OHB Hellas, on which users can choose between different applications such as vessel detection, cloud detection, flood detection, object detection or fire detection and then pick one of two satellites in order to submit a request for a satellite image to be taken. SAT1 is equipped with a Xilinx Kria KV260 Vision AI Starter Kit, whereas SAT2 has a Google Coral TPU Dev Board aboard.

And now comes the intriguing bit: there is also a third option (limited to the category “object detection”, though), namely UAV1 (an unmanned aerial vehicle), and that one contains an AKD1000 PCIe Board! So it looks as if they are currently testing Akida at low altitude before deciding whether or not to send it into space aboard a satellite (at least I presume that would be their ultimate goal).

At first glance, the “server status: online” info, the green light next to it and the ones underneath SAT1/SAT2/UAV1 suggest that this web portal is up and running and that users can already submit requests for live data, but at second glance the images “generated” within each category turn out to be the exact same satellite imagery (or, for UAV1, terrestrial stock photos) every time.
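For what it’s worth, the portal’s selection logic as described above (pick an application, pick a platform, with UAV1 restricted to object detection) can be sketched as a simple validation table. Everything below is guessed from the public demo pages, not OHB Hellas’s actual code; the names are purely illustrative.

```python
# Toy sketch of the GIASaaS-style request validation: each platform
# advertises its hardware and the detection applications it supports,
# and UAV1 (the Akida-equipped drone) only accepts object detection.
# Guessed from the public demo, not OHB's implementation.

PLATFORMS = {
    "SAT1": {"hardware": "Xilinx Kria KV260",
             "apps": {"vessel", "cloud", "flood", "object", "fire"}},
    "SAT2": {"hardware": "Google Coral TPU",
             "apps": {"vessel", "cloud", "flood", "object", "fire"}},
    "UAV1": {"hardware": "AKD1000 PCIe",
             "apps": {"object"}},
}

def validate_request(platform: str, app: str) -> str:
    """Return the hardware that would serve the request, or raise."""
    if platform not in PLATFORMS:
        raise ValueError(f"unknown platform {platform!r}")
    entry = PLATFORMS[platform]
    if app not in entry["apps"]:
        raise ValueError(f"{platform} does not support {app} detection")
    return entry["hardware"]

print(validate_request("UAV1", "object"))   # -> AKD1000 PCIe
```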

76BCC8A4-E90D-4745-B875-208475B9B536.jpeg


E2E529D6-4E9F-44D7-9836-B73301AD6AB4.jpeg


28E2A19C-AD61-40D1-929F-4B92B61C11EF.jpeg




A video OHB Hellas uploaded to their YouTube channel in May 2023 (months before purchasing the Akida PCIe Board) demonstrates how they envisaged this SaaS concept working at the time (“We are trying to make Satellites Smart by using performant hardware to process data on-board. This, will eliminate the need for downlinking data on-ground & subsequently save valuable time”), requiring potential web portal users to first log into their respective accounts on the landing page before being able to submit a request. Since there is no login option available on https://giasaas.eu/ so far, I take this as yet another indication that the web portal remains under construction for the time being, although the LinkedIn blog post by Miklos Tomka referred to the “worldwide launch of the GIASaaS platform on July 5 and 6 [2023] in Paris”:





4DBD6C84-C8B8-4706-A626-1C0AA3E0EA5A.jpeg


By the time this demo video was uploaded in May 2023, the acronym/backronym GIASaaS had undergone a slight shift in meaning, from the original Greek Infrastructure for Satellite as a Service to now officially standing for Global Instant Satellite as a Service.

So is this “Airbnb for satellites” - as Miklos Tomka called it - just pie in the sky? I doubt it. Despite there having been no further public update since then, said Satellite as a Service concept does not seem to have vanished into the exosphere or disappeared into a bottom drawer of an OHB Hellas employee’s desk, as evidenced by a very recent international trade fair presentation:

010D0730-C380-4B29-B6BE-813729E21EB5.jpeg
 
Reactions: 24 users

Frangipani

Regular
View attachment 61016

More details about the AIS 2024 Event-Based Eye Tracking Challenge in April, in which the BrainChip “bigBrains” team consisting of Yan Ru (Rudy) Pei, Sasskia Brüers Freyssinet, Sébastien Crouzet (who has since left our company), Douglas McLelland and Olivier Coenen came in third, narrowly trailing two teams from the University of Science and Technology of China:


65039590-479D-466A-8E53-2704D08EC8AD.jpeg

7ED91EA1-E4B8-49CB-8249-1308318731DB.jpeg

DC85147A-4F71-411F-AC0D-1F68ABB7BC5F.jpeg

6F76B05E-711B-4371-B75C-E7E1A3FAADD6.jpeg
 