BRN Discussion Ongoing

BrainShit

Regular
Hey there @BrainShit, looks like you're very much in charge here; you really need to lift your game, haha!


As I read your post, my epilepsy Early-Warning-Goggles alerted me and what can I say, I took the helm.
The following action points are to be implemented immediately:

- Sales offers are to be made to ARM, NVIDIA, Intel, AMD and other interested parties
- Disclosure of all NDAs
- The 6 Edge boxes produced will be raffled among BrainChip fans from TSE
- Nandan Nayampally, Rudy Pei and Bajana Sai will receive all produced M.2 cards
- The tablecloth is ceremoniously handed over to PvDM
- Sean is banned from using the following words, to avoid misunderstandings:
  • explosion in sales
  • continued progress and growth
  • massive competitive advantage
  • revenue growth
  • grow the business on multiple vectors
  • massive, dramatically
  • advancing customer engagements
  • potential to generate revenue

 
  • Haha
  • Like
  • Fire
Reactions: 37 users

DK6161

Regular
  • Haha
  • Like
  • Sad
Reactions: 7 users

Rach2512

Regular


Efficiency is currency.
 
  • Like
  • Love
Reactions: 19 users

TheDon

Regular
100 million wiped off shareholder value by the most inept management a shareholder could wish for. You lot put me down as a downramper, but if that was the case I would be cheering today with the rest of the shorters. No, far from it. The AGM, oh it's gunna be good: the little red ferret will again be promising the biggest lot of absolute bullshit, and the hallelujah brothers on here, even the Pommy ones, will have their hands in the air, swaying backwards and forwards, believing every word of crap management have to say. If today wasn't enough for shareholders to say give the Board, and then Sean, the arse, I don't know what shareholders are waiting for. A 5 cent share price?
I'm waiting for my money to come through so that I can buy some more. If you're not happy with the SP, why stay? In my opinion, I'd just sell and cut my losses. Me, I'm happy as a clam. So happy that I will argue with my wife again today to buy some more BRN shares. lol

TheDon
 
  • Like
  • Fire
  • Haha
Reactions: 19 users

itsol4605

Regular
Anyone who sold hysterically will be very angry tomorrow!!
It will be extremely green!!!
 
  • Like
  • Haha
Reactions: 5 users

Iseki

Regular
I'm waiting for my money to come through so that I can buy some more. If you're not happy with the SP, why stay? In my opinion, I'd just sell and cut my losses. Me, I'm happy as a clam. So happy that I will argue with my wife again today to buy some more BRN shares. lol

TheDon
Sadly we are not all as brave as you.
The rules are that if you are a substantial holder, you need to inform the ASX when you cease to be one.
But the latest announcement shows that BRN didn't bother doing this for at least two years as Anil sold down.
 
  • Fire
  • Like
Reactions: 3 users

Schwale

Regular
  • Like
  • Thinking
Reactions: 8 users

Iseki

Regular

Sean and his team have continued to affirm, without evidence, that MB is engaged with us. Sadly, I don't think many of us believe that this is quite as precise as it needs to be.
 
  • Fire
Reactions: 2 users

Iseki

Regular

White Horse

Regular
  • Like
Reactions: 1 users

Frangipani

Top 20
From the crapper site, by

FF…,
I personally would rate both Samsung and Hyundai as strong contenders:

Despite Fact Finder’s repeated claims on HC that he was posting only facts, he continues to spread the following misinformation about Hyundai/Boston Dynamics being the secret customer behind the Fraunhofer HHI researchers’ “proof-of-concept implementation of neuromorphic wireless cognition with application to human-robot interaction”:

2. Hyundai because they own Boston Dynamics who provided Spot their robotic dog to Fraunhofer Research Institute to experiment with using Prophesee’s vision sensor and Brainchip’s AKD1000 to prove out that Spot could be controlled by hand gestures.

Fact or fiction?

Fact is, the following document proves that Fraunhofer Society’s Central Purchasing Department in Munich purchased three Spot robot dogs from Boston Dynamics in May 2023 (after BD had won the public tender) destined for Fraunhofer HHI (Heinrich Hertz-Institut) in Berlin as part of the 6G-RIC (Research and Innovation Cluster) research hub.


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-433491





This research was verifiably publicly funded, as evidenced by numerous sources I have since provided in several posts on this topic, including the description accompanying the May 2024 YouTube video that revealed an Akida Raspberry Pi was used in the Fraunhofer HHI setup (“The work has been conducted within the 6G Research and Innovation Cluster (6G-RIC), funded by the German Ministry of Education and Research (BMBF) in the program “Souverän. Digital. Vernetzt.” Find more information here: https://6G-ric.de”), several LinkedIn posts and the Fraunhofer HHI website itself.

Here is yet another screenshot showing that Spot robot dogs were used for two separate 6G-RIC demos by Fraunhofer HHI researchers at the EuCNC & 6G Summit in Antwerp last year:


(screenshot attachments)


So the above PoC was clearly NOT the result of contract research commissioned by Boston Dynamics or Hyundai, but instead publicly funded cutting-edge research exploring future use cases that 6G will enable, aiming “to help establish Germany and Europe as global leaders in the expansion of sustainable 6G technologies” (https://6g-ric.de/the-project/).





I also note that Hyundai has used Fraunhofer directly over many years to undertake research on its behalf not to mention Prophesee’s partnership with Brainchip.

The above have been the subject of multiple posts here and elsewhere over the last four years.

My opinion only DYOR

Fact Finder

FF doesn’t seem to be aware that Fraunhofer is actually not a single research institute (“Hyundai because they own Boston Dynamics who provided Spot their robotic dog to Fraunhofer Research Institute…”), but a society that currently operates 76 individual research institutes all over Germany as well as several Representative Offices overseas, including one in South Korea (https://www.fraunhofer.kr/en.html).

While individual Fraunhofer Institutes (e.g. in Augsburg or Dresden) have collaborated with the Hyundai Motor Group or an affiliate of the Hyundai Heavy Industries Group (especially in the field of hydrogen and fuel cells), the statement “Hyundai used Fraunhofer directly over many years to undertake research on its behalf” is not accurate.

Apart from the fact that we know it was actually the German government that funded the PoC in question, it is also highly unlikely that the Hyundai Motor Group, which fully acquired Boston Dynamics in 2021, would even require the help of a Fraunhofer Institute to do research on Spot, the robot dog, on their behalf:

As mentioned in a previous post of mine, the Hyundai Motor Group launched the Boston Dynamics AI Institute in 2022, headquartered in Cambridge, MA and led by Marc Raibert, the founder, former CEO and now Chairman of Boston Dynamics (https://www.hyundai.com/eu/mobility...-and-innovation/robotics/boston-dynamics.html).

It has since changed its name twice, first to The AI Institute and then to RAI (Robotics and AI Institute), and also opened a European office in Zurich in 2024, helmed by Marco Hutter, an ETH professor who runs the university’s Robotic Systems Lab and is co-founder of ANYbotics, an ETH spin-off that commercialised the quadrupedal ANYmal robots “designed for autonomous operations in challenging environments” (www.anymal-research.org).

As you can gather from the screenshots below, the over 150 full-time RAI staff are perfectly capable of doing their own research on Spot - the robot dog that was developed by RAI Executive Director Marc Raibert and his former team at Boston Dynamics:


(screenshot attachments)
 
  • Like
  • Love
  • Fire
Reactions: 10 users

uiux

Regular



 
  • Like
  • Fire
  • Love
Reactions: 60 users

Boab

I wish I could paint like Vincent
  • Like
  • Thinking
Reactions: 5 users

Tothemoon24

Top 20
No mention of BrainChip in this article, yet to me it does give an insight into how far advanced we are compared to others.




More brainlike computers could change AI for the better

These 4 neuromorphic technologies hold promise for more efficient, more capable forms of AI

To improve AI, computer scientists are looking to neuroscience. Illustration: MATT CHINWORTH

By Kathryn Hulick
The tiny worm Caenorhabditis elegans has a brain just about the width of a human hair. Yet this animal’s itty-bitty organ coordinates and computes complex movements as the worm forages for food. “When I look at [C. elegans] and consider its brain, I’m really struck by the profound elegance and efficiency,” says Daniela Rus, a computer scientist at MIT. Rus is so enamored with the worm’s brain that she cofounded a company, Liquid AI, to build a new type of artificial intelligence inspired by it.
Rus is part of a wave of researchers who think that making traditional AI more brainlike could create leaner, nimbler and perhaps smarter technology. “To improve AI truly, we need to … incorporate insights from neuroscience,” says Kanaka Rajan, a computational neuroscientist at Harvard University.
Such “neuromorphic” technology probably won’t completely replace regular computers or traditional AI models, says Mike Davies, who directs the Neuromorphic Computing Lab at Intel in Santa Clara, Calif. Rather, he sees a future in which many types of systems coexist.
The tiny worm C. elegans is inspiration for a new type of artificial intelligence. HAKAN KVARNSTROM/SCIENCE SOURCE
Imitating brains isn’t a new idea. In the 1950s, neurobiologist Frank Rosenblatt devised the perceptron. The machine was a highly simplified model of the way a brain’s nerve cells communicate, with a single layer of interconnected artificial neurons, each performing a single mathematical function.
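To make that concrete, here is a minimal sketch of the thresholded weighted sum at the heart of a Rosenblatt-style perceptron; the weights and the AND task below are illustrative choices for the example, not Rosenblatt's original machine.

```python
def perceptron(x, w, b):
    """A single artificial neuron: a weighted sum of inputs passed
    through a hard threshold -- one simple mathematical function."""
    s = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 if s >= 0 else 0

# Hand-set weights making the neuron compute logical AND:
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, perceptron(x, w=[1.0, 1.0], b=-1.5))
```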
Decades later, the perceptron’s basic design helped inspire deep learning, a computing technique that recognizes complex patterns in data using layer upon layer of nested artificial neurons. These neurons pass input data along, manipulating it to produce an output. But this approach can’t match a brain’s ability to adapt nimbly to new situations or learn from a single experience. Instead, most of today’s AI models devour massive amounts of data and energy to learn to perform impressive tasks, such as guiding a self-driving car.
“It’s just bigger, bigger, bigger,” says Subutai Ahmad, chief technology officer of Numenta, a company looking to human brain networks for efficiency. Traditional AI models are “so brute force and inefficient.”
In January, the Trump administration announced Stargate, a plan to funnel $500 billion into new data centers to support energy-hungry AI models. But a model released by the Chinese company DeepSeek is bucking that trend, duplicating chatbots’ capabilities with less data and energy. Whether brute force or efficiency will win out is unclear.
Meanwhile, neuromorphic computing experts have been making hardware, architecture and algorithms ever more brainlike. “People are bringing out new concepts and new hardware implementations all the time,” says computer scientist Catherine Schuman of the University of Tennessee, Knoxville. These advances mainly help with biological brain research and sensor development and haven’t been a part of mainstream AI. At least, not yet.
Here are four neuromorphic systems that hold potential for improving AI.

Making artificial neurons more lifelike

Real neurons are complex living cells with many parts. They are constantly receiving signals from the environment, with their electric charge fluctuating until it crosses a specific threshold and fires. This activity sends an electrical impulse across the cell and to neighboring neurons. Neuromorphic computing engineers have managed to mimic this pattern in artificial neurons. These neurons, part of spiking neural networks, simulate the signals of an actual brain, creating discrete spikes that carry information through the network. Such a network may be modeled in software or built in hardware.
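As a rough illustration of what one of these spiking neurons does, here is a toy leaky integrate-and-fire model; this is the generic textbook neuron rather than the specific circuit in any chip discussed here, and every parameter value is made up.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron: the membrane potential
    integrates input and decays each step; only when it crosses the
    threshold does the neuron emit a discrete spike, then reset."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i        # decay toward rest, then add new input
        if v >= threshold:      # threshold crossing -> fire a spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)    # silent: nothing sent downstream
    return spikes

# A weak constant drive yields sparse, discrete spikes rather than a
# continuous activation value:
print(lif_neuron([0.3] * 12))
```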
Spikes are not modeled in traditional AI’s deep learning networks. Instead, in those models, each artificial neuron is “a little ball with one type of information processing,” says Mihai Petrovici, a neuromorphic computing researcher at the University of Bern in Switzerland. Each of these “little balls” links to the others through connections called parameters. Usually, every input into the network triggers every parameter to activate at once, which is inefficient. DeepSeek divides traditional AI’s deep learning network into smaller sections that can activate separately, which is more efficient.
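For contrast, here is a cartoon of that “activate sections separately” idea in the style of a mixture-of-experts gate; the names and the routing rule are illustrative assumptions, not DeepSeek’s actual implementation.

```python
import numpy as np

def gated_forward(x, experts, gate_w, top_k=1):
    """Split a network into sections that activate separately: a gate
    scores each expert sub-network for the current input, and only
    the top-k experts actually run; the rest stay idle."""
    scores = gate_w @ x                    # one relevance score per expert
    chosen = np.argsort(scores)[-top_k:]   # indices of the winning experts
    return sum(experts[i](x) for i in chosen)

# Two tiny "experts"; each input wakes only one of them:
experts = [lambda x: x * 2.0, lambda x: x + 10.0]
gate_w = np.array([[1.0, 0.0],             # expert 0 responds to feature 0
                   [0.0, 1.0]])            # expert 1 responds to feature 1
print(gated_forward(np.array([5.0, 0.0]), experts, gate_w))
```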
But real brain and artificial spiking networks achieve efficiency a bit differently. Each neuron is not connected to every other one. Also, only if electrical signals reach a specific threshold does a neuron fire and send information to its connections. The network activates sparsely rather than all at once.

Comparing networks

Typical deep learning networks are dense, with interconnections among all their identical “neurons.” Brain networks are sparse, and their neurons can take on different roles. Neuroscientists are still working out how complex brain networks are actually organized.
An illustration of an artificial network and brain networks. J.D. MONACO, K. RAJAN AND G.M. HWANG
Importantly, brains and spiking networks combine memory and processing. The connections “that represent the memory are also the elements that do the computation,” Petrovici says. Mainstream computer hardware — which runs most AI — separates memory and processing. AI processing usually happens in a graphical processing unit, or GPU. A different hardware component, such as random access memory, or RAM, handles storage. This makes for simpler computer architecture. But zipping data back and forth among these components eats up energy and slows down computation.
The neuromorphic computer chip BrainScaleS-2 combines these efficient features. It contains sparsely connected spiking neurons physically built into hardware, and the neural connections store memories and perform computation.
BrainScaleS-2 was developed as part of the Human Brain Project, a 10-year effort to understand the human brain by modeling it in a computer. But some researchers looked at how the tech developed from the project might make AI more efficient. For example, Petrovici trained different AIs to play the video game “Pong.” A spiking network running on the BrainScaleS-2 hardware used a thousandth of the energy as a simulation of the same network running on a CPU. But the real test was to compare the neuromorphic setup with a deep learning network running on a GPU. Training the spiking system to recognize handwriting used a hundredth the energy of the typical system, the team found.
For spiking neural network hardware to be a real player in the AI realm, it has to be scaled up and distributed. Then, it could be “useful to computation more broadly,” Schuman says.

Connecting billions of spiking neurons

The academic teams working on BrainScaleS-2 currently have no plans to scale up the chip, but some of the world’s biggest tech companies, like Intel and IBM, do.
In 2023, IBM introduced its NorthPole neuromorphic chip, which combines memory and processing to save energy. And in 2024, Intel announced the launch of Hala Point, “the largest neuromorphic system in the world right now,” says computer scientist Craig Vineyard of Sandia National Laboratories in New Mexico.
Despite that impressive superlative, there’s nothing about the system that visually stands out, Vineyard says. Hala Point fits into a luggage-sized box. Yet it contains 1,152 of Intel’s Loihi 2 neuromorphic chips for a record-setting total of 1.15 billion electronic neurons — roughly the same number of neurons as in an owl brain.
Like BrainScaleS-2, each Loihi 2 chip contains a hardware version of a spiking neural network. The physical spiking network also uses sparsity and combines memory and processing. This neuromorphic computer has “fundamentally different computational characteristics” than a regular digital machine, Schuman says.
This BrainScaleS-2 computer chip was built to work like a brain. It contains 512 simulated neurons connected with up to 212,000 synapses. HEIDELBERG UNIV.
These features improve Hala Point’s efficiency compared with that of typical computer hardware. “The realized efficiency we get is definitely significantly beyond what you can achieve with GPU technology,” Davies says.
In 2024, Davies and a team of researchers showed that the Loihi 2 hardware can save energy even while running typical deep learning algorithms. The researchers took several audio and video processing tasks and modified their deep learning algorithms so they could run on the new spiking hardware. This process “introduces sparsity in the activity of the network,” Davies says.
A deep learning network running on a regular digital computer processes every single frame of audio or video as something completely new. But spiking hardware maintains “some knowledge of what it saw before,” Davies says. When part of the audio or video stream stays the same from one frame to the next, the system doesn’t have to start over from scratch. It can “keep the network idle as much as possible when nothing interesting is changing.” On one video task the team tested, a Loihi 2 chip running a “sparsified” version of a deep learning algorithm used 1/150th the energy of a GPU running the regular version of the algorithm.
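Here is a toy sketch of that “process only what changed” idea, sometimes called delta encoding; it is a loose illustration of the principle rather than Loihi 2’s actual pipeline, and every name and threshold below is invented for the example.

```python
import numpy as np

def changed_only(frames, threshold=0.1):
    """Emit, for each frame, only the elements that moved by more than
    `threshold` since the previous frame. Unchanged regions generate
    no events, so downstream layers can stay idle for them."""
    prev = np.zeros_like(frames[0])
    for frame in frames:
        mask = np.abs(frame - prev) > threshold   # sparse change mask
        yield np.flatnonzero(mask), frame[mask]   # indices + new values
        prev = frame

# Two nearly identical "frames": the second emits a single event.
frames = [np.zeros(8), np.zeros(8)]
frames[1][3] = 1.0
for idx, vals in changed_only(frames):
    print(idx, vals)
```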
The audio and video test showed that one type of architecture can do a good job running a deep learning algorithm. But developers can reconfigure the spiking neural networks within Loihi 2 and BrainScaleS-2 in numerous ways, coming up with new architectures that use the hardware differently. They can also implement different kinds of algorithms using these architectures.
It’s not yet clear what algorithms and architectures would make the best use of this hardware or offer the highest energy savings. But researchers are making headway. A January 2025 paper introduced a new way to model neurons in a spiking network, including both the shape of a spike and its timing. This approach makes it possible for an energy-efficient spiking system to use one of the learning techniques that has made mainstream AI so successful.
Neuromorphic hardware may be best suited to algorithms that haven’t even been invented yet. “That’s actually the most exciting thing,” says neuroscientist James Aimone, also of Sandia National Labs. The technology has a lot of potential, he says. It could make the future of computing “energy efficient and more capable.”

Designing an adaptable ‘brain’

Neuroscientists agree that one of the most important features of a living brain is the ability to learn on the go. And it doesn’t take a large brain to do this. C. elegans, one of the first animals to have its brain completely mapped, has 302 neurons and around 7,000 synapses that allow it to learn continuously and efficiently as it explores its world.
Ramin Hasani studied how C. elegans learns as part of his graduate work in 2017 and was working to model what scientists knew about the worms’ brains in computer software. Rus found out about this work while out for a run with Hasani’s adviser at an academic conference. At the time, she was training AI models with hundreds of thousands of artificial neurons and half a million parameters to operate self-driving cars.
A C. elegans brain (its neurons are colored by type in this reconstruction) learns constantly and is a model for building more efficient AI. D. WITVLIET ET AL/BIORXIV.ORG 2020
If a worm doesn’t need a huge network to learn, Rus realized, maybe AI models could make do with smaller ones, too.
She invited Hasani and one of his colleagues to move to MIT. Together, the researchers worked on a series of projects to give self-driving cars and drones more wormlike “brains” — ones that are small and adaptable. The end result was an AI algorithm that the team calls a liquid neural network.
“You can think of this like a new flavor of AI,” says Rajan, the Harvard neuroscientist.
Standard deep learning networks, despite their impressive size, learn only during a training phase of development. When training is complete, the network’s parameters can’t change. “The model stays frozen,” Rus says. Liquid neural networks, as the name suggests, are more fluid. Though they incorporate many of the same techniques as standard deep learning, these new networks can shift and change their parameters over time. Rus says that they “learn and adapt … based on the inputs they see, much like biological systems.”
To design this new algorithm, Hasani and his team wrote mathematical equations that mimic how a worm’s neurons activate in response to information that changes over time. These equations govern the liquid neural network’s behavior.
Such equations are notoriously difficult to solve, but the team found a way to approximate a solution, making it possible to run the network in real time. This solution is “remarkable,” Rajan says.
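For a sense of what such equations look like, here is a toy Euler-step version of the liquid time-constant neuron described in Hasani and colleagues’ papers; the gating form is simplified, every constant is made up, and the real models use a closed-form approximation rather than this naive integration.

```python
import numpy as np

def ltc_step(x, inp, dt=0.05, tau=1.0, A=1.0, w=1.5, b=0.0):
    """One Euler step of a liquid time-constant neuron,
        dx/dt = -(1/tau + f(inp)) * x + f(inp) * A,
    where the gate f makes the neuron's effective time constant depend
    on the current input, so its dynamics shift with the data."""
    f = 1.0 / (1.0 + np.exp(-(w * inp + b)))   # input-dependent gate
    return x + dt * (-(1.0 / tau + f) * x + f * A)

# The same neuron settles into different regimes as its input changes:
x = 0.0
for inp in [0.0] * 40 + [3.0] * 40:
    x = ltc_step(x, inp)
print(round(x, 3))
```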
In 2023, Rus, Hasani and their colleagues showed that liquid neural networks could adapt to new situations better than much larger typical AI models. The team trained two types of liquid neural networks and four types of typical deep learning networks to pilot a drone toward different objects in the woods. When training was complete, they put one of the training objects — a red chair — into completely different environments, including a patio and a lawn beside a building. The smallest liquid network, containing just 34 artificial neurons and around 12,000 parameters, outperformed the largest standard AI network they tested, which contained around 250,000 parameters.
The team started the company Liquid AI around the same time and has worked with the U.S. military’s Defense Advanced Research Projects Agency to test their model flying an actual aircraft.
The company has also scaled up its models to compete directly with regular deep learning. In January, it announced LFM-7B, a 7-billion-parameter liquid neural network that generates answers to prompts. The team reports that the network outperforms typical language models of the same size.
“I’m excited about Liquid AI because I believe it could transform the future of AI and computing,” Rus says.
This approach won’t necessarily use less energy than mainstream AI. Its constant adaptation makes it “computationally intensive,” Rajan says. But the approach “represents a significant step towards more realistic AI” that more closely mimics the brain.

Building on human brain structure

While Rus is working off the blueprint of the worm brain, others are taking inspiration from a very specific region of the human brain — the neocortex, a wrinkly sheet of tissue that covers the brain’s surface.
“The neocortex is the brain’s powerhouse for higher-order thinking,” Rajan says. “It’s where sensory information, decision-making and abstract reasoning converge.”
This part of the brain contains six thin horizontal layers of cells, organized into tens of thousands of vertical structures called cortical columns. Each column contains around 50,000 to 100,000 neurons arranged in several hundred vertical minicolumns.
These minicolumns are the primary drivers of intelligence, neuroscientist and computer scientist Jeff Hawkins argues. In other parts of the brain, grid and place cells help an animal sense its position in space. Hawkins theorizes that these cells exist in minicolumns where they track and model all our sensations and ideas. For example, as a fingertip moves, he says, these columns make a model of what it’s touching. It’s the same with our eyes and what we see, Hawkins explains in his 2021 book A Thousand Brains.
“It’s a bold idea,” Rajan says. Current neuroscience holds that intelligence involves the interaction of many different brain systems, not just these mapping cells, she says.
Though Hawkins’ theory hasn’t reached widespread acceptance in the neuroscience community, “it’s generating a lot of interest,” she says. That includes excitement about its potential uses for neuromorphic computing.
Hawkins developed his theory at Numenta, a company he cofounded in 2005. The company’s Thousand Brains Project, announced in 2024, is a plan for pairing computing architecture with new algorithms.
In some early testing for the project a few years ago, the team described an architecture that included seven cortical columns and hundreds of minicolumns, but spanned just three layers rather than the six in the human neocortex. The team also developed a new AI algorithm that uses the column structure to analyze input data. Simulations showed that each column could learn to recognize hundreds of complex objects.
The practical effectiveness of this system still needs to be tested. But the idea is that it will be capable of learning about the world in real time, similar to the algorithms of Liquid AI.
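As a cartoon of the column idea, here is a toy “voting” routine in which each simulated column recognizes an object from its own sensor patch and the system takes the consensus; this is a loose illustration of the Thousand Brains concept, not Numenta’s algorithm, and the object names are invented.

```python
from collections import Counter

def column_consensus(column_guesses):
    """Many cortical 'columns' each model an object from their own
    sensor patch, then vote: the object most columns agree on wins,
    resolving ambiguity that no single column can."""
    votes = Counter()
    for guesses in column_guesses:
        votes.update(guesses)        # each column votes for its candidates
    return votes.most_common(1)[0][0]

# Three columns touch different parts of one object; each alone is
# ambiguous, but the consensus is clear:
print(column_consensus([{"mug", "bowl"}, {"mug", "can"}, {"mug", "vase"}]))
```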
For now, Numenta, based in Redwood City, Calif., is using regular digital computer hardware to test these ideas. But in the future, custom hardware could implement physical versions of spiking neurons organized into cortical columns, Ahmad says.
Using hardware designed for this architecture could make the whole system more efficient and effective. “How the hardware works is going to influence how your algorithm works,” Schuman says. “It requires this codesign process.”
A new idea in computing can take off only with the right combination of algorithm, architecture and hardware. For example, DeepSeek’s engineers noted that they achieved their gains in efficiency by codesigning “algorithms, frameworks and hardware.”
When one of these isn’t ready or isn’t available, a good idea could languish, notes Sara Hooker, a computer scientist at the research lab Cohere in San Francisco and author of an influential 2021 paper titled “The Hardware Lottery.” This already happened with deep learning — the algorithms to do it were developed back in the 1980s, but the technology didn’t find success until computer scientists began using GPU hardware for AI processing in the early 2010s.
Too often “success depends on luck,” Hooker said in a 2021 Association for Computing Machinery video. But if researchers spend more time considering new combinations of neuromorphic hardware, architectures and algorithms, they could open up new and intriguing possibilities for both AI and computing.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

TECH

Regular
Nexa contains AKIDA 1500, which was taped out around Jan '23. So it's been a maximum of 2 years (likely less, given when the engagement commenced) to get to the stage of closing in on, but not yet being, commercial.
It shows the time lag in getting from engagement to commercially available.
What it also shows is that we should see a number of AKIDA 1000 and 1500 engagements coming out of NDA this year. Maybe even some Gen 2 in late 2025.
Frontgrade, Bascom Hunter, US AFRL, QV/BRN/Lockheed-Martin and now Onsor show that the pace is speeding up.

Nice balanced, honest post, Manny. The news on Onsor (which I initially missed) is yet another very positive engagement, and it really shines a spotlight on what the company has said for many years now: that we will prove to be a great companion for any company that wishes to join us in the healthcare sector. I have personally mentioned over the last few years how I felt we would excel in the space sector, and that is now playing out (yes, slowly), but clearly we are and have been attracting the big boys. I wonder why?

We started with "Beneficial AI", then moved on to "Essential AI", and now it's "Streaming at the Edge AI"... let's go Akida!

Regards.....Tech.
 
  • Like
Reactions: 19 users

DJM263

LTH - 2015
 
  • Like
Reactions: 2 users

 
  • Like
Reactions: 4 users

7für7

Top 20