BRN Discussion Ongoing

This one is interesting:




Clicking on the link in the LinkedIn post takes you to this, and BRN gets a mention.

 
Reactions: 13 users
BrainChip gets a mention in the paper referenced by this author in the LinkedIn post...






https://www.sciencedirect.com/science/article/pii/S0952197624015732
 
Reactions: 15 users
I think I've been on a bit of a roll tonight, so I might just roll over to the couch and watch some music videos on YouTube, and then I'll usually fall asleep in front of the TV :)

And for those of you who haven't heard of Bill McClintock, you may want to check out some of his music video mash-ups. They are brilliant.

Here are a couple of good ones...









 
Reactions: 8 users

Rach2512

Regular

View attachment 71721

Christian has over 33k followers.
 

Reactions: 9 users

7für7

Top 20
Reactions: 1 user

CHIPS

Regular
Reactions: 15 users
He is basically calling it a scam; no more spiking than his hair after a rough pillow.

That's an outright lie, and he should be forced to apologize or be kicked off LinkedIn.
This David Wyatt guy might be able to think more clearly if he removed the carrot from his arse...
 
Reactions: 11 users

BrainShit

Regular


 
Reactions: 16 users

itsol4605

Regular
Sorry itso,

The drawing is just one of several; that one illustrates the analog multiplication element. I chose it to show the analog part of the neuron, where the multiplication result is accumulated in the capacitor.

The digital part includes a row of flip-flops to count the spikes from the corresponding row as shown, for example, in:


US2023115373A1, "Accumulator for Digital Computation-in-Memory Architectures", 2021-10-13

View attachment 71716

The actual invention is defined in the claims of the patent, but these are often written in arcane patentese.

I haven't looked into these patents in detail, but it looks like Qualcomm are using single-bit, or possibly 2-bit, analog multipliers which, together with the digital summation, would largely circumvent the problems with manufacturing repeatability (variations in the size/spacing of the capacitor plates). The error effect of the variations escalates exponentially with the number of bits per neuron.
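To make the single-bit point concrete, here is a minimal sketch (Python; the Gaussian device mismatch and its magnitude are my assumptions, not figures from the patents). With a fixed absolute variation in the capacitor, the more analog levels you pack into one device, the more often a stored weight reads back as the wrong level:

```python
import numpy as np

rng = np.random.default_rng(0)

def level_error_rate(bits, mismatch=0.05, trials=100_000):
    """Probability that a fixed absolute device variation (assumed
    Gaussian, sigma = 5% of full scale) pushes a stored analog
    weight across a quantization-level boundary on readout."""
    levels = 2 ** bits
    w_idx = rng.integers(0, levels, trials)        # intended weight level
    g = w_idx / (levels - 1)                       # ideal stored value in [0, 1]
    g_real = g + rng.normal(0, mismatch, trials)   # manufacturing variation
    read = np.clip(np.rint(g_real * (levels - 1)), 0, levels - 1)
    return np.mean(read != w_idx)

for b in (1, 2, 4, 8):
    print(f"{b}-bit weights: {level_error_rate(b):.1%} misread per device")
```

With one bit per device there is an enormous margin between levels, so summing single-bit products digitally largely sidesteps the repeatability problem, which fits the approach described above.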
...and this shows an asynchronous clock divider – nothing special besides the worst asynchronous behavior.
 

manny100

Top 20
Sean needs to address his shareholders, as this is just about enough for everyone.
It's been months since the AGM.
It was meant to happen before year's end, yet next week it's November and nothing.
We deserve to be updated, Sean.

Is it just me, or do others feel this way?

I am just feeling disappointed atm.
Hopefully this will pass; I am usually more positive.
Agree with your thoughts. It's been a while since the AGM's positive deal vibes. Deal closure soon would be comforting for holders.
I guess the next quarterly podcast will be in November or early December.
I asked, but no dates have been set.
 
Reactions: 10 users

JB49

Regular
Even searching the keywords "AKIDA Pico" turns up more exposure for BRN. I'm actually quite surprised at the amount of exposure we are now getting... hope it's a good sign.


View attachment 71722



View attachment 71723
Do you think it might be paid promotion by BrainChip?
 
Reactions: 3 users

Giddy up, Thomas!!

View attachment 70643
 
Reactions: 3 users


Ohh Tom, you've done it again...

 
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Nice work @Jesse Chapman!

Jesse received a comment from Markus Schäfer on LinkedIn.

I love it!

"Neuromorphic computing truly represents a paradigm shift in how we approach technology.." (Markus Shaefer)

"I am convinced Neuromorphic Computing is key to efficient drive automation systems." (Gerrit Ecke).




A few comments below that, you can see this exchange between BrainChip's Anil Mankar, Markus May, Alexander Janisch (R&D Engineer, AI and Neuromorphic Computing at Mercedes-Benz) and Gerrit Ecke (Researcher for Future Automotive Software Development and for Neuromorphic Computing at Mercedes-Benz).


 
Reactions: 56 users

Bravo

If ARM was an arm, BRN would be its biceps💪!




The chips of tomorrow may well take inspiration from the architecture of our brains​

As artificial intelligence demands more and more energy from the computers it runs on, scientists at IBM Research are taking inspiration from the world’s most efficient computer: the human brain.
Neuromorphic computing is an approach to hardware design and algorithms that seeks to mimic the brain. The concept doesn’t describe an exact replica, a robotic brain full of synthetic neurons and artificial gray matter. Rather, experts working in this area are designing all layers of a computing system to mirror the efficiency of the brain. Compared to conventional computers, the human brain barely uses any power and can effectively solve tasks even when faced with ambiguous or poorly defined data and inputs. IBM Research scientists are using this evolutionary marvel as inspiration for the next generation of hardware and software that can handle the epic amounts of data required by today’s computing tasks — especially artificial intelligence.
In some cases, these efforts are still deep in research and development, and for now they mostly exist in the lab. But in one case, prototype performance numbers suggest that a brain-inspired computer processor will soon be ready for market.

What is neuromorphic computing?​

To break it down into its etymology, the term “neuromorphic” literally means “characteristic of the shape of the brain or neurons.” But whether this is the appropriate term for the field or a given processor may depend on whom you ask. It could mean circuitry that attempts to recreate the behavior of synapses and neurons in the human brain, or it could mean computing that takes conceptual inspiration from how the brain processes and stores information.
If it sounds like the field of neuromorphic — or brain-inspired — computing is somewhat undecided, that’s only because researchers have taken such vastly different approaches to building computer systems that mimic the brain. Scientists at IBM Research and beyond have been working for years to develop these machines, and the field has not yet landed on the quintessential neuromorphic architecture.
One familiar approach to brain-inspired computing involves creating very simple, abstract models of biological neurons and synapses. These are essentially static, nonlinear functions that use scalar multiplication. In this case, information propagates as floating-point numbers. When it’s scaled up, the result is deep learning. At a simplistic level, deep learning is brain inspired — all these mathematical neurons add up to something that mimics certain brain functions.
“In the last decade or so, this has become so successful that the vast majority of people doing anything related to brain-inspired computing are essentially doing something related to this,” says IBM Research scientist Abu Sebastian. Mimicking neurons with math can be done in additional brain-inspired ways, he says, by incorporating neuronal or synaptic dynamics, or by communicating with spikes of activity, instead of floating-point numbers.
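As a minimal sketch of the abstract neuron model described above (plain NumPy; the values are arbitrary, nothing IBM-specific): a "mathematical neuron" is just a weighted sum pushed through a static nonlinearity, with floating-point numbers flowing between layers.

```python
import numpy as np

def neuron(x, w, b):
    """An abstract 'mathematical neuron': a static, nonlinear function
    of a weighted sum. Information propagates as floating-point numbers."""
    return np.maximum(0.0, w @ x + b)   # ReLU as the nonlinearity

x = np.array([0.2, -1.0, 0.5])   # inputs
w = np.array([0.7, 0.1, -0.4])   # synaptic weights
print(neuron(x, w, b=0.05))      # stack millions of these: deep learning
```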
An analog approach, on the other hand, uses advanced materials that can store a continuum of conductance values between 0 and 1, and perform multiple levels of processing — multiplying using Ohm’s law and accumulating partial sums using Kirchhoff’s current summation.
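A rough sketch of what that buys you (illustrative conductance and voltage values; real devices add noise and nonidealities): the physics of a crossbar computes an entire matrix-vector product in a single step.

```python
import numpy as np

# Weights live in a crossbar as conductances G (siemens); inputs are
# applied as row voltages V. Ohm's law performs every multiplication
# at once (I = G * V), and Kirchhoff's current law sums each column.
G = np.array([[1.0e-6, 2.0e-6],
              [0.5e-6, 1.5e-6],
              [2.0e-6, 0.1e-6]])   # 3 rows (inputs) x 2 columns (outputs)
V = np.array([0.3, 0.1, 0.2])     # the input vector, as voltages

I_columns = V @ G                 # matrix-vector product, done by physics
print(I_columns)                  # accumulated column currents, in amperes
```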

How on-chip memory eliminates a classic bottleneck​

A common trait among brain-inspired computing architecture approaches is on-chip memory, also called in-memory computing. It's a fundamental shift in chip structure compared to conventional microprocessors.
The brain is divided into regions and circuits, which co-locate memory formation and learning — in effect, data processing and storage. Classical computers are not set up this way. With a conventional processor, your memory sits apart from the processor where computing happens, and information is ferried back and forth between the two on circuits. But in a neuromorphic architecture that includes on-chip memory, memory is closely intertwined with processing on a fine level — just like in the brain.
This architecture is a chief feature in IBM’s in-memory computing chip designs, whether analog or digital.
The rationale for putting computing and memory side by side is that machine learning tasks are computing-intensive, but the tasks themselves are not necessarily complex. In other words, there’s a high volume of simple calculations called matrix multiplication. The limiting factor isn’t that the processor is too slow, but that moving data back and forth between memory and computing takes too long and uses too much energy, especially when dealing with heavy workloads and AI-based applications. This kink is called the von Neumann bottleneck, named for the von Neumann architecture that has been employed in nearly every chip design since the dawn of the microchip era. With in-memory computing, there are huge energy and latency savings to be found by cutting this shuffle out of data-heavy processes like AI training and inferencing.
In the case of AI inference, synaptic weights are stored in memory. These weights dictate the strength of connections between nodes, and in the case of a neural network, they’re values applied to the matrix multiplication operations being run through them. If synaptic weights are stored apart from where processing takes place and must be shuttled back and forth, the energy you spend per operation will always plateau at a certain point, meaning that more energy eventually stops leading to better performance. Sebastian and his colleagues who developed one of IBM’s brain-inspired chips, named Hermes, believe they must break down the barrier created by moving synaptic weights. The goal is much more performant AI accelerators with smaller footprints.
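Back-of-the-envelope arithmetic shows why. The energy figures below are generic, order-of-magnitude assumptions (not IBM's numbers); published estimates commonly put an off-chip memory access at hundreds of times the cost of one multiply-accumulate (MAC):

```python
# Assumed, order-of-magnitude energies in picojoules.
E_MAC_PJ = 1.0       # one multiply-accumulate inside the processor
E_FETCH_PJ = 640.0   # one 32-bit weight fetched across the memory bus

n_weights = 3_000_000_000   # e.g. a 3-billion-parameter model

compute_j = n_weights * E_MAC_PJ * 1e-12     # each weight used once
movement_j = n_weights * E_FETCH_PJ * 1e-12  # each weight shuttled in once
print(f"compute:  {compute_j:.3f} J")
print(f"movement: {movement_j:.3f} J ({movement_j / compute_j:.0f}x the compute)")
```

In-memory computing attacks the second line: if the weights never cross a bus, the energy spent per operation no longer plateaus on data movement.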
“In-memory computing minimizes or reduces to zero the physical separation between memory and compute,” says IBM Research scientist Valeria Bragaglia, who is part of the Neuromorphic Devices and System group.
In the case of IBM’s NorthPole chip, the computing structure is built around the memory. But rather than locating the memory and computing in exactly the same space, as is done in analog computing, NorthPole intertwines them so that they may be more specifically called “near-memory.” But the effect is essentially the same.

The analog Hermes chip uses phase-change memory (PCM) devices that store AI model weights in the conductance values of a type of glass that can be switched between amorphous and crystalline phases.

How brain-inspired chips mimic neurons and synapses​

Carver Mead, an electrical engineering researcher at California Institute of Technology, had a huge influence on the field of neuromorphic computing back in the 1990s, when he and his colleagues realized that it was possible to create an analog device that, at a phenomenological level, resembles the firing of neurons.
Decades later, this is essentially what chips like Hermes and IBM’s other prototype analog AI chip are doing: Analog units both perform calculations and store synaptic weights, much like neurons in the brain do. Both analog chips contain millions of nanoscale phase-change memory (PCM) devices, a sort of analog computing version of brain cells.
The PCM devices are assigned their weights by flowing an electrical current through them, changing the physical state of a piece of chalcogenide glass. When more voltage passes through it, this glass is rearranged from a crystalline to an amorphous solid. This makes it less conductive, changing the value of matrix multiplication operations when they are run through it. After an AI model is trained in software, all synaptic weights are stored in these PCM devices, just like memories are in biological synapses.
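A toy model of that encoding step (the conductance range and write noise below are assumptions for illustration, not measured PCM figures):

```python
import numpy as np

rng = np.random.default_rng(1)

def program_pcm(w, g_min=0.1, g_max=1.0, write_noise=0.03):
    """Map trained weights in [-1, 1] onto target PCM conductances.
    Programming is imperfect: each cell lands near, not exactly on,
    its target (modelled here as Gaussian write noise)."""
    g_target = g_min + (w + 1.0) / 2.0 * (g_max - g_min)
    return g_target + rng.normal(0.0, write_noise, size=np.shape(w))

w = rng.uniform(-1.0, 1.0, 1000)              # weights trained in software
g = program_pcm(w)
w_read = (g - 0.1) / (1.0 - 0.1) * 2.0 - 1.0  # decode conductance -> weight
print(f"mean |weight error| after programming: {np.mean(np.abs(w_read - w)):.4f}")
```

Inference tolerates that small residual error; as the next paragraphs note, training does not.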
“Synapses store information, but they also help compute,” says IBM Research scientist Ghazi Sarwat Syed, who works on designing the materials and device architectures used in PCM. “For certain computations, such as deep neural network inference, co-locating compute and memory in PCM not only overcomes the von Neumann bottleneck, but these devices also store intermediate values beyond just the ones and zeros of typical transistors.” The aim is to create devices that compute with greater precision, can be densely packed onto a chip, and can be programmed with ultra-low currents and power.
“Furthermore, we’re trying to give these devices more flavor,” he says. “Biological synapses store information in a nonvolatile way for a long time, but they also have changes that are short-lived.” So, his team is working on ways to make changes in the analog memory that better emulate biological synapses. Once you have that, you can craft new algorithms that solve problems that digital computers have difficulty doing.
One shortcoming of these analog devices, Bragaglia notes, is that they are currently limited to inferencing. “There are no devices that can be used for training because the accuracy of moving the weights isn’t there yet,” she says. The weights can be cemented into PCM cells once an AI model has been trained on digital architecture, but changing the weights directly through training isn’t yet precise enough. Plus, PCM devices are not durable enough to have their conductance changed a trillion and more times, like would happen during training, according to Syed.

IBM Research's unnamed prototype analog chip uses PCM to encode up to 35 million model weights in a single chip.
Multiple teams at IBM Research are working to address the issues created by non-ideal material properties and insufficient computational fidelity. One such approach involves new algorithms that work around the errors created during model weight updates in PCM. They’re still in development, but early results suggest that it will soon be possible to perform model training on analog devices.
Bragaglia is involved in a materials science approach to this problem: a different kind of memory device called resistive random-access memory or RRAM. RRAM functions by similar principles as PCM, storing the values of synaptic weights in a physical device. An atomic filament sits between two electrodes, inside an insulator. During AI training, the input voltage changes the oxidation of the filament, which alters its resistance in a very fine manner — and this resistance is read as a weight during inferencing. These cells are arranged on a chip in crossbar arrays, creating a network of synaptic weights. So far, this structure has shown promise for analog chips that can perform computation while remaining flexible to updates. This was made possible only after years of material and algorithm co-optimization by several teams of researchers at IBM.
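A toy model of why that update precision is the hard part (the step size and cycle-to-cycle noise are assumed for illustration): each pulse nudges the filament's conductance, but the nudge itself is noisy, so a weight only ever lands near the value training asks for.

```python
import numpy as np

rng = np.random.default_rng(2)

def rram_pulse(g, direction, step=0.02, cycle_noise=0.5):
    """One SET (+1) or RESET (-1) pulse on a toy RRAM cell. Real
    filaments respond asymmetrically and with large cycle-to-cycle
    variation; here that is lumped into a single noise term (assumed)."""
    dg = step * direction * (1.0 + rng.normal(0.0, cycle_noise))
    return float(np.clip(g + dg, 0.0, 1.0))

g = 0.5
for target in (0.80, 0.30, 0.55):   # conductances a training step asks for
    for _ in range(200):            # pulse toward the target
        g = rram_pulse(g, 1 if target > g else -1)
    print(f"target {target:.2f} -> programmed {g:.3f}")
```

Shrinking that residual error, through exactly the material and algorithm co-optimization described above, is what makes RRAM promising for chips that stay flexible to updates.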
Beyond the way memories are stored, the way data flows in some neuromorphic computer chips can be fundamentally different from the way it does in conventional ones. In a typical synchronous circuit — most computer processors — streams of data are clock-based, with a continuous oscillating electrical current that synchronizes the actions of the circuit. There can be different structures and multiple layers of clocks, including a clock multiplier that enables a microprocessor to run at a different rate than the rest of the circuit. But on a basic level, things are happening even when no data is being processed.
Instead of this, biology uses event-driven spikes, says Syed. “Our nerve cells are communicating sparsely, which is why we’re so efficient,” he adds. In other words, the brain only works when it must, so by adopting this asynchronous data processing stream, an artificial emulation can save significant amounts of energy.
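A crude operation count shows where the saving comes from (both rates below are illustrative assumptions):

```python
# Clocked logic does work every cycle; event-driven logic does work
# only when a spike actually arrives.
n_neurons = 100_000
clock_hz = 1_000_000   # a modest synchronous update rate
spike_hz = 10          # roughly a biological neuron's firing rate

clocked_updates = n_neurons * clock_hz   # every neuron, every cycle, per second
event_updates = n_neurons * spike_hz     # updates only on spikes, per second
print(f"{clocked_updates / event_updates:,.0f}x fewer updates when event-driven")
```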
All three of the brain-inspired chips at IBM Research were designed with a standard clocked process, though.

NorthPole is a brain-inspired research prototype chip that stores model weights digitally, but like the analog chips, it eliminates the von Neumann bottleneck that usually separates memory and compute.
In one of these cases, IBM Research staff say they’re making significant headway into edge and data center applications. “We want to learn from the brain,” says IBM Fellow Dharmendra Modha, “but we want to learn from the brain in a mathematical fashion while optimizing for silicon.” His lab, which developed NorthPole, doesn’t mimic the phenomena of neurons and synapses via transistor physics, but digitally captures their approximate mathematics. NorthPole is axiomatically designed and incorporates brain-inspired low precision; a distributed, modular, core array with massive compute parallelism within and among cores; memory near compute; and networks-on-chip. NorthPole has also moved from TrueNorth’s spiking neurons and asynchronous design to a synchronous design.
For TrueNorth, an experimental processor that was an early springboard for the more sophisticated and commercially ready NorthPole, Modha and his team realized that event-driven spikes use silicon-based transistors inefficiently. Neurons in the brain fire at about 10 hertz (10 times a second), whereas today’s transistors run in gigahertz — the transistors in IBM’s z16 run at 5 GHz, and transistors in a MacBook’s 6-core Intel Core i7 run at 2.6 GHz. If the synapses in the human brain operated at the same rate as a laptop, “our brain would explode,” says Syed. In neuromorphic computer chips such as Hermes — or brain-inspired ones like NorthPole — the goal is to combine the bio-inspiration of how data is processed with the high-bandwidth operation required by AI applications.
Because of their choice to move away from neuron-like spiking and other features that mimic the physics of the brain, Modha says his group leans more towards the term ‘brain-inspired’ computing than ‘neuromorphic.’ He envisions that NorthPole has lots of room for growth, because they can tweak the architecture in purely mathematical and application-centered ways to achieve more gains while also exploiting silicon scaling and lessons gleaned from user feedback. And the data show that their strategy worked: In new results from Modha’s team, NorthPole performed inference on a 3-billion-parameter model 46.9 times faster than the next most energy-efficient GPU, at 72.7 times higher energy efficiency than the next lowest latency one.

Thinking on the edge: neuromorphic computing applications​

Researchers may still be defining what neuromorphic computing is or the best ways to build brain-inspired circuits, says Syed, but they tend to agree that it’s well suited for edge applications — phones, self-driving cars, and other applications that can take advantage of fast, efficient AI inferencing with pre-trained models. A benefit of using PCM chips on the edge, Sebastian says, is that they can be exceptionally small, performant, and inexpensive.
Robotics applications could be well suited for brain-inspired computing, says Modha, as could video analytics (in-store security cameras, for example). Putting neuromorphic computing to work in edge applications could help solve problems of data privacy, says Bragaglia, as on-device inference chips would mean data doesn't need to be shuttled back and forth between devices, or to the cloud, to perform AI inferencing.
Whatever brain-inspired or neuromorphic processors end up coming out on top, researchers also agree that the current crop of AI models are too complicated to be run on classical CPUs or GPUs. There needs to be a new generation of circuits that can run these massive models.
“It’s a very exciting goal,” says Bragaglia. “It’s very hard, but it’s very exciting. And it’s in progress.”
 
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Say what?!!!





Extract






Opteran's neuromorphic software offers a new era for autonomous space robotics​

Opteran is conducting tests with Airbus at its Mars Yard to enable rovers to understand depth perception in the toughest off-world environments.
Cate Lawrence · 16 hours ago











Opteran, the natural intelligence company, has announced that its work with Airbus Defence and Space, with support from the European Space Agency (ESA) and the UK Space Agency, will test the Opteran Mind, its general-purpose neuromorphic software, in Airbus space rovers.
Opteran believes nature offers a more efficient, robust solution for autonomy in space robotics which will enable new mission capabilities for future Mars missions and other space exploration projects.
Based on over a decade of research into animal and insect vision, navigation and decision-making, Opteran is conducting tests with Airbus at its Mars Yard to enable rovers to understand depth perception in the toughest off-world environments.

Today's off-world robots are cumbersome, taking minutes to compute a map of their surroundings from multiple cameras before every movement.
Opteran's visual and perception systems offer Mars rovers the ability to understand their surroundings in milliseconds, in challenging conditions, without adding to the robots' critical power consumption.
Opteran has reverse-engineered natural brain algorithms into a software mind that enables autonomous machines to efficiently move through the most challenging environments without the need for extensive data or training.
Successful application of this technology to real-world space exploration will significantly extend navigation capabilities in extreme off-world terrain.
Image: Opteran team working on Airbus Mars rover.
Ultimately, this will provide rovers with continuous navigation while letting them drive further and faster.
“We are delighted to be working with ESA and Airbus to demonstrate how Opteran’s neuromorphic software addresses key blockers in space autonomy,” said David Rajan, CEO and co-founder, Opteran.
“Our long-term vision is to provide natural autonomy with the Opteran Mind to every machine, on Earth and beyond, and this project will show how we can enable high speed, continuous safe driving, optimised for the rigours of planetary rover navigation.
Today, no such flight-ready systems exist, so there is a major opportunity for Opteran to step up and resolve a challenge facing all the major players in space robotics.”
This project is funded by ESA’s General Support Technology Programme (GSTP) through the UK Space Agency, which takes leading-edge technologies that are not ready to be sent into space and then develops them to be used in future missions. The near-term focus for the BNEE project is on depth estimation for obstacle detection, and the mid-term focus on infrastructure-free visual navigation.
Once the results of the initial testing have been presented to ESA the goal would be to move to the next stage of grant funding which would start to focus on deployment and commercialisation.
 
Reactions: 21 users

Terroni2105

Founding Member
New job advertised on LinkedIn


 
Reactions: 18 users