BRN Discussion Ongoing

Good luck :-)

BOB Bank of Brainchip
I think the dam could be about to break

View attachment 70598
water splashing GIF
:) :)
 
  • Like
  • Love
  • Haha
Reactions: 9 users

Diogenese

Top 20
  • Haha
Reactions: 7 users

TopCat

Regular
This might have something to do with us, although I can’t find a link between Microchip and Lorser. We have history with Lorser, and Akida has clear benefits for SDR.

“We believe that neuromorphic computing is the future of AI/ML, and an SDR with neuromorphic AI/ML capability will offer users significantly more functionality, flexibility, and efficiency,” said Diane Serban, CEO of Lorser Industries. “The Akida processor and IP is the ideal solution for SDR devices because of its low power consumption, high performance, and, importantly, its ability to learn on-chip, after deployment in the field..”






 
  • Like
  • Fire
  • Love
Reactions: 22 users

Diogenese

Top 20
This might have something to do with us although I can’t find a link between Microchip and Loser. We have history with Lorser and the benefit of Akida with SDR.

“We believe that neuromorphic computing is the future of AI/ML, and an SDR with neuromorphic AI/ML capability will offer users significantly more functionality, flexibility, and efficiency,” said Diane Serban, CEO of Lorser Industries. “The Akida processor and IP is the ideal solution for SDR devices because of its low power consumption, high performance, and, importantly, its ability to learn on-chip, after deployment in the field..”



View attachment 70599


Bad day for dropped "r"s.
 
Last edited:
  • Haha
Reactions: 8 users

TopCat

Regular
  • Like
Reactions: 1 users

HopalongPetrovski

I'm Spartacus!
BrainChip atm......... trying to get back to 30 cents. 🤣🤣🤣

 
  • Haha
Reactions: 12 users

Mt09

Regular

Attachments

  • IMG_6771.png (3.3 MB)
  • Like
  • Fire
  • Love
Reactions: 32 users

7für7

Top 20

Stmicro staff at the Brainchip booth 😀
They don’t look excited 😵‍💫 but I guess that’s how tech nerds look all the time when they’re deep in thought….

STM - “Hmmm, guess I should buy one license tho…”

BRN - “Hmm, come on man… just sign that f..in contract… TSE is watching me 24/7”

STM - “Why does he look so uncomfortable? Is this whole thing a scam like T&J claims every day?”

BRN - “Oh jeez.. hope he doesn’t follow the HC forum.. what a mess..”

STM - “Oh, eye contact… eye contact…” “Yeah… so Akida Pico you say.. right? Cool cool….”

BRN - “Shit… I lost the ball.. yeah… it’s… it’s fast you know… eeeehhh, do you know our robot mascot?” (What the heck are you talking about….)
 
Last edited:
  • Haha
  • Like
Reactions: 10 users

miaeffect

Oat latte lover
They don’t look excited 😵‍💫 but I guess that’s how tech nerds look all the time when they’re deep in thought….

STM - “Hmmm, guess I should buy one license tho…”

BRN - “Hmm, come on man… just sign that f..in contract… TSE is watching me 24/7”

STM - “Why does he look so uncomfortable? Is this whole thing a scam like T&J claims every day?”

BRN - “Oh jeez.. hope he doesn’t follow the HC forum.. what a mess..”

STM - “Oh, eye contact… eye contact…” “Yeah… so Akida Pico you say.. right? Cool cool….”

BRN - “Shit… I lost the ball.. yeah… it’s… it’s fast you know… eeeehhh, do you know our robot mascot?” (What the heck are you talking about….)

 
Last edited:
  • Haha
Reactions: 8 users
Love to know if these guys have been exploring with us as well and if not, maybe one of our sales team (yes Alf, looking at you for EMEA haha) should be in contact :unsure:

Though it looks like they’re playing with crossbar arrays (analog?).





Insight

Neuromorphic computing enables ultra-low power edge devices

Over the last five years, neuromorphic computing has rapidly advanced through the development of state-of-the-art hardware and software technologies that mimic the information processing dynamics of animal brains. This development provides ultra-low power computation capabilities, especially for edge computing devices. Helbling experts have recently built up extensive interdisciplinary knowledge in this field in a project with partner Abacus neo. The focus was also on how the potential of neuromorphic computing can be optimally utilized.

Similar to the natural neural networks found in animal brains, neuromorphic computing uses compute-in-memory, sparse spike data encoding, or both together to provide higher energy efficiencies and lower computational latencies versus traditional digital neural networks. This makes neuromorphic computing ideal for ultra-low power edge applications, especially those where energy harvesting can provide autonomous always-on devices for environmental monitoring, medical implants, or wearables.

Currently, the adoption of neuromorphic computing is restricted by the maturity of the available hardware and software frameworks, the limited number of suppliers, and competition from the ongoing development of traditional digital devices. Another factor is resistance from more conservative engineering communities, which have limited experience with both the theory behind neuromorphic devices and their practical use, and which need a new mindset for approaching problems on both the hardware and the software side.

Given the inherent relative benefits of neuromorphic computing, it will be a key technology for future low-power edge applications. Accordingly, Helbling has been actively investigating currently available solutions to assess their suitability for a broad range of existing and new applications. Helbling also cooperates with partners here, such as in an ongoing project with Abacus neo, a company that focuses on developing innovative ideas into new business models.

The von Neumann bottleneck needs to be overcome

The main disadvantage of current computer architectures, both in terms of energy consumption and speed, is the need to transfer data and instructions between the memory and the central processing unit (CPU) during each fetch-execute cycle. In von Neumann devices, the increased length and thus electrical resistance of these communication paths leads to greater energy dissipation as heat. In fact, often more energy is used for transferring the data than for processing it by the CPU. Furthermore, since the data transfer rate between the CPU and memory is lower than the processing rate of the CPU, the CPU must constantly wait for data, thus limiting the system’s processing rate. In the future, this von Neumann bottleneck will become more restrictive as CPU and memory speeds continue to increase faster than the data transfer rate between them.

Neuromorphic computing collocates processing and memory

Neuromorphic computing aims to remove the von Neumann bottleneck and minimize heat dissipation by eliminating the distance between data and the CPU. These non-von Neumann compute-in-memory (CIM) architectures collocate processing and memory to provide ultra-low power and massively parallel data processing capabilities. Practically, this has been achieved through the development of programmable crossbar arrays (CBA), which are effectively dot-product engines that are highly optimized for the vector-matrix multiplication (VMM) operations that are the fundamental building blocks of most classical and deep learning algorithms.
These crossbars comprise input word and output bit lines, where the junctions between them have programmable conductance values that can be set to carry out specific algorithmic tasks.

Therefore, for a neuromorphic computer the algorithm is defined by the architecture of the system rather than by the sequential execution of instructions by a CPU. For example, to perform a VMM, voltages (U) representing the vector are applied to the input word lines whilst the matrix is represented by the conductance values (G) of the crossbar junction grid. The result of the VMM is then given by the currents (I) flowing from the output bit lines (see Figure 1). Since a VMM operation is performed instantaneously in a clockless, asynchronous manner, the latencies and processing times are much lower than for traditional von Neumann systems.
Figure 1: Neuromorphic crossbar array for vector-matrix multiplication. Figure: Helbling.
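To make the mapping concrete, here is a minimal NumPy sketch (my own illustration, not Helbling's implementation): the input vector becomes the word-line voltages U, the matrix becomes the junction conductance grid G, and the bit-line currents I follow from Ohm's and Kirchhoff's laws as I = Gᵀ·U.

```python
import numpy as np

# Word-line voltages encode the input vector (volts).
U = np.array([0.3, 0.1, 0.4])

# Junction conductances encode the matrix (siemens); G[i, j] sits at the
# crossing of word line i and bit line j.
G = np.array([
    [1e-6, 5e-6, 2e-6, 3e-6],
    [4e-6, 1e-6, 6e-6, 2e-6],
    [2e-6, 3e-6, 1e-6, 5e-6],
])

# Each bit line sums the currents injected by all word lines (Kirchhoff's
# current law), so the output currents are exactly the vector-matrix product.
I = G.T @ U
print(I)  # [1.5e-06 2.8e-06 1.6e-06 3.1e-06]
```

The whole multiplication is one physical step: no instruction fetch, no clock, just the array settling to its steady-state currents.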


Options for the central component of the crossbar arrays​

Currently, the CBA synapses are fabricated either from analog memristors, whose conductance can be programmed approximately within a limited continuous range, or from collections of CMOS transistors that can be set to provide constant quantized conductance values. The critical shortcoming of the former is that the set conductance values drift over time. In both cases, since the conductance matrix can only be defined approximately, crossbar arrays are limited to approximate computation tasks such as qualitative classification, lossy compression, and convolutional filtering (see Figure 2).
Figure 2: Examples of algorithms implemented by Helbling on neuromorphic crossbar arrays. Note the strong similarity between the exact numerical compression results and the approximate ones obtained from the neuromorphic crossbar array. Figure: Helbling and Abacus neo.
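As a rough illustration of why the computation is only approximate (illustrative numbers, not measured device data), the sketch below perturbs the programmed conductances with a small drift term and compares the crossbar-style result with the exact product:

```python
import numpy as np

rng = np.random.default_rng(0)

U = rng.uniform(0.0, 0.5, size=8)            # word-line voltages
G_ideal = rng.uniform(1e-6, 10e-6, (8, 4))   # target conductance matrix

# Analog memristor synapses: the programmed values drift by a few percent.
drift = 1.0 + 0.05 * rng.standard_normal(G_ideal.shape)
G_real = G_ideal * drift

I_exact = G_ideal.T @ U                      # what an exact digital VMM returns
I_cba = G_real.T @ U                         # what the drifted crossbar delivers

rel_err = np.abs(I_cba - I_exact) / np.abs(I_exact)
print(f"max relative error: {rel_err.max():.1%}")
```

Errors of this size are usually harmless for qualitative classification or lossy compression, but they rule out exact arithmetic.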

The energy cost of implementing a VMM on a CBA is very low versus that of a von Neumann device since energy is only required to impose the input word line voltages and to overcome the electrical resistance losses of the CBA.

Sparse data representation reduces energy requirements

The second main feature of neuromorphic computing is its time-dynamic nature and the flow of sparse event encoded data through spiking neural networks (SNN).

Event encoding typically involves converting a continuous signal into a train of representative short-duration analog spikes. Techniques include rate encoding, where the spike frequency is proportional to the instantaneous signal amplitude, or time-encoded spikes that are generated when a signal satisfies pre-defined thresholds. The advantages of this sparse representation are the very low power required for transmission and the ability to develop asynchronous, event-driven systems.
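As a toy illustration of the two encoding styles mentioned above (my own simplified scheme, not a particular vendor's), rate coding makes the spike probability per time step proportional to the signal amplitude, while delta/threshold coding emits an event only when the signal has moved a fixed step since the last event:

```python
import numpy as np

def rate_encode(signal, max_rate, dt, rng):
    """Stochastic rate coding: spike probability per step scales with amplitude."""
    p = np.clip(signal, 0.0, 1.0) * max_rate * dt
    return (rng.random(signal.shape) < p).astype(np.uint8)

def delta_encode(signal, threshold):
    """Threshold coding: +1/-1 events when the signal moves `threshold` since the last event."""
    events, last = np.zeros(len(signal), dtype=np.int8), signal[0]
    for i, x in enumerate(signal):
        if x - last >= threshold:
            events[i], last = 1, x
        elif last - x >= threshold:
            events[i], last = -1, x
    return events

t = np.linspace(0.0, 1.0, 1000)
sig = 0.5 * (1.0 + np.sin(2.0 * np.pi * 3.0 * t))      # normalised test signal
rate_spikes = rate_encode(sig, max_rate=200.0, dt=1e-3, rng=np.random.default_rng(1))
delta_events = delta_encode(sig, threshold=0.1)
print(rate_spikes.sum(), "rate-coded spikes,", np.count_nonzero(delta_events), "delta events")
```

In both cases only the events are transmitted, which is where the power saving comes from.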

Leaky integrate-and-fire (LIF) neurons implemented at the hardware level form the basis of the SNN used in neuromorphic computing. The operation of these LIF neurons is shown in Figure 3. Essentially, the spikes entering a neuron are multiplied by the pretrained weight (w) of their respective channels. These are then integrated and added to a bias value, before being added to the instantaneous membrane potential (VM) of that neuron.
This membrane potential leaks with time as it decreases at a programmable rate, thus providing the neuron with a memory of previous spiking events. If the membrane potential is greater than a predefined threshold (VTH), the neuron fires a spike downstream before resetting to its base state to create a continuous time dynamic process.
Figure 3: Leaky integrate-and-fire neuron spike event processing. Figure: Helbling.
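A compact discrete-time version of the behaviour in Figure 3, written as a sketch (leak factor, threshold and weights are made-up values): weighted input spikes and a bias are accumulated into a leaky membrane potential, and the neuron fires and resets whenever the threshold is crossed.

```python
import numpy as np

def lif_neuron(spike_trains, weights, bias=0.0, leak=0.9, v_th=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron.

    spike_trains: (channels, timesteps) array of 0/1 input spikes
    weights:      (channels,) pretrained synaptic weights w
    """
    v = v_reset
    out = np.zeros(spike_trains.shape[1], dtype=np.uint8)
    for t in range(spike_trains.shape[1]):
        v = leak * v + weights @ spike_trains[:, t] + bias  # leak, then integrate
        if v >= v_th:                                       # threshold crossed:
            out[t] = 1                                      # fire downstream ...
            v = v_reset                                     # ... and reset
    return out

rng = np.random.default_rng(2)
inputs = (rng.random((3, 100)) < 0.1).astype(np.uint8)      # three sparse input channels
out_spikes = lif_neuron(inputs, weights=np.array([0.6, 0.4, 0.8]))
print(out_spikes.sum(), "output spikes from", inputs.sum(), "input spikes")
```

The leak term is what gives the neuron a fading memory of earlier events, so the same input spike can matter more or less depending on what arrived just before it.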

Neuromorphic elements need to be integrated into devices

In a practical neuromorphic device, the CBA is integrated into a neuromorphic processing unit (NPU) with pre- and post-signal conditioners to encode the input spikes and decode the output spikes, respectively (see Figure 4). Since the nature of these signal conditioners greatly affects the overall performance of the system and can eliminate any energy or latency benefits gained from the use of a neuromorphic computation core, their choice is critical to the overall comparative effectiveness of a neuromorphic solution. For example, the energy consumption and latency of typical microcontrollers are much higher than for a CBA, thus limiting their suitability. Ideally, input spike generation should be purely analog or performed on the sensors before application to the input word lines of the CBA. An interesting solution is the implementation of sensor fusion during pre-conditioning to reduce the dimensionality of the combined multi-sensor inputs, thus only processing features of relevance to the application. This is particularly beneficial when the number of input word lines of the CBA is limited.
Figure 4: Neuromorphic processing unit components. Figure: Helbling.
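To show how the pieces fit together end to end, here is a deliberately simplified pipeline sketch (all names, thresholds and sizes are invented for illustration): a pre-conditioner turns a fused multi-sensor frame into sparse events, a stand-in for the crossbar core performs one event-driven VMM, and a post-conditioner maps the output currents to a wake/no-wake decision.

```python
import numpy as np

rng = np.random.default_rng(3)

def encode(sensor_frame, threshold=0.2):
    """Pre-conditioner: fused multi-sensor frame -> sparse binary events."""
    return (np.abs(sensor_frame) > threshold).astype(np.uint8)

def npu_core(events, G):
    """Stand-in for the crossbar core: one event-driven vector-matrix product."""
    return G.T @ events

def decode(currents, fire_level=1.5):
    """Post-conditioner: bit-line currents -> class index, or None (stay asleep)."""
    return int(np.argmax(currents)) if currents.max() > fire_level else None

G = rng.uniform(0.0, 1.0, size=(16, 4))   # programmed conductances (arbitrary units)
frame = 0.3 * rng.standard_normal(16)     # fused readings from several sensors
decision = decode(npu_core(encode(frame), G))

if decision is None:
    print("stay asleep")
else:
    print("wake the main processor, detected class", decision)
```

In a real device the encoder and decoder would themselves be analog or sensor-side logic, since pushing this stage onto a conventional microcontroller can cancel out the energy and latency gains of the core.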

Summary: Neuromorphic computing has untapped potential for future technologies

Due to its inherent features, neuromorphic computing provides enormous advantages versus traditional digital electronic devices with von Neumann architectures. Benefits include very low computation latencies and ultra-low energy requirements. However, due to the need for a new engineering mindset to approach problems and a lack of community knowledge of the relevant technologies, the full potential of neuromorphic computing has yet to be leveraged. As such, Helbling experts from various disciplines have studied the topic intensively and believe that it will be a decisive factor in future MedTech and system monitoring applications. With this expertise and the intensive partnership with Abacus neo, Helbling is positioning itself as an important industry partner and trailblazer.

Authors: Navid Borhani, Matthias Pfister
 
  • Like
  • Love
  • Fire
Reactions: 16 users

HopalongPetrovski

I'm Spartacus!
  • Haha
Reactions: 10 users
  • Like
  • Love
  • Wow
Reactions: 9 users

7für7

Top 20
View attachment 70607
Actually… this was not a joke… and it brings me to the point that the whole world thinks Germans are trying to be funny… like the current political agenda there… the whole world thinks they’re kidding… but no… what they’re doing is fuck.in serious
 
  • Sad
Reactions: 1 users

Bravo

If ARM was an arm, BRN would be its biceps💪!




BrainChip advances edge AI with ultra-low power processing

Oct 8, 2024 | Stephen Mayhew

BrainChip has launched the Akida Pico, the lowest power AI acceleration co-processor designed for ultra-low power, portable devices in various sectors including consumer, healthcare, and IoT.
The Akida Pico operates with less than 1 milliwatt of power, making it suitable for battery-powered applications and enabling efficient wake-up functionalities for microcontrollers. It supports specific neural network models for applications like voice detection, audio enhancement, and personal voice assistants, while filtering out false alarms to conserve energy.
The platform utilizes BrainChip’s MetaTF software, allowing developers to optimize their neural networks without needing to learn new programming languages.
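For anyone who wants to try it, here is a minimal sketch of the published MetaTF/cnn2snn flow (quantize a trained Keras model, convert it to an Akida model, run inference). Function names and arguments differ between MetaTF releases, and the model file below is hypothetical, so treat this as an assumption to check against the current BrainChip docs rather than a verified snippet.

```python
# pip install cnn2snn akida   (package layout differs across MetaTF versions)
from tensorflow import keras
from cnn2snn import quantize, convert   # legacy cnn2snn-style API; newer MetaTF uses quantizeml

# Any small Keras CNN, e.g. one trained for keyword spotting (hypothetical file name).
model = keras.models.load_model("kws_cnn.h5")

# Quantize weights and activations to the low bit-widths Akida expects
# (4-bit values shown here purely as an example).
model_q = quantize(model, weight_quantization=4, activ_quantization=4)

# Convert the quantized Keras model into an event-based Akida model.
model_akida = convert(model_q)
model_akida.summary()

# Inference takes the same input tensors as the original Keras model:
# predictions = model_akida.predict(x_test)
```

The point of the flow is that the developer stays inside TensorFlow/Keras and never writes spiking code directly, which is what the article means by not needing to learn a new programming language.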
“Whether you have limited AI expertise or are an expert at developing AI models and applications, Akida Pico and the Akida Development Platform provides users with the ability to create, train and test the most power and memory efficient temporal-event based neural networks quicker and more reliably,” said Sean Hehir, CEO at BrainChip.
Akida Pico is built on the Akida2 event-based computing platform, promoting low latency and energy efficiency for edge AI applications.
BrainChip aims to enhance edge AI processing capabilities, enabling local learning and data processing to improve privacy and reduce latency in various real-world applications.
BrainChip and Frontgrade Gaisler recently partnered on a space-grade AI-enabled microprocessor.

https://www.edgeir.com/brainchip-advances-edge-ai-with-ultra-low-power-processing-20241008
 
  • Like
  • Love
  • Fire
Reactions: 40 users

FiveBucks

Regular
4500 pages of Tsex.

Zero IP signings! :ROFLMAO::cry:
 
  • Haha
  • Sad
  • Fire
Reactions: 11 users
BRN getting some good exposure....


(Six images attached.)
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 37 users