BRN Discussion Ongoing

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 3 users

Diogenese

Top 20
Isn't this similar to what we're trying to achieve to slash power consumption @Diogenese?

SNNs eliminating the need for matrix multiplication without affecting performance?



Researchers claim new approach to LLMs could slash AI power consumption​

Work was conducted as concerns about power demands of AI have risen​

Graeme Burton
27 June 2024 • 2 min read



ChatGPT uses 10 times the electricity of a Google search – but cutting out 'matrix multiplication' could cut this number without affecting performance

New research suggests that eliminating the 'matrix multiplication' stage of large-language models (LLMs) used in AI could slash power consumption without affecting performance.
The research was published at the same time that increasing concerns are being raised over the power demands that AI will make over the course of the decade.
Matrix multiplication – abbreviated to MatMul in the research paper – performs large numbers of multiplication operations in parallel, becoming the dominant operation in the neural networks that drive AI primarily due to GPUs being optimised for such operations.
But by leveraging Nvidia CUDA [Compute Unified Device Architecture] technology instead, along with optimised linear algebra libraries, this process can be efficiently parallelised and accelerated – cutting power consumption without penalising performance.
In the process, the researchers claim to have "demonstrated the feasibility and effectiveness of the first scalable MatMul-free language model", in a paper entitled Scalable MatMul-free Language Modelling.
The researchers created a custom 2.7 billion parameter model without using matrix multiplication, and found that performance was on a par with state-of-the-art deep learning models (called 'transformers', a fundamental element of natural language processing). The performance gap between their model and conventional approaches narrows as the MatMul-free model scales, they claim.
They add: "We also provide a GPU-efficient implementation of this model, which reduces memory usage by up to 61% over an unoptimised baseline during training. By utilising an optimised kernel during inference, our model's memory consumption can be reduced by more than 10 times compared to unoptimised models."
Less hardware-heavy AI models could also enable more pervasive AI, freeing the technology from its dependence on the data centre and the cloud. In addition, both OpenAI and Meta are set to unveil new models that, they claim, will be capable of reasoning and planning.
However, they cautioned, "one limitation of our work is that the MatMul-free language model has not been tested on extremely large-scale models (for example, 100+ billion parameters) due to computational constraints". They called for well-resourced institutions and organisations to build LLMs utilising lightweight models, "prioritising the development and deployment of matrix multiplication-free architectures".
Moreover, the researchers' work has not yet been subject to peer review. Indeed, power consumption on its own matters less than energy usage per unit of 'output' – information that is not provided in the research paper.
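
For anyone wondering how a "MatMul-free" layer can work at all, my understanding is that the paper relies on ternary weights (-1, 0, +1), so every multiply-accumulate collapses into additions and subtractions. Here is a rough toy sketch of that general idea only (my own code, nothing to do with the paper's actual GPU kernels, and the threshold and shapes are made up for illustration):

Code:
import numpy as np

rng = np.random.default_rng(0)

def quantize_ternary(w, threshold=0.05):
    # Map real-valued weights to {-1, 0, +1} (simplified BitNet-style rule).
    q = np.zeros_like(w)
    q[w > threshold] = 1.0
    q[w < -threshold] = -1.0
    return q

def ternary_linear(x, w_ternary):
    # Equivalent to x @ w_ternary.T, but written with adds/subtracts only.
    out = np.zeros((x.shape[0], w_ternary.shape[0]))
    for j, row in enumerate(w_ternary):
        out[:, j] = x[:, row == 1].sum(axis=1) - x[:, row == -1].sum(axis=1)
    return out

x = rng.normal(size=(4, 16))                      # a small batch of activations
w = quantize_ternary(rng.normal(size=(8, 16)))    # ternary weight matrix
assert np.allclose(ternary_linear(x, w), x @ w.T) # same result, no weight multiplies

The Python loop above is obviously not where any savings come from; the point is just that with ternary weights the arithmetic reduces to additions, which is what dedicated kernels or neuromorphic hardware could exploit.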

Hi Bravo,

This reads like a von Neumann computer software implementation, not an SNN. They run it on a GPU.

The software makes it slow and power hungry compared to Akida.

It may be ok for cloud processing, but may not be suitable for real-time applications.
 
  • Like
  • Fire
  • Thinking
Reactions: 15 users

CHIPS

Regular
  • Like
  • Love
Reactions: 5 users


Pedro Machado, a Senior Lecturer in Computer Science at Nottingham Trent University, must have been a little giddy and trembling with excitement while typing this LinkedIn post about his uni joining "the race of Neuromorphic Computing by being [one] of the pioneers accelerating SNNs on the Alkida's Brainchip." 🤣 You gotta love his enthusiasm, though!

View attachment 64801

View attachment 64803

View attachment 64804



Akida will be useful for their ongoing project on "Position Aware Activity Recognition":

View attachment 64805
Looks like Pedro is enjoying his new toy so far :)




IMG_20240628_162156.jpg
 
  • Like
  • Fire
  • Love
Reactions: 23 users

DK6161

Regular
I think Kuchenbuch urgently needs to work on his presentation skills. There are courses for that.

Already his first slide and confusing presentation made me shut it off.
Agree. 25 years of experience as a leading salesperson. I think shareholders can demand a better presentation than that.
Bring Rob Telson back!
 
  • Like
  • Love
Reactions: 4 users

Slade

Top 20
Agree. 25 years of experience as a leading salesperson. I think shareholders can demand a better presentation than that.
Bring Rob Telson back!
We want the share price to go lower so we can buy back in. Hehehe 😜
 
  • Haha
Reactions: 2 users


CHIPS

Regular
We want the share price to go lower so we can buy back in. Hehehe 😜

You had so many chances to buy low and did not use them? And now you want the SP to fall again after it has recovered a bit?
How stupid can one be?
 
  • Haha
  • Fire
  • Love
Reactions: 5 users

miaeffect

Oat latte lover
You had so many chances to buy low and did not use them? And now you want the SP to fall again after it has recovered a bit?
How stupid can one be?
sarcasm....................
 
  • Haha
  • Fire
Reactions: 3 users

Slade

Top 20
You had so many chances to buy low and did not use them? And now you want the SP to fall again after it has recovered a bit?
How stupid can one be?
I am a stubbie short of a six pack.
 
  • Haha
Reactions: 2 users

Tothemoon24

Top 20

IMG_9170.jpeg

IMG_9169.jpeg

Abstract​

Analysing a visual scene by inferring the configuration of a generative model is widely considered the most flexible and generalizable approach to scene understanding. Yet, one major problem is the computational challenge of the inference procedure, involving a combinatorial search across object identities and poses. Here we propose a neuromorphic solution exploiting three key concepts: (1) a computational framework based on vector symbolic architectures (VSAs) with complex-valued vectors, (2) the design of hierarchical resonator networks to factorize the non-commutative transforms translation and rotation in visual scenes and (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued resonator networks on neuromorphic hardware. The VSA framework uses vector binding operations to form a generative image model in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which can then be efficiently factorized by a resonator network to infer objects and their poses. The hierarchical resonator network features a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition and for rotation and scaling within the other partition. The spiking neuron model allows mapping the resonator network onto efficient and low-power neuromorphic hardware. Our approach is demonstrated on synthetic scenes composed of simple two-dimensional shapes undergoing rigid geometric transformations and colour changes. A companion paper demonstrates the same approach in real-world application scenarios for machine vision and robotics.
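
For those (like me) who needed a concrete picture of what "binding with complex-valued vectors" means: in these VSAs a symbol is a long random vector of unit-magnitude complex phasors, binding is element-wise multiplication (the phases add) and unbinding is multiplication by the complex conjugate. A toy sketch below, purely illustrative - it is not the authors' resonator network or their spiking phasor neurons, and the dimension and codebook names are just things I picked:

Code:
import numpy as np

rng = np.random.default_rng(1)
D = 1024  # vector dimension, chosen arbitrarily for the demo

def random_phasor(d):
    # Unit-magnitude complex vector with random phases.
    return np.exp(1j * rng.uniform(0, 2 * np.pi, d))

shape_vec = random_phasor(D)   # hypothetical codebook vector for an object shape
pos_vec = random_phasor(D)     # hypothetical codebook vector for its position

# Binding: element-wise product. A scene would be a sum of such bound pairs.
scene = shape_vec * pos_vec

# Unbinding: multiply by the conjugate of one factor to recover the other.
recovered = scene * np.conj(pos_vec)
similarity = np.abs(np.vdot(recovered, shape_vec)) / D
print(f"similarity to the true shape vector: {similarity:.2f}")  # ~1.0

As I understand the abstract, the resonator network is an iterative search that does this kind of unbinding against whole codebooks at once to factor out shape, translation, rotation and so on, and the spiking phasor neurons carry the complex phases as spike timing so it can run on neuromorphic hardware.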
 
  • Like
  • Love
  • Fire
Reactions: 23 users

Getupthere

Regular
  • Wow
  • Thinking
  • Sad
Reactions: 5 users

Gies

Regular
 
  • Like
  • Love
Reactions: 8 users
Not down ramping at all.
I was annoyed because of empty promises made and lack of real commercial updates from the company.
Sold some weeks ago, which has saved me from a further 30-40% decline.
"Watching the financials" and ready to buy when things actually look positive on paper, but if there is SFA happening for the rest of the year, then I think this will go down to 10-15 cents. In that case I might buy back in depending on the "financials" that we were told to keep an eye on.

As always, Not advice. DYOR and of course "watch the financials".

So yeah, "watch the financials"
1719602756168.gif
 
  • Haha
Reactions: 2 users
Agree. 25 years of experience as a leading salesperson. I think shareholders can demand a better presentation than that.
Bring Rob Telson back!

1719603665492.gif
 
  • Haha
  • Like
Reactions: 10 users
Guess they are planning on using SNNs for some time, and I like the last paragraph bit at the bottom 😁

The purpose of this internship is to implement low level event processing using SNN accelerators.





Job description​

Internship opportunities at Prophesee

PROPHESEE

Founded by the world's leading pioneers in the field of neuromorphic vision, Prophesee develops computer vision sensors and systems for application in all fields of artificial vision. The sensor technology is inspired by biological eyes, acquiring visual information in an extremely high-performing yet highly efficient way. Prophesee's disruptive vision sensor technology entirely overthrows the established paradigms of frame-based vision acquisition currently used everywhere in computer vision.

This is a great opportunity to join a dynamic company and an exciting team and to lead a paradigm shift in computer vision across many industries.

EVENT-BASED TECHNOLOGY

Prophesee designs and produces a new type of camera that is bio-inspired and thus frees itself from the concept of images. These cameras do not gather information at a fixed framerate; instead, each pixel is captured asynchronously when needed. These bio-inspired cameras, also called event-based cameras or neuromorphic cameras, therefore have an extremely sparse output and enable, with appropriate algorithms, real-time processing of the information at an equivalent frequency of a kHz or more. But since the data coming from the sensor are quite different from the images traditionally used in standard vision, Prophesee is also advancing the algorithmic and machine learning side of this new kind of machine vision. This enables our clients to build new applications in a wide range of domains, including industrial automation, connected devices, autonomous vehicles, augmented and virtual reality, and more.
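
To make that "sparse output" concrete: each pixel independently emits events of the form (x, y, timestamp, polarity) only when it sees a brightness change. A classic piece of low-level event processing is a background-activity filter that drops isolated events with no recent activity in their neighbourhood. Rough sketch below - my own toy code and parameter choices, not Prophesee's actual event signal processor:

Code:
import numpy as np

# Synthetic event stream: columns are x, y, timestamp (microseconds), polarity.
events = np.array([
    [10, 12, 1000, 1],
    [11, 12, 1050, 1],
    [90, 40, 1100, 0],   # isolated pixel: probably noise
    [10, 13, 1200, 1],
], dtype=np.int64)

def background_activity_filter(events, width=128, height=128, dt_us=500):
    # Keep an event only if some pixel in its 3x3 neighbourhood fired
    # within the last dt_us microseconds.
    last_ts = np.full((height, width), np.iinfo(np.int64).min // 2, dtype=np.int64)
    kept = []
    for x, y, t, p in events:
        y0, y1 = max(y - 1, 0), min(y + 2, height)
        x0, x1 = max(x - 1, 0), min(x + 2, width)
        if t - last_ts[y0:y1, x0:x1].max() <= dt_us:
            kept.append((x, y, t, p))
        last_ts[y, x] = t
    return np.array(kept, dtype=np.int64)

filtered = background_activity_filter(events)
print(f"{len(events)} events in, {len(filtered)} kept")  # the isolated event (and the cold-start first one) are dropped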

Internship Position Description

We are looking for passionate interns who demonstrate initiative, take ownership for project work, and exhibit a strong spirit of innovation. The ideal candidate is a curious and creative individual keen on problem-solving and with prior experience in C++ / Python programming and exposure to the Computer Vision / Image Processing / Artificial Intelligence / Machine Learning domains.

She/He will work in a mixed team of scientists and engineers to design, develop & optimize solutions to research problems. Her/His main contribution will be in creating innovative bio-inspired computer vision algorithms for specific tasks across many applications such as computational imaging, 3d sensing, robotics, localization, factory automation, smart devices, aerospace & defense, automotive, etc.

The main required skills common to MOST internship positions are the following (but not exclusively):

  • Excellent programming skills in C++ and/or in Python
  • Engineering background in Computer Science, Mathematics or related field
  • Prior experience in projects involving at least one of the following domains:
  • Algorithmic design, e.g. 3d vision, machine learning, numerical optimization, etc
  • Software development, e.g. implementation, architecture, optimization, testing, porting on embedded platforms, etc
  • Development operations, e.g. source code versioning, continuous integration & deployment, cloud computing, system administration, etc
R&D

Event Signal Processing versus Spiking Accelerators

Event sensors generate sparse and asynchronous data which are not compatible with conventional von Neumann computers: the state or memory of any event-driven algorithm is tightly coupled to the computing part. To scale with larger pixel arrays, the bandwidth of these sensors has increased, and with it the need for low-level filtering close to the pixels. Many hardware accelerators have been proposed in the state of the art to ease event processing, using either FPGAs or dedicated ASICs.

Convolutional neural networks are mostly sparse after a few layers, and some hardware accelerators already use this feature to speed up computations. However, the input information size and frequency have to be defined and fixed. Spiking Neural Networks (SNNs), which can be seen as asynchronous recurrent neural networks, add asynchronous, time-based processing to traditional neural networks. This feature makes them suitable for event-based data, and the ability to program SNN accelerators will enable application-specific filtering.

The purpose of this internship is to implement low level event processing using SNN accelerators.
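
If it helps to picture what "low level event processing using SNN accelerators" could look like, here is a minimal toy: a single leaky integrate-and-fire neuron driven directly by event timestamps. It only spikes when events arrive in a tight burst, so it behaves like a temporal filter on the event stream. Entirely my own illustrative sketch with made-up constants, not anything from Prophesee or a specific accelerator:

Code:
import math

def lif_filter(event_times_us, tau_us=2000.0, weight=0.4, threshold=1.0):
    # Leaky integrate-and-fire neuron: each event bumps the membrane
    # potential, which decays between events; crossing the threshold
    # emits a spike and resets the neuron.
    v, last_t, spikes = 0.0, None, []
    for t in event_times_us:
        if last_t is not None:
            v *= math.exp(-(t - last_t) / tau_us)  # leak since the previous event
        v += weight                                # integrate the new event
        last_t = t
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

# A burst of closely spaced events makes the neuron spike; sparse events do not.
print(lif_filter([0, 100, 200, 300, 50_000, 120_000]))  # -> [200]

On a real SNN accelerator the same dynamics would be evaluated in hardware, one neuron per pixel or per small neighbourhood, which is presumably why they care whether an existing accelerator can keep up with the sensor bandwidth.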

The Plan Is

  • SNN accelerator/processing state-of-the-art analysis: most of the proposed architectures are not suitable for the bandwidth of event sensors. Part of this work is to make sure that an existing accelerator can scale up.
  • Software implementation of low-level processing functions to filter the event stream using SNNs. The algorithms will be similar to the filters implemented inside the event signal processor of Prophesee sensors or will be adapted from recently published academic works.
  • Hardware implementation using an SNN accelerator from Prophesee partners.





What do they mean, or what are they looking for, when they say this in the above?


Part of this work is to make sure that an existing accelerator can scale up.
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 14 users


 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 5 users
For anyone interested, here is the latest Edge AI box by NVIDIA/Intel:


"DFI X6-MTH-ORN is a fanless Edge AI Box Computer that combines an NVIDIA Jetson Orin NX/Nano AI module with a 14th Gen Intel Core Ultra ā€œMeteor Lake-Uā€ 15W processor for AI-driven applications leveraging GPU computing, machine learning, and image processing."

As compared to Brainchip's Edge AI box https://shop.brainchipinc.com/products/akida™-edge-ai-box
 
  • Thinking
  • Fire
  • Like
Reactions: 3 users