BRN Discussion Ongoing

Tothemoon24

Top 20


Abstract

Analysing a visual scene by inferring the configuration of a generative model is widely considered the most flexible and generalizable approach to scene understanding. Yet, one major problem is the computational challenge of the inference procedure, involving a combinatorial search across object identities and poses. Here we propose a neuromorphic solution exploiting three key concepts: (1) a computational framework based on vector symbolic architectures (VSAs) with complex-valued vectors, (2) the design of hierarchical resonator networks to factorize the non-commutative transforms translation and rotation in visual scenes and (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued resonator networks on neuromorphic hardware. The VSA framework uses vector binding operations to form a generative image model in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which can then be efficiently factorized by a resonator network to infer objects and their poses. The hierarchical resonator network features a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition and for rotation and scaling within the other partition. The spiking neuron model allows mapping the resonator network onto efficient and low-power neuromorphic hardware. Our approach is demonstrated on synthetic scenes composed of simple two-dimensional shapes undergoing rigid geometric transformations and colour changes. A companion paper demonstrates the same approach in real-world application scenarios for machine vision and robotics.
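For a concrete feel of the binding operation the abstract describes, here is a minimal numpy sketch of FHRR-style complex phasor vectors, where binding is the elementwise product and unbinding multiplies by the complex conjugate. This is my own illustration of the VSA primitive, not the paper's code, and it omits the resonator network's iterative factorization.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector dimensionality

def random_phasor(d):
    # FHRR-style vector: every component is a unit-magnitude complex phasor.
    return np.exp(1j * rng.uniform(-np.pi, np.pi, d))

def bind(a, b):
    # Binding is the elementwise (Hadamard) product of phasor vectors.
    return a * b

def unbind(s, a):
    # Unbinding multiplies by the complex conjugate (the inverse phasor).
    return s * np.conj(a)

# A scene as a sum of vector products: each object bound to its pose.
obj1, obj2 = random_phasor(D), random_phasor(D)
pose1, pose2 = random_phasor(D), random_phasor(D)
scene = bind(obj1, pose1) + bind(obj2, pose2)

# Probing the scene with a known object recovers a noisy copy of its pose;
# the other term survives only as crosstalk of magnitude ~ 1/sqrt(D).
estimate = unbind(scene, obj1)
print(np.real(np.vdot(pose1, estimate)) / D)  # ~1.0 (match)
print(np.real(np.vdot(pose2, estimate)) / D)  # ~0.0 (no match)
```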
 


Gies

Regular
Not down ramping at all.
I was annoyed because of empty promises made and lack of real commercial updates from the company.
Sold some weeks ago, which has saved me from a further 30-40% decline.
"Watching the financials" and ready to buy when things actually look positive on paper, but if there is SFA happening for the rest of the year, then I think this will go down to 10-15 cents. In that case I might buy back in depending on the "financials" that we were told to keep an eye on.

As always, Not advice. DYOR and of course "watch the financials".

So yeah, "watch the financials"
Agree. With 25 years of experience as a leading salesperson, I think shareholders can demand a better presentation than that.
Bring Rob Telson back!

Guess they are planning on using SNNs for some time, and I like the last paragraph at the bottom 😁

The purpose of this internship is to implement low level event processing using SNN accelerators.





Job description

Internship opportunities at Prophesee

PROPHESEE

Founded by the world’s leading pioneers in the field of neuromorphic vision, Prophesee develops computer vision sensors and systems for application in all fields of artificial vision. The sensor technology is inspired by biological eyes, acquiring visual information in an extremely fast yet highly efficient way. Prophesee’s disruptive vision sensor technology entirely overthrows the established paradigms of frame-based vision acquisition currently used everywhere in computer vision.

This is a great opportunity to join a dynamic company and an exciting team and to lead a paradigm shift in computer vision across many industries.

EVENT-BASED TECHNOLOGY

Prophesee designs and produces a new type of camera that is bio-inspired and thus free from the concept of images. These cameras do not gather information at a fixed framerate; instead, each pixel is captured asynchronously when needed. These bio-inspired cameras, also called event-based cameras or neuromorphic cameras, therefore have an extremely sparse output and enable, with appropriate algorithms, real-time processing of the information at an equivalent frequency of a kHz or more. But since the data coming from the sensor are quite different from the images traditionally used in standard vision, Prophesee is also advancing the algorithmic and machine learning side of this new kind of machine vision. This enables our clients to build new applications in a wide range of domains, including industrial automation, connected devices, autonomous vehicles, augmented and virtual reality, and more.

Internship Position Description

We are looking for passionate interns who demonstrate initiative, take ownership for project work, and exhibit a strong spirit of innovation. The ideal candidate is a curious and creative individual keen on problem-solving and with prior experience in C++ / Python programming and exposure to the Computer Vision / Image Processing / Artificial Intelligence / Machine Learning domains.

She/He will work in a mixed team of scientists and engineers to design, develop & optimize solutions to research problems. Her/His main contribution will be in creating innovative bio-inspired computer vision algorithms for specific tasks across many applications such as computational imaging, 3d sensing, robotics, localization, factory automation, smart devices, aerospace & defense, automotive, etc.

The main required skills, common to MOST internship positions, are the following (but not exclusively):

  • Excellent programming skills in C++ and/or in Python
  • Engineering background in Computer Science, Mathematics or related field
  • Prior experience in projects involving at least one of the following domains:
  • Algorithmic design, e.g. 3d vision, machine learning, numerical optimization, etc
  • Software development, e.g. implementation, architecture, optimization, testing, porting on embedded platforms, etc
  • Development operations, e.g. source code versioning, continuous integration & deployment, cloud computing, system administration, etc
R&D

Event Signal Processing versus Spiking Accelerators

Event sensors generate sparse and asynchronous data that are not a natural fit for conventional von Neumann computers: the state or memory of any event-driven algorithm is tightly coupled with the computing part. To scale with larger pixel arrays, the bandwidth of these sensors has increased, and with it the need for low-level filtering close to the pixels. Many hardware accelerators have been proposed in the state of the art to ease event processing, using either FPGAs or dedicated ASICs.

Convolutional neural networks are mostly sparse after a few layers, and some hardware accelerators already exploit this sparsity to speed up computations. However, the input size and frequency have to be defined and fixed. Spiking Neural Networks (SNNs), which can be seen as asynchronous recurrent neural networks, add asynchronous, time-based processing to traditional neural networks. This feature makes them suitable for event-based data, and the ability to program SNN accelerators will enable application-specific filtering.

The purpose of this internship is to implement low level event processing using SNN accelerators.

The Plan Is

  • SNN accelerator/processing state-of-the-art analysis: most of the proposed architectures are not suitable for the bandwidth of event sensors. Part of this work is to make sure that an existing accelerator can scale up.
  • Software implementation of low-level processing functions to filter the event stream using SNNs (a sketch of such a filter follows this list). The algorithms will be similar to the filters implemented inside the event signal processor of Prophesee sensors or will be adapted from recently published academic works.
  • Hardware implementation using an SNN accelerator from Prophesee partners.
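To make "low-level processing functions to filter the event stream" concrete, here is a minimal Python sketch of a classic spatio-temporal correlation (background-activity) filter; this is my own illustration, not Prophesee's event signal processor, and the 64x64 resolution and 5 ms window are invented parameters. An SNN version would hold the same per-pixel timestamp memory in leaky neuron state.

```python
import numpy as np

WIDTH, HEIGHT = 64, 64
WINDOW = 5e-3  # seconds: how long a neighbour's activity supports an event

def filter_events(events, width=WIDTH, height=HEIGHT, window=WINDOW):
    """Keep an (x, y, t, polarity) event only if a neighbouring pixel
    fired within `window` seconds; isolated noise events are dropped."""
    last_ts = np.full((height, width), -np.inf)  # per-pixel timestamp memory
    kept = []
    for x, y, t, p in events:
        # 3x3 neighbourhood around the event, excluding the pixel itself
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        neigh = last_ts[y0:y1, x0:x1].copy()
        neigh[y - y0, x - x0] = -np.inf
        if t - neigh.max() <= window:
            kept.append((x, y, t, p))
        last_ts[y, x] = t
    return kept

# Two spatially correlated events arrive close in time; a lone event does not.
events = [(10, 10, 0.000, 1), (11, 10, 0.001, 1), (40, 40, 0.002, 1)]
print(filter_events(events))  # [(11, 10, 0.001, 1)]
```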





What do they mean, or what are they looking for, when they say this in the above?


Part of this work is to make sure that an existing accelerator can scale up.
 


 
For anyone interested, here is the latest Edge AI box by NVIDIA/Intel:


"DFI X6-MTH-ORN is a fanless Edge AI Box Computer that combines an NVIDIA Jetson Orin NX/Nano AI module with a 14th Gen Intel Core Ultra “Meteor Lake-U” 15W processor for AI-driven applications leveraging GPU computing, machine learning, and image processing."

As compared to Brainchip's Edge AI box https://shop.brainchipinc.com/products/akida™-edge-ai-box
 
Fanless huh? 🤔..
So I guess that just means it's going to overheat?...


The whole top of it, is a massive heatsink, maybe the instructions say.. "place unit in front of a fan"..

They had to add a cooling fan for the other chips in "our" Edge Box, so hard to see how they can get away with it here, using GPU technology..
 


Diogenese

Top 20
Hi Bravo,

This reads like a von Neumann computer software implementation, not an SNN. They run it on a GPU.

The software makes it slow and power hungry compared to Akida.

It may be ok for cloud processing, but may not be suitable for real-time applications.

"But by leveraging Nvidia CUDA [Compute Unified Device Architecture] technology instead, along with optimised linear algebra libraries, this process can be efficiently parallelised and accelerated – cutting power consumption without penalising performance."

For those of a technical bent, this is a discussion of Nvidia CUDA:

https://www.nvidia.com/en-us/data-center/ampere-architecture/
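For context on the quoted claim, this is roughly what "optimised linear algebra" offloaded through CUDA looks like from Python, using the CuPy library (my illustration, not the paper's code; it needs a CUDA GPU, and the matrix sizes are arbitrary):

```python
import numpy as np
import cupy as cp  # CUDA-backed drop-in for numpy arrays

a = np.random.rand(2048, 2048).astype(np.float32)
b = np.random.rand(2048, 2048).astype(np.float32)

c_cpu = a @ b                          # BLAS on the CPU
c_gpu = cp.asarray(a) @ cp.asarray(b)  # cuBLAS on the GPU, massively parallel

# Same result up to float32 rounding; the GPU wins on throughput, but it is
# still a clocked von Neumann machine, not an event-driven SNN.
print(np.allclose(c_cpu, cp.asnumpy(c_gpu), rtol=1e-3))
```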
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Pooh!


Samsung backs ‘world’s most powerful’ AI chip for edge devices

Dutch startup Axelera promises better AI performance at lower costs
June 28, 2024 - 10:30 am

Eindhoven-based startup Axelera has raised $68mn as it looks to take its AI chip business global. One of the lead investors is Samsung Catalyst, the venture arm of semiconductor giant Samsung Electronics.
Axelera is developing chips, known as AI processing units (AIPUs), that enable computer vision and generative AI in devices like robots and drones. The chips facilitate so-called edge AI — when AI models are deployed inside devices, instead of linking to them via the cloud.
Axelera builds the AIPUs as well as the software that runs them. Dubbed Metis, the startup claims that it is the world’s most powerful AI chip for edge devices.
The AI chips are, in effect, tiny data centres located within the device. By obviating the need to upload or download data to a centralised cloud server, the chips speed up data processing. They also minimise energy use.


What’s more, Axelera leverages what’s known as in-memory computing. That’s when data is stored in the main memory (RAM) instead of on traditional disk storage. This makes for even faster data processing and retrieval.
The startup’s chips thus deliver high computing performance at a fraction of the cost and energy consumption of centralised AI processing units, said the startup.
“To truly harness the value of AI, organisations need a solution that delivers high-performance and efficiency while balancing cost,” said Fabrizio Del Maffeo, co-founder and CEO at Axelera AI.
Axelera claims it has built the world’s most powerful AI processing unit for edge devices. Credit: Axelera

Democratising AI

Axelera’s chips make use of the instruction set architecture (ISA) RISC-V. An ISA acts like a bridge between the hardware and the software. It specifies both what the processor is capable of doing as well as how it gets done.
RISC-V is a low-cost, efficient, and flexible ISA that can be customised to specific use cases. Crucially, unlike most ISAs, it is open source, which means no single entity controls it.
“Our mission is to democratise access to artificial intelligence,” said Del Maffeo.
By specialising in edge AI, and developing both the software and hardware components, Axelera looks to give itself a competitive edge in a booming AI chip market dominated by the likes of Nvidia, Intel, and IBM.
Speaking of IBM, Del Maffeo, who previously worked at Belgium-based tech lab Imec, co-founded Axelera alongside Evangelos Eleftheriou, a former veteran at the American tech giant.
Axelera plans to put its AI processing units into full production in the latter half of this year. It looks to expand its presence in North America, where it already has an office, and into new industries such as automotive, digital healthcare, and surveillance. The startup is also exploring the development of high performance AI chips for data centres and supercomputers.
Hailing from Eindhoven, Axelera exists in one of the most mature semiconductor tech hubs in the world. The city is home to Philips-founded NXP Semiconductors and ASML, which produces chip-making machines for almost every major semiconductor manufacturer on Earth.
This latest funding round brings Axelera’s total raised to $120mn. New investors include the Samsung Catalyst Fund, European Innovation Council Fund, Innovation Industries Strategic Partnership Fund, and Invest-NL.

 

Bravo

If ARM was an arm, BRN would be its biceps💪!

Sam Altman-backed AI processor venture hires ex-Apple engineer to lead hardware development

News
By Anton Shilov
published 13 hours ago
Jean-Didier Allegrucci joins Rain AI to work on hardware.



Rain AI, an AI hardware processor developer backed by OpenAI's Sam Altman and investment banks, has hired Jean-Didier Allegrucci, a former Apple chip executive, to lead its hardware engineering. This high-profile hire indicates that Rain AI has serious plans for its processors.

Jean-Didier Allegrucci, who has yet to update his LinkedIn profile, worked on Apple's systems-on-chip (SoCs) for over 17 years, starting in June 2007, and oversaw development of more than 30 processors used in iPhones, Macs, iPads, and the Apple Watch. According to a Rain AI blog post, Allegrucci was instrumental in building Apple's world-class SoC development team, overseeing areas such as SoC methodology, architecture, design, integration, and verification, so his experience will be extremely valuable for Rain AI. Before Apple, J-D Allegrucci worked at Vivante and ATI Technologies, both developers of graphics processing units.

"We could not be more excited to have a hardware leader of J-D’s caliber overseeing our silicon efforts," said Rain AI CEO William Passo. "Our novel compute-in-memory (CIM) technology will help unlock the true potential of today's generative AI models, and get us one step closer to running the fastest, cheapest, and most advanced AI models anywhere."


At Rain AI, Jean-Didier Allegrucci will collaborate with Amin Firoozshahian, Rain AI's lead architect, who transitioned from Meta Platforms after a five-year tenure. This partnership combines deep industry experience and innovative thinking to drive the company's ambitious goals. Yet, it will take quite some time before Amin Firoozshahian and Jean-Didier Allegrucci build their first system-on-chip at Rain. The process typically takes many years.

Rain AI's focus is on in-memory compute technology, which processes data at the storage location, mimicking the human brain. It promises to enhance energy efficiency significantly compared to traditional AI processors, such as Nvidia's H100 or B100/B200, or AMD's Instinct MI300X.

Earlier this month Rain AI licensed Andes Technology's AX45MPV RISC-V vector processor with the ACE/COPILOT instruction customization and partnered with Andes's Custom Computing Business Unit (CCBU) to accelerate development of its compute-in-memory generative AI solutions. This collaboration aims to enhance Rain AI's product roadmap and deliver scalable AI solutions by early 2025.

Given the time it usually takes to develop a complex processor from scratch, and the fact that Rain AI is tasking Andes with helping it build its first SoC by early 2025, it looks like the processors whose development will be led by Jean-Didier Allegrucci are at least a couple of years away, and his input into the 2025 product will be limited (if any).

 

hotty4040

Regular

First, I wonder what the "cost" differential is between them, and then the performance differences? .... Colour schemes anyone ???


Akida Ballista >>>>> that's a huge " heatsink " isn't it <<<<<


hotty...
 

Diogenese

Top 20
Pooh!


Samsung backs ‘world’s most powerful’ AI chip for edge devices …


Last time I looked (18 months ago), Axelera was analog - "in-memory compute" is a giveaway.

I think Axelera is an IBM spin-off - there's some tie-up through the inventors:

WO2021220069A2 CROSSBAR ARRAYS FOR COMPUTATIONS IN MEMORY-AUGMENTED NEURAL NETWORKS

IBM 20200429

Inventors: BOHNSTINGL THOMAS [CH]; PANTAZI ANGELIKI [CH]; WOZNIAK STANISLAW [CH]; ELEFTHERIOU EVANGELOS [CH]

Evangelos Eleftheriou (axelera.ai)

Co-Founder - Axelera AI

Evangelos Eleftheriou, an IEEE and IBM Fellow, is the Chief Technology Officer and co-founder of Axelera AI, a best-in-class performance company that develops a game-changing hardware and software platform for AI.

Before his current role, Evangelos worked for IBM Research – Zurich, where he held various management positions for over 35 years. His outstanding achievements led him to become an IBM Fellow, which is IBM’s highest technical honour.

More recently, there have been 4 Axelera patent docs published:

WO2024110255A1 MEMORY AND IN-MEMORY PROCESSOR 20221123

WO2024067954A1 ACCELERATING ARTIFICIAL NEURAL NETWORKS USING HARDWARE-IMPLEMENTED LOOKUP TABLES 20220927

WO2023193899A1 MULTI-BIT ANALOG MULTIPLY-ACCUMULATE OPERATIONS WITH MEMORY CROSSBAR ARRAYS 20220406

WO2023117081A1 IN-MEMORY PROCESSING BASED ON MULTIPLE WEIGHT SET
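For anyone wondering what the analog multiply-accumulate in those patent titles amounts to, here is a toy numpy simulation (my sketch; the array sizes and the 5% noise figure are invented): weights live as conductances in the crossbar, and a whole matrix-vector product is read out in one step as column currents, which is exactly why device noise and drift come along for the ride.

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights stored as conductances (siemens) at each crossbar junction.
G = rng.uniform(0.0, 1e-6, size=(4, 8))
# Input activations encoded as read voltages on the 8 input lines.
v = rng.uniform(0.0, 0.2, size=8)

# Ideal in-memory MAC: Ohm's law per junction, Kirchhoff summation per
# output line yields the whole matrix-vector product in one read.
i_ideal = G @ v

# Real analog devices add programming error, drift and read noise,
# modelled here as 5% multiplicative conductance noise.
G_noisy = G * (1.0 + 0.05 * rng.standard_normal(G.shape))
i_noisy = G_noisy @ v

print("ideal currents:", i_ideal)
print("noisy currents:", i_noisy)
print("relative error:", np.abs(i_noisy - i_ideal) / np.abs(i_ideal))
```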
 

GazDix

Regular
Last time I looked (18 months ago), Axelera was analog - "in-memory compute" is a giveaway. …
Hi Diogenese, do you think they are a competitor to Brainchip on a technological front?
 

DK6161

Regular
Ahh crap, who did they let go to get Johnny boy?
Seems like a high turnover this year 😩😂🤡🤷.

Relax everyone. I was half joking
 