BRN Discussion Ongoing

Diogenese

Top 20

Hi equanimous,

8 September 2022
".@NASA has tapped @SiFive (and @MicrochipTech) to create a space-centric RISC-V processor: the High-Performance Spaceflight Computing chip. At heart of the HPSC will be SiFive's X280 64-bit RISC-V cores, which include ML acceleration capabilities."


SiFive does have its own ML, but it is software-based:

https://www.sifive.com/cores/intelligence
SiFive Intelligence is an integrated software + hardware solution that addresses energy efficient inference applications. It starts with SiFive’s industry-leading RISC-V Core IP, adds RISC-V Vector (RVV) support, and then goes a step further with the inclusion of software tools and new SiFive Intelligence Extensions, vector operations specifically tuned for the acceleration of machine learning operations. These new instructions, integrated with a multi-core capable, Linux-capable, dual-issue microarchitecture, with up to 512b wide vectors, and bundled with TensorFlow Lite support, are well-suited for high-performance, low-power inference applications.

We've been friends with SiFive in public since April, so that's 5 months plus the clandestine (NDA) period which would add several months.

https://brainchip.com/brainchip-sifive-partner-deploy-ai-ml-at-edge/
Laguna Hills, Calif. – April 5, 2022 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power neuromorphic AI chips and IP, and SiFive, Inc., the founder and leader of RISC-V computing, have combined their respective technologies to offer chip designers optimized AI/ML compute at the edge.

Hardware ML would be an optimization compared to software ML.

We are not listed among Microchip's AI/ML partners, but one of our friends is:

https://www.microchip.com/en-us/solutions/machine-learning/design-partners


Get Expert Help With Your AI/ML Design​


If you need assistance developing an Artificial Intelligence (AI) or Machine Learning (ML) project, we have partnered with industry-leading design companies to provide state-of-the art AI-based solutions and software tools that support our portfolio of silicon products. These partners have proven capabilities and are uniquely qualified to provide you with the support you need to successfully bring your innovative design to life.



Design Partner | Location | Contact Info | Application Area | Focus/Strength | Microchip Products | Customer Prerequisites
Edge Impulse | San Jose, USA | hello@edgeimpulse.com | Smart Predictive Maintenance, Smart HMI | Development Software Toolkit | Arm Cortex-based 32-bit MCUs and MPUs | AI solution developers
Motion Gestures | Waterloo, Canada | info@motiongestures.com | Smart HMI | Gesture Recognition Solution | Arm Cortex-based 32-bit MCUs and MPUs | No AI experience required
SensiML | Oregon, USA | info@sensiml.com, +1 (503) 567-1248 | Predictive Maintenance, Gesture Recognition, Process Control | Small-Footprint, Low-Power Edge Devices | Arm Cortex-based 32-bit MCUs and MPUs | AI solution developers


So, given we have known SiFive for more than 6 months, and given we are friends with Edge Impulse, and given the objective of our partnership with SiFive of producing optimized AI/ML, and given NASA's penchant for energy efficiency and autonomous operation, you'd have to think that we have a reasonable chance of being involved.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 30 users

Learning

Learning to the Top 🕵‍♂️
It's based on a story from 2020


Mercedes has since switched from Intel to BrainChip


The article is entirely misleading
Thank you uiux,

And Edge Impulse only shared the article 12 hours ago 🤔🤔🤔.

Learning
 
  • Like
  • Thinking
  • Fire
Reactions: 9 users

uiux

Regular
Thank you uiux,
And Edge Impulse only shared the article 12 hours ago 🤔🤔🤔.

Learning


Follow the link from that article to this one:



Published on June 23, 2020 10:30 AM



Old news and poor fact checking on the journalist's part, relying on that article as a current source


** The article is only discussing past research so it is actually factual **

It just neglects to mention current events, like Mercedes switching to Akida
 
Last edited:
  • Like
  • Fire
Reactions: 11 users

TopCat

Regular

I accidentally came across this research paper whilst looking for something else. There's no mention of Akida or any other chip. It's a technical piece and a lot went over my head 🤯 but very interesting in regards to medical breakthroughs. I've copied some relevant paragraphs highlighting the use of SNNs.

( sorry, the bold seems to be stuck on 🤔)

A neuromorphic spiking neural network detects epileptic high frequency oscillations in the scalp EEG​

Spiking neural networks (SNN) have emerged as optimal architectures for embedding in compact low-power signal processing hardware.

This study is a further step towards non-invasive epilepsy monitoring with a low-power wearable device.

Future implementation in neuromorphic hardware​

Our simulated EEG SNN has been motivated by the perspective of future implementation in neuromorphic processors that carry out computation “at the edge”. The EEG SNN can be easily mapped onto the neuromorphic device that we have developed and described previously [8]. All parameters and architecture elements in this neuromorphic device have been carefully chosen to enable the implementation of the simulated EEG SNN in the neuromorphic hardware with only minor adaptations.

A hardware HFO detector based on neuromorphic technology would benefit from low power consumption since it performs spike-based processing. The raw signal is converted into "events" by an asynchronous delta modulator (ADM) circuit. There is no fixed sampling rate. This feature makes the whole device highly efficient in terms of power consumption. As an output, only the presence of an HFO would be signaled to a data storage device, e.g., a mobile phone.

To estimate power consumption, we envision a real-time HFO analysis “at the edge” that includes signal amplification in the preprocessing stage, HFO detection in the SNN, and wireless transmission of a flag to a storage device at the time of HFO occurrence. As previously reported [8], our chip consumes 58.4 μW for preprocessing and 555.6 μW for the SNN.
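For anyone wondering how "events" with no fixed sampling rate actually work: an asynchronous delta modulator only emits an UP or DOWN event when the input has moved by more than a fixed threshold since the last event, so a quiet signal costs nothing. A rough software sketch of the principle (my own simplification for illustration, not the circuit described in the paper):

```python
def delta_modulate(signal, delta=0.1):
    """Emit (sample_index, polarity) events whenever the signal has moved
    by more than `delta` from the level at the last event -- a software
    analogue of an asynchronous delta modulator (ADM)."""
    events = []
    last_level = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        # Large jumps can produce several events at the same sample index
        while x - last_level >= delta:
            last_level += delta
            events.append((i, +1))   # UP event
        while last_level - x >= delta:
            last_level -= delta
            events.append((i, -1))   # DOWN event
    return events

# A flat signal produces no events; a brief pulse produces a sparse burst
print(delta_modulate([0.0, 0.0, 0.25, 0.25, 0.0]))
# → [(2, 1), (2, 1), (4, -1), (4, -1)]
```

The appeal for a wearable is exactly this sparsity: between events the downstream SNN has nothing to process, which is where the microwatt-level power figures come from.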

A bit more info on the research paper I came across: the research was carried out in Zurich, which also happens to get a mention on BrainChip's website.

Neuromorphic chip detects high-frequency oscillations

Neuromorphic engineering is a promising new approach that bridges the gap between artificial and natural intelligence. An interdisciplinary research team at the University of Zurich, the ETH Zurich, and the University Hospital Zurich has used this approach to develop a chip based on neuromorphic technology that reliably and accurately recognizes complex biosignals. The scientists were able to use this technology to successfully detect previously recorded high-frequency oscillations (HFOs). These specific waves, measured using an intracranial electroencephalogram (iEEG), have proven to be promising biomarkers for identifying the brain tissue that causes epileptic seizures.

Then there’s this;

Research on HFO in epilepsy is conducted at Zentrum für Epileptologie und Epilepsiechirurgie (ZEE), a collaboration of the Department of Neurosurgery with the Swiss Epilepsy Center at Klinik Lengg, the Department of Neurology of the University Hospital Zurich, the University Children’s Hospital Zurich, and the Institute for Neuroinformatics, University and ETH Zurich.

And this from Brainchip website;

For now, neuromorphic engineering is particularly suited to real-time sensory processing. Because this is such a critical area, our package includes two stories, one a report from the field by EE Times Europe Editor-in-Chief Anne-Françoise Pelé exploring vision sensors in mobile phones and production lines. The second is the personal perspective of Tobi Delbrück, one of neuromorphic engineering founder Carver Mead’s PhD students and professor at the Institute of Neuroinformatics in Zurich, Switzerland.
 
  • Like
  • Fire
  • Love
Reactions: 19 users

Townyj

Ermahgerd
On another happy front😎,

Just managed to convert 20% of Mrs Learning's super into BRN.
That's right, shorters: it's locked away until her retirement. Oh wait, how long is that? Mrs Learning is not due to retire for another 30+ years. 😂😂😂

Or unless BRN is $50 in ten years, then we all can have very early retirement. 🎉🎉🎉😂😂😂🏖🏖🏖

It's great to be a shareholder.

Don't some Super funds still allow lending for shorting activities..?? As they know you won't be able to sell them off until retirement.
 
  • Like
  • Thinking
  • Fire
Reactions: 4 users
Just reading a recent research paper from students across US and China campuses, including the Rochester Institute of Technology, who reference Akida (& Loihi) as potential hardware to implement on.

Paper attached, and one author was interesting: part of the Ant Group, which appears to be tied back to Alibaba.



Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks

Jintang Li¹, Zhouxin Yu¹, Zulun Zhu², Liang Chen¹, Zibin Zheng¹, Qi Yu², Sheng Tian³, Ruofan Wu³, Changhua Meng³
¹ Sun Yat-sen University
² Rochester Institute of Technology
³ Ant Group


Abstract
Recent years have seen a surge in research on dynamic graph representation learning, which aims to model temporal graphs that are dynamic and evolving constantly over time. However, current work typically models graph dynamics with recurrent neural networks (RNNs), making them suffer seriously from computation and memory overheads on large temporal graphs. So far, scalability of dynamic graph representation learning on large temporal graphs remains one of the major challenges.

In this paper, we present a scalable framework, namely SpikeNet, to efficiently capture the temporal and structural patterns of temporal graphs. We explore a new direction in that we can capture the evolving dynamics of temporal graphs with spiking neural networks (SNNs) instead of RNNs. As a low-power alternative to RNNs, SNNs explicitly model graph dynamics as spike trains of neuron populations and enable spike-based propagation in an efficient way.

Experiments on three large real-world temporal graph datasets demonstrate that SpikeNet outperforms strong baselines on the temporal node classification task with lower computational costs. Particularly, SpikeNet generalizes to a large temporal graph (2M nodes and 13M edges) with significantly fewer parameters and computation overheads. Our code is publicly available at https://github.com/EdisonLeeeee/SpikeNet


Algorithm 1 provides an overview of the SpikeNet embedding generation (i.e., forward propagation): the node embedding at each layer and each time step is generated by aggregating neighborhood information, followed by a LIF model to capture the evolving dynamics. It is worth noting that the forward propagation allows the multiply-and-accumulate typically inherent in matrix multiplication to be turned into simply matrix masking (or indexing) and accumulation, i.e., masked summation, which could be implemented with more energy-efficient neuromorphic hardware such as Intel Loihi (Davies et al. 2018) or Brainchip Akida (Vanarse et al. 2019).

In this manner, our method may lead to more energy-efficient implementations of GNNs on temporal graphs and will be meaningful and influential in the future.
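For anyone wondering what "masked summation" means in practice: because spike vectors are binary (a neuron either fired or it didn't), multiplying a weight matrix by a spike vector collapses into selecting and summing the weight columns where a spike occurred, with no multiplications at all. A toy NumPy sketch of the idea (my own illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))      # weight matrix: 4 outputs, 6 input neurons
s = np.array([1, 0, 1, 1, 0, 0])     # binary spike vector from the previous layer

# Standard dense propagation: multiply-and-accumulate
dense = W @ s

# Spike-based propagation: index the columns of W where a spike
# occurred and accumulate them (masked summation) -- no multiplies
masked = W[:, s.astype(bool)].sum(axis=1)

assert np.allclose(dense, masked)
```

On neuromorphic hardware the same trick means each spike just triggers an add of the corresponding weight, which is where the energy savings over conventional matrix multiplication come from.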

Additional Experimental Results

Firing sparsity Benefiting from intrinsic neuronal dynamics and spike-based communication paradigms, SNNs can be easily applied on some specialized neuromorphic hardware, such as Intel Loihi (Davies et al. 2018) or Brainchip Akida (Vanarse et al. 2019)
 

Attachments

  • 2208.10364.pdf
    1.1 MB · Views: 78
  • Like
  • Fire
  • Love
Reactions: 14 users

Learning

Learning to the Top 🕵‍♂️
Don't some Super funds still allow lending for shorting activities..?? As they know you won't be able to sell them off until retirement.
Hi Townyj,

I truly hope such practices will stop, as it goes against their fiduciary duty to their clients.
But you can buy and sell anytime within your super account.
But we are not planning to.

Learning
 
  • Like
Reactions: 5 users
Don't justify it uiux. You were out of line.
Get off your high horse mate. Save the monarchy posts for your social media accounts.
 
  • Like
Reactions: 2 users

Townyj

Ermahgerd
Hi Townyj,

I truly hope such practices will stop, as it goes against their fiduciary duty to their clients.
But you can buy and sell anytime within your super account.
But we are not planning to.

Learning

I don't have an SMSF so I wasn't too sure, tbh.. Would be great to have some clarity on what they do with the purchased shares.

Only heard it through the grapevine that these kinds of things happen. I'm all for a good short squeeze as I won't be selling my shares for a long time, if at all. Adding to the collection is more likely.

As Gollum says... "My precious"

 
  • Like
  • Love
Reactions: 4 users

krugerrands

Regular
And the latest reported aggregate shorts and graphs that no one asked for :cool:

86,438,911 as of 2/9/22

(short-position charts attached)
 
  • Like
  • Haha
Reactions: 20 users
The Queen was a great lady and may she rest in peace.
Colonisers are not good people, but yeah, let's not talk about that. Can we keep political discussions off this thread? Dead or alive, we have more important things to discuss, like BRN.
 
  • Like
  • Sad
Reactions: 5 users

Slade

Top 20
Let’s keep the political discussion off this thread but before we do let me have the final provoking prod.
 
  • Haha
  • Like
  • Fire
Reactions: 24 users

Diogenese

Top 20
The latest in in-memory compute?

One day we may see Akida in our SSDs.

https://www.msn.com/en-au/news/othe...sedgntp&cvid=a5a1134059814cd4b4c047e756df8067

A whole new breed of SSDs is about to break through

Joel Khalili - 12h ago

The Storage Networking Industry Association (SNIA) has released the first edition of a new set of standards designed to clear the way for a new breed of storage products: CSxes.




Short for computational storage devices, CSxes differ from regular SSDs or hard drives in that they handle data processing on-board, minimizing bottlenecks created by the need to pass data between storage and the CPU, GPU and RAM.

The new standards issued by SNIA were created to encourage interoperability between the various CSDs currently under development, as well as supporting the work of software architects and other programmers.


Computational storage: The next big thing?

Computational storage has been billed by market players as one of the next big things in computing for a number of years now.

Broadly, there are two types of CSDs: those that incorporate processors into the storage device itself and those that pass compute operations to a storage accelerator located nearby.


Although computational storage is not appropriate for every use case, it has the potential to dramatically accelerate applications that are limited by I/O performance rather than compute.

“There is clearly a broad class of applications that benefit from offloading compute functions from a main CPU to a more efficient processing engine that is more suited to the specific problem of interest,” explained Richard New, VP of Research at Western Digital, in conversation with TechRadar Pro.

“In the context of storage, we can think of applications like video transcoding, compression, database acceleration as falling into this category. A video transcoding device closely paired with a storage device can allow a video server to more efficiently stream content at many different quality levels while minimizing unnecessary I/O and data transfers throughout the system.”

In addition to providing vendors with guidance for developing new CSxes, the arrival of the SNIA standards establishes a set of common definitions that can be used to properly categorize the products that come to market.

"The 1.0 Model has a nice baseline on definitions – before this there were none, but now we have Computational Storage Devices (CSxes), Computational Storage Processors (CSPs), Computational Storage Drives (CSDs), and Computational Storage Arrays (CSAs), and more," said David McIntyre, who chairs the computational storage special interest group at SNIA.

Already, vendors from Samsung to SK hynix are beginning to demo and release computational storage devices. But by bringing standardization to the market, the new SNIA specification could lay the necessary foundations for adoption on a mass scale.
 
  • Like
  • Fire
  • Love
Reactions: 31 users
This is interesting:

Intel & Mercedes!

"While this may sound futuristic, Intel’s neuromorphic computing research is already fostering interesting use cases, including how to add new voice interaction commands to Mercedes-Benz vehicles; create a robotic hand that delivers medications to patients; or develop chips that recognize hazardous chemicals."


It's great to be a shareholder.
Yulia Sandamirskaya research scientist at Intel said..

“We’re incredibly excited to see how neuromorphic computing could offer a compelling alternative to traditional AI accelerators,” she said, “by significantly improving power and data efficiency for more complex AI use cases spanning data center to extreme edge applications.”

Whenever a connection with a large company is revealed, and even without, you're going to get the competition frothing at the mouth and saying "Hey, we can do that too, only better"..
Intel's playbook, if its history is any lesson, is saying this, knowing it's not true, while stuffing wads of money into the potential customer's top pocket..

You have to try don't you 😛..
 
Last edited:
  • Like
  • Haha
  • Fire
Reactions: 8 users

Worker122

Regular
No one is special on this forum; we are all entitled to our opinions, and no one can tell anyone what to stop posting (moderators excepted). It hasn't taken long for the level of debate to again drop to bogan-like input. What the hell are people trying to prove? Are so many of us living on the edge of jumping down other people's throats over something that is irrelevant or does not sit well? It's going to be an interesting AGM if a few of us go. Not that anyone cares, but I will not be attending. Also, not that I have a lot of input, but I regularly log in to read informative posts and am finding I am getting more and more disappointed in the level of posts. I am taking a FF sabbatical. Good luck.
 
  • Like
  • Love
Reactions: 14 users

Foxdog

Regular
Hi FF,

What a Amazing company Brainchip is:

Our founder is Amazing

The technology is Amazing

The management is Amazing

Last but not least, the shareholders of Brainchip are amazingly knowledgeable.

Learning
It's great to be a shareholder.

PS. May our Queen rise to heaven ❤️.
Methinks it's time for the SP to become amazing too......🤔
 
  • Like
  • Fire
  • Love
Reactions: 10 users
Don't some Super funds still allow lending for shorting activities..?? As they know you won't be able to sell them off until retirement.
Let the shorters play with matches if they want 😛

They ain't kids and will only burn their own houses down, the day they lose Big Time..
 
  • Like
  • Fire
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Could everyone please stop bickering because we have far bigger fish to fry. Like this giant 1496 pound Atlantic Salmon of an announcement from LG from 21 June 2022 which states "Additionally, the MBUX Hyperscreen incorporates a driver monitoring system (DMS) with driving assistance and passenger safety functions, made possible by the new electronic architecture."

Here's how I interpret this. LG can't NOT know about us.

Not only that! It also states "LG is also hard at work creating a human-machine interface (HMI) that prioritizes the safety of vehicle passengers." So if LG has acknowledged that the safety functions are made possible by the new electronic architecture (and AKIDA is the software), then it stands to reason that we are INDISPENSABLE. Not to mention, the Hyperscreen was voted the best because of the collaborative efforts of all of the parties involved: LG, Mercedes & BrainChip.

BrainChip's the best. Chuck out the rest!


B 💋

LOVE + PEACE + MUNG BEANS ☮️



06/21/2022


The ‘Hyper’ Innovative Vehicle Solutions



Photo Credit : The MBUX Hyperscreen featured in Mercedes-Benz EQS


LG has been focusing on creating new technologies and solutions for the fast-changing automotive industry, working hand-in-hand with leading global carmakers to help shape the future of mobility.

LG’s unique capabilities and extensive knowhow in improving the user experience, garnered across years of leadership in the consumer electronics and IT businesses, have proven key to the company’s success in creating a new generation of automotive parts solutions. Known for its willingness to explore new ideas, and for its long history of bringing out the value in customers’ lives, LG is now leading the way to a more connected, enjoyable and safer future on the world’s roads.




Photo Credit : The MBUX Hyperscreen inside the Mercedes-Benz EQS


In order to improve its offerings every single year, LG conducts in-depth research to gain a thorough understanding of consumers and their needs. The success and insight garnered from the pursuit of this approach have put LG in a unique position to contribute to the evolution of the in-vehicle experience. The perfect partner for forward-thinking automakers, LG is also hard at work creating a human-machine interface (HMI) that prioritizes the safety of vehicle passengers.

Mercedes-Benz is one of LG’s well-known customers, and its Mercedes-Benz User Experience (MBUX) Hyperscreen was recently recognized for excellence in display application at one of the world’s largest display events. LG’s Vehicle component Solutions company played a central role in bringing the MBUX Hyperscreen to life, supplying its immense expertise in cutting-edge display tech, and its latest in-vehicle infotainment (IVI) system, to the groundbreaking project.

Everything began in 2016 when Mercedes-Benz’ engineers began looking for a viable way to integrate a single, continuous touchscreen into the vehicle cockpit. After countless meetings and discussions between LG and the German carmaker, Mercedes-Benz’s vision for a seamless cockpit system – one that would revolutionize the connection between human and car – came into clear view.




Photo Credit : The MBUX Hyperscreen featured in Mercedes-Benz EQS


A combination of three high-resolution and high-brightness display screens – a 12.3-inch LCD instrument cluster, a 17.7-inch plastic-OLED center information display and 12.3-inch plastic-OLED passenger display – the LG VS Pillar to Pillar (P2P) display provides the MBUX Hyperscreen with its unique look and unmatched usability. Powered by the company’s IVI system, the sharp, vibrant display enables the driver and front-seat passenger to view a range of key vehicle information and easily access a wide variety of content. A curved glass cover creates the appearance of a single, continuous screen and delivers a flowing form factor in keeping with the design of the vehicle interior.

Additionally, the MBUX Hyperscreen incorporates a driver monitoring system (DMS) with driving assistance and passenger safety functions, made possible by the new electronic architecture.




Photo Credit : The MBUX Hyperscreen and interior cockpit view of Mercedes-Benz EQS


At Display Week 2022 in San Jose, California, the MBUX Hyperscreen was honored with the Society for Information Display’s ‘Display Application of the Year’ award, a meaningful recognition that points to the growing sophistication and importance of the in-vehicle display segment. Covering the full width of the dashboard, the attention-grabbing screen is a key feature of the sleek, futuristic Mercedes-Benz EQS.

Through its unmatched innovation and dedication, LG has become an in-demand auto industry partner, taking an active role in shaping the future of mobility and elevating the on-road experience for drivers and passengers worldwide.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 58 users
D

Deleted member 118

Guest
  • Haha
Reactions: 2 users

Learning

Learning to the Top 🕵‍♂️
Yulia Sandamirskaya research scientist at Intel said..

“We’re incredibly excited to see how neuromorphic computing could offer a compelling alternative to traditional AI accelerators,” she said, “by significantly improving power and data efficiency for more complex AI use cases spanning data center to extreme edge applications.”

Whenever a connection with a large company is revealed, and even without, you're going to get the competition frothing at the mouth and saying "Hey, we can do that too, only better"..
Intel's playbook, if its history is any lesson, is saying this, knowing it's not true, while stuffing wads of money into the potential customer's top pocket..

You have to try don't you 😛..
Well DB,

I only know of one, and only one, AI accelerator that is commercially available with one-shot learning, power efficiency & ultra-low latency.🤔🤔🤔

😎😎😎
It's great to be a shareholder.
 
  • Like
  • Fire
Reactions: 9 users