BRN Discussion Ongoing

It does seem like a very clever move on Brainchip's part, almost a year ago, to bring Krishnamurthy V. Vemuru on board and allow him to complete his Canny edge detector spiking neural network research, funded by Riverside Research's Open Innovation Center. This will accelerate bringing to AKIDA technology the ability to identify boundaries for medical imaging, one of the many large addressable markets Brainchip is targeting. There are other vision applications, such as UAVs, where this has application; however, medical imaging is a standout application in my opinion. Being able to define the exact outline of a tumour or foreign body is invaluable to surgeons preparing to operate.

My opinion only DYOR
FF

AKIDA BALLISTA - (This paper has been previously referenced here on TSEx; this is an extract, as the full paper exceeds the site limits.):

Implementation of the Canny Edge Detector Using a Spiking Neural Network​

by
Krishnamurthy V. Vemuru


Riverside Research, 2900 Crystal Dr., Arlington, VA 22202, USA

Current Address: BrainChip Inc., 23041 Avenida de la Carlota, Laguna Hills, CA 92653, USA.
Future Internet 2022, 14(12), 371; https://doi.org/10.3390/fi14120371
Received: 6 June 2022 / Revised: 5 December 2022 / Accepted: 7 December 2022 / Published: 11 December 2022
(This article belongs to the Collection Computer Vision, Deep Learning and Machine Learning with Applications)

Abstract​

Edge detectors are widely used in computer vision applications to locate sharp intensity changes and find object boundaries in an image. The Canny edge detector is the most popular edge detector, and it uses a multi-step process, including the first step of noise reduction using a Gaussian kernel and a final step to remove the weak edges by the hysteresis threshold. In this work, a spike-based computing algorithm is presented as a neuromorphic analogue of the Canny edge detector, where the five steps of the conventional algorithm are processed using spikes. A spiking neural network layer consisting of a simplified version of a conductance-based Hodgkin–Huxley neuron as a building block is used to calculate the gradients. The effectiveness of the spiking neural-network-based algorithm is demonstrated on a variety of images, showing its successful adaptation of the principle of the Canny edge detector. These results demonstrate that the proposed algorithm performs as a complete spike domain implementation of the Canny edge detector.
Keywords:
edge detection; segmentation; spiking neural networks; bio-inspired neurons




1. Introduction​

Artificial neural networks (ANNs) have become an indispensable tool for implementing machine learning and computer vision algorithms in a variety of pattern recognition and knowledge discovery tasks for both commercial and defense interests. Recent progress in neural networks is driven by the increase in computing power in data centers, cloud computing platforms, and edge computing boards. In size, weight, and power (SWaP)-constrained applications, such as unmanned aerial vehicles (UAVs), augmented reality headsets, and smartphones, novel computing architectures are desirable. The state-of-the-art deep learning hardware platforms are often based on graphics processing units (GPUs), tensor processing units (TPUs) and field programmable gate arrays (FPGAs). The human brain is capable of performing more general and complex tasks at a minute fraction of the power required by deep learning hardware platforms. Spiking neurons are regarded as the building blocks of the neural networks in the brain. Moreover, research in neuroscience indicates that the spatiotemporal computing capabilities of spiking neurons play a role in the energy efficiency of the brain. In addition, spiking neurons leverage sparse time-based information encoding, event-triggered plasticity, and low-power inter-neuron signaling. In this context, neuromorphic computing hardware architectures and spike domain machine learning algorithms offer a low-power alternative to ANNs on von Neumann computing architectures. The availability of neuromorphic processors, such as IBM's TrueNorth [1] and Intel's Loihi [2], and event-domain neural processors, for example, BrainChip's Akida [3,4], which offers the flexibility to define both artificial neural network layers and spiking neuron layers, is motivating the research and development of new algorithms for edge computing. In the present work, we have investigated how one can program an algorithm for Canny-type edge detection using a spiking neural network and spike-based computing.

2. Background​

An edge detection algorithm is widely used in computer vision to locate object boundaries in images. An edge in an image shows a sharp change in image brightness, which is a result of a sharp change in pixel intensity data. An edge detector computes and identifies the pixels with sharp changes in intensity with respect to the intensity of neighboring pixels. There are several edge detection image processing algorithms.
The three stages in edge detection are image smoothing, detection, and edge localization. There are mainly three types of operators in edge detection: (i) gradient-based, (ii) Laplacian-based and (iii) Gaussian-based. The gradient-based edge detection method detects the edges by finding the maximum and the minimum in the first derivative of the image using a threshold. The Roberts edge detector [5], Sobel edge detector [6], and Prewitt edge detector [7] are examples of gradient-based edge detectors. These detectors use a 3 × 3 pattern grid. A detailed discussion of these edge detectors and a comparison of their advantages and disadvantages can be found in [8]. The Roberts edge detection method is built on the idea that a difference along any pair of mutually perpendicular directions can be used to calculate the gradient. The Sobel operator uses the convolution of the image with a small, separable, integer-valued filter in the horizontal and vertical directions for edge detection. The Prewitt edge detector uses two masks, each computing the derivative of the image in the x-direction and the y-direction. This detector is suitable for estimating the magnitude and orientation of the edge. Laplacian-based edge detectors find the edges by searching for zero crossings in the second derivative of the image. The Laplacian of the Gaussian algorithm applies a pre-smoothing step with a Gaussian low-pass filter to an image, followed by a second-order differential, i.e., the Laplacian, which finds the image edge. This method needs a discrete convolutional kernel that can approximate the second derivative for an image consisting of discrete pixels. The Marr–Hildreth edge detector is also based on the Laplacian of the Gaussian operator [9]. The Gabor filter edge detector [10] and Canny edge detector [11] are Gaussian-based edge detectors. The Gabor filter is a linear filter whose impulse response function is defined by the product of a harmonic function with a Gaussian function, and it is similar to the human perception system.
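For concreteness, here is a minimal NumPy/SciPy sketch of the gradient-operator kernels named above. The kernel values are the standard textbook ones; the helper function and its name are my own illustration, not code from the paper.

```python
# Illustrative versions of the gradient-operator kernels described above.
import numpy as np
from scipy.signal import convolve2d

ROBERTS_X = np.array([[1, 0],
                      [0, -1]])
ROBERTS_Y = np.array([[0, 1],
                      [-1, 0]])
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])
PREWITT_Y = PREWITT_X.T
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(img, kx, ky):
    """Approximate gradient magnitude of a grayscale image."""
    gx = convolve2d(img, kx, mode="same", boundary="symm")
    gy = convolve2d(img, ky, mode="same", boundary="symm")
    return np.hypot(gx, gy)
```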
The Canny edge detector provides excellent edge detection, as it meets the three criteria for edge detection [12]: (i) detection with low error rate, (ii) the edge point should localize in the center of the edge, and (iii) an edge should only be marked once and image noise should not create edges. Canny edge detection uses the calculus of variations to optimize a functional which is a sum of four exponential terms, which approximates the first derivative of a Gaussian. A Canny edge detector is a multi-step algorithm designed to detect the edges of any analyzed image. The steps of this process are: (1) removal of noise in the image using a Gaussian filter, (2) calculation of the gradient of the image pixels along x- and y-directions, (3) non-maximum suppression to thin out edges, (4) double-threshold filtering to detect strong, weak and non-relevant pixels, and (5) edge tracking by hysteresis to transform weaker pixels into stronger pixels if at least one of their neighbors is a stronger pixel. The Canny edge detection algorithm is highly cited (∼36,000 citations) and the most commonly used edge detection algorithm [11].
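The five steps translate directly into code. Below is a hedged, pure NumPy/SciPy sketch of the conventional (non-spiking) Canny pipeline for reference; sigma and the two thresholds are illustrative assumptions, and this is not the paper's spike-domain implementation.

```python
# A minimal sketch of the five conventional Canny steps described above.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, binary_dilation

def canny_sketch(img, sigma=1.0, low=0.1, high=0.3):
    # 1. Noise reduction with a Gaussian kernel.
    smoothed = gaussian_filter(img.astype(float), sigma)
    # 2. Intensity gradients along x and y.
    gx = sobel(smoothed, axis=1)
    gy = sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    mag = mag / mag.max() if mag.max() > 0 else mag
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    # 3. Non-maximum suppression: keep only local maxima along the
    #    gradient direction (quantized to 0/45/90/135 degrees).
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    # 4. Double threshold: classify strong and weak edge pixels.
    strong = nms >= high
    weak = (nms >= low) & ~strong
    # 5. Edge tracking by hysteresis: iteratively promote weak pixels
    #    that touch a strong pixel in their 8-neighbourhood.
    edges = strong.copy()
    while True:
        promoted = binary_dilation(edges) & weak & ~edges
        if not promoted.any():
            return edges
        edges |= promoted
```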
Edge detection is a primary step in identifying an object, and further research is strongly desirable to expand these methods to event-domain applications. The Canny edge detector performs better than the Roberts, Sobel and Prewitt edge detectors, but at a higher computational cost [8]. An alternate, spike-domain implementation of the Canny edge detection algorithm is attractive for edge computing and event-domain applications, where low-power and real-time solutions are sought, particularly in view of its tunable performance via the standard deviation of the Gaussian filter.

3. Related Work​

In the human vision system, the photoreceptors in the retina convert the light intensity into nerve signals. These signals are further processed and converted into spike trains by the ganglion cells in the retina. The spike trains travel along the optic nerve for further processing in the visual cortex. Neural networks that are inspired by the human vision system have been introduced to improve image processing techniques, such as edge detection [13]. Spiking neural networks, which are built on the concepts of spike encoding techniques [14], spiking neuron models [15] and spike-based learning rules [16], are biologically inspired in their mechanism of image processing. SNNs are gaining traction for biologically inspired computing and learning applications [17,18]. Wu et al. simulated a three-layer spiking neural network (SNN), consisting of a receptor layer, an intermediate layer with four filters, respectively, for up, down, left, and right directions, and an output layer with Hodgkin–Huxley-type neurons as the building blocks for edge detection [19].
Clogenson et al. demonstrated how an SNN with scalable, hexagonally shaped receptive fields performs edge detection with computational improvements over rectangular, pixel-based SNN approaches [20]. The digital images are converted into a hexagonal pixel representation before being processed by the SNN. A spiking neuron integrates the spikes from a group of afferent neurons in a receptive field. The network model used by the authors consists of an intermediate layer with four types of neurons corresponding to four different receptive fields, for up, down, right and left orientations. Yedjour et al. [21] demonstrated the basic task of contour detection using a spiking neural network based on the Hodgkin–Huxley neuron model. In this approach, the synaptic weights are determined by the Gabor function to describe the receptive-field behavior of simple cells in the visual cortex. Vemuru [22] reported the design of an SNN edge detector with biologically inspired neurons and demonstrated that it detects edges in simulated low-contrast images. These studies focused on defining SNNs using an array of Gabor filter receptive fields in the edge detector. In view of the success of SNNs in edge detection, it is desirable to develop a spike domain implementation of the Canny edge detector algorithm, because it has the potential to offer a high-performance alternative for edge detection.
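For readers unfamiliar with the neuron model named above, here is a compact forward-Euler simulation of the classic conductance-based Hodgkin–Huxley neuron. The parameters are the standard 1952 squid-axon textbook values; this is a generic illustration, not necessarily the simplified variant used in the paper.

```python
# Forward-Euler simulation of the classic Hodgkin-Huxley neuron.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387          # reversal potentials, mV

# Voltage-dependent rate functions (exact V = -40 or -55 gives 0/0;
# ignored in this sketch since Euler steps almost never land there).
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I=10.0, T=50.0, dt=0.01):
    """Return the membrane-voltage trace (mV) under constant current I."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting-state initial values
    trace = []
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)    # sodium current
        IK = gK * n**4 * (V - EK)           # potassium current
        IL = gL * (V - EL)                  # leak current
        V += dt * (I - INa - IK - IL) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return np.array(trace)
```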

5. Results and Discussion​

Figure 2 displays the original image (No. 61060) from BSD500 dataset [32,33] (license: Not found) and the intermediate output images of the spiking detector network after the Gaussian kernel, gradient calculation, non-maximum suppression, double-threshold filtering, and edge-tracking by hysteresis. The intermediate features show the spike domain implementation is effective in performing all five steps involved in the conventional Canny edge detection algorithm. The edge map resulting from the spike-based computation, i.e., the output of the network after the edge-tracking hysteresis step, can be referred to as the neuromorphic Canny edge detector or SNN-based Canny edge detector. We find that the resolution of the images from BSD500 dataset is sufficient to evaluate the qualitative and quantitative performance of edge detection. In Figure 3, we compare the neuromorphic Canny detector with the Sobel detector and the conventional Canny detector for a set of four images and their ground truth edges from the BSD500 dataset [32,33]. We find the neuromorphic Canny detector detects edges similarly to the edges generated by both the conventional detectors. The spike-based detection reported here results in relatively more highlighted edges in the objects while retaining some of the structural features that seem to have been lost in the conventional edge detectors. This can be attributed to the difference in the range of thresholds used in the spike domain implementation compared to the conventional Canny edge detector. With an implementation on a low size, weight and power (SWaP) hardware, for example, by emulating spikes on a field programmable gate array (FPGA) [34] or edge computing devices, such as NVIDIA Jetson TX2, together with a neuromorphic camera, the spike domain image processing algorithm will be attractive for new applications, for example, in handheld computer vision systems and for low-Earth satellite-based object detection.

Figure 2. The original and the output images from all 5 stages of neuromorphic Canny edge detection.

Figure 3. A comparison of the ground truth with the results from Canny edge detector and the SNN analogue of Canny edge detector for selected images from BSD500 dataset.
In image processing, it is common to select a few images and demonstrate how the method works. Statistics on training sets and test sets usually apply to machine learning experiments, which is not the case in the present study. All the images are pictures of natural scenes with edges, and none of the examples are synthesized. From visual inspection, the SNN edge detector appears to render wider edges and detect more background information. It is possible that the difference in the SNN Canny edge map compared to the conventional Canny edge map is also related to the choice of the threshold used or the smoothing parameters. The SNN edge maps are realized in a narrower threshold range, and this makes it difficult to perform an ablation study, which is typically done in machine learning experiments. We would like to note that the primary goal of the present work is to demonstrate a spike-based implementation of Canny edge detection, not to demonstrate superior computational efficiency. Currently, there is no hardware that can implement the exact version of the neuron model used in the present work, so the computational time for the targeted neuromorphic domain cannot be evaluated. The context of this work is that it addresses the question of whether it is possible to compute the steps of the Canny edge algorithm exclusively using spikes.

Figure 4. A comparison of the ground truth with the results from Canny edge detector and the SNN analogue of Canny edge detector for selected images from BSD500 dataset. For these images, the metrics for SNN Canny edge detector come out lower compared to the Canny edge detector.
Table 1. Comparison of performance ratio (PR) and F1-score for selected images.
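As a rough sketch of how such a table could be produced, the F1-score for a predicted edge map against ground truth can be computed as below. Note this is my own illustration: the paper's exact matching protocol (and its performance-ratio definition) may differ, and BSD500 benchmarks usually allow a small distance tolerance when matching edge pixels, which this sketch omits.

```python
# Hedged sketch of scoring a predicted edge map against ground truth.
import numpy as np

def f1_score(pred, gt):
    """pred, gt: boolean edge maps of identical shape."""
    tp = np.logical_and(pred, gt).sum()          # true-positive edge pixels
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```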
Object detection in infrared images is another key application for edge detectors, for example in night vision gadgets or for autonomous driving. Novel edge detectors, especially those that can extract edges in low-resolution infrared images with low-SWaP requirements for hardware implementation, are sought across commercial and defense applications. In this context, we evaluated our SNN edge detector with a few infrared images from the Thermal Road Dataset [36] (license: Not found). Figure 5 shows a comparison of edge detection with SNN-based Canny detector, conventional Canny detector and Sobel detector. Ground truth edge maps are not available for this dataset to perform a quantitative comparison similar to the one presented for the RGB images in Table 1. A visual comparison of the results from the three edge detectors in Figure 5 indicates that the SNN-based Canny edge detector is able to generate edge maps very similar to the ones generated by the conventional Canny edge detector.

Figure 5. A comparison of edge detection by Sobel, Canny, and SNN Canny edge detectors for images from thermal road dataset.
Biomedical image processing is a field with an increasing demand for advanced algorithms that automate and accelerate the process of segmentation. In a recent review, Wallner et al. reported a multi-platform performance evaluation of open-source segmentation algorithms, including the Canny-edge-based segmentation method, for cranio-maxillofacial surgery [37]. In this context, it is desirable to test newer algorithms, such as the one developed here using spiking neurons, on medical images. To this end, we performed an edge detection experiment with a few representative medical images from the computed tomography (CT) emphysema dataset [38] (this database can be used free of charge for research and educational purposes). This dataset consists of 115 high-resolution CT slices as well as 168 square patches that are manually annotated in a subset of the slices. Figure 6 illustrates a comparison of the three edge detectors, Sobel, Canny and SNN-based Canny, for a few example images from the CT emphysema dataset. Ground truth edge maps are not available for this dataset to perform a quantitative comparison. A visual comparison of the edges generated by the three edge detectors, presented in Figure 6, shows that the SNN-based Canny edge detector is competitive with the other two and offers an algorithmically neuromorphic alternative to the conventional Canny edge detector.

Figure 6. A comparison of edge detection by Sobel, Canny, and SNN Canny edge detectors for medical images from CT emphysema dataset.

6. Conclusions​

In conclusion, we present a spiking neural network (SNN) implementation of the Canny edge detector as its neuromorphic analogue by introducing algorithms for spike-based computation in the five steps of the conventional algorithm, with the conductance-based Hodgkin–Huxley neuron as the building block. Edge detection examples are presented for RGB and infrared images with a variety of objects. A quantitative comparison of the edge maps from the SNN-based Canny detector, the conventional Canny detector and a Sobel detector, using the F1-score as a metric, shows that the neuromorphic implementation of the Canny edge detector achieves better performance. The SNN architecture of the Canny edge detector also offers promise for image processing and object recognition applications in the infrared domain. The SNN Canny edge detector is also evaluated with medical images, and the edge maps compare well with the edges generated by the conventional Canny detector. Future work will focus on the implementation of the algorithm on an FPGA or on a neuromorphic chip for hardware acceleration, and on testing in an infrared object detection task, potentially with edge maps as features, together with a pre-processing layer to remove any distortions, enhance contrast, remove blur, etc., and a spiking neuron layer as a final layer to introduce a machine learning component. An extension of the SNN architecture of the Canny edge detector with additional processing layers for object detection in LiDAR point clouds would be another interesting new direction of research [39].

Funding​

This work was internally funded by Riverside Research’s Open Innovation Center.

 
Reactions: 40 users

Dhm

Regular

I attempted to register but suddenly realised that the Brainchip keynote speech is already over. As I type, the time in San Jose is already 2.46pm and the speech was delivered around 10.10am local time. I wonder if there will be a video available soon.
 
Reactions: 10 users
Well how about Brainchip employs someone who can actually get the revenue to kick in. Silicon Valley is crawling with super-qualified out-of-work technicians ATM - there have been colossal, mass layoffs in the industry. I'm not convinced that any current additions have been attracted by AKIDA any more than their need to put food on the table. I just think that people here put way too much weight on the addition of new employees but far too little emphasis on the fact that we have made very little revenue-producing IP sales progress. Perhaps the next 4C will prove me wrong; I certainly hope so, but how many consecutive 4C's without self-sustaining revenue will be acceptable? Oh but you don't have to consider that difficult question because we've just paid for another super-qualified technician, who we might not actually need.
Firstly this appointment was almost a year ago if you read the LinkedIn document.

Secondly the most recent appointments announced by Brainchip have been the sales and marketing appointments in Japan, Korea and Germany.

Thirdly, you and others have many times expressed fears about Brainchip being overtaken by others who have caught up to the technology lead created by Peter van der Made and the other technology giants working at Brainchip. I assume from your comments that you no longer consider it important to maintain the technology lead; if so, would you care to expand on your reasons?

Fourthly, as Brainchip has about two years of cash runway, the answer is eight 4C's.

My opinion only based on Doing My Own Research
FF

AKIDA BALLISTA.
 
Reactions: 53 users

skutza

Regular
Here are some simple questions that you can ask yourself regarding competition:

1. Where are Brainchip now, commercial?
2. Is anyone else commercial in this space?
3. How long has it taken BRN to sell their IP?
4. Will any competition be able to just magically appear and then have their tech validated and then commercial immediately?
5. Simple math: look at BRN's pathway, look at when someone finally announces they are commercial, and then add 1-3 years for them to become commercial.

So yeah, it sounds scary that someone might catch up, but catch up to what? Catch up to announcing they have a commercial chip? Well done to them, then add 3 years for anyone to start using it, and where will BRN be by then? It's simple when you look at it emotionlessly and logically...


Oh sorry had to edit this, IMO of course :)
 
Reactions: 54 users

equanimous

Norse clairvoyant shapeshifter goddess
With any new technology we need an ecosystem around it, and that's exactly what the BRN team are building, which is essential for such disruptive and ubiquitous products.

  • Apple can build the ecosystem because it does not design around a single device but around said ecosystem. What makes this unique ecosystem successful is how well the devices work together and the walls the company has built around products and services.
  • The Arm Automotive Ecosystem connects you to the right partners, enabling you to build the next generation of efficient, scalable autonomous solutions. Find Automotive Partners. IoT Ecosystem. Explore Arm IoT Ecosystem partners who can help transform an idea into a secure, market-leading device.
  • Amazon's ecosystem of products and services is vast; it comprises retail, transportation, B2B distribution, payments, entertainment, cloud computing, and other segments
  • BrainChip expands its Ecosystem with Teksun to Bring the Akida Processor to Next-Generation AIoT Devices​

 
Reactions: 34 users

marsch85

Regular
It's a marathon not a sprint. Hiring the best tech / engineering talent out there is critical as the Tech is core to everything the business is doing and setting out to do. And it will need to keep evolving to maintain/extend our lead and capitalise on new opportunities over the next decades. The likes of NVIDIA and ARM today have changed substantially and shifted focus from where they started a few decades ago.

Commercialising technology is complex and takes time. I've worked for multiple B2B tech / software businesses, and when hiring new sales reps it can take up to a year for them to close their first deal... and then implementation starts... It takes even longer when you have to align with 3-7 year product design cycles. Just look at ARM's journey to where they are today. It wasn't an overnight success story; it took them three decades.

BRN has everything to achieve similar results. The Edge AI market is about to explode, we have the tech (as confirmed by many of our very excited partners), we are successfully building our ecosystem (multiple partner announcements per month), and I believe we have the team to bring this home.


 
Reactions: 29 users

Foxdog

Regular
So why the hell are people posting stuff about appointments from a year ago FFS? Seriously
 
Reactions: 7 users
That is a very easy question to answer because it is relevant to Brainchip whereas random opinions and swearing in abbreviated form are not.

To express an opinion it helps to have done your own research. Someone who has done their own research or simply read all the research done by others and posted here would understand all the dots that come together by having knowledge of this Brainchip employee.

As you don’t understand you probably need to DYOR or go back and read the research generously shared here by those that do.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 53 users

Adam82

Member
And that's why it's important to do your own research and have a plan that suits you….. 🙄
 
Reactions: 17 users

Learning

Learning to the Top 🕵‍♂️
How to Build Open-Source Neuromorphic Hardware and Algorithms
The brain is the perfect place to look for inspiration to develop more efficient neural networks. While the computational cost of deep learning exceeds millions of dollars to train large-scale models, our brains are somehow equipped to process an abundance of signals from our sensory periphery within a power budget of approximately 10-20 watts. The brain’s incredible efficiency can be attributed to how biological neurons encode data in the time domain as spiking action potentials.

This tutorial will take a hands-on approach to learning how to train spiking neural networks (SNNs), and designing neuromorphic accelerators that can process these models. With the advent of open-sourced neuromorphic training libraries and electronic design automation tools, we will conduct hands-on coding sessions to train SNNs, and attendees will subsequently design a lightweight neuromorphic accelerator in the SKY130 process. Participants will be equipped with practical skills that apply principles of neuroscience to deep learning and hardware acceleration in building the next generation of machine intelligence.
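As a flavour of what the hands-on snnTorch sessions involve, here is a minimal example of driving a leaky integrate-and-fire (LIF) layer over discrete time steps with snnTorch. The layer sizes, beta and step count are my own illustrative assumptions, not the tutorial's actual material.

```python
# Minimal snnTorch sketch: unrolling a LIF layer over time.
import torch
import torch.nn as nn
import snntorch as snn

fc = nn.Linear(784, 100)          # input currents from a dense layer
lif = snn.Leaky(beta=0.9)         # membrane decay factor per time step

x = torch.rand(32, 784)           # a batch of (rate-coded) inputs
mem = lif.init_leaky()            # initial membrane potential
spikes = []
for _ in range(25):               # unroll the network over 25 time steps
    spk, mem = lif(fc(x), mem)    # emit spikes, carry membrane state
    spikes.append(spk)
counts = torch.stack(spikes).sum(dim=0)   # spike counts per output neuron
```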

Jason Eshraghian, UC Santa Cruz, USA

(second presenter photo), Delft University of Technology, Netherlands


Learning🏖
 
Reactions: 14 users
Info

(image attachments)
 
Reactions: 51 users

Boab

I wish I could paint like Vincent
There's a familiar face amongst the speakers
(photo: Jason Eshraghian)

Biography
Jason K. Eshraghian is an Assistant Professor at the Department of Electrical and Computer Engineering at UC Santa Cruz, CA, USA. Prior to that, he was a Post-Doctoral Researcher at the Department of Electrical Engineering and Computer Science, University of Michigan in Ann Arbor. He received the Bachelor of Engineering (Electrical and Electronic) and the Bachelor of Laws degrees from The University of Western Australia, WA, Australia in 2016, where he also completed his Ph.D. degree.

He is the developer of snnTorch, a high-profile Python library used to train and model brain-inspired spiking neural networks, which has amassed over 60,000 downloads since its release. It has been used at Meta, the Space Communications and Navigation project arm of NASA, and has been integrated for native acceleration with GraphCore's Intelligent Processing Units.

Professor Eshraghian was awarded the 2019 IEEE VLSI Best Paper Award, the Best Paper Award at the 2019 IEEE Artificial Intelligence CAS Conference, and the Best Live Demonstration Award at 2020 IEEE ICECS for his work on neuromorphic vision and in-memory computing using RRAM. He currently serves as the secretary-elect of the IEEE Neural Systems and Applications Committee, and was a recipient of the Fulbright Future Fellowship (Australian-America Fulbright Commission), the Forrest Research Fellowship (Forrest Research Foundation), and the Endeavour Fellowship (Australian Government).
 
Reactions: 23 users

Learning

Learning to the Top 🕵‍♂️
Not related to Brainchip but a good read regarding AI chip from Synopsys.


According to Allied Market Research, the global artificial intelligence (AI) chip market is projected to reach $263.6 billion by 2031. The AI chip market is vast and can be segmented in a variety of different ways, including chip type, processing type, technology, application, industry vertical, and more. However, the two main areas where AI chips are being used are at the edge (such as the chips that power your phone and smartwatch) and in data centers (for deep learning inference and training).

No matter the application, however, all AI chips can be defined as integrated circuits (ICs) that have been engineered to run machine learning workloads and may consist of FPGAs, GPUs, or custom-built ASIC AI accelerators. They work very much like how our human brains operate and process decisions and tasks in our complicated and fast-moving world. The true differentiator between a traditional chip and an AI chip is how much and what type of data it can process and how many calculations it can do at the same time. At the same time, new software AI algorithmic breakthroughs are driving new AI chip architectures to enable efficient deep learning computation.

Read on to learn more about the unique demands of AI, the many benefits of an AI chip architecture, and finally the applications and future of the AI chip architecture.

The Distinct Requirements of AI Chips
The AI workload is so strenuous and demanding that the industry couldn’t efficiently and cost-effectively design AI chips before the 2010s due to the compute power it required—orders of magnitude more than traditional workloads. AI requires massive parallelism of multiply-accumulate functions such as dot product functions. Traditional GPUs were able to do parallelism in a similar way for graphics, so they were re-used for AI applications.
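A toy illustration of that multiply-accumulate parallelism (all sizes below are arbitrary): each dense-layer output is an independent dot product, which is exactly what AI hardware runs in parallel.

```python
# Toy MAC-parallelism example: a dense layer is many independent dot products.
import numpy as np

x = np.random.rand(256)       # input activations
W = np.random.rand(128, 256)  # weights: 128 independent dot products
y = W @ x                     # 128 x 256 = 32,768 multiply-accumulates
```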

The optimization we’ve seen in the last decade is drastic. AI requires a chip architecture with the right processors, arrays of memories, robust security, and reliable real-time data connectivity between sensors. Ultimately, the best AI chip architecture is the one that condenses the most compute elements and memory into a single chip. Today, we’re moving into multiple chip systems for AI as well since we are reaching the limits of what we can do on one chip.

Chip designers need to take into account parameters called weights and activations as they design for the maximum size of the activation value. Looking ahead, being able to take into account both software and hardware design for AI is extremely important in order to optimize AI chip architecture for greater efficiency.

The Benefits of AI Chip Architecture
There’s no doubt that we are in the renaissance of AI. Now that we are overcoming the obstacles of designing chips that can handle the AI workload, there are many innovative companies that are experts in the field and designing better AI chips to do things that would have seemed very much out of reach a decade ago.

As you move down process nodes, AI chip designs can result in 15 to 20% less clocking speed and 15 to 30% more density, which allows designers to fit more compute elements on a chip. They also increase memory components that allow AI technology to be trained in minutes vs. hours, which translates into substantial savings. This is especially true when companies are renting space from an online data center to design AI chips, but even those using in-house resources can benefit by conducting trial and error much more effectively.

We are now at the point where AI itself is being used to design new AI chip architectures and calculate new optimization paths to optimize power, performance, and area (PPA) based on big data from many different industries and applications.

AI Chip Architecture Applications and the Future Ahead​

AI is all around us quite literally. AI processors are being put into almost every type of chip, from the smallest IoT chips to the largest servers, data centers, and graphic accelerators. The industries that require higher performance will of course utilize AI chip architecture more, but as AI chips become cheaper to produce, we will begin to see AI chip architecture in places like IoT to optimize power and other types of optimizations that we may not even know are possible yet.

It’s an exciting time for AI chip architecture. Synopsys predicts that we’ll continue to see next-generation process nodes adopted aggressively because of the performance needs. Additionally, there’s already much exploration around different types of memory as well as different types of processor technologies and the software components that go along with each of these.

In terms of memory, chip designers are beginning to put memory right next to or even within the actual computing elements of the hardware to make processing time much faster. Additionally, software is driving the hardware, meaning that software AI models such as new neural networks are requiring new AI chip architectures. Proven, real-time interfaces deliver the data connectivity required with high speed and low latency, while security protects the overall systems and their data.

Finally, we’ll see photonics and multi-die systems come more into play for new AI chip architectures to overcome some of the AI chip bottlenecks. Photonics provides a much more power-efficient way to do computing and multi-die systems (which involve the heterogeneous integration of dies, often with memory stacked directly on top of compute boards) can also improve performance as the possible connection speed between different processing elements and between processing and memory units increases.

One thing is for sure: Innovations in AI chip architecture will continue to abound, and Synopsys will have a front-row seat and a hand in them as we help our customers design next-generation AI chips in an array of industries.


Learning 🏖
 
Reactions: 18 users

gex

Regular


I might be slow, but I thought the cortical side of the tech was still being evaluated or researched?

 
Reactions: 25 users

Foxdog

Regular
Fair enough. Cheers FF
 
Reactions: 8 users

HopalongPetrovski

I'm Spartacus!
Hang in there mate. It can be a tough road at times and people have all sorts of strife occurring from time to time.
We all wish it would happen already/would have already happened, and when pressure is applied it can exacerbate personal situations.
Many fine/ clever/ deserving people here who are rooting for Brainchip so at least you're in good company.
 
Reactions: 40 users

Diogenese

Top 20
It's interesting that GM are awarding Valeo for service delivery in the ADAS arena:



We've seen rumors that Mercedes is planning to switch to Luminar's foveated LiDAR, and, Mercedes' expressed preference for component standardization aside, we do not have any proof that Luminar will use Akida, so this GM award to Valeo for ADAS is very encouraging.

We do know that BrainChip have been working with Valeo in a Joint Development on autonomous vehicles since mid-2020.

https://smallcaps.com.au/brainchip-joint-development-agreement-akida-neuromorphic-chip-valeo/

BrainChip signs joint development agreement for Akida neuromorphic chip with Valeo​

By
George Tchetvertakov
-
June 9, 2020

"Artificial intelligence device company BrainChip Holdings (ASX: BRN) has taken an affirmative step towards integrating its Akida neuromorphic chip into autonomous vehicles after signing a binding joint development agreement with European automotive supplier Valeo Corporation.

The agreement means both companies will collaborate to develop a new wave of tech solutions based on artificial intelligence (AI) and reduced power consumption within the overarching theme of miniaturisation that’s taking the tech industry by storm
."
...
"This latest agreement between BrainChip and Valeo has been hailed as a validation of the company’s Akida device by a Tier-1 sensor supplier and “considered to be a significant development”, according to BrainChip.

In a statement to the market this morning, BrainChip said Valeo will utilise Akida and collaborate on the development of neural network processing solutions, for integration in autonomous vehicles (AVs).

The terms of the deal stipulate that both companies must reach specific performance milestones, with BrainChip stating it expects to receive payments to cover its expenses, subject to the completion of, as yet, undisclosed milestones
."

A joint development agreement is a different beast from a licence agreement. There is no licence fee. Usually the JD members split the income in proportion to their contribution to the project, so income is still dependent on the number of units sold, but it can have the potential to be significantly greater than a standard royalty fee.
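As a purely hypothetical illustration of how a contribution-proportional JD split can out-earn a conventional per-unit royalty (every number below is invented, not from any BrainChip agreement):

```python
# Purely hypothetical numbers to illustrate the JD-vs-royalty point above.
units = 1_000_000           # units sold per year (invented)
unit_price = 10.00          # $ selling price per unit (invented)
unit_profit = 2.00          # $ profit per unit in the joint product (invented)

jd_share = 0.30             # assumed contribution-proportional JD split
royalty_rate = 0.05         # assumed conventional royalty on sales

jd_income = units * unit_profit * jd_share          # $600,000
royalty_income = units * unit_price * royalty_rate  # $500,000
```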

https://www.valeo.com/en/valeo-scala-lidar/

The Honda Legend, which was the first vehicle in the world to be approved for SAE level 3 automated driving, uses Valeo LiDAR scanners, two frontal cameras and a Valeo data fusion controller. The Mercedes-Benz Class S, the second level 3-certified car, is also equipped with a laser LiDAR technology, Valeo SCALA® Gen2.

Valeo’s third-generation laser LiDAR technology, which is scheduled to hit the market in 2024, will take autonomous driving even further, making it possible to delegate driving to the vehicle in many situations, including at speeds of up to 130 km/h on the highway. Even at high speeds on the highway, autonomous vehicles equipped with this system are able to manage emergency situation autonomously
."

https://www.repairerdrivennews.com/2023/03/16/gms-new-adas-boasts-hands-free-technology/

GM’s new ADAS boasts hands-free technology​

By Michelle Thompson on March 16, 2023
Announcements | Market Trends | Technology

General Motors (GM) is rolling out a new advanced driver assistance program (ADAS) that it says will enable hands-free driving 95% of the time.

The automaker shared details about its next-generation system, Ultra Cruise, this week and said it will first be launched on the Cadillac Celestiq, a hand-built electric vehicle expected to begin production in December.

...

The OEM said Ultra Cruise-equipped vehicles will contain more than 20 sensors, with a driver attention system in place to ensure the vehicle’s pilot is alert.

“The destination-to-destination hands-free system will use more than just cameras to ‘see’ the world,” GM said in a press release. “Ultra Cruise uses a blend of cameras, short- and long-range radars, LiDAR behind the windshield, an all-new computing system and a driver attention system to monitor the driver’s head position and/or eyes in relation to the road to help ensure driver attention. These systems work together through ‘sensor fusion’ to provide Ultra Cruise with a confident, 360-degree, three-dimensional representation of the vehicle’s surroundings
.”
...
A spokeswoman told Repairer Driven News that Ultra Cruise is a Level 2 system.
 
Reactions: 43 users

I might be slow but i thought the cortical side of the tech is still being evaluated or researched?

Yes, this caught me out at first, then I remembered 'cortical' is to do with the eye. Prophesee states they have taken inspiration from the eye in developing their vision sensor.

So if I read this as AKIDA processing a Prophesee vision sensor better than anyone else in the world ever (my exaggeration), or at least better than anyone Prophesee has trialled or commercially partnered with to date, I think it makes sense.

I feel very confident that if Brainchip were actually beyond the design stage with its cortical column and sending it off to engineering, such an event would at least rate a Tweet, or, if I was in charge, a specially made ASX announcement illuminated in orange neon tubing.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 33 users

Esq.111

Fascinatingly Intuitive.
Good Afternoon Chippers,

Not much to report my end...

Presently having fun watching the below company.

Relating to a completely different company and industry...
LTR: Liontown Resources Ltd

Their Board of directors rejected a buyout offer...

One can only wonder what will unfold for us, BRN, once serious deals are signed and announced on the ASX and royalty streams start to flow.

Regards,
Esq.
 
Reactions: 33 users
Hi @Learning
I read this at the optometrist waiting for my six-monthly check-up. When I was called in Harold said "You seem unusually chirpy today."🤣😂🤣

I am pretty sure the first line had something to do with it:

According to Allied Market Research, the global artificial intelligence (AI) chip market is projected to reach $263.6 billion by 2031.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 37 users