BRN Discussion Ongoing

charles2

Regular
  • Like
  • Haha
  • Wow
Reactions: 11 users
Was just reading an article on some old news on the AKD1500.

Read it then had a look at the original BRN release.


The article's author made a statement that I didn't see in the original BRN release, but maybe it was mentioned elsewhere and I probs missed it?

If I didn't miss it and it's new, then it's nice to know :)

Highlighted towards the end of the write-up.



BRAINCHIP ANNOUNCES SUCCESSFULLY TAPED OUT AKD1500 CHIP ON GLOBALFOUNDRIES’ 22NM FD-SOI PROCESS

Posted on 4 March, 2023 by Abhishek Jadhav
BrainChip Tapes Its Akida AKD1500
BrainChip, a leading provider of ultra-low power edge AI technology, has successfully taped out its AKD1500 chip on GlobalFoundries’ 22nm FD-SOI process. This marks an important milestone for BrainChip as it proves the portability of its technology and enables the company to take advantage of the benefits of GlobalFoundries’ process.
The AKD1500 is BrainChip’s flagship product designed to deliver AI processing capabilities at the edge. The chip features BrainChip’s patented spiking neural network (SNN) technology, which is capable of learning, recognizing, and processing patterns in real time. The AKD1500 is ideal for various applications, including advanced driver assistance systems (ADAS), surveillance, and autonomous robotics.
The AKD1500 combines event-based Akida AI IP with GlobalFoundries’ low-leakage FDX process technology platform to deliver always-on, at-sensor applications with low power consumption, suitable for industrial automation and the automotive industry.
GlobalFoundries’ 22nm FD-SOI process is an ideal choice for BrainChip as it provides several benefits, including low power consumption, high performance, and excellent reliability. The process is also well-suited for edge AI applications as it offers a compact form factor and low cost.
“This is an important validation milestone in the continuing innovation of our event-based, neuromorphic Akida IP. Even though it is fully digital and portable across foundries, the AKD1500 reference chip, using GlobalFoundries’ very low-leakage FD-SOI platform, showcases the possibilities for intelligent sensors in edge AI.”
The AKD1500 chip is expected to be available soon and will be a significant addition to BrainChip’s portfolio of edge AI solutions. The company has already received strong interest from several customers looking to use AKD1500 in their products.
BrainChip’s successful tape-out of the AKD1500 chip in GlobalFoundries’ 22nm FD-SOI process is an important milestone for the company. This will help BrainChip to deliver more innovative and advanced AI solutions to its customers in the future.
 
  • Like
  • Fire
  • Love
Reactions: 67 users

Sirod69

bavarian girl ;-)
BrainChip
2 hrs

Edge Impulse's Edge ML Series is starting soon. Don't miss the BrainChip keynote - “Edge AI: Ready When You Are!” - Register here: https://lnkd.in/gxbB96Mp


The Edge ML Series is coming to San Jose on March 27th! Mark your calendars and request your invite now.

This exclusive, in-person, one-day event will explore the benefits of edge machine learning, ways to differentiate your products with embedded intelligence, and how to deliver value in less time while lowering operational cost using AI tools like Edge Impulse. Featuring:

• Keynotes from industry leaders

• Hands-on workshops

• Customer stories

• Insights on deploying ML solutions at scale

• Demos

• Networking opportunities

Agenda (PT)
8:30–9:00 Registration and coffee

9:00–10:40 Keynotes


  • Welcome and housekeeping (10 min)
  • “Demystifying Edge ML” — Edge Impulse keynote (30 min)
    The fast-evolving ecosystem of edge ML includes silicon, sensors, connectivity, and more. We'll guide you through the world of edge ML and uncover its potential for your business.
  • "Advancing Intelligence at the Edge with Texas Instruments" — Texas Instruments keynote (30 min)
  • “Edge AI: Ready When You Are!” — BrainChip keynote (30 min)
    AI promises to provide new capabilities, efficiencies, and economic growth, all with the potential for improving the human condition. There are numerous challenges to delivering on this, in particular the need for performant, secure edge AI. We'll describe the readiness of the industry to deliver edge AI, and the path to the imminent transition.
10:40–11:00 Coffee break (demo area)

11:00–12:00 Seminars


  • Edge Impulse — Use case (20 min)
  • “Why AI Acceleration Matters for Edge Devices” — Alif (20 min)
    Three key parameters to consider when selecting a hardware platform for AI-enabled edge ML are performance, power consumption, and price. This talk will help you maximize your project's chance of success by showing you how to maximize the performance parameter while keeping the others in check.
  • “The Advantages of Nordic’s Ultra-Low-Power Wireless Solutions and Machine Learning” — Nordic Semiconductor (20 min)
    Nordic Semiconductor creates low-power wireless devices that utilize popular protocols including Bluetooth Low Energy, Wi-Fi 6, and cellular IoT. Optimizing the radio transmitter can only save power up to a point; implementing ML on the processor to detect and send only relevant data can significantly further reduce power requirements on SoCs and SiPs. Discover how Nordic's wireless technology can leverage ML to achieve ultra-low power — without needing a dedicated machine learning core.
12:00–13:00 Lunch (demo area)

  • Demos from TI, BrainChip, Alif, Nordic, Sony, MemryX, and NovTech
13:00–15:30 Workshops

  • "Hands-On With the TDA4VM" — Texas Instruments workshop
    (75 min)
  • Break (15 min)
  • “Enabling the Age of Intelligent Machines” — Alif workshop
    (60 min)
    Alif Semiconductor and Edge Impulse will demonstrate how anyone can create, train, tune, and deploy advanced machine learning models on the next-generation Ensemble family of AI-accelerated microcontrollers.
15:30–16:00 Wrap and close


 
  • Like
  • Fire
  • Love
Reactions: 42 users

cosors

👀
Just a snippet.
She has an interesting job:

How AI research is creating new luxury in the vehicle.
"Hey Mercedes, drive me to work" - With artificial intelligence (AI), we will interact with our car intuitively in the future. Even small talk will be possible then," says Dr Teresa Botschen. Together with her colleagues in the AI Research Team at Mercedes-Benz, the PhD computer scientist and AI expert is working on making vehicles smarter - for even more comfort and the perfect driving experience. In her interview, she tells us how she bridges the gap between research and the vehicle, and how her team can implement ideas together.

Dr. Botschen, will vehicles soon be as smart as humans?

That depends on your definition of "smart" (laughs). Vehicles will certainly be able to react much more flexibly to different situations in the future. Machine learning makes this possible. The most prominent example of this is currently the development of automated vehicles. However, artificial intelligence offers many more fields of application. At Mercedes-Benz we use technology in a wide variety of areas. One field that I find particularly exciting is driver-vehicle communication, for example the question: How will we communicate and interact with our vehicles in the future? The potential here is huge.

At Mercedes-Benz for AI: In the AI Research Team, Dr. Teresa Botschen supports users from all over the Group in using machine learning.
In the AI Research Team you are the expert for Natural Language Processing. What exactly is this about?

Put simply, I teach our cars to understand human language better. A major challenge for machines is the ambiguity of our language. This often leads to misunderstandings in communication between people. A simple example: In the distant future, when you tell your self-driving vehicle to "park at the bank", it could be taken to mean park on the river bank or near the bank branch around the corner. It depends on the context. In our team, we are developing methods to enable vehicles to make situational decisions, for example by using additional cameras and combining voice and image processing in a multimodal way.

This means that in the future we will be able to have a proper conversation with our vehicle?

Almost (laughs). Imagine you are on your way to work in the morning and have your car read out the latest news to you, you comment on the news and the car searches for corresponding further information pertaining to your comment. This can already come very close to a real conversation, though we place great emphasis on the transparent and responsible use of AI. Our goal is not to pretend that the driver is interacting with a human being, but to support the driver in the car in the best possible way. But human-vehicle interaction will certainly become much more intuitive, and at the same time the vehicles will become more proactive.

What do you mean by that?

If you give your automated vehicle the order "Drive me to Stuttgart", the system could recognise who is currently on board and make individual suggestions for the route. For example, you get into the car early in the morning, carrying your laptop bag. The algorithm deduces that you are on your way to work and offers you a stop at your favourite bakery. And if you get in with your family in the afternoon, perhaps a stopover at a nice area with a playground.

One field that I find particularly exciting is driver-vehicle communication, for example how will we communicate with our vehicles in the future? There is huge potential for AI here.
Well connected to the next innovation: When developing smart AI solutions, Dr. Teresa Botschen is in contact with start-ups from the community and research.
And in the AI Research Team, you develop solutions to implement such technologies at Mercedes-Benz?

Yes, but language processing and driver-vehicle communication are only part of our projects. With our team, we support specialist departments across the whole Group in using various machine learning technologies for their processes - whether as a development tool or in the vehicle, for example for a Digital Luxury Experience, which are applications that ensure the perfect driving experience in our vehicles. Every project is different. There is no such thing as "one size fits all". That's what makes my work so exciting.

How would you describe your team?

Quite multifaceted and interdisciplinary. We are colleagues from very different disciplines - from physics and mathematics to electrical engineering and computer science. This is also important when we work on new, innovative topics. Depending on the expertise required for a project, we assemble a project group with two or three colleagues. Of course, we always work very closely with the development departments, but also with Legal. And for projects involving language and text processing, I have established a Group-wide NLP (Natural Language Processing) community together with a core team.

Simply try out new ideas: At Mercedes-Benz, Dr. Teresa Botschen has a lot of freedom to advance research on AI - and to put it directly into practice.
What makes the AI Research Team special, in your opinion?

We are a motivated and creative group. The interdisciplinary exchange is super-exciting, and often brings completely new perspectives. And we have a great team spirit, not only in the AI Research Team but in the whole department. We often also offer subjects for academic theses or projects for students in our team. The exchange with universities and start-ups is very important for me personally.

Because you come from a research background?

Yes, before I started in the AI Research Team, I was a doctoral student at the Technical University of Darmstadt, where I did my doctorate in the research field of Natural Language Processing and developed systems for automatic text analysis and multimodal speech understanding. During this time, I attended an international conference where my later colleagues from the Group also presented new research results, and we got talking. I was impressed by the depth of Mercedes-Benz's research into artificial intelligence.

What makes Mercedes-Benz a good employer for you?

What I personally find great is that here I have the opportunity to implement the results from research in specific applications. Mercedes-Benz is investing a great deal in the future. We have a large innovation workshop where we can build prototypes and simply try out new ideas. After all, research means that sometimes the things that emerge can't immediately be mass-produced. Here we have the freedom to accelerate research into AI.

Finally, we would like to know: What is the must-have in your dream office?

A place for my shepherd dog. She keeps me company in the home office right now, and every video conference with colleagues becomes much more relaxed when she moves into the picture every now and then, to see whom I am talking to (laughs).

Dr Teresa Botschen (30) brings the latest AI technologies from science into the car at the Mercedes-Benz Group. Together with her interdisciplinary team, she is driving AI research forward in the Group. The natural language processing expert started her journey with artificial intelligence in the Cognitive Science programme at the University of Tübingen. She completed her Master's degree in Computational Statistics and Machine Learning at University College London, followed by a doctorate in Natural Language Processing at the Technical University of Darmstadt. Besides her research at Mercedes-Benz, she enjoys discovering the world on city trips and mountain tours, singing musical numbers in a choir or having fun training her dog."
https://group.mercedes-benz.com/car...-intelligence/interviews/teresa-botschen.html
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 29 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 49 users

equanimous

Norse clairvoyant shapeshifter goddess
 
  • Like
  • Love
Reactions: 30 users

Mugen74

Regular
Think I'm crazy, but I'm developing the idea of bringing the AKD1000 board to the largest computer museum in the world.
Do you know Mr. Nixdorf? He was a pioneer of modern computing, even if his company's future turned out differently due to tragic decisions and almost no one knows him today. But his foundation behind this world's largest computer museum still exists, as does the museum itself.
If it were up to me, I would like to see this exciting development step in computer history, neuromorphic computing, in the permanent exhibition as a further milestone - Loihi, TrueNorth and Akida in one exhibition. They are dealing with the topic of NC, of course.
Just my thought while listening to the last podcasts, but it is developing in my brain, and I am thinking about contacting the curator for the first time today, so that PVDM is remembered alongside Heinz Nixdorf.
https://www.hnf.de/en/home.html
Don't forget Anil Mankar!
 
  • Like
  • Love
  • Fire
Reactions: 20 users

cosors

👀
Don't forget Anil Mankar!
I'm developing the thought. It is precisely because of comments like yours that I post this. It would certainly be possible for me to have this conversation.

___
Added:
I'm also interested in feeling out the resonance a little bit. It is perhaps still a little too early for this. Besides, the museum, like many others, is a place to make knowledge tangible and understandable. And how could that be done better than with event-based cameras or face, gesture or speech recognition?
But this is only just emerging. Nevertheless, it can be attractive for a museum to be up to date. And in addition, it is not just a matter of putting up another display case, but of initiating the exhibition for NC. And that certainly takes a year in my mind. So I am wavering back and forth and will leave this here for a short while in case there are any thoughts from you.

___
My thought on the other matter:
My thought about fuel recognition: I will let it rest from my side. It's just one of countless use cases the board is bound to be bombarded with, I think, and I don't feel in a position to address it properly. Even if the matter is now decided in the EU - but under different circumstances. It will be evaluated at a later date how, what and when. So there is more time if the company wants to participate and to teach the sensor technology from wine to fuels. In addition, it is a politically charged topic.
 
Last edited:
  • Like
  • Love
Reactions: 8 users

Deadpool

hyper-efficient Ai
  • Like
Reactions: 6 users

goodvibes

Regular
Intel labs

Deep Learning with Spiking Neural Networks

Our Neuromorphic Computing Lab is highly active in researching Spiking Neural Networks (SNNs) and how they can be useful for deep learning. SNNs, sometimes dubbed the third generation of Artificial Neural Networks (ANNs), replace the non-linear activation functions in ANNs with spiking neurons and show real promise in AI, image processing, NLP, video compression, and more. Read the blog from Sumit Bam Shrestha to learn more about SNN research. https://intel.ly/40GzeiQ

#Research #DeepLearning #Developer
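As a toy illustration of the sentence in that blog blurb (mine, not Intel's): an ANN unit applies a stateless nonlinearity such as ReLU, while a spiking neuron keeps a membrane state across time steps and emits sparse binary events. The threshold and leak constants below are arbitrary assumptions.

```python
import numpy as np

def relu_unit(x):
    return np.maximum(x, 0.0)             # stateless ANN activation

class SpikingUnit:
    """A minimal stateful spiking neuron standing in for an activation."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0                       # membrane potential (state)
        self.threshold, self.leak = threshold, leak

    def step(self, x):
        self.v = self.leak * self.v + x    # integrate input over time
        if self.v >= self.threshold:       # fire and reset
            self.v = 0.0
            return 1
        return 0

unit = SpikingUnit()
inputs = [0.4, 0.4, 0.4, 0.0, 0.9]
print([unit.step(x) for x in inputs])      # sparse, event-like output: [0, 0, 1, 0, 0]
```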

 
  • Like
Reactions: 7 users
  • Like
  • Fire
  • Thinking
Reactions: 47 users
It does seem like a very clever move on the part of Brainchip, almost a year ago, to bring Krishnamurthy V. Vemuru on board and allow him to complete his Canny Edge Detector Using a Spiking Neural Network algorithm research, funded by Riverside Research’s Open Innovation Center, which will accelerate bringing to AKIDA technology the ability to identify boundaries for medical imaging, one of the many large addressable markets Brainchip is targeting. There are other vision applications, such as UAVs, where this has application; however, medical imaging is a standout in my opinion. Being able to define the exact outline of a tumour or foreign body is invaluable to surgeons preparing to operate.

My opinion only DYOR
FF

AKIDA BALLISTA - (This paper has been previously referenced here on TSEx - this is an extract, as the full paper exceeds the site limits.):

Implementation of the Canny Edge Detector Using a Spiking Neural Network

by
Krishnamurthy V. Vemuru


Riverside Research, 2900 Crystal Dr., Arlington, VA 22202, USA

Current Address: BrainChip Inc., 23041 Avenida de la Carlota, Laguna Hills, CA 92653, USA.
Future Internet 2022, 14(12), 371; https://doi.org/10.3390/fi14120371
Received: 6 June 2022 / Revised: 5 December 2022 / Accepted: 7 December 2022 / Published: 11 December 2022
(This article belongs to the Collection Computer Vision, Deep Learning and Machine Learning with Applications)

Abstract

Edge detectors are widely used in computer vision applications to locate sharp intensity changes and find object boundaries in an image. The Canny edge detector is the most popular edge detector, and it uses a multi-step process, including the first step of noise reduction using a Gaussian kernel and a final step to remove the weak edges by the hysteresis threshold. In this work, a spike-based computing algorithm is presented as a neuromorphic analogue of the Canny edge detector, where the five steps of the conventional algorithm are processed using spikes. A spiking neural network layer consisting of a simplified version of a conductance-based Hodgkin–Huxley neuron as a building block is used to calculate the gradients. The effectiveness of the spiking neural-network-based algorithm is demonstrated on a variety of images, showing its successful adaptation of the principle of the Canny edge detector. These results demonstrate that the proposed algorithm performs as a complete spike domain implementation of the Canny edge detector.
Keywords:
edge detection; segmentation; spiking neural networks; bio-inspired neurons



Graphical Abstract

1. Introduction

Artificial neural networks (ANNs) have become an indispensable tool for implementing machine learning and computer vision algorithms in a variety of pattern recognition and knowledge discovery tasks for both commercial and defense interests. Recent progress in neural networks is driven by the increase in computing power in data centers, cloud computing platforms, and edge computing boards. In size, weight, and power (SWaP)–constrained applications, such as unmanned aerial vehicles (UAVs), augmented reality headsets, and smartphones, novel computing architectures are desirable. The state-of-the-art deep learning hardware platforms are often based on graphics processing units (GPUs), tensor processing units (TPUs) and field programmable gate arrays (FPGAs). The human brain is capable of performing more general and complex tasks at a minute fraction of the power required by deep learning hardware platforms. Spiking neurons are regarded as the building blocks of the neural networks in the brain. Moreover, research in neuroscience indicates the spatiotemporal computing capabilities of spiking neurons play a role in the energy efficiency of the brain. In addition, spiking neurons leverage sparse time-based information encoding, event-triggered plasticity, and low-power inter-neuron signaling. In this context, neuromorphic computing hardware architectures and spike domain machine learning algorithms offer a low-power alternative to ANNs on von Neumann computing architectures. The availability of neuromorphic processors, such as IBM’s TrueNorth [1], Intel’s Loihi [2], and event-domain neural processors, for example, BrainChip’s Akida [3,4], which offers the flexibility to define both artificial neural network layers and spiking neuron layers, is motivating the research and development of new algorithms for edge computing. In the present work, we have investigated how one can program an algorithm for Canny-type edge detection using a spiking neural network and spike-based computing.

2. Background

An edge detection algorithm is widely used in computer vision to locate object boundaries in images. An edge in an image shows a sharp change in image brightness, which is a result of a sharp change in pixel intensity data. An edge detector computes and identifies the pixels with sharp changes in intensity with respect to the intensity of neighboring pixels. There are several edge detection image processing algorithms.
The three stages in edge detection are image smoothing, detection, and edge localization. There are mainly three types of operators in edge detection. These are (i) gradient-based, (ii) Laplacian-based and (iii) Gaussian-based. The gradient-based edge detection method detects the edges by finding the maximum and the minimum in the first derivative of the image using a threshold. The Roberts edge detector [5], Sobel edge detector [6], and Prewitt edge detector [7] are some examples of gradient-based edge detectors. These detectors use a 3 × 3 pattern grid. A detailed discussion of these edge detectors and a comparison of their advantages and disadvantages can be found in [8]. The Roberts edge detection method is built on the idea that a difference on any pair of mutually perpendicular directions can be used to calculate the gradient. The Sobel operator uses the convolution of the image with a small, separable, and integer-valued filter in the horizontal and vertical directions for edge detection. The Prewitt edge detector uses two masks, each computing the derivative of the image in the x-direction and the y-direction. This detector is suitable for estimating the magnitude and orientation of the edge. Laplacian-based edge detectors find the edges by searching for zero crossings in the second derivative of the image. The Laplacian of the Gaussian algorithm uses a pre-smoothing step with a Gaussian low-pass filter on an image followed by a second-order differential, i.e., Laplacian, which finds the image edge. This method needs a discrete convolutional kernel that can approximate the second derivative for an image consisting of discrete pixels. The Marr–Hildreth edge detector is also based on the Laplacian of the Gaussian operator [9]. The Gabor filter edge detector [10] and Canny edge detector [11] are Gaussian-based edge detectors. The Gabor filter is a linear filter with its impulse response function defined by the product of a harmonic function with a Gaussian function and is similar to the human perception system.
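As a quick illustration (mine, not the paper's), the gradient-based operators above boil down to convolving the image with a small mask and its transpose, then thresholding the gradient magnitude. The kernel values below are the standard masks; the threshold of 50 is an arbitrary assumption.

```python
import numpy as np
from scipy import ndimage

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
# Roberts uses 2x2 diagonal masks; its second mask is [[0, 1], [-1, 0]],
# not the transpose of the first, so it is handled separately in practice.
ROBERTS_X = np.array([[1, 0], [0, -1]])

def gradient_magnitude(img, kx):
    """|G| = sqrt(Gx^2 + Gy^2), with Gy from the transposed mask."""
    img = img.astype(float)
    gx = ndimage.convolve(img, kx)
    gy = ndimage.convolve(img, kx.T)
    return np.hypot(gx, gy)

def detect_edges(img, kx=SOBEL_X, threshold=50.0):
    # Mark pixels where the first-derivative magnitude exceeds the threshold.
    return gradient_magnitude(img, kx) > threshold
```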
The Canny edge detector provides excellent edge detection, as it meets the three criteria for edge detection [12]: (i) detection with low error rate, (ii) the edge point should localize in the center of the edge, and (iii) an edge should only be marked once and image noise should not create edges. Canny edge detection uses the calculus of variations to optimize a functional which is a sum of four exponential terms, which approximates the first derivative of a Gaussian. A Canny edge detector is a multi-step algorithm designed to detect the edges of any analyzed image. The steps of this process are: (1) removal of noise in the image using a Gaussian filter, (2) calculation of the gradient of the image pixels along x- and y-directions, (3) non-maximum suppression to thin out edges, (4) double-threshold filtering to detect strong, weak and non-relevant pixels, and (5) edge tracking by hysteresis to transform weaker pixels into stronger pixels if at least one of their neighbors is a stronger pixel. The Canny edge detection algorithm is highly cited (∼36,000 citations) and the most commonly used edge detection algorithm [11].
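To make the five steps concrete, here is a rough NumPy/SciPy sketch of the conventional (non-spiking) pipeline; the sigma, the two thresholds, and the four-direction quantization are illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def canny_sketch(img, sigma=1.4, low=0.1, high=0.3):
    # Step 1: noise reduction with a Gaussian kernel.
    smoothed = ndimage.gaussian_filter(img.astype(float), sigma)
    # Step 2: intensity gradients along x and y (Sobel masks).
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    mag = mag / mag.max() if mag.max() > 0 else mag
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180
    # Step 3: non-maximum suppression along the quantized gradient direction.
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:
                n1, n2 = mag[i + 1, j - 1], mag[i - 1, j + 1]
            elif a < 112.5:
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    # Step 4: double threshold into strong and weak edge pixels.
    strong = nms >= high
    weak = (nms >= low) & ~strong
    # Step 5: hysteresis -- keep weak pixels connected to a strong pixel.
    labels, _ = ndimage.label(strong | weak)
    keep = np.isin(labels, np.unique(labels[strong]))
    return (keep & (strong | weak)).astype(np.uint8)
```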
Edge detection is a primary step in identifying an object, and further research is strongly desirable to extend these methods to event-domain applications. The Canny edge detector performs better than the Roberts, Sobel and Prewitt edge detectors, but at a higher computational cost [8]. An alternate implementation of the Canny edge detection algorithm is attractive for event-domain edge computing applications, where low-power, real-time solutions are sought, in view of its tunable performance via the standard deviation of the Gaussian filter.

3. Related Work

In the human vision system, the photoreceptors in the retina convert the light intensity into nerve signals. These signals are further processed and converted into spike trains by the ganglion cells in the retina. The spike trains travel along the optic nerve for further processing in the visual cortex. Neural networks that are inspired by the human vision system have been introduced to improve image processing techniques, such as edge detection [13]. Spiking neural networks, which are built on the concepts of spike encoding techniques [14], spiking neuron models [15] and spike-based learning rules [16], are biologically inspired in their mechanism of image processing. SNNs are gaining traction for biologically inspired computing and learning applications [17,18]. Wu et al. simulated a three-layer spiking neural network (SNN), consisting of a receptor layer, an intermediate layer with four filters, respectively, for up, down, left, and right directions, and an output layer with Hodgkin–Huxley-type neurons as the building blocks for edge detection [19].
Clogenson et al. demonstrated how an SNN with scalable, hexagonally shaped receptive fields performs edge detection with computational improvements over rectangular, pixel-based SNN approaches [20]. The digital images are converted into a hexagonal pixel representation before being processed by the SNN. A spiking neuron integrates the spikes from a group of afferent neurons in a receptive field. The network model used by the authors consists of an intermediate layer with four types of neurons corresponding to four different receptive fields, for up, down, right and left orientations. Yedjour et al. [21] demonstrated the basic task of contour detection using a spiking neural network based on the Hodgkin–Huxley neuron model. In this approach, the synaptic weights are determined by the Gabor function to describe the receptive-field behaviors of simple cells in the visual cortex. Vemuru [22] reported the design of an SNN edge detector with biologically inspired neurons and demonstrated that the edge detector detects edges in simulated low-contrast images. These studies focused on defining SNNs using an array of Gabor filter receptive fields in the edge detector. In view of the success of SNNs in edge detection, it is desirable to develop a spike domain implementation of the Canny edge detector algorithm because it has the potential to offer a high-performance alternative for edge detection.
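As a loose illustration of how a spiking layer can turn a receptive-field response into a firing rate, here is a leaky integrate-and-fire sketch; it is a simplified stand-in for the conductance-based Hodgkin–Huxley neurons used in these papers, and all constants and the rate encoding are assumptions of mine.

```python
import numpy as np

def lif_layer(input_currents, steps=100, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Integrate constant input currents and return spike counts per neuron."""
    v = np.zeros_like(input_currents, dtype=float)
    spikes = np.zeros_like(input_currents, dtype=int)
    for _ in range(steps):
        v += (-v + input_currents) / tau   # leaky integration (dt = 1)
        fired = v >= v_thresh
        spikes[fired] += 1
        v[fired] = v_reset                 # reset after a spike
    return spikes

# Rate-encode a pixel neighborhood's gradient: drive the neuron with the
# receptive-field weighted sum (here a Sobel-like mask), so stronger local
# gradients produce higher firing rates.
sobel_x = np.array([1, 0, -1, 2, 0, -2, 1, 0, -1]) / 4.0
patch = np.random.rand(9)                  # a flattened 3x3 pixel patch
rate = lif_layer(np.array([max(patch @ sobel_x, 0.0)]))
print("spike count encoding the local x-gradient:", rate)
```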

5. Results and Discussion

Figure 2 displays the original image (No. 61060) from the BSD500 dataset [32,33] (license: not found) and the intermediate output images of the spiking detector network after the Gaussian kernel, gradient calculation, non-maximum suppression, double-threshold filtering, and edge tracking by hysteresis. The intermediate features show the spike domain implementation is effective in performing all five steps involved in the conventional Canny edge detection algorithm. The edge map resulting from the spike-based computation, i.e., the output of the network after the edge-tracking hysteresis step, can be referred to as the neuromorphic Canny edge detector or SNN-based Canny edge detector. We find that the resolution of the images from the BSD500 dataset is sufficient to evaluate the qualitative and quantitative performance of edge detection. In Figure 3, we compare the neuromorphic Canny detector with the Sobel detector and the conventional Canny detector for a set of four images and their ground truth edges from the BSD500 dataset [32,33]. We find the neuromorphic Canny detector detects edges similarly to both conventional detectors. The spike-based detection reported here results in relatively more highlighted edges in the objects while retaining some of the structural features that seem to have been lost in the conventional edge detectors. This can be attributed to the difference in the range of thresholds used in the spike domain implementation compared to the conventional Canny edge detector. With an implementation on low size, weight and power (SWaP) hardware, for example, by emulating spikes on a field-programmable gate array (FPGA) [34] or on edge computing devices, such as the NVIDIA Jetson TX2, together with a neuromorphic camera, the spike domain image processing algorithm will be attractive for new applications, for example, in handheld computer vision systems and for low-Earth-orbit satellite-based object detection.

Figure 2. The original and the output images from all 5 stages of neuromorphic Canny edge detection.

Figure 3. A comparison of the ground truth with the results from Canny edge detector and the SNN analogue of Canny edge detector for selected images from BSD500 dataset.
In image processing, it is common to select a few images and demonstrate how the method works. Statistics on training sets and test sets usually apply to machine learning experiments, which is not the case in the present study. All the images are pictures of natural scenes with edges, and none of the examples are synthesized. From visual inspection, the SNN edge detector appears to render wider edges and detect more background information. It is possible that the difference in the SNN Canny edge map compared to the conventional Canny edge map is also related to the choice of the threshold used or the smoothing parameters. The SNN edge maps are realized in a narrower threshold range, and this makes it difficult to perform an ablation study, which is typically done in machine learning experiments. We would like to note that the primary goal of the present work is to demonstrate a spike-based implementation of Canny edge detection, not superior computational efficiency. Currently, there is no hardware that can implement the exact version of the neuron model used in the present work to evaluate the computational time for the targeted neuromorphic domain. The context of this work is that it addresses the question of whether it is possible to compute the steps of the Canny edge algorithm by exclusively using spikes.

Figure 4. A comparison of the ground truth with the results from Canny edge detector and the SNN analogue of Canny edge detector for selected images from BSD500 dataset. For these images, the metrics for SNN Canny edge detector come out lower compared to the Canny edge detector.
Table 1. Comparison of performance ratio, PR, and F1-score for selected images.
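For anyone wanting to reproduce this kind of comparison, a pixel-wise F1-score for binary edge maps can be computed as sketched below; the paper's exact matching criterion (e.g., any distance tolerance) is not given in this extract, so strict pixel matching is assumed.

```python
import numpy as np

def edge_f1(pred, truth):
    """Pixel-wise precision, recall, and F1 for boolean edge maps."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)                      # true-positive edge pixels
    precision = tp / max(np.sum(pred), 1)
    recall = tp / max(np.sum(truth), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
```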
Object detection in infrared images is another key application for edge detectors, for example in night vision gadgets or for autonomous driving. Novel edge detectors, especially those that can extract edges in low-resolution infrared images with low-SWaP requirements for hardware implementation, are sought across commercial and defense applications. In this context, we evaluated our SNN edge detector with a few infrared images from the Thermal Road Dataset [36] (license: Not found). Figure 5 shows a comparison of edge detection with SNN-based Canny detector, conventional Canny detector and Sobel detector. Ground truth edge maps are not available for this dataset to perform a quantitative comparison similar to the one presented for the RGB images in Table 1. A visual comparison of the results from the three edge detectors in Figure 5 indicates that the SNN-based Canny edge detector is able to generate edge maps very similar to the ones generated by the conventional Canny edge detector.

Figure 5. A comparison of edge detection by Sobel, Canny, and SNN Canny edge detectors for images from thermal road dataset.
Biomedical image processing is a field with an increasing demand for advanced algorithms that automate and accelerate the process of segmentation. In a recent review, Wallner et al. reported a multi-platform performance evaluation of open-source segmentation algorithms, including the Canny-edge-based segmentation method, for cranio-maxillofacial surgery [37]. In this context, it is desirable to test newer algorithms, such as the one developed here using spiking neurons, on medical images. To this end, we performed an edge detection experiment with a few representative medical images from the computed tomography (CT) emphysema dataset [38] (this database can be used free of charge for research and educational purposes). This dataset consists of 115 high-resolution CT slices as well as 168 square patches that are manually annotated in a subset of the slices. Figure 6 illustrates a comparison of the three edge detectors, Sobel, Canny and SNN-based Canny detectors, for a few example images from the CT emphysema dataset. Ground truth for edge maps is not available for this dataset to perform a quantitative comparison. A visual comparison of the edges generated from the three edge detectors, presented in Figure 6, shows that the SNN-based Canny edge detector is competitive with the other two edge detectors and offers an algorithmically neuromorphic alternative to the conventional Canny edge detector.

Figure 6. A comparison of edge detection by Sobel, Canny, and SNN Canny edge detectors for medical images from CT emphysema dataset.

6. Conclusions

In conclusion, we present a spiking neural network (SNN) implementation of the Canny edge detector as its neuromorphic analogue by introducing algorithms for spike-based computation in the five steps of the conventional algorithm with the conductance-based Hodgkin–Huxley neuron as the building block. Edge detection examples are presented for RGB and infrared images with a variety of objects. A quantitative comparison of the edge maps from the SNN-based Canny detector, conventional Canny detector and a Sobel detector, using the F1-score as a metric, shows that the neuromorphic implementation of the Canny edge detector achieves better performance. The SNN architecture of the Canny edge detector also offers promise for image processing and object recognition applications in the infrared domain. The SNN Canny edge detector is also evaluated with medical images, and the edge maps compare well with the edges generated with the conventional Canny detector. Future work will focus on the implementation of the algorithm on an FPGA or on a neuromorphic chip for hardware acceleration and testing in an infrared object detection task, potentially with edge maps as features, together with a pre-processing layer to remove any distortions, enhance contrast, remove blur, etc., and a spiking neuron layer as a final layer to introduce a machine learning component. An extension of the SNN architecture of the Canny edge detector with additional processing layers for object detection in LiDAR point clouds would be another interesting new direction of research [39].

Funding

This work was internally funded by Riverside Research’s Open Innovation Center.

 
  • Like
  • Fire
  • Love
Reactions: 40 users

Dhm

Regular

I attempted to register but suddenly realised that the Brainchip keynote speech is already over. As I type, the time in San Jose is already 2:46 pm and the speech was delivered around 10:10 am local time. I wonder if there will be a video available soon.
 
  • Like
  • Sad
Reactions: 10 users
Well how about Brainchip employs someone who can actually get the revenue to kick in. Silicon Valley is crawling with super-qualified, out-of-work technicians ATM - there have been colossal, mass layoffs in the industry. I'm not convinced that any current additions have been attracted by AKIDA any more than by their need to put food on the table. I just think that people here put way too much weight on the addition of new employees but far too little emphasis on the fact that we have made very little revenue-producing IP sales progress. Perhaps the next 4C will prove me wrong, I certainly hope so, but how many consecutive 4Cs without self-sustaining revenue will be acceptable? Oh but you don't have to consider that difficult question because we've just paid for another super-qualified technician, who we might not actually need.
Firstly this appointment was almost a year ago if you read the LinkedIn document.

Secondly the most recent appointments announced by Brainchip have been the sales and marketing appointments in Japan, Korea and Germany.

Thirdly, you and others have many times expressed fears about Brainchip being overtaken by others who have caught up to the technology lead created by Peter van der Made and the other technology giants working at Brainchip. I assume from your comments that you no longer consider it important to maintain the technology lead; if so, would you care to expand on your reasons?

Fourthly, as Brainchip has about two years of cash runway and a 4C is lodged quarterly, the answer is eight 4Cs.

My opinion only based on Doing My Own Research
FF

AKIDA BALLISTA.
 
  • Like
  • Love
  • Fire
Reactions: 53 users

skutza

Regular
Here are some simple questions that you can ask yourself regarding competition;

1. Where are Brainchip now, commercial?
2. Is anyone else commercial in this space?
3. How long has it taken BRN to sell their IP?
4. Will any competition be able to just magically appear and then have their tech validated and then commercial immediately?
5. Simple math: look at BRN's pathway, look at when someone finally announces they are commercial, and then add 1-3 years for them to become commercial?

So yeah, it sounds scary that someone might catch up, but catch up to what? Catch up to announcing they have a commercial chip? Well done to them; then add 3 years for anyone to start using it, and where will BRN be by then? It's simple when you look at it emotionlessly and logically........


Oh sorry had to edit this, IMO of course :)
 
  • Like
  • Fire
  • Love
Reactions: 54 users

equanimous

Norse clairvoyant shapeshifter goddess
Here are some simple questions that you can ask yourself regarding competition;

1. Where are Brainchip now, commercial?
2. Is anyone else commercial in this space?
3. How long has it taken BRN to sell their IP?
4. Will any competition be able to just magically appear and then have their tech validated and then commercial immediately?
5. Simple math: look at BRN's pathway, look at when someone finally announces they are commercial, and then add 1-3 years for them to become commercial?

So yeah, it sounds scary that someone might catch up, but catch up to what? Catch up to announcing they have a commercial chip? Well done to them; then add 3 years for anyone to start using it, and where will BRN be by then? It's simple when you look at it emotionlessly and logically........


Oh sorry had to edit this, IMO of course :)
With any new technology we need an ecosystem around it, and that's exactly what the BRN team are building, which is essential for such disruptive, essential and ubiquitous products.

  • Apple can build the ecosystem because it does not design around a single device but around said ecosystem. What makes this unique ecosystem successful is how well the devices work together and the walls the company has built around products and services.
  • The Arm Automotive Ecosystem connects you to the right partners, enabling you to build the next generation of efficient, scalable autonomous solutions. Explore Arm IoT Ecosystem partners who can help transform an idea into a secure, market-leading device.
  • Amazon's ecosystem of products and services is vast; it comprises retail, transportation, B2B distribution, payments, entertainment, cloud computing, and other segments.
  • BrainChip expands its Ecosystem with Teksun to Bring the Akida Processor to Next-Generation AIoT Devices

 
  • Like
  • Love
  • Fire
Reactions: 34 users

marsch85

Regular
It's a marathon, not a sprint. Hiring the best tech/engineering talent out there is critical, as the tech is core to everything the business is doing and setting out to do. And it will need to keep evolving to maintain and extend our lead and capitalise on new opportunities over the coming decades. The likes of NVIDIA and ARM today have changed substantially and shifted focus from where they started a few decades ago.

Commercialising technology is complex and takes time. I've worked for multiple B2B tech/software businesses, and when hiring new sales reps it can take up to a year for them to close their first deal... and then implementation starts... It takes even longer when you have to align with 3-7 year product design cycles. Just look at ARM's journey to where they are today. It wasn't an overnight success story; it took them three decades.

BRN has everything it needs to achieve similar results. The Edge AI market is about to explode, we have the tech (as confirmed by many of our very excited partners), we are successfully building our ecosystem (multiple partner announcements per month), and I believe we have the team to bring this home.


 
  • Like
  • Love
  • Fire
Reactions: 29 users

Foxdog

Regular
Firstly this appointment was almost a year ago if you read the LinkedIn document.

Secondly the most recent appointments announced by Brainchip have been the sales and marketing appointments in Japan, Korea and Germany.

Thirdly, you and others have many times expressed fears about Brainchip being overtaken by others who have caught up to the technology lead created by Peter van der Made and the other technology giants working at Brainchip. I assume from your comments that you no longer consider it important to maintain the technology lead; if so, would you care to expand on your reasons?

Fourthly, as Brainchip has about two years of cash runway and a 4C is lodged quarterly, the answer is eight 4Cs.

My opinion only based on Doing My Own Research
FF

AKIDA BALLISTA.
So why the hell are people posting stuff about appointments from a year ago FFS? Seriously
 
  • Like
  • Haha
  • Fire
Reactions: 7 users
So why the hell are people posting stuff about appointments from a year ago FFS? Seriously
That is a very easy question to answer because it is relevant to Brainchip whereas random opinions and swearing in abbreviated form are not.

To express an opinion it helps to have done your own research. Someone who has done their own research or simply read all the research done by others and posted here would understand all the dots that come together by having knowledge of this Brainchip employee.

As you don’t understand you probably need to DYOR or go back and read the research generously shared here by those that do.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 53 users

Adam82

Member
So why the hell are people posting stuff about appointments from a year ago FFS? Seriously
And that why it’s important to do your own research and have a plan that suits you….. 🙄
 
  • Like
Reactions: 17 users