
As the $1.8 trillion space economy takes off, here are 3 ASX stocks to watch - Stockhead
McKinsey highlights vast opportunities for investors in the space sector. On the ASX, Brainchip is one stock for potential investors.
I was reading an article about one of the problems AI is facing (sorry, I don't have the link). It was about how most of the data available on the internet has already been reviewed, classified and incorporated into models. The challenge moving forward is generating and capturing new data to continuously improve these models.
My question to the group is: could we foresee our Akida products being used in combination with other, more sophisticated but power-hungry chips or software?
For example with Mercedes: our chip or software at the edge and at the sensor, doing quick, power-efficient inference in real time on input like voice activation, learning someone's accent, vocal mannerisms etc. That more refined data would then be processed in larger, more power-hungry systems in the car or cloud for model improvements.
I see Pico being such a device. Not intended to run independently, but more of an extreme-edge device at the sensor that just does data collection and refinement and doesn't make any decisions on anything...
I might be way off. But that's my impression.
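To make that more concrete, here's a rough toy sketch of the kind of split I'm imagining - a tiny always-on classifier at the sensor that only forwards refined samples to a bigger system for later retraining. All the class names and thresholds below are made up purely to illustrate the idea; none of it is BrainChip's or Mercedes' actual software.

```python
from collections import deque

class EdgeKeywordSpotter:
    """Stand-in for a low-power, on-sensor classifier (e.g. an Akida-class model)."""

    def classify(self, audio_frame):
        # A real device would run a quantized network here; this just fakes a detection.
        energy = sum(abs(x) for x in audio_frame)
        return {"keyword_detected": energy > 0.3, "confidence": min(1.0, energy)}


class BigModelTrainer:
    """Stand-in for the power-hungry in-car or cloud system that improves the big model."""

    def __init__(self, batch_size=1000):
        self.batch_size = batch_size
        self.buffer = deque(maxlen=10_000)

    def ingest(self, sample):
        self.buffer.append(sample)

    def retrain_if_ready(self):
        if len(self.buffer) >= self.batch_size:
            print(f"Retraining large model on {len(self.buffer)} refined samples...")
            self.buffer.clear()


def run_pipeline(audio_stream):
    edge, trainer = EdgeKeywordSpotter(), BigModelTrainer(batch_size=2)
    for frame in audio_stream:
        result = edge.classify(frame)          # fast, low-power inference at the sensor
        if result["keyword_detected"]:         # only refined/interesting data leaves the edge
            trainer.ingest({"frame": frame, "label": result})
    trainer.retrain_if_ready()                 # the heavy lifting happens elsewhere, later


run_pipeline([[0.1, -0.2, 0.3], [0.0, 0.0], [0.5, 0.4]])
```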
Cheers
Draed
Hi Bs. Thank you... that's what I was imagining in my head, and that's what I see Pico's role being in all this: just straight-up classification at the edge, at low power.
Yes, this is possible. See images below...
Maybe, just maybe, we’ll find out a teeny-weeny bit more about the current status of MB’s neuromorphic research later this week:
I checked out the website of Hochschule Karlsruhe (Karlsruhe University of Applied Sciences aka HKA) - since Markus Schäfer mentioned in his post they were collaborating with them on event-based cameras - and discovered an intriguing hybrid presentation by Dominik Blum, one of MB’s neuromorphic researchers, titled “Intelligente Fahrassistenzsysteme der Zukunft: KI, Sensorik und Neuromorphes Computing” (“Future Intelligent ADAS: AI, Sensor Technology and Neuromorphic Computing”).
The upcoming presentation is part of this week’s Themenwoche Künstliche Intelligenz, a week (Mon-Thu to be precise) devoted to AI, with numerous, mostly hybrid presentations from various HKA research areas (both faculty and external speakers will present), held daily between 5.15 pm and 8.30 pm.
Oct 17 is devoted to the topic of AI & Traffic:
Thementag KI (AI Theme Day)
KI, AI or Artificial Intelligence - it is the technology of the future. We invite you to a pioneering event. Get a realistic assessment of the use of artificial intelligence in the fields of gaming, climate protection and autonomous systems. Where is the journey heading, and what...www.h-ka.de
View attachment 71031
View attachment 71032
If you speak German (or even if you don’t, but are nevertheless interested in the presentation slides) and live in a compatible time zone, you may want to join the following livestream on Oct 17, at 5.15 pm (CEST):
(Since similar June AI Day presentations were recorded and uploaded to the HKA website, I assume this will also apply to the AI Week presentations.)
View attachment 71025
View attachment 71026
The reference to NMC (neuromorphic computing) being considered a “möglicher Lösungsweg” (possible/potential solution) suggests to me - once again - that Mercedes-Benz is nowhere near to implementing neuromorphic technology at scale into serial cars.
View attachment 71027
View attachment 71028
Hi @Frangipani, I think it should be noted, in my view, how many employees who have been instrumental in BrainChip's success have left within a short span of time. People may say it's for better pay, but I for one differ in this view; one needs to be happy where one is working. We often say employees don't leave the company, they leave their managers, right?
You underlined "Indian cuisine", which it's only an "important part" of, totally ignoring the fact that it's a "key ingredient" of Mediterranean and Middle Eastern cuisine! (And being mentioned first as such seems to indicate its use there is more prevalent.)
You Cherry Picker
The fact that 75% is produced in India might have something to do with the fact that India makes up almost 18% of the world's population.
Personally, I associate chickpeas with both hummus and chana masala.
No, it’s more of a modified clone; it’s not like the other developments have ‘died’… besides, we don’t want to make a religion out of it! It’s still a man-made technology. For the tech-minded amongst us:
Is Akida Pico essentially somewhat of a reincarnation of Akida 1500?
Here’s the link to the livestream’s recording (in German):
The presentation by Dominik Blum (Mercedes-Benz) starts at 3:48 min.
There was one slide that showed - amongst other things - various brands of neuromorphic hardware (ABR, BrainChip, Innatera, Intel, SynSense), including Akida 1.0 and 2.0, but none was mentioned by name. Same with the slide showing the EQXX: there was merely mention of the voice assistant’s keyword spotting having been realised on “a neuromorphic chip”, which was said to have considerably improved this function’s energy efficiency.
Please note that the slide showing neuromorphic hardware also lists the ABR Time Series Processor (TSP1), developed by Applied Brain Research, the Canadian company Chris Eliasmith co-founded and is CTO of (see the link to my previous post below).
I reckon TSP1 is what MB are going to base their future research on in their recently announced collaboration with the University of Waterloo.
So just as I said the other day, that announcement is highly likely no reason to celebrate for BRN shareholders:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-438892
In the Q&A session afterwards, Dominik Blum was asked whether the architecture of neuromorphic chips resembled that of GPUs or whether it was a completely different design. His answer was: “Da wir uns mehr auf die softwareseitige Entwicklung von “neuromorphic” fokussieren, kann ich da keine gute Antwort geben.” (“Since we are focussing more on the software side development of neuromorphic, I can’t give you a good answer to that question.”)
He was also asked whether he could quantify the potential energy savings of neuromorphic computing and said that there were vague estimations of up to 90%; however, he’d be careful with any concrete numbers (suggesting 90% might actually be too high) and stressed that they are still at a very early stage of research and that further studies are required.
When asked how long it would take for fully autonomous cars to become reality, he replied he would prefer to answer as “a private person” (i.e. not as an MB employee) and then said he believed it would be within the next 15 years, though there were various uncertainties to factor in. He wriggled out of a straight answer when asked whether any legal framework could pose a problem for that time frame.
To me the gist was that while MB considers neuromorphic computing a very promising technology regarding gains in energy efficiency (which will become more and more important on the path towards cars becoming fully autonomous), they are still at a very early stage of research - Dominik Blum literally said so. There you go, you heard it from the horse’s mouth. So don’t expect neuromorphic technology in any MB serial production cars in the near future. At least that’s what I took away from that presentation.
Anything major I missed?
Here are some of the presentation slides:
View attachment 71337
View attachment 71338
Energy consumption increases with the vehicle’s number of AI-based systems:
View attachment 71339
The higher the SAE-Level, the higher the energy consumption:
View attachment 71340
Neuromorphic computing is a young field of research … with a lot of open questions, e.g.:
View attachment 71341
SNNs in the event chain of autonomous driving:
View attachment 71342
This slide kind of looked familiar…
View attachment 71343
Dominik Blum said that they are still in the research phase regarding event-based cameras, which will one day complement regular cameras, radar and LiDAR…
View attachment 71346
View attachment 71344
View attachment 71345
I don’t interpret that comment as meaning he was unhappy. His interest appears to have been piqued by this advertisement - perhaps an opportunity too great to resist.
Another of our Laguna Hills engineers leaving without having another job lined up…
It should be noted that he has since been given glowing references from his former BrainChip colleagues.
Nevertheless, he seems to have been unhappy in his job for quite a while (see his 4-month-old LinkedIn comment)… What is going on?
View attachment 71324
View attachment 71325
View attachment 71330
View attachment 71331
View attachment 71332
View attachment 71334
Don’t forget about the legendary intil and EBM…and our partner mircrochip and contrphesee…
There it is again, a new combination Akido Pico
Sounds kind of Japanese
____
In your posts I usually read over typos and don't notice them. But with Akida?
It's like Intel's at that time Pentium - Pentia, Pentio, Pentiam, Pentiom - hum hum.
Always a close call.
___
Seriously, this needs to be hammered or rammed into the heads of all BRN writers.
How else is someone supposed to create a brand or product?
Kuci
Bolex
Nercedes Denz - yeah
Gola
A-pod
Verrari
A-pat
Akido
Dolls-Boyce
Mc Bonald's
And so on
Yo, Akido Bollisto
___
At the beginning I found it more funny - mistakes are human.
What would their company say if something was posted globally and said:
Dugatti
or
Bugatto
___
It's about building a brand. Painchip?!
I don't know why I'm so upset when the company's own employees don't take their writing seriously. Just letters.
Looks like some friends in Japan, with a little support from Megachips, have been playing with Akida & MetaTF
Apols if already posted as I may have missed it and haven't done a search.
Short video end of post.
Paper HERE
View attachment 68589
License: arXiv.org perpetual non-exclusive license
arXiv:2408.13018v1 [cs.RO] 23 Aug 2024
Robust Iterative Value Conversion: Deep Reinforcement Learning for Neurochip-driven Edge Robots
Yuki Kadokawa (kadokawa.yuki@naist.ac.jp), Tomohito Kodera (kodera.tomohito.kp9@is.naist.jp), Yoshihisa Tsurumine (tsurumine.yoshihisa@is.naist.jp), Shinya Nishimura (nishimura.shinya@megachips.co.jp), Takamitsu Matsubara (takam-m@is.naist.jp)
Nara Institute of Science and Technology, 630-0192, Nara, Japan; MegaChips Corporation, 532-0003, Osaka, Japan
Abstract
A neurochip is a device that reproduces the signal processing mechanisms of brain neurons and calculates Spiking Neural Networks (SNNs) with low power consumption and at high speed. Thus, neurochips are attracting attention from edge robot applications, which suffer from limited battery capacity. This paper aims to achieve deep reinforcement learning (DRL) that acquires SNN policies suitable for neurochip implementation. Since DRL requires a complex function approximation, we focus on conversion techniques from Floating Point NN (FPNN) because it is one of the most feasible SNN techniques. However, DRL requires conversions to SNNs for every policy update to collect the learning samples for a DRL-learning cycle, which updates the FPNN policy and collects the SNN policy samples. Accumulative conversion errors can significantly degrade the performance of the SNN policies. We propose Robust Iterative Value Conversion (RIVC) as a DRL that incorporates conversion error reduction and robustness to conversion errors. To reduce them, FPNN is optimized with the same number of quantization bits as an SNN. The FPNN output is not significantly changed by quantization. To robustify the conversion error, an FPNN policy that is applied with quantization is updated to increase the gap between the probability of selecting the optimal action and other actions. This step prevents unexpected replacements of the policy’s optimal actions. We verified RIVC’s effectiveness on a neurochip-driven robot. The results showed that RIVC consumed 1/15 times less power and increased the calculation speed by five times more than an edge CPU (quad-core ARM Cortex-A72). The previous framework with no countermeasures against conversion errors failed to train the policies. Videos from our experiments are available:
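For anyone who wants a feel for what that learning cycle boils down to, here's a self-contained toy sketch in Python: quantization-aware updates to the floating-point policy, conversion to the deployable (quantized/"SNN") copy, and sample collection with that converted copy. This is purely my reading of the abstract - it is not the authors' code and not the actual MetaTF conversion API.

```python
import random

N_ACTIONS, N_BITS = 3, 4


def quantize(weights, bits=N_BITS):
    """Snap weights to the bit width the neurochip supports (2-8 bits in the paper)."""
    levels = 2 ** bits - 1
    return [round(w * levels) / levels for w in weights]


def convert_to_snn(fpnn_weights):
    """Stand-in for the FPNN -> SNN conversion (done by MetaTF in the paper)."""
    return quantize(fpnn_weights)


def select_action(weights, state):
    """Pick the action whose (toy) score is highest for this state."""
    return max(range(N_ACTIONS), key=lambda a: weights[a] * state)


def rivc_cycle(n_iterations=5):
    fpnn = [random.random() for _ in range(N_ACTIONS)]
    for _ in range(n_iterations):
        snn = convert_to_snn(fpnn)                                       # 1) convert for deployment
        samples = [(s, select_action(snn, s)) for s in (0.2, 0.5, 0.9)]  # 2) collect with the SNN policy
        # 3) Quantization-aware update with a "gap increase": push the chosen
        #    action's weight further ahead of the others so conversion error
        #    cannot easily flip the optimal action, then re-quantize so the
        #    FPNN stays close to what the neurochip will actually run.
        for state, action in samples:
            fpnn[action] += 0.05 * state
        fpnn = quantize(fpnn)
    return fpnn


print(rivc_cycle())
```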
Excerpts:
5.1 Construction of Learning System for Experiments
5.1.1 Entire Experiment Settings
This section describes the construction of the proposed framework shown in Fig. 2. We utilized a desktop PC equipped with a GPU (Nvidia RTX3090) for updating the policies and an Akida Neural Processor SoC as a neurochip [9, 12]. The robot was controlled by the policies implemented in the neurochip. SNNs were implemented to the neurochip by a conversion executed by the MetaTF of Akida that converts the software [9, 12]. Samples were collected by the SNN policies in both the simulation tasks and the real-robot tasks since the target task is neurochip-driven robot control. For learning, the GPU updates the policies based on the collected samples in the real-robot environment. Concerning the SNN structure, the quantization of weights w_s described in Eq. (16) and the calculation accuracy of the activation functions described in Eq. (19) are verified in a range from 2- to 8-bits; they are the implementation constraints of the neurochip [9].
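That 2- to 8-bit constraint is easy to get an intuition for with a tiny sweep: the fewer bits, the further the quantized weights drift from their floating-point values, which is exactly the conversion error RIVC is designed to tolerate. The snippet below is purely illustrative and has nothing to do with the paper's actual evaluation code.

```python
def quantize(weights, bits):
    """Snap weights in [0, 1] onto a grid of 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    return [round(w * levels) / levels for w in weights]


weights = [0.137, 0.652, 0.421, 0.903]
for bits in range(2, 9):  # the 2- to 8-bit range the neurochip supports
    q = quantize(weights, bits)
    max_err = max(abs(a - b) for a, b in zip(weights, q))
    print(f"{bits}-bit quantization: max weight error = {max_err:.4f}")
```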
Table 3: Hardware performance of policies: FPNN was evaluated by edge-CPU (Raspberry Pi 4: quad-core ARM Cortex-A72). SNN was evaluated by neurochip (Akida 1000 [9]). “Power cons” and “Calc. speed” denote power consumption and calculation speed for obtaining one action from NN policies using each piece of hardware. Power consumption was measured by voltage checker (TAP-TST8N).
Network: FPNN | SNN
Hardware: Edge-CPU | Neurochip
Power consumption [mW]: 61 | 4
Calculation speed [ms]: 205 | 40
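Quick sanity check of the headline ratios those numbers imply - roughly 1/15 the power and about five times the speed:

```python
# Headline ratios implied by Table 3
power_cpu_mw, power_chip_mw = 61, 4          # FPNN on edge CPU vs SNN on Akida
latency_cpu_ms, latency_chip_ms = 205, 40

print(f"~{power_cpu_mw / power_chip_mw:.0f}x lower power on the neurochip")        # ~15x
print(f"~{latency_cpu_ms / latency_chip_ms:.0f}x faster inference on the neurochip")  # ~5x
```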
7 Conclusion
We proposed RIVC as a novel DRL framework for training SNN policies with a neurochip in real-robot environments. RIVC offers two prominent features: 1) it trains QNN policies, which can be robust for conversion to SNN policies, and 2) it updates the values with GIO, which is robust to the optimal action replacements by conversion to SNN policies. We also implemented RIVC for object-tracking tasks with a neurochip in real-robot environments. Our experiments show that RIVC can train SNN policies by DRL in real-robot environments.
Acknowledgments
This work was supported by the MegaChips Corporation. We thank Alonso Ramos Fernandez for his experimental assistance.
Not ignoring, but ignorant.
You are totally ignoring the fact that I was not at all ignoring Mediterranean and Middle Eastern Cuisine…
If it tastes like garlic mash, you're doing it wrong.
Not ignoring, but ignorant.
I had to Google Hummus..
I had in my mind, that it was that parsley concoction..
Never liked dips..
This Reddit quote doesn't exactly make me want to try it either..
"It's texture is awful, and the flavour is just... garlic mush. It looks like something a cat threw up. I don't understand why it's so popular"