This article led me here, and whilst I can't date it as the role has been filled or removed, it's definitely something Huawei is working on imo.
Article from last year was interesting on Huawei's thoughts on neuromorphic.
Though we're not mentioned, it appears Huawei has finally cottoned on to a couple of the key aspects - highlighted.
Huawei embraces neuromorphic computing for IoT - Rethink
The convention of IoT devices being lightweight in processing capability is being turned on its head by the rise of neuromorphic computing. The aim is to mimic the plasticity of the human brain in a new generation of chips optimized for data analytics, employing algorithms under the banners of... (rethinkresearch.biz)
30 March 2021
Huawei embraces neuromorphic computing for IoT
By Phil Hunter
The convention of IoT devices being lightweight in processing capability is being turned on its head by the rise of neuromorphic computing.
The aim is to mimic the plasticity of the human brain in a new generation of chips optimized for data analytics, employing algorithms under the banners of AI and machine learning. This is being driven by several factors, including demand for ultra-low-latency edge computing and the desire to save network bandwidth by cutting down on data transmission between end IoT devices and the cloud or centralized data centers.
It is true that edge computing can be deployed in distributed servers, but this itself imposes an overhead and cost, as well as requiring a lot of local bandwidth in some cases.
The sticking point might appear to be power consumption, given that many IoT devices are deployed for long time periods in locations that are not convenient to visit frequently for battery changes. By a similar token, direct connections to the electricity grid are usually either unavailable or impractical, while having dedicated solar or wind panels would elevate costs per device too much in most use cases.
But this calculation ignores the high power consumption of radios, as we were reminded when talking recently to Henk Koopmans, CEO of R&D at Huawei UK. He actually cited the desire to boost battery life as a motivation for massive increases in IoT device processor capabilities, alongside the need to reduce latency and save on data transfers to the cloud.
“As many IoT devices are battery powered, often in hard-to-access places, replacing the batteries is time-consuming and affects the cost efficiency of the business model,” Koopmans noted. “Local processing reduces the need for wireless transmissions, the part of the device using the most energy, thereby greatly extending the battery life.”
But this assumes that such a hike in local processing power can be achieved affordably without offsetting the energy gains through cutting wireless transmission drastically. As Koopmans put it, “The challenge, therefore, is to come up with a new type of processor, capable of a level of artificial intelligence to enable the device to locally analyze the data and locally make decisions, while still retaining the very low power consumption level required for IoT devices.”
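As a rough illustration of the trade-off Koopmans is describing, a minimal back-of-envelope sketch in Python is below; every figure in it is an assumed, purely illustrative number (not anything from Huawei), and the point is only that shipping raw data over the radio can dwarf the cost of analysing it locally.

```python
# Back-of-envelope energy model for a battery-powered IoT node: send raw data
# to the cloud versus run inference locally and send only a small result.
# All constants are assumed, illustrative values, not measured figures.

RADIO_ENERGY_PER_BYTE_J = 2e-6      # assumed ~2 microjoules per byte over the air
LOCAL_INFERENCE_ENERGY_J = 5e-5     # assumed energy for one on-device inference

def energy_send_raw(raw_bytes: int) -> float:
    """Energy per event if every raw sample is shipped to the cloud."""
    return raw_bytes * RADIO_ENERGY_PER_BYTE_J

def energy_local(result_bytes: int = 8) -> float:
    """Energy per event if the device analyses the data and sends only the result."""
    return LOCAL_INFERENCE_ENERGY_J + result_bytes * RADIO_ENERGY_PER_BYTE_J

if __name__ == "__main__":
    raw = 4096  # assume 4 KB of raw sensor data per event
    print(f"send raw data:   {energy_send_raw(raw) * 1e3:.3f} mJ/event")
    print(f"local inference: {energy_local() * 1e3:.3f} mJ/event")
```

Under those assumed numbers the radio dominates by a couple of orders of magnitude, which is the whole argument for doing the analysis on the device.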
Koopmans, and Huawei, are convinced that such capability will be achieved through the emerging field of neuromorphic computing, or the third generation of AI as it is sometimes dubbed. The first generation of AI, sometimes called expert systems, emerged over 40 years ago in the 1970s in rule-based systems that emulated classical logical processes to draw reasoned conclusions within a specific, narrowly defined problem domain or field of expertise.
The poster child of this first generation was a medical diagnostic system called Mycin, developed at Stanford University in the early 1970s, which demonstrated the genre well but was limited in scope and gained little traction in the clinic. Indeed, it was initially confined to identifying the bacteria causing severe infections, such as meningitis, and then recommending appropriate antibiotics with dosages adjusted for the patient’s body weight.
Then, after a prolonged lull in the AI field, the second generation emerged during the noughties, brought on by the phenomenal advance in computational power that enabled application of sophisticated statistical regression at scale to very large data sets. This enabled pattern matching and identification at far higher resolution and granularity, leading to valuable applications in sensing and perception under the banners of neural networks and deep learning.
The ability to identify video streams on the basis of objects within individual frames, as well as to diagnose medical conditions such as some cancers automatically through analysis of X-ray or MRI scanned images, are examples of proven applications.
This second generation has been said to be modelled on the structure and processes of the human brain, but in reality it has just been loosely inspired by that. The neuroscience behind human cognition was just not well enough understood for direct translation into AI algorithms.
The mantra of mimicking the human brain is still being used for the third generation of AI, or neuromorphic computing, but with rather more humility, or perhaps reality. There is much talk of incorporating aspects of biological neural networks more directly into electronic circuits, but with admission that this is as much to provide tools for neuroscientists to develop and test theories of how human brains operate in more detail, as in turn to take inspiration from the brain in cognitive computing.
Indeed, this is already proving to be a two-way process, with neuroscientists working alongside cognitive computing specialists. It is already clear that even if neuromorphic computing does not mimic the brain exactly, an approach in which complex multilayered networks are embodied directly in the architecture of Very Large Scale Integration (VLSI) systems containing electronic analog circuits can greatly accelerate machine learning processes with higher efficiency and much reduced power consumption.
It can also mimic some of the flexibility or plasticity of the human brain, with ability to reconfigure rapidly in near real time to tackle problems more adaptively in response to feedback. Such a structure is also more resilient against failures in the system. Finally, there are also possible security gains, as Koopmans noted, by retaining personal data at a local level, rather than being sent to a cloud where it could be used in an unintended way.
A critical aspect of research therefore lies in investigating how the morphology, or structure, of individual neurons, circuits, applications, and large-scale architectures enables the desired level and type of computation, achieved through available fundamental components such as transistors and spintronic memories.
It could be said to be the usual suspects engaging in such research beyond Huawei, notably leading chipmakers. Intel has developed a chip called Loihi, which it describes as its fifth-generation self-learning neuromorphic research test chip, introduced in November 2017. This is a 128-core design based on a specialized architecture that is fabricated on 14-nanometer process technology. The key design feature is operation around spiking neural networks (SNNs), developed specifically for arranging logic elements to emulate neural networks as understood to exist in the brains of humans and indeed many animals.
The key property is adaptability or plasticity, the ability to learn from experience at the silicon level so that the overall networks become more capable, or smarter, over time. This is achieved by adding a concept that is known to exist in animal brains, that of activation whereby neurons fire only when their membrane electric charge exceeds a set threshold. At this point the neuron generates and transmits a signal which causes other neurons receiving it either to increase or decrease their own potentials as a result. This leads to coordinated activations and firings that can correspond with or execute cognitive processes.
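To make the threshold-firing mechanism just described concrete, here is a minimal leaky integrate-and-fire sketch in Python. The neuron model and every parameter value are simplified illustrations of the spike-and-threshold idea only, not a description of Loihi or any other real chip.

```python
import numpy as np

# A neuron integrates incoming charge, leaks a little each step, and only when
# its membrane potential crosses a threshold does it emit a spike; that spike
# then raises (excitatory weight) or lowers (inhibitory weight) the potentials
# of the neurons it connects to. All values below are arbitrary illustrations.

rng = np.random.default_rng(0)

LEAK = 0.95        # fraction of membrane potential retained each time step
THRESHOLD = 1.0    # firing threshold
N_STEPS = 100

v_pre = 0.0                       # membrane potential of the input neuron
v_post = np.zeros(2)              # two downstream neurons
weights = np.array([0.4, -0.3])   # one excitatory and one inhibitory connection

for t in range(N_STEPS):
    v_pre = LEAK * v_pre + rng.uniform(0.0, 0.3)   # leak plus random input current
    if v_pre >= THRESHOLD:                         # threshold crossed: the neuron fires
        v_post += weights                          # spike nudges downstream potentials
        v_pre = 0.0                                # reset after the spike
        print(f"t={t:3d}  spike!  downstream potentials = {v_post.round(2)}")
    v_post = LEAK * v_post                         # downstream neurons leak too
```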
It can be seen then that such a system is a valuable tool for neuroscientists to investigate hypotheses, as well as a vehicle for cognitive computing R&D. There are various research projects working with such ideas, including the European Human Brain Project, which has designed its own chip and is working on a project called ‘BrainScaleS-2’.
The key point for Koopmans is that the underlying concepts are being proven and that the prizes are huge. “By trying to figure out whether processors can in some way copy the functions of the brain would, even on a small scale, represent a major advance,” said Koopmans. “For example, by replacing the commonly accepted processor architecture, with its separation between CPU and memory, the interconnection between the two being a major bottleneck in processor speeds, with in-memory processing, would be revolutionary.” This is why much of the R&D effort is focused on this area.
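As a toy sketch of the in-memory-processing idea in that quote: in a memory crossbar the weight matrix stays resident in the array and the multiply-accumulate happens where the weights live, so nothing is shuttled across a CPU-memory bus per operation. The Python below is only a conceptual illustration (the class and values are mine), not a model of any particular device.

```python
import numpy as np

class Crossbar:
    """Conceptual compute-in-memory array: weights are programmed once and stay put."""

    def __init__(self, weights: np.ndarray):
        # The weights are "written" into the array (think conductances) a single
        # time; they are never fetched back over a bus for each operation.
        self.conductances = weights

    def matvec(self, voltages: np.ndarray) -> np.ndarray:
        # Applying input voltages to the rows produces summed currents on the
        # columns: an entire matrix-vector product in one in-place step.
        return self.conductances.T @ voltages

# Program a small 3x2 weight matrix, then run an input vector through it.
xbar = Crossbar(np.array([[0.2, 0.5],
                          [0.1, 0.3],
                          [0.4, 0.7]]))
print(xbar.matvec(np.array([1.0, 0.5, 0.2])))   # column "currents" = W^T v
```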
The biggest challenge facing this field is not so much at the level of technical design but scaling up for commercial deployment in the field. It is hard to overestimate the importance of, and dependence on, the testing and development ecosystems that have grown up around conventional chip development and manufacture. “Silicon processor chips are designed using CAD (computer-aided design) tools,” said Koopmans. “These tools don’t just allow for the design of the chip, they are also capable of simulating the performance. The investment in such tools is enormous because chip design complexity is increasing all the time.”
As a result, Koopmans admitted that despite the optimism, large scale deployment is a long way off. “What is clear is that the first step is to create the tools to both design and simulate these new chips, which can take years, and we’re still in the research stage.”
Though in fairness we know they have their Da Vinci AI architecture being used on the Ascend 910 and Atlas servers.
This article led me here, and whilst I can't date it as the role has been filled or removed, it's definitely something Huawei is working on imo.
Their bold.
Researcher - Neuromorphic Computing Algorithms at Huawei Technologies
Apply now for the Researcher - Neuromorphic Computing Algorithms job at Huawei Technologies in Zürich, Switzerland. (startup.jobs)
Researcher - Neuromorphic Computing Algorithms
Zürich, Switzerland
Huawei's vision is to enrich life through communication. We are a fast growing and leading global information and communications technology solutions provider. With our three business units Carrier, Enterprise and Consumer, we offer network infrastructure, cloud computing solutions and devices such as smartphones and tablet PCs.
Among our customers are 45 of the world's top 50 telecom operators, and one third of the world’s population uses Huawei technologies. Huawei is active in more than 170 countries and has over 180,000 employees of which more than 80,000 are engaged in research and development (R&D). With us you have the opportunity to work in a dynamic, multinational environment with more than 150 nationalities worldwide. We seek and reward talent. At Huawei, if you are dedicated to creativity, engagement of technical risks and delivery of target-driven results, your efforts will be rewarded with outstanding career prospects.
Our Research Center
With 18 sites across Europe and 1500 researchers, Huawei’s European Research Institute (ERI) oversees fundamental and applied technology research, academic research cooperation projects, and strategic technical planning across our network of European R&D facilities. Huawei’s ERI includes the new Zurich Research Center (ZRC), located in Zurich, Switzerland. A major element of ZRC is a new research group focused on fundamental research in the area of neuromorphic computing algorithms.
The group follows a target-oriented approach to neuromorphic computing research. Specifically, our aim is to exploit the computational properties that are unique to biological neurons and their neuromorphic emulations, to outperform conventional approaches to machine intelligence. For this new research group, we are currently looking for an outstanding Researcher in Neuromorphic Computing.
As a key member in our motivated and multicultural team, you will advance the state of the art in AI, showing in theory and applications some of the first concrete advantages of neuromorphic algorithms.
Your Responsibilities:
- Conduct fundamental research on next-generation AI algorithms, using neuromorphic principles
- Develop algorithms and theories that simultaneously advance both AI and neuroscience
- Propose new applications of neuromorphic computing
- Simulate and benchmark the algorithms against the state of the art
- Produce and present research papers at international conferences and journals
- Create and maintain collaborations with academic partners
Essential Requirements
- PhD in neuromorphic engineering, computational neuroscience, machine learning, or similar
- Outstanding publishing record of research papers in the relevant fields
Preferred Requirements
- Strong experience in simulating (spiking) neural networks is an advantage
- A clear view of the advantages and limitations of current neuromorphic algorithms and hardware is an advantage
Another buy at 12.30 for 60k approx! ??
I wonder who is accumulating millions of shares at 0.89, which I've pointed out before as it seems very strange. Yet again another 0.89 bought a little while ago. Be good to know who it is.
However, from a quick search, one of their patents from the last couple of years does not appear to be based around or include SNNs.
Though in fairness we know they have their Da Vinci AI architecture being used on the Ascend 910 and Atlas servers.
How Da Vinci compares to Akida...don't know.
And again at a 15 min interval, 25k this time.
Another buy at 12.30 for 60k approx! ??
Another buy at 12.30 for 60k approx! ??
Geez....do we need a running commentary on my buying patterns........I wish.
And again at a 15 min interval, 25k this time.
I particularly like the “co-location of processing and memory” statement. Let's hope that is a wee bit of nano-sized, low-power-consumption, non-volatile memory, wink wink!
No, but the BrainChip promo is interesting:
https://brainchip.com/tinyml-neuromorphic-engineering-forum-tuesday-september-27-2022-virtual/
"... He will highlight how hardware design choices such as the event-based computing paradigm, low-bit width precision computation, the co-location of processing and memory, distributed computation, and support for efficient, on-chip learning algorithms enable low-power, high-performance ML execution at the edge. Finally, Mankar will discuss how this architecture supports next-generation SNN algorithms such as binarized CNNs and algorithms that efficiently utilize temporal information to increase accuracy."
"Utilizing temporal information to increase accuracy" sounds like Anil may have splashed a bit of the secret sauce about - the discovery by Simon Thorpe's group that most of the relevant information is contained in the early-arriving spikes, leading to N-of-M coding.
It was more than just serendipity that PvdM was the only person in the world who recognized the practical implications of this and had the hardware to implement it.
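For anyone curious what N-of-M (rank-order) coding looks like in practice, here is a tiny toy sketch in Python: the input is encoded purely by which N of its M channels would spike first, on the assumption that stronger inputs fire earlier. The function and numbers are my own illustration of the idea, not BrainChip's or Thorpe's actual code.

```python
import numpy as np

def n_of_m_code(x: np.ndarray, n: int) -> np.ndarray:
    """Binary M-length code marking the n channels that would spike earliest."""
    spike_times = 1.0 / (x + 1e-9)           # stronger input -> earlier (smaller) spike time
    earliest = np.argsort(spike_times)[:n]   # indices of the first n spikes to arrive
    code = np.zeros_like(x)
    code[earliest] = 1.0                     # keep only the early arrivals, drop exact values
    return code

x = np.array([0.9, 0.05, 0.6, 0.01, 0.3, 0.7, 0.2, 0.1])
print(n_of_m_code(x, n=3))   # 1s at the three strongest (earliest-spiking) channels
```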
I only really took notice today. Timing of trades seems weird (4c above real-time price action). Can you cross at whatever price is agreed upon?
This has been going on for some time now, and I wonder if whoever is accumulating plans to sell to someone for 0.89, as it must be well into the millions, and that's what I've just seen. Maybe someone can download the last few months' spreadsheets to see how many large sales at 0.89 there have been.
"No I am afraid it’s too late. We’re becoming cardigan wearing blue chip investors and there is nothing anyone can do to stop it"Thanks for trying to cheer me up.
But alas that artless one million dollar buy at 86 cents had boring lawyer type written all over it.
I know the type just buying so he can mention it at the golf club.
A lawyer I knew years ago, when he had new offices built, rang up the law book company and ordered a metre of leather-bound law books with gold leaf and dark green on the spine to go on the bookcase behind his desk. He didn’t care what law it covered, as it was for appearance.
My son tried to cheer me up by saying don’t worry Dad it could have been a short covering his position.
But what short stands in the sunlight for twenty minutes before open and then just lets his order stand?
Apart from anything else twenty minutes in direct sunlight he would turn to dust.
No I am afraid it’s too late. We’re becoming cardigan wearing blue chip investors and there is nothing anyone can do to stop it.
Regards
FF
AKIDA BALLISTA
Going from punching oneself in the cock whilst reading the m.a.r.k.e.t SUN to stroking it whilst looking at our chip on the centrefold of Penthouse is going to be an easy change for some out there.
Next thing we will have posters boring us to tears with PE ratios, dividends, five-year income projections, share buy-back schemes and Nasdaq listings.
I’ll have to walk away.
My opinion only DYOR
FF
AKIDA BALLISTA
Showing your age there Frank. No Penthouse for Akida, it's gonna have an OnlyFans page and rake the $ in.
Going from punching oneself in the cock whilst reading the m.a.r.k.e.t SUN to stroking it whilst looking at our chip on the centrefold of Penthouse is going to be an easy change for some out there.
And more before 1.
And again at a 15 min interval, 25k this time.
Howdy All,
This is Bravo reporting for my 5th video-watching shift.
I wanted to share this little transcript that I wrote of what the dude from Qualcomm whose name escapes me (sorry) was saying about the Snapdragon Ride Vision stack, which I believe demonstrates one of the ways we fit in, that is to say one of the ways AKIDA fits into the Qualcomm picture, which IMO is in the Co-designed Computer Vision Software and Vision SoC including IMO in the next generation of Valeo’s ultrasonic sensors IMO.
IMO. OMO. HOMO SAPIEN.
( 1.03.05 )
"Moving on to the Snapdragon Ride Vision stack. So what we've done here is, when we partnered with Valeo, one thing was clear was that we needed to co-design the stack and the silicon from a power perspective, from an AI perspective, from being able to maximise the availability of the hardware that we needed from the software requirements that are coming in. And the big advantage that we see here is because this space is so continuously changing, we have the ability to work directly with customers, get new requirements from them, optimise the stack, but optimise it in a way that is best conducive for the hardware IP that we're building. So were able to optimize the utilization of the IP, the power requirements, so it is essentially and end to end system. Think of it like a modem where we have a lot of hardware and software coming together but it is co-designed.
The stack is now in it's 5th generation and it has actually been deployed by Volvo, by Mercedes, by Geely, by BYD, so there's a lot of miles in this stack, a lot of experience in this team. And we are working with many other OEM's including BMW for the next generation. The platform that Ride Vision runs on is an open platform, so wile we provide a vision stack, if you want to be able to bring your own parking stack or run your own drive policy or bring driver monitoring that is something that the platform allows. So it allows us to have an open platform while providing computer vision, it improves the overall (mumble, mumble ??) of the system, it is highly cost optimized for customers looking to select this platform."