BRN Discussion Ongoing

alwaysgreen

Top 20
Here we go. Standard ASX bullshit with BRN. Up 7% one day then back down the next. 🤦
 
  • Like
  • Sad
Reactions: 8 users
Article from last year was interesting on Huawei's thoughts on neuromorphic.

Though we're not mentioned, it appears Huawei finally cottoned on to a couple of the key aspects - highlighted.



30 March 2021

Huawei embraces neuromorphic computing for IoT


By Phil Hunter
The convention of IoT devices being lightweight in processing capability is being turned on its head by the rise of neuromorphic computing.
The aim is to mimic the plasticity of the human brain in a new generation of chips optimized for data analytics, employing algorithms under the banners of AI and machine learning. This is being driven by several factors, including demand for ultra-low latency edge computing and desire to save network bandwidth by cutting down on data transmission between end IoT devices and the cloud or centralized data centers.
It is true that edge computing can be deployed in distributed servers, but this itself imposes an overhead and cost, as well as requiring a lot of local bandwidth in some cases.

The sticking point might appear to be power consumption, given that many IoT devices are deployed for long time periods in locations that are not convenient to visit frequently for battery changes. By a similar token, direct connections to the electricity grid are usually either unavailable or impractical, while having dedicated solar or wind panels would elevate costs per device too much in most use cases.

But this calculation ignores the high power consumption of radios, as we were reminded when talking recently to Henk Koopmans, CEO of R&D at Huawei UK. He actually cited the desire to boost battery life as a motivation for massive increases in IoT device processor capabilities, alongside need to reduce latency and save on data transfers to the cloud.

“As many IoT devices are battery powered, often in hard-to-access places, replacing the batteries is time-consuming and affects the cost efficiency of the business model,” Koopmans noted. “Local processing reduces the need for wireless transmissions, the part of the device using the most energy, thereby greatly extending the battery life.”

But this assumes that such a hike in local processing power can be achieved affordably without offsetting the energy gains through cutting wireless transmission drastically. As Koopmans put it, “The challenge, therefore, is to come up with a new type of processor, capable of a level of artificial intelligence to enable the device to locally analyze the data and locally make decisions, while still retaining the very low power consumption level required for IoT devices.”

Koopmans, and Huawei, are convinced that such capability will be achieved through the emerging field of neuromorphic computing, or the third generation of AI as it is sometimes dubbed. The first generation of AI, sometimes called expert systems, emerged over 40 years ago in the 1970s in rule-based systems that emulated classical logical processes to draw reasoned conclusions within a specific, narrowly defined problem domain or field of expertise.

The poster child of this first generation was a medical diagnostic system called Mycin developed at Stanford University in the early 1970s, which demonstrated the genre well but was limited in scope and gained little traction in the clinic. Indeed, it was initially confined to identifying bacteria causing severe infections, such as meningitis, and then recommending appropriate antibiotics with dosages adjusted for the patient’s body weight.

Then, after a prolonged lull in the AI field, the second generation emerged during the noughties, brought on by the phenomenal advances in computational power that enabled application of sophisticated statistical regression at scale to very large data sets. This enabled pattern matching and identification at far higher resolution and granularity, leading to valuable applications in sensing and perception under the banners of neural networks and deep learning.

The ability to identify video streams on the basis of objects within individual frames, as well as to diagnose medical conditions such as some cancers automatically through analysis of X-ray or MRI scanned images, are examples of proven applications.

This second generation has been said to be modelled on the structure and processes of the human brain, but in reality it has just been loosely inspired by that. The neuroscience behind human cognition was just not well enough understood for direct translation into AI algorithms.
The mantra of mimicking the human brain is still being used for the third generation of AI, or neuromorphic computing, but with rather more humility, or perhaps reality. There is much talk of incorporating aspects of biological neural networks more directly into electronic circuits, but with admission that this is as much to provide tools for neuroscientists to develop and test theories of how human brains operate in more detail, as in turn to take inspiration from the brain in cognitive computing.

Indeed, this is already proving to be a two way process with neuroscientists working alongside cognitive computing specialists. It is already clear that even if biomorphic computing does not mimic the brain exactly, an approach in which complex multilayered networks are embodied directly in the architecture of Very Large Scale Integration (VLSI) systems containing electronic analog circuits can greatly accelerate machine learning processes with higher efficiency and much reduced power consumption.

It can also mimic some of the flexibility or plasticity of the human brain, with ability to reconfigure rapidly in near real time to tackle problems more adaptively in response to feedback. Such a structure is also more resilient against failures in the system. Finally, there are also possible security gains, as Koopmans noted, by retaining personal data at a local level, rather than being sent to a cloud where it could be used in an unintended way.

A critical aspect of research therefore lies in investigating how the morphology, or structure, of individual neurons, circuits, applications, and large-scale architectures enables the desired level and type of computation, achieved through available fundamental components such as transistors and spintronic memories.

It could be said to be the usual suspects engaging in such research beyond Huawei, notably leading chipmakers. Intel has developed a chip called Loihi, which it describes as its fifth generation self-learning neuromorphic research test chip, introduced in November 2017. This is a 128-core design based on a specialized architecture that is fabricated on 14-nanometer process technology. The key design feature is operation around spiking neural networks (SNNs), developed specifically for arranging logic elements to emulate neural networks as understood to exist in brains of humans and indeed many animals.

The key property is adaptability or plasticity, the ability to learn from experience at the silicon level so that the overall networks become more capable, or smarter, over time. This is achieved by adding a concept that is known to exist in animal brains, that of activation whereby neurons fire only when their membrane electric charge exceeds a set threshold. At this point the neuron generates and transmits a signal which causes other neurons receiving it either to increase or decrease their own potentials as a result. This leads to coordinated activations and firings that can correspond with or execute cognitive processes.

It can be seen then that such a system is a valuable tool for neuroscientists to investigate hypotheses, as well as a vehicle for cognitive computing R&D. There are various research projects working with such ideas, including the European Human Brain Project, which has designed its own chip and is working on a project called ‘BrainScaleS-2’.

The key point for Koopmans is that the underlying concepts are being proven and that the prizes are huge. “By trying to figure out whether processors can in some way copy the functions of the brain would, even on a small scale, represent a major advance,” said Koopmans. “For example, by replacing the commonly accepted processor architecture, with its separation between CPU and memory, the interconnection between the two being a major bottleneck in processor speeds, with in-memory processing, would be revolutionary.” This is why much of the R&D effort is focused on this area.

The biggest challenge facing this field is not so much at the level of technical design but scaling up for commercial deployment in the field. It is hard to overestimate the importance of, and dependence on, the testing and development ecosystems that have grown up around conventional chip development and manufacture. “Silicon processor chips are designed using CAD (computer-aided design) tools,” said Koopmans. “These tools don’t just allow for the design of the chip, they are also capable of simulating the performance. The investment in such tools is enormous because chip design complexity is increasing all the time.”

As a result, Koopmans admitted that despite the optimism, large scale deployment is a long way off. “What is clear is that the first step is to create the tools to both design and simulate these new chips, which can take years, and we’re still in the research stage.”
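
For anyone who wants the threshold-and-fire mechanism described in the article in concrete terms, here is a rough Python sketch of a generic leaky integrate-and-fire neuron. It's a textbook toy model only, not how Akida or Loihi actually implement spiking - the leak, threshold and input values are made up for illustration.

# Toy leaky integrate-and-fire neuron: a generic textbook model, not any
# vendor's implementation. All parameter values are illustrative only.

def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input with a leak each time step; emit a spike (1) when the
    membrane potential crosses the threshold, then reset the potential."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:              # threshold crossing -> fire
            spikes.append(1)
            potential = reset
        else:
            spikes.append(0)
    return spikes

# A weak constant drive needs several steps to accumulate enough charge
# before the neuron fires, so output activity (and hence work) stays sparse.
print(lif_neuron([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]

The point of the toy is that the neuron only produces output, and only triggers downstream work, when enough evidence has accumulated, which is where the event-driven power savings come from.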
 
  • Like
  • Fire
  • Love
Reactions: 19 users

TechGirl

Founding Member
Thanks for trying to cheer me up.

But alas that artless one million dollar buy at 86 cents had boring lawyer type written all over it.

I know the type just buying so he can mention it at the golf club.

A lawyer I knew years ago, when he had new offices built, rang up the law book company and ordered a metre of leather-bound law books with gold leaf and dark green on the spine to go on the bookcase behind his desk. Didn’t care what law it covered as it was for appearance.

My son tried to cheer me up by saying don’t worry Dad it could have been a short covering his position.

But what short stands in the sunlight for twenty minutes before open and then just lets his order stand.

Apart from anything else twenty minutes in direct sunlight he would turn to dust.

No I am afraid it’s too late. We’re becoming cardigan wearing blue chip investors and there is nothing anyone can do to stop it.

Regards
FF

AKIDA BALLISTA 😞

Please cheer up FF :giggle:

Don't forget BrainChip's nose is as good as these Doggies :)

Happy Golden Retriever GIF by Cameo


cute dog GIF


Dog Bulldog GIF by MOODMAN
 
  • Like
  • Haha
Reactions: 13 users

Alpstein

Emerged
Hi all,
When is the next 4C due, and the half-year financials?
 
  • Like
  • Haha
Reactions: 2 users
[Quoted post: “Huawei embraces neuromorphic computing for IoT” article, as posted above.]
This article led me here & whilst I can't date it, as the role has been filled or removed, it's definitely something Huawei is working on imo.

Their bold.



Researcher - Neuromorphic Computing Algorithms

Zürich, Switzerland


Huawei's vision is to enrich life through communication. We are a fast growing and leading global information and communications technology solutions provider. With our three business units Carrier, Enterprise and Consumer, we offer network infrastructure, cloud computing solutions and devices such as smartphones and tablet PCs.

Among our customers are 45 of the world's top 50 telecom operators, and one third of the world’s population uses Huawei technologies. Huawei is active in more than 170 countries and has over 180,000 employees of which more than 80,000 are engaged in research and development (R&D). With us you have the opportunity to work in a dynamic, multinational environment with more than 150 nationalities worldwide. We seek and reward talent. At Huawei, if you are dedicated to creativity, engagement of technical risks and delivery of target-driven results, your efforts will be rewarded with outstanding career prospects.

Our Research Center
With 18 sites across Europe and 1500 researchers, Huawei’s European Research Institute (ERI) oversees fundamental and applied technology research, academic research cooperation projects, and strategic technical planning across our network of European R&D facilities. Huawei’s ERI includes the new Zurich Research Center (ZRC), located in Zurich, Switzerland. A major element of ZRC is a new research group focused on fundamental research in the area of neuromorphic computing algorithms.

The group follows a target-oriented approach to neuromorphic computing research. Specifically, our aim is to exploit the computational properties that are unique to biological neurons and their neuromorphic emulations, to outperform conventional approaches to machine intelligence. For this new research group, we are currently looking for an outstanding Researcher in Neuromorphic Computing.
As a key member in our motivated and multicultural team, you will advance the state of the art in AI, showing in theory and applications some of the first concrete advantages of neuromorphic algorithms.
Your Responsibilities:

  • Conduct fundamental research on next-generation AI algorithms, using neuromorphic principles
  • Develop algorithms and theories that simultaneously advance both AI and neuroscience
  • Propose new applications of neuromorphic computing
  • Simulate and benchmark the algorithms against the state of the art
  • Produce and present research papers at international conferences and journals
  • Create and maintain collaborations with academic partners
Requirements
Essential Requirements

  • PhD in neuromorphic engineering, computational neuroscience, machine learning, or similar
  • Outstanding publishing record of research papers in the relevant fields
Preferred Requirements
  • Strong experience in simulating (spiking) neural networks is an advantage
  • A clear view of the advantages and limitations of current neuromorphic algorithms and hardware is an advantage
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Mugen74

Regular
Can someone please explain this cross trade for 160k?
 

Attachments

  • Screenshot_20220928-123154_CommSec.jpg (485.7 KB)
  • Like
Reactions: 2 users
Whether this article is correct or not, once again BrainChip strategically has it covered. Right places at the right time:

Asia's richest man sees growing isolation for China​

Diksha Madhok - Yesterday 8:22 pm

Indian billionaire Gautam Adani says that China “will feel increasingly isolated” and the “foremost champion of globalization” would find it hard to bounce back from a period of economic weakness.

Speaking at a conference in Singapore on Tuesday, Adani said “increasing nationalism, supply chain risk mitigation, and technology restrictions,” as well as resistance to Beijing’s huge Belt and Road initiative, would impact China’s global role.

Asia’s richest man said that “housing and credit risks” in the world’s second largest economy were also “drawing comparisons with what happened to the Japanese economy during the ‘lost decade’ of the 1990s.”

Adani was speaking less than a month after the business mogul became the world’s third richest man, according to the Bloomberg Billionaires Index. He is the first Asian to take that spot.

The founder of the eponymous Adani Group controls companies ranging from ports to power.

While pessimistic about China, Adani remains bullish about his own country, saying that India is “one of the few relatively bright spots from a political, geostrategic, and market perspective.”

He anticipates India to become the world’s third largest economy by 2030, with “the largest consuming middle class the world will ever see.”
Some technology firms looking to reduce their dependence on Chinese manufacturing already see India as an attractive alternative.

On Monday, Apple announced that it has started making its new iPhone 14 in India, as the technology giant looks to diversify its supply chain. While the company manufactures the bulk of its products in China, it has decided to start producing its latest devices in India much earlier than with previous generations.

Businesses may have to move away from China not just because of its strict Covid restrictions, which have been hurting supply chains for months now, but also because of rising tensions between Washington and Beijing over Taiwan.

The US government ordered two of America’s top chipmakers to stop selling high-performance chips to China earlier this month. And, last week, leaders of America’s biggest banks said they could exit China if it ever attacks Taiwan.

Adani also mentioned the challenges facing the United Kingdom, and countries in the European Union, because of the war in Ukraine and Brexit.
“While I expect all these economies will readjust over time — and bounce back — the friction of the bounce-back looks far harder this time,” he said.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Attachments

  • 1664332196776.png (68 bytes)
  • Like
  • Wow
  • Fire
Reactions: 19 users
[Quoted post: the Huawei “Researcher - Neuromorphic Computing Algorithms” job listing, as posted above.]
Though in fairness we know they have their Da Vinci AI architecture being used on the Ascend 910 and Atlas servers.

How Da Vinci compares to Akida...don't know.
 
  • Like
Reactions: 6 users

Diogenese

Top 20
[Quoted post: “Huawei embraces neuromorphic computing for IoT” article, as posted above.]

Hi FMF,

It seems that the neuroscientists' interest in closely emulating the architecture of neurons and the structure of the brain has had considerable influence on analog NNs.

"There is much talk of incorporating aspects of biological neural networks more directly into electronic circuits, but with admission that this is as much to provide tools for neuroscientists to develop and test theories of how human brains operate in more detail, as in turn to take inspiration from the brain in cognitive computing.

Indeed, this is already proving to be a two way process with neuroscientists working alongside cognitive computing specialists. It is already clear that even if biomorphic computing does not mimic the brain exactly, an approach in which complex multilayered networks are embodied directly in the architecture of Very Large Scale Integration (VLSI) systems containing electronic analog circuits can greatly accelerate machine learning processes with higher efficiency and much reduced power consumption.
...
A critical aspect of research therefore lies in investigating how the morphology, or structure, of individual neurons, circuits, applications, and large-scale architectures enables the desired level and type of computation, achieved through available fundamental components such as transistors and spintronic memories
."

BrainChip's Akida digital NN does not seek to directly replicate the architecture or morphology of the brain. Instead it implements the function of the brain in a manner which is both efficient and flexible.

So, while it is legitimate for neuroscientists to seek as direct an architectural emulation of the brain structure as possible, in the field of sensor processing the objective is to emulate, or indeed surpass, the function of the brain in recognition and learning.

Akida does not make any attempt to reproduce the architecture or morphology of neurons, synapses, soma, etc. However, it does a very good job of implementing their function in a practical embodiment.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

Deleted member 118

Guest
I wonder who is accumulating millions of shares at 0.89, which I’ve pointed out before as it seems very strange. Yet again, another 0.89 buy a little while ago. Be good to know who it is.

F22FDE51-2C0E-4851-B195-0198D6070C0E.png
 
  • Like
Reactions: 13 users

Mugen74

Regular
I wonder who is accumulating millions of shares at 0.89, which I’ve pointed out before as it seems very strange. Yet again, another 0.89 buy a little while ago. Be good to know who it is.

View attachment 17556
Another buy at 12.30 for 60k approx!??
 
  • Like
Reactions: 7 users
Though in fairness we know they have their Da Vinci AI architecture being used on the Ascend 910 and Atlas servers.

How Da Vinci compares to Akida...don't know.
However, a quick search turned up one of their patents from the last couple of years, and it does not appear to be based around or include SNNs :unsure:



Neural network computing chip and computing method


The embodiment of the application is mainly applied to a neural network system, wherein the neural network can also be called an Artificial Neural Network (ANN) or a neural network, and in the field of machine learning and cognitive science, the neural network is a mathematical model or a computational model simulating the structure and function of a biological neural network (the central nervous system of an animal, particularly the brain) and is used for estimating or approximating a function. The artificial neural network may include a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), a multilayer perceptron (MLP), and the like.
 
  • Like
  • Love
Reactions: 7 users

Mugen74

Regular
  • Like
Reactions: 6 users

Deleted member 118

Guest
Another buy at 12.30 for 60k approx!??

This has been going on for some time now, and I wonder who is accumulating to sell on to someone at 0.89, as it must be well into the millions - and that’s what I’ve just seen. Maybe someone can download the last few months’ spreadsheets to see how many large sales at 0.89 there have been.
 
  • Like
  • Thinking
Reactions: 4 users
And again at 15 min interval 25k this time
Geez....do we need a running commentary on my buying patterns........I wish :ROFLMAO::(
 
  • Haha
  • Like
Reactions: 13 users

Slymeat

Move on, nothing to see.
No, but the BrainChip promo is interesting:
https://brainchip.com/tinyml-neuromorphic-engineering-forum-tuesday-september-27-2022-virtual/
"... He will highlight how hardware design choices such as the event-based computing paradigm, low-bit width precision computation, the co-location of processing and memory, distributed computation, and support for efficient, on-chip learning algorithms enable low-power, high-performance ML execution at the edge. Finally, Mankar will discuss how this architecture supports next-generation SNN algorithms such as binarized CNNs and algorithms that efficiently utilize temporal information to increase accuracy."

"Utilizing temporal information to increase accuracy" sounds like Anil may have splashed a bit of the secret sauce about - the discovery by Simon Thorpe's group that most of the relevant information is contained in the early-arriving spikes, leading to N-of-M coding.



It was more than just serendipity that PvdM was the only person in the world who recognized the practical implications of this and had the hardware to implement it.
I particularly like the “co-location of processing and memory” statement. Let’s hope that is a wee bit of nano sized, low power consumption, non-volatile memory—wink wink!
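
To make the "early-arriving spikes carry most of the information" point above a bit more concrete, here is a rough Python sketch of rank-order style N-of-M coding. It's a toy illustration only - the encoding, the 1/rank weighting and the input values are my own assumptions, not BrainChip's actual scheme.

# Toy rank-order / N-of-M coding sketch. Purely illustrative; the weighting
# scheme and values are assumptions, not Akida's real encoding.

def rank_order_encode(intensities, n_of_m):
    """Stronger inputs spike earlier; keep only the first n_of_m spikes and
    return the channel indices in firing order (earliest first)."""
    order = sorted(range(len(intensities)),
                   key=lambda i: intensities[i], reverse=True)
    return order[:n_of_m]

def rank_order_decode(spike_order, num_channels):
    """Coarsely reconstruct the input: earlier spikes get larger weights,
    channels that never fired stay at zero."""
    decoded = [0.0] * num_channels
    for rank, channel in enumerate(spike_order):
        decoded[channel] = 1.0 / (rank + 1)  # simple 1/rank weighting
    return decoded

pixels = [0.1, 0.9, 0.4, 0.7, 0.05, 0.6]      # made-up sensor intensities
spikes = rank_order_encode(pixels, n_of_m=3)   # -> [1, 3, 5]
print(spikes, rank_order_decode(spikes, len(pixels)))

Even after throwing away half the spikes, the first three still tell you which channels mattered most and in what order, which is the intuition behind dropping the late arrivals to save time and power.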
 
  • Like
  • Haha
  • Fire
Reactions: 7 users

Mugen74

Regular
This has been going on for some time now, and I wonder who is accumulating to sell on to someone at 0.89, as it must be well into the millions - and that’s what I’ve just seen. Maybe someone can download the last few months’ spreadsheets to see how many large sales at 0.89 there have been.
I only really took notice today. Timing of trades seems weird (4c above real-time price action). Can you cross at whatever price is agreed upon?
 
  • Thinking
Reactions: 2 users

Proga

Regular
[Quoted post: FF’s “cardigan wearing blue chip investors” post, as quoted above.]
"No I am afraid it’s too late. We’re becoming cardigan wearing blue chip investors and there is nothing anyone can do to stop it"

What's this we white man? 🤣

Agree. Shorts usually come from off the boards.

I read with interest your recent posts trying to educate the uninitiated. Should have used the old saying "it's the economy stupid". I think they're about to learn. The consensus on Bloomberg last night was a 97% chance of a recession.

The Chancellor spooked financial markets by announcing the biggest tax giveaway since 1972 at last Friday’s “mini budget”. Mr Kwarteng promised £45 billion ($74 billion) of tax cuts and said further borrowing would be used to pay for it. Most of the cuts are for the rich.

The IMF and the Europeans have already asked them to reconsider the tax cuts for the rich. It never ends well when monetary and fiscal policy clash. The gilt market has gone bonkers. We are about to do the same thing here unless Labor reverses the stage 3 tax cuts for the rich. It's only going to force the RBA to increase interest rates higher for longer, which will initially hurt the poor and middle class first. Their paltry tax cuts won't come close to matching their mortgage repayment increases. Then the rich will start going broke as their businesses collapse. It is a vicious cycle, and for what? So one party can win the next election, only to be turfed out the next when the house of cards comes crashing down and we all suffer.

I laughed when watching Insiders on Sunday when they interviewed that crazy woman from the LNP and she talked about how they have to be careful with a Federal ICAC so it doesn't discourage good people from running for parliament. The entire nation wants a strong ICAC to discourage 90% of the current incumbents, and people like them, from running. She has to know, because the Teals wiped them out in the bluest of blue seats and no party claimed a national primary vote over 33%.

Hopefully this cheers you up after my and your son's earlier failures.
 
Last edited:
  • Like
  • Haha
Reactions: 7 users
Next thing we will have posters boring us to tears with PE ratios, dividends, five year income projections, share buy back schemes and Nasdaq listings.

I’ll have to walk away.😂🤣😂😎🤡

My opinion only DYOR
FF

AKIDA BALLISTA
Going from punching oneself in the cock whilst reading the m.a.r.k.e.t SUN to stroking it whilst looking at our chip on the centrefold of Penthouse is going to be an easy change for some out there.
 
  • Haha
  • Like
Reactions: 5 users

Mugen74

Regular
Going from punching oneself in the cock whilst reading the m.a.r.k.e.t SUN to stroking it whilst looking at our chip on the centrefold of Penthouse is going to be an easy change for some out there.
Showing your age there Frank. No Penthouse for Akida, it's gonna have an OnlyFans page and rake the $ in 🤣
 
  • Haha
  • Like
  • Fire
Reactions: 12 users