BRN Discussion Ongoing

Diogenese

Top 20
I believe Dolci is a he not a she say no more
How does that operation work?
 
  • Haha
  • Thinking
  • Like
Reactions: 16 users

Diogenese

Top 20
I believe Dolci is a he not a she say no more
I did have a sneaking suspicion that a chest-bursting alien may have taken occupation.
 
  • Haha
  • Like
  • Love
Reactions: 12 users

Townyj

Ermahgerd
I did have a sneaking suspicion that a chest-bursting alien may have taken occupation.
[GIF: Ridley Scott, by 20th Century Fox Home Entertainment]
 
  • Haha
  • Fire
Reactions: 8 users
Can't recall if posted previously but anyway.

Great recent article from a Renesas Product Marketing Specialist writing about neuromorphic, Brainchip, Akida and reiterating comments by Renesas EVP Chittipeddi.

Getting the message out there 🔥


Neuromorphic devices in TinyML​

November 15th 2022

Author: Eldar Sido, Product Marketing Specialist, Renesas Electronics

Neural networks (NNs) were inspired by the brain, and the use of neuroscience terminology (neurons and synapses) to explain them has long been a source of complaint for neuroscientists, because the current generation of neural networks operates very differently from the way the brain works. Despite the inspiration, the general structure, neural computations, and learning techniques of today's second-generation neural networks differ greatly from those of the brain. This gap prompted neuroscientists to work on a third generation of networks that behave more like the brain, called Spiking Neural Networks (SNNs), together with hardware capable of running them: the neuromorphic architecture.

Spiking neural networks

SNNs are a type of artificial neural network (ANN) that is more closely inspired by the brain than its second-generation counterpart, with one key difference: SNNs are spatiotemporal NNs, that is, they take time into account in their operation. SNNs operate on discrete spikes governed by a differential equation representing various biological processes. A neuron fires once its membrane potential reaches a critical ("firing") threshold, which happens when spikes arrive at that neuron in particular time sequences. Analogously, the brain consists of roughly 86 billion computational units called neurons, which receive information from other neurons via dendrites; once the inputs exceed a certain threshold, the neuron fires and sends an electrical pulse through a synapse, and the synaptic weight controls how strongly the pulse propagates to the next neuron. Unlike other artificial neural networks, where information traditionally propagates across layers at a pace dictated by the system clock, SNN neurons fire asynchronously across the layers of the network and their spikes arrive at different times. The spatiotemporal property of SNNs, together with the discontinuous nature of spikes, means that models can be more sparse, with neurons connecting only to relevant neurons and using time as a variable, allowing information to be encoded more densely than with an ANN's traditional binary encoding. This makes SNNs more computationally powerful and more efficient.
Figure 1. Difference between conventional ANN and SNN.
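To make the spiking dynamics described above concrete, here is a minimal leaky integrate-and-fire (LIF) neuron sketch in Python. It is only an illustration of the "membrane potential integrates input until a firing threshold is reached" idea; the function name and parameter values are arbitrary and it is not BrainChip's or Renesas's implementation.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Toy LIF neuron: integrate a current trace and emit spike times
    whenever the membrane potential crosses the firing threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Discretized membrane equation: dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:                  # firing threshold reached
            spike_times.append(step * dt)  # the spike *time* carries the information
            v = v_reset                    # reset membrane potential after the spike
    return spike_times

# A suprathreshold constant input produces a regular spike train
print(lif_neuron(np.full(200, 1.5)))
```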

The asynchronous behavior of SNNs, together with the need to solve differential equations, is computationally demanding on traditional hardware, so a new architecture had to be developed. This is where the neuromorphic architecture comes in.

Neuromorphic architecture

Neuromorphic architecture is a non-von Neumann architecture inspired by the brain, made up of neurons and synapses. In neuromorphic computers, data processing and storage occur in the same place, alleviating the von Neumann bottleneck that caps the performance of traditional architectures, which must move data between memory and processing units at relatively slow speeds.

Furthermore, the neuromorphic architecture natively supports SNNs and accepts spikes as inputs, allowing information to be encoded in spike arrival time, magnitude, and shape. Key features of neuromorphic devices therefore include inherent scalability, event-based computation, and stochasticity, since firing neurons can have an element of randomness. This makes the neuromorphic architecture attractive for its ultra-low-power operation, typically orders of magnitude below that of traditional computing systems.
Figure 2. Von Neumann architecture vs. neuromorphic architecture (non-von Neumann).
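As a toy example of how information can live in spike timing rather than in binary values (one of several encoding schemes; real neuromorphic front ends use their own), here is a simple latency-encoding sketch: stronger inputs fire earlier, and zero inputs generate no event at all. The function name and time scale are illustrative assumptions.

```python
import numpy as np

def latency_encode(values, t_max=100.0):
    """Map normalized inputs in [0, 1] to spike times in [0, t_max];
    the strongest inputs fire first, zero inputs never fire (inf = no spike)."""
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    return np.where(values > 0, t_max * (1.0 - values), np.inf)

print(latency_encode([0.9, 0.5, 0.0, 1.0]))  # -> [10. 50. inf  0.]
```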

Neuromorphic Market Forecast​

Technologically, neuromorphic devices have the potential to play an important role in the coming era of edge and endpoint artificial intelligence. To understand the industry's expected demand, it is worth looking at research forecasts. According to a report by Sheer Analytics & Insights, the global neuromorphic computing market is expected to reach $780 million by 2028, a CAGR of 50.3% [1]. Mordor Intelligence, on the other hand, expects the market to reach $366 million by 2026 at a CAGR of 47.4% [2], and more market research indicating a similar rise can be found online. While the forecast figures are not consistent with each other, one thing is consistent: demand for neuromorphic devices is expected to increase dramatically in the coming years, and market research firms expect industries such as industrial, automotive, mobile, and medical to adopt neuromorphic devices for a variety of applications.

Neuromorphic TinyML​

Since TinyML (Tiny Machine Learning) is concerned with running ML and NNs on devices with strict memory and processor constraints, such as microcontrollers (MCUs), incorporating a neuromorphic core for TinyML use cases is a natural step, as it brings several distinct advantages.

Neuromorphic devices are event-based processors that operate only on non-zero events. Event-based convolutions and dot products are significantly less computationally expensive, since zeros are not processed, and event-based convolution performance improves further with more zeros in the filter channels or kernels. This, along with activation functions such as ReLU being centered around zero, gives event-based processors their inherent activation-sparsity property, reducing the effective number of MAC operations required.
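A quick sketch of why that sparsity saves work: an "event-driven" dot product only visits non-zero activations, so the multiply-accumulate (MAC) count scales with the number of events rather than with the layer size. This is purely illustrative and not Akida's actual datapath; the function name and example values are invented for the demo.

```python
import numpy as np

def event_driven_dot(activations, weights):
    """Accumulate weight contributions only for non-zero (event) activations."""
    events = np.nonzero(activations)[0]              # indices that actually fired
    acc = sum(activations[i] * weights[i] for i in events)
    return acc, len(events)                          # result, and MACs actually performed

acts = np.array([0.0, 0.0, 0.7, 0.0, 1.2, 0.0, 0.0, 0.3])  # ReLU-style sparse output
w = np.random.randn(8)
result, macs = event_driven_dot(acts, w)
print(f"{macs} MACs instead of {len(acts)}")          # 3 MACs instead of 8
```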

In addition, because of the way neuromorphic devices process information, more aggressive quantization, such as 1-, 2-, and 4-bit, can be used compared to the conventional 8-bit quantization of ANNs.
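To show what low-bit weights mean in practice, here is a generic symmetric uniform quantizer sketch. It is a minimal example of the trade-off (fewer bits, coarser weights, smaller memory footprint), not the specific quantization scheme used by Akida or any particular toolchain; in a real deployment two 4-bit values would also be packed per byte.

```python
import numpy as np

def quantize(weights, bits):
    """Symmetric, per-tensor uniform quantization of a weight array to `bits` bits."""
    q_max = 2 ** (bits - 1) - 1                    # 7 for 4-bit, 127 for 8-bit
    scale = np.max(np.abs(weights)) / q_max
    q = np.clip(np.round(weights / scale), -q_max - 1, q_max).astype(np.int8)
    return q, scale                                # dequantize with q * scale

w = np.random.randn(1000).astype(np.float32)
for bits in (8, 4):
    q, s = quantize(w, bits)
    print(f"{bits}-bit mean abs error: {np.mean(np.abs(w - q * s)):.4f}")
```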

Also, since the SNN is implemented directly in hardware, neuromorphic devices (such as Brainchip's Akida) have a unique On-Edge learning capability. This is not practical on conventional devices, which only simulate a neural network on a von Neumann architecture, making On-Edge learning computationally expensive with large memory overheads that fall outside the budget of TinyML systems. Moreover, integers do not provide enough range to train an NN model accurately, so training with 8 bits is currently not feasible on traditional architectures. On traditional architectures, some edge-learning implementations using classical machine learning algorithms (auto-encoders, decision trees) have reached production for simple real-time analytics use cases, while on-edge training of NNs is still under investigation.
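For intuition, here is a hedged sketch of the *kind* of on-device learning described above: a new class is added by updating lightweight per-class statistics on top of frozen features, with no backpropagation. This is a generic nearest-class-mean example I am using as an illustration; it is NOT BrainChip's Akida API or its actual learning rule, and the class and method names are invented.

```python
import numpy as np

class EdgeFewShotClassifier:
    """Nearest-class-mean classifier over frozen feature vectors: adding a class
    only updates a running mean, so no gradients or large buffers are needed."""

    def __init__(self, feature_dim):
        self.feature_dim = feature_dim
        self.prototypes = {}   # label -> mean feature vector
        self.counts = {}       # label -> number of examples seen

    def learn(self, label, feature):
        n = self.counts.get(label, 0)
        proto = self.prototypes.get(label, np.zeros(self.feature_dim))
        self.prototypes[label] = (proto * n + feature) / (n + 1)  # incremental mean
        self.counts[label] = n + 1

    def predict(self, feature):
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(feature - self.prototypes[lbl]))

clf = EdgeFewShotClassifier(feature_dim=64)
clf.learn("alice", np.random.randn(64))   # a few "snapshots" of each person
clf.learn("bob", np.random.randn(64))
print(clf.predict(np.random.randn(64)))
```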

In summary, the advantages of using neuromorphic devices and SNNs On-Edge are:
– Ultra-low power consumption (millijoules to microjoules per inference)
– Lower MAC requirements compared to conventional NNs
– Less parameter memory usage compared to conventional NNs
– On-Edge learning capabilities

Neuromorphic TinyML Use Cases​

With all that said, microcontrollers with neuromorphic cores can excel in use cases across industries thanks to their distinctive edge-learning features, such as:
  • In anomaly detection applications for existing industrial equipment, where using the cloud to train a model is inefficient: adding an endpoint AI device to the equipment and training at the edge allows easy scalability, since equipment ages differently from machine to machine, even for units of the same model.
  • In robotics, as time goes by, the joints of the robotic arms tend to wear out, misalign and stop working as needed. Retuning the driver at the edge without human intervention mitigates the need to call a professional, reduces downtime, and saves time and money.
  • In facial recognition applications, a user would conventionally have to add their face to the dataset and have the model retrained in the cloud. With just a few snapshots of a person's face, a neuromorphic device can instead identify the end user through On-Edge learning, keeping the user's data secure on the device and giving a smoother experience. This can be used in cars, where different users have different preferences for seat position, climate control, etc.
  • In keyword detection apps, additional words can be added for the device to recognize at the edge. This can be used in biometric applications, where a person adds a "secret word" that they would like to keep secure on the device.
Figure 3. On-Edge Learning Use Cases for Neuromorphic Devices.

The balance between ultra-low power consumption and enhanced performance makes neuromorphic endpoint devices well suited to extended battery-powered applications, running algorithms that are not possible on other low-power devices because those are computationally limited, or that are only possible on high-end devices that consume too much power. Use cases include:
  • Smart watches that monitor and process data at the endpoint, sending only relevant information to the cloud.
  • Smart camera sensors for people detection to execute a logical command. For example, the automatic opening of doors when a person approaches, since current technology is based on proximity sensors.
  • Areas without connectivity or charging capabilities, such as forests, for intelligent animal tracking, or subsea pipelines, for monitoring possible cracks using real-time sound, vision, and vibration data.
  • For infrastructure monitoring use cases, where a neuromorphic MCU can be used to continuously monitor motion, vibration, and structural changes in bridges (via imaging) to identify potential failures.
Figure 4. High-performance, ultra-low-power use cases.

Conclusions​

Renesas, as a leader in semiconductors, has recognized the great potential of neuromorphic devices and SNNs, and has therefore licensed a neuromorphic core from Brainchip [3], the world's first commercial producer of neuromorphic IP. As our Executive Vice President Sailesh Chittipeddi noted in EEnews Europe, “At the low end, we've added an ARM M33 MCU and a spiking neural network with a licensed BrainChip core for select applications; we have licensed what we need to from BrainChip, including the software to get the ball rolling.” [4]

Therefore, as we try to innovate and develop the best possible devices on the market, we are excited to see how this innovation will contribute to making our lives easier.
 
  • Like
  • Fire
  • Love
Reactions: 53 users

JDelekto

Regular
Here is a thought which in my opinion should be given weight.

The two million shares received by the CEO are part of his salary.

Ignoring that he sold 972,000 odd shares to pay his tax I would like to draw attention to the fact that the CEO has actually spent close to $800,000 of his salary to acquire 1,083,000 Brainchip shares.

Before you dismiss this consider the situation where instead of agreeing to take cash and shares the CEO insisted on receiving an all cash salary.

The cash would have been taxed as occurred with the share based payment. The tax paid would be the same.

If the CEO then took $800,000.00 of his cash salary and bought 1,083,000 shares according to some here this would be a sign of faith in Brainchip.

If it is, why is the CEO's retention of 1,083,000 shares, instead of selling all two million shares to convert them to a cash salary, any less worthy?

Those who are ignoring the retention of these shares as proof of the CEO having faith in BRAINCHIP need to stand back and look again at his actions. He could have very easily chosen to have all 2 million shares sold and the $800,000 odd dollars handed to him and then bought Tesla, Nvidia, Amazon or Google shares or even Berkshire Hathaway but no he chose to buy/keep 1,083,000 Brainchip shares.

My opinion only DYOR
FF

AKIDA BALLISTA

This is all I needed to see:

[attached image: 1670253212981.png]

When I see that, I immediately choose to ignore all other responses or opinions about that announcement.

I think I blame that on becoming a cynic in my old age. LOL
 
  • Like
  • Haha
  • Fire
Reactions: 24 users

TechGirl

Founding Member
Can't recall if posted previously but anyway.

Great recent article from a Renesas Product Marketing Specialist writing about neuromorphic, Brainchip, Akida and reiterating comments by Renesas EVP Chittipeddi.

Getting the message out there 🔥



Thanks FMF, that was a great article and so relevant to us, I think every use case he mentions, he has plans for akida

It’s great to be a shareholder ❤️
 
  • Like
  • Love
  • Fire
Reactions: 29 users

FlipDollar

Never dog the boys
  • Like
  • Fire
Reactions: 13 users

Kachoo

Regular
I really don't care.

It's a discussion forum. I appreciate all opinions on here. If I disagree, I'm going to discuss it. If I get called out, I'm going to discuss it.

Fact Finder is a legend, doesn't mean I have to agree with him all the time.

I enjoy the banter with Wilzy. If he likes me or hates me, it is what it is.

Anyway, I'm off to bed.

Despite what some of you think, AKIDA BALLISTA
Not gonna argue with your view, but we must keep in mind that Sean has millions of options still available, so he will likely end up with 5-plus million even after selling some off for tax purposes. That's still more of the pie than most of us here.

We really don't know his personal financial situation, so it's hard to comment beyond what the news provides. I get what you're saying about raising money elsewhere, but that may not be easy either.

Regards,
 
  • Like
  • Fire
Reactions: 18 users

Damo4

Regular
While I kind of agree with the general point of your post, Dio, the only person to character-assassinate (for want of a better word) Dolci was Dolci herself.

Congratulations to her and her big wins in trading BRN down the years.

As soon as she sold out of Brainchip earlier this year she came straight onto HC to shit on the company and in doing so shit on herself and subsequently played the victim as she couldn't see why people were pissed.

Time to move on from the Dolci conversation.
Far better chartists on here than her anyway imo.

Akida Ballista 🔥
From memory there was only a day or two between "uptrend kitty" and "$2b Co with no revenue ;)"

I've scrolled past it, but whoever called her a snake in the grass was spot on. She posts for her own personal gain, not anyone else's.
 
  • Like
  • Haha
  • Fire
Reactions: 25 users

Deadpool

hyper-efficient Ai
I believe Dolci is a he not a she say no more
Woman, I can remember MD may have had a drink with her at some stage a while back??
 
  • Like
Reactions: 3 users

Getupthere

Regular
AWS names 6 key trends driving machine learning innovation and adoption

Machine learning (ML) has undergone rapid transformation and adoption in recent years, driven by a number of factors.

There is no shortage of opinions about why artificial intelligence (AI) and ML are growing. A recent report from McKinsey identified industrializing ML and applied AI as among its top trends for the year. In a session at the AWS re:Invent conference this week, Bratin Saha, VP and GM of AI and machine learning at Amazon, outlined the six key trends the cloud giant is seeing that are helping to drive innovation and adoption in 2022 and beyond.

AWS claims to have over 100,000 customers for its AI/ML services. These services are spread across three tiers: ML infrastructure services, enabling organizations to build their own models; SageMaker, which provides tools to build applications; and purpose-built services for specific use cases, such as transcription.

“Machine learning has transitioned from being a niche activity to becoming integral to how companies do their business,” Saha said during the session.

Trend 1: Model sophistication is growing

Saha said that in recent years there has been an exponential increase in the sophistication of ML models. His use of the term “exponential” isn’t hyperbole either.

One way to measure machine learning models’ sophistication is by counting the number of parameters within them. Saha explained that parameters can be thought of as variables of values that are embedded inside ML models. In 2019, Saha said, then-state-of-the-art ML models had approximately 300 million parameters. Fast forward to 2022 and the best models now have more than 500 billion.

“In other words, in just three years, the sophistication of machine learning models has increased by 1,600 times,” Saha said.
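A quick back-of-the-envelope check of that growth figure, using only the parameter counts quoted above (approximately 300 million in 2019 versus 500+ billion in 2022); the exact year-over-year breakdown is not given, so this is just the headline ratio.

```python
# Parameter counts taken from the quoted paragraph
params_2019 = 300e6    # ~300 million parameters (2019 state of the art)
params_2022 = 500e9    # ~500 billion parameters (2022 best models)
print(f"growth factor: {params_2022 / params_2019:,.0f}x")   # ~1,667x, i.e. roughly the "1,600 times" cited
```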

These massive models are what are now commonly referred to as foundation models. With the foundation model approach, an ML model can be trained once, with a massive dataset, then reused and tuned for a variety of different tasks. Thus enterprises can benefit from the increased sophistication, with an easier-to-adopt approach.

“[Foundation models] reduce the cost and effort of doing machine learning by an order of magnitude,” Saha said.

Trend 2: Data growth

Increasing volumes of data, and different types of data, are being used to train ML models. This is the second key trend Saha identified.

Organizations are now building models that have been trained on structured data sources such as text, as well as unstructured data types including audio and video. Having the ability to get different data types into ML models has led to the development of multiple services at AWS to help in training models.

One such tool that Saha highlighted is SageMaker Data Wrangler, which helps users process unstructured data using an approach that makes it practical for ML training. AWS also added new support for geospatial data in SageMaker this week at the re:Invent conference.

Trend 3: Machine learning industrialization

AWS is also seeing a trend of increasing ML industrialization. That means more standardization of ML tools and infrastructure, enabling organizations to more easily build applications.

Saha said that ML industrialization is important because it helps organizations automate development and make it more reliable. An industrial, common approach is critical to scaling as organizations build and deploy more models.

“Even within Amazon, we are using SageMaker for industrializing our machine learning development,” Saha said. “For example, the most complex Alexa speech models are now being trained on SageMaker.”

Trend 4: ML-powered apps for specific use cases

ML is also growing because of purpose-built applications for specific use cases.

Saha said that AWS customers have asked the vendor to automate common ML use cases. For example, AWS (and other vendors) now offer services such as voice transcription, translation, text-to-speech and anomaly detection. These give organizations an easier way to use ML-powered services.

Sentiment analysis in live audio calls, for example, is a new, complex use case that AWS now supports with the real-time call analytics capabilities of its Amazon Transcribe service. Saha said that the feature uses speech recognition models to understand customer sentiment.

Trend 5: Responsible AI

There is also a growing trend, and need, for responsible AI.

“With that growth in AI and ML comes the realization that we must use it responsibly,” Saha said.

From AWS’ perspective, responsible AI needs to have several key attributes. A system needs to be fair, operating equally for all users regardless of race, religion, gender and other user attributes. ML systems also need to be explainable, so organizations understand how a model operates. Also needed are governance mechanisms to make sure responsible AI is being practiced.

Trend 6: ML democratization

The final key trend that will drive ML forward is democratizing the technology, making tools and skills accessible to more people.

“Customers tell us that they … often have a hard time in hiring all the data science talent that they need,” Saha said.

The answers to the challenge of democratization, in Saha’s view, lie in continuing to develop low-code and use case-driven tools, and in education.

“AWS is also investing in training the next set of machine learning developers,” Saha said. “Amazon has committed that by 2025 we will help more than 29 million people improve their tech skills through free cloud computing skills training.”
 
  • Like
  • Fire
  • Love
Reactions: 17 users

equanimous

Norse clairvoyant shapeshifter goddess

Neurorobotics Scientist​

Neuraville LLC
Pittsburgh, PA




Full Job Description​

Neuraville is a technology company developing solutions that can pave the path for creating safe, versatile, and capable service robots. Our goal is to bridge the gap between the traditional robot digital control systems and the biological brain. We enable robots to take full advantage of their hardware capabilities while gaining the skills to interact effectively with their environment.
We are looking for highly talented individuals with a strong passion for improving the quality of others' lives through technology to join our growing team.
Our work is nature inspired and involves neuroscience, robotics, and software development. We leverage cutting-edge technologies of the day and scientific discoveries of generations to create tomorrow's technology.
Responsibilities:
As a Neurorobotics Scientist, you will work with a multidisciplinary team to deliver end-to-end solutions enabling robots to gain perceptual and cognitive abilities. You will help develop systems, methods, algorithms, and brain-inspired neural network architectures and models to support such goals.
A typical day at this job will include:
  • Literature review
  • Development of functional spiking neural network architectures
  • Implementation of the designs and testing of associated behaviors on simulated and physical robots while collaborating with other team members
  • Documentation, publication, and various transfer of knowledge
  • Lots of brainstorming and blackboard conversations
  • And yes, playing with cool toys!
Minimum Qualifications
  • Strong neuroscience background
  • Strong analytical skills
  • Strong communication skills
  • Relevant research and publications
  • Must obtain and maintain work authorization for and during employment
Preferred Qualifications
  • Doctor of Philosophy in developmental biology, cognitive neuroscience, systems neuroscience, or behavioral neuroscience
  • Strong programming skills
  • Evidence of prior innovative work
  • Leadership skills
 
  • Like
  • Fire
  • Thinking
Reactions: 21 users

equanimous

Norse clairvoyant shapeshifter goddess

About us​

Gen Nine is developing state-of-the-art, wearable hardware and software solutions for the home care setting. We provide the tools that help families care for the ones they love. We are a small group of researchers, engineers, designers, inventors, and hackers. Our research is principally funded through grants from the National Institutes of Health (NIH) and the National Institute on Aging (NIA). We collaborate with Stanford University School of Medicine in developing and testing our designs.

Neuromorphic AI Engineer​

Gen Nine, Inc.
Covington, KY

Full-time






Full Job Description

The Company

Gen Nine is developing state-of-the-art, wearable hardware and software solutions for the home care setting. We provide the tools that help families care for the ones they love. Working at Gen Nine means applying your passion and intellect to help solve some very challenging technical problems and thereby create some of the most advanced products in the world. If you're interested in working with small teams of highly talented and motivated engineers seeking to make a difference in the world, Gen Nine may be the place for you.

Location

This position is based in the vibrant Cincinnati area. Our offices overlook the spectacular Cincinnati skyline and are within walking distance to downtown shopping, shopping malls, restaurants, entertainment, waterfront parks, major league sporting venues and are less than a 20-minute drive to an international airport.

Position
We're seeking results-orientated, neuromorphic AI engineers interested in exploring new concepts in health and safety monitoring using highly advanced edge computing hardware and software systems as part of a full-time, multi-year research and development project funded by the National Institutes of Health. This is a paid position. We have full-time, permanent positions and full-time, summer, internship positions. Internships may also be used to earn college credits.

Skills and Experience
The ideal candidate will have a background in electrical engineering, computer engineering, software development, computational neuroscience and/or mechatronics. This research engineer will investigate ultra-low power systems, devices, and algorithms for neuromorphic computing engines in applications including sensor arrays, activity tracking and voice processing. They will participate in a small-team environment to design, operate and simulate novel modes of computation using existing hardware platforms of hybrid CMOS/memristor circuits and contribute to expanding this platform for novel applications. Enthusiasm and the ability to participate in a team environment in order to solve interesting and complex problems is a must.

Required
  • PhD. preferred, but will consider a MS/BS degree in Computer Science, Computational Neuroscience, Physics, Mathematics, Electrical/Computer Engineering, or a related field.
  • Significant prior experience in AI/ML frameworks, modeling spiking neural networks and/or be currently enrolled in a PhD program with a focus on machine learning or neuro-inspired computational algorithms and applications.
  • Expertise in Python and/or C/C++ with the ability to design and write quality code.
  • Experience in machine learning frameworks such as TensorFlow or PyTorch.
  • Strong grasp of computer science fundamentals and mathematics.
  • Proficient in Linux OS environment.
  • Ability to think entrepreneurially and innovate in a real-world, problem-solving environment.
  • Ability to work both individually and as a small team member, with or without supervision.
 
  • Fire
  • Like
  • Love
Reactions: 14 users

Jchandel

Regular
Yeah like her words carry any weight.. Good for a graph or two. But the rest is just spitting poison. No wonder she hangs out with the negative mob still.
I think it’s time to have a separate thread for Dolci 😀
 
  • Haha
  • Like
Reactions: 10 users

equanimous

Norse clairvoyant shapeshifter goddess
@chapman89 If you receive bonus shares for your PR work and you sell some can you let us know if its due to tax purposes. Thanks
 
  • Haha
  • Like
Reactions: 19 users
@chapman89 If you receive bonus shares for your PR work and you sell some can you let us know if its due to tax purposes. Thanks
Not a good idea because if he sells them the whole thread gonna be going bananas 🍌
 
  • Haha
Reactions: 13 users

Baisyet

Regular
It must have been shared before, but I stumbled on this one: Anil's presentation from the Linley Fall conference.

 
  • Like
  • Fire
  • Love
Reactions: 16 users
Has anyone checked in with Markus Shaefer (CEO Mercedes) to see when his blog on Neuromorphic Computing is going to happen? I’m pretty sure it’s going to be before 🎄
Damn I knew there was something I was supposed to ask him. Must be getting old.

I hope you are correct about before Christmas but perhaps we mucked up all their planning based on their head of marketing’s focus group discussions showing that neuromorphic would come in last.

‘Damn you TSEx we will get you next time’ kind of moment was had at Mercedes Benz.😂😎🤣🤡😂🤡

As a result they have had to push the dates out so that they can still make their intended disclosures.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 21 users

Deadpool

hyper-efficient Ai
Neuromorphic AI is the new black; it seems like every new computer-related article or job posting printed these days has a reference to it.

It's great to be a shareholder

[GIF: high roller jackpot, by University of Alaska Fairbanks]
 
  • Like
  • Love
  • Fire
Reactions: 27 users
Can't recall if posted previously but anyway.

Great recent article from a Renesas Product Marketing Specialist writing about neuromorphic, Brainchip, Akida and reiterating comments by Renesas EVP Chittipeddi.

Getting the message out there 🔥


Thanks @Fullmoonfever another great find generously shared.

A DEFINITE must read article and one which clearly justified the sales guys at Brainchip buying a company train set for (wink, wink, nudge, nudge) demonstrations of AKIDA engaging in vibration monitoring.

In the UK there are around 30,000 rail bridges and there are over 1 million in India, so an inexpensive real-time system to monitor railway bridges is likely to be very big, big business.

My opinion only DYOR
FF


AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 37 users