Is this an Akida?

uiux

Regular
There is a common trend occurring where a person will find a random neural network or AI-related technology on the internet and share it here with the question:

"Is this Akida?"

Fair enough. That's what we are all here for - to expand our knowledge. We have our resident ogre and many others willing to analyse the product and dig into the underlying technologies, patents and research to better understand what makes it tick. To make it a bit more organized, I figured questions could go in this thread.




To save time, it's usually a safe bet to start with the answer: no, it's not Akida.

However, we know there is research and development occurring all over the place, and there still might be a home for our favourite neuromorphic chip in an unlikely spot. To help, here are some common terms that you may find in the Akida habitat:
  • Digital spiking neural network
  • on-chip learning
  • one-shot learning (see the sketch below)
  • CNN-to-SNN
  • Akida
  • BrainChip
  • Neuromorphic
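
As a rough illustration of the "one-shot learning" term above - purely a generic sketch, not BrainChip's implementation, and every name in it is made up - the idea is that a new class can be added from a single labelled example, without retraining the whole network. One simple way to picture it is a nearest-prototype classifier that stores one feature vector per class:

```python
import numpy as np

class OneShotClassifier:
    """Toy nearest-prototype classifier: each class is 'learned' from a
    single example by storing its normalised feature vector."""

    def __init__(self):
        self.prototypes = {}  # label -> stored feature vector

    def learn(self, label: str, features: np.ndarray) -> None:
        # "One-shot": a single labelled example is enough to add a new class.
        self.prototypes[label] = features / np.linalg.norm(features)

    def classify(self, features: np.ndarray) -> str:
        query = features / np.linalg.norm(features)
        # Pick the stored class whose prototype is most similar (cosine similarity).
        return max(self.prototypes, key=lambda lbl: float(query @ self.prototypes[lbl]))

if __name__ == "__main__":
    clf = OneShotClassifier()
    clf.learn("keyword_hey_akida", np.array([0.9, 0.1, 0.2]))  # one example per class
    clf.learn("background_noise", np.array([0.1, 0.8, 0.7]))
    print(clf.classify(np.array([0.85, 0.2, 0.25])))  # -> keyword_hey_akida
```

That single-example flavour of "learning" is the thing to look for in product write-ups.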
 
  • Like
  • Haha
  • Love
Reactions: 72 users

TTE

Regular
...in your pocket... or are ya just glad to see events?
 
  • Haha
Reactions: 1 users

Mugen74

Regular
Is this Akida?
terminator.jpg


Sorry uiux, couldn't help myself 🤣
 
  • Like
  • Haha
Reactions: 23 users

Mugen74

Regular
  • Like
  • Fire
Reactions: 8 users

Makeme 2020

Regular
One from Renesas for you to look at uiux...........$$$$$$$

 
  • Like
Reactions: 9 users

DUNJIN

Member
  • Like
Reactions: 2 users

Dr E Brown

Regular

This just came up on my news feed. Whilst only a simulator at present, it sounds interesting in the way they are teaching. It certainly starts to sound neuromorphic but is it Akida?
 
  • Like
Reactions: 2 users

Diogenese

Top 20
There is a common trend occurring where a person will find a random neural network or AI-related technology on the internet and share it here with the question:

"Is this Akida?"

Fair enough. That's what we are all here for - to expand our knowledge. We have our resident ogre and many others willing to analyse the product and dig into the underlying technologies, patents and research to better understand what makes it tick. To make it a bit more organized, I figured questions could go in this thread.

Hi ui,

What would be useful would be to itemize the features which are thought to indicate the incorporation of Akida in the item in question, always bearing in mind Ella's remonstrance:
"tain't what you do, it's the way that you do it."

Some useful keywords:-

digital spiking neural network;
on-chip learning;
one-shot learning;
CNN-to-SNN (see the sketch below);
Akida;
BrainChip.
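
For anyone new to the jargon, "CNN-to-SNN" refers to converting a trained convolutional network into a spiking one. Here is a minimal, library-free sketch of the underlying idea - plain NumPy, not BrainChip's MetaTF tooling, and all names are illustrative: a continuous ReLU activation is re-expressed as the firing rate of an integrate-and-fire neuron that spikes whenever its accumulated input crosses a threshold.

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    """Standard CNN activation: a continuous value, interpretable as a firing rate."""
    return np.maximum(0.0, x)

def rate_coded_snn(x: np.ndarray, timesteps: int = 100, threshold: float = 1.0) -> np.ndarray:
    """Toy integrate-and-fire layer: the same input is integrated over discrete
    timesteps, and a spike is emitted each time the membrane potential crosses
    the threshold. The resulting spike rate approximates relu(x) for x in [0, 1]."""
    membrane = np.zeros_like(x, dtype=float)
    spikes = np.zeros_like(x, dtype=float)
    for _ in range(timesteps):
        membrane += np.maximum(0.0, x)                               # accumulate input current
        fired = membrane >= threshold                                # neurons crossing threshold
        spikes += fired                                              # count emitted spikes (events)
        membrane = np.where(fired, membrane - threshold, membrane)   # reset by subtraction
    return spikes / timesteps                                        # spikes per timestep

if __name__ == "__main__":
    activations = np.array([0.0, 0.2, 0.5, 0.9])
    print("ReLU rate      :", relu(activations))            # [0.  0.2 0.5 0.9]
    print("SNN spike rate :", rate_coded_snn(activations))  # ~[0. 0.2 0.5 0.9]
```

For inputs between 0 and 1 the spike rate converges to the ReLU value, which is why a conversion like this can preserve accuracy while moving the computation to sparse, event-based spikes.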
 
  • Like
Reactions: 23 users

Tothemoon24

Top 20
Is this Akida😂🏃🏻‍♂️🏃🏻‍♂️🏃🏻‍♂️🏃🏻‍♂️🏃🏻‍♂️
 

Attachments: image (PNG, 1.9 MB)
  • Like
Reactions: 2 users

TTE

Regular
  • Like
Reactions: 7 users

Diogenese

Top 20

This just came up on my news feed. Whilst only a simulator at present, it sounds interesting in the way they are teaching. It certainly starts to sound neuromorphic but is it Akida?
This is not Akida.

It is a computer program to provide "real-life" situations for testing Autonomous Vehicle control systems.

 
  • Like
Reactions: 16 users

Fox151

Regular
Dodgy knees will love this thread. A whole lot of meals for the ogre to devour.
 
  • Haha
  • Like
Reactions: 6 users
I have been looking at Continental lately. It has been very hard to find anything that could link us to them other than Lou mentioning them a couple of years ago. This is the closest I can get. Maybe something. It mentions neural networks in L3 and L4 cars. Extract and link provided.



From Vision to Perception

How data fusion gives vehicles a seventh sense

  • Continental offers leading technologies for assisted and automated driving, such as radar, lidar and camera. These sensors act as a kind of eyes for the vehicle, gathering information about the environment.
  • By fusing sensor data, the vehicle can gain an overview of the entire traffic situation, obtain an understanding of the scene, and accurately anticipate potential hazards. Performance and safety of the systems increase. Continental is relying on a mix of classic methods and artificial intelligence (AI).
  • The degree of automation of vehicles will continue to increase in the coming years and autonomous driving will become suitable for mass use. From 2025, level 3 and level 4 systems will be available to the mass-market at affordable cost.

How can safety and assistance systems reliably detect hazards and obstacles? Continental relies on a "From Vision to Perception" approach. Vehicles should not only be able to see situations in traffic (vision), but also understand them (perception). To give vehicles something like a seventh sense, to enable them to interpret even complex traffic scenarios and act accordingly, a wide variety of sensor data is fused together. Artificial intelligence helps in this process.
More and more complex technologies are being used in today's modern vehicles. Today, there are already numerous safety and assistance systems that use sensors to collect information about the surroundings and thus help the driver keep his distance, stay in his lane or brake in time (level 2). As the level of automation increases, so does the number of sensors - and therefore data that is collected and interpreted.
The sensors have very different strengths, and only the combination of the various technologies results in a complete and robust recording of the environment, which is absolutely necessary for higher levels of automation.
(Image: The high resolution of the long-range LiDAR improves the classification of objects.)
Using sensors to assess the risk of crashes
But there is much more to the technology: thanks to the sensors that act as its eyes, the vehicle can interpret the behavior of other road users very well. In other words, how great the risk of a crash is, whether a collision is imminent, or whether it can be assumed that a pedestrian will stop or cross the street.
"While a radar can detect distances to and movements of objects very precisely, the data fusion of radar and camera allows further details to be detected. For example, whether they are people, cyclists or strollers and how they behave. Is the pedestrian paying attention when crossing the street? Is he looking in the direction of the vehicle and traffic, or is he distracted? There is enormous potential here through the combination of sensor fusion and artificial intelligence," says Dr. Sascha Semmler, Head of Program Management Camera at Continental's Advanced Driver Assistance Systems (ADAS) business unit.
(Image: LiDAR point cloud output combining HD resolution and long-distance range.)
LiDAR as a third element of environment perception
In particularly challenging situations with poor visibility or darkness, for example, LiDAR supplements environment perception as a third element. Based on 25 years of LiDAR experience, Continental relies on a combination of short- and long-range LiDAR. "This year, our short-range 3D flash LiDAR technology will go into production with a premium OEM. Together with our partner AEye, we are developing a high-performance long-range LiDAR that can reliably detect vehicles at a distance of more than 300 meters, regardless of visibility conditions. This puts us in an excellent position to capture the entire vehicle environment with state-of-the-art LiDAR technology, enabling automated driving at SAE level 3 and higher, both for passenger cars and in truck applications," explains Dr. Gunnar Juergens, head of the LiDAR segment at Continental's Advanced Driver Assistance Systems (ADAS) business unit.
Bringing sensor data together through high- and low-level fusion
Continental uses different approaches for object detection: high-level fusion, mid-level fusion and low-level fusion. In high-level fusion, environment detection is first performed separately for each individual sensor - for example, detection of objects or the course of the road. These separate sensor results are then fused and used in the respective driving function. This classic approach has been used successfully for many years and has a high degree of maturity. But for the future, and especially for autonomous driving, low-level fusion, or its combination with high-level fusion, is of fundamental importance. Here, the raw sensor data is transmitted directly to a powerful central computer, where it is fused, and only then is the overall picture interpreted. The result: significantly higher system performance and "scene comprehension", in which the vehicle correlates road users, the course of the road, buildings and so on, opening up a wide range of possibilities.
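
To make the high-level vs. low-level distinction above a bit more concrete, here is a rough Python sketch. Everything in it is a hypothetical placeholder - the detectors and the central scene model are stubs, not Continental's stack - and it only illustrates where the fusion happens in each architecture.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "cyclist"
    distance_m: float   # estimated range to the object
    confidence: float

# Hypothetical per-sensor detectors (stand-ins for real perception models).
def camera_detector(frame: Dict) -> List[Detection]:
    return [Detection("pedestrian", frame["est_range"], 0.7)]

def radar_detector(sweep: Dict) -> List[Detection]:
    return [Detection("object", sweep["range"], 0.9)]

def high_level_fusion(frame: Dict, sweep: Dict) -> List[Detection]:
    """High-level fusion: each sensor is interpreted separately first,
    and only the resulting object lists are merged afterwards."""
    cam, rad = camera_detector(frame), radar_detector(sweep)
    # Naive merge: keep the camera's label, trust the radar's range estimate.
    return [Detection(cam[0].label, rad[0].distance_m,
                      max(cam[0].confidence, rad[0].confidence))]

def central_scene_model(fused_input: Dict) -> List[Detection]:
    # Placeholder for the trained network the article describes.
    return [Detection("pedestrian, about to cross", fused_input["range"], 0.95)]

def low_level_fusion(frame: Dict, sweep: Dict) -> List[Detection]:
    """Low-level fusion: raw data from all sensors is sent to one central
    computer, and a single model interprets the whole scene at once."""
    fused_input = {**frame, **sweep}   # stand-in for time-aligned raw data
    return central_scene_model(fused_input)

if __name__ == "__main__":
    frame = {"est_range": 22.0}   # raw camera data (simplified)
    sweep = {"range": 20.5}       # raw radar data (simplified)
    print("high-level:", high_level_fusion(frame, sweep))
    print("low-level :", low_level_fusion(frame, sweep))
```

The practical trade-off is the one the article hints at: high-level fusion keeps each sensor's pipeline simple and mature, while low-level fusion gives a single (typically neural-network) model access to all the raw data and therefore far more context, at the cost of needing a powerful central computer.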
(Image: The 77 GHz technology further improves the resolution of the sensors and enables more accurate detection of smaller objects.)
Recognizing the environment in detail and interpreting traffic scenarios
"Low-level fusion is our strength, which we are using to drive the development of automated and autonomous driving systems at Level 3 and 4," explains Dr. Ismail Dagli, head of Research and Development at Continental's Advanced Driver Assistance Systems (ADAS) business unit. "Recognizing the environment in such detail was not possible for vehicles before. Especially by using artificial intelligence in data fusion and situation interpretation, the vehicle knows what's going to happen, can draw better conclusions and perform maneuvers." Currently, systems that use low-level fusion are already in use in certain areas. By 2025, the Continental expert estimates, these systems could also be in mass production - paving the way for cost-effective L3 and L4 systems.
The new approach to low-level fusion is made possible thanks to artificial intelligence. Intelligent algorithms and neural networks take over what classical data processing methods can no longer do: They help classify and interpret complex driving situations that can no longer be programmed using classic methods. In the process, the algorithms evaluate large volumes of data within fractions of a second and train the systems for later use. AI also makes it easier to incorporate new functions into technologies. In the case of new developments, such as e-scooters suddenly appearing more frequently in traffic, the systems can be retrained and adapted via over-the-air updates. Continental is continuously expanding its competencies in the field of AI. Strategic partnerships are increasing the speed of innovation in this area.

https://www.continental.com/en/presse/studien-publikationen/autonomous-mobility-sensortechnology/

The whole thing is not a bad read, but the extract points specifically to neural networks and perception through data fusion.


SC
 
  • Like
Reactions: 11 users
Sorry, I'll try again. The last paragraph mentions the neural network. It pasted the lot. Not holding my mouth right, obviously.

The new approach to low-level fusion is made possible thanks to artificial intelligence. Intelligent algorithms and neural networks take over what classical data processing methods can no longer do: They help classify and interpret complex driving situations that can no longer be programmed using classic methods. In the process, the algorithms evaluate large volumes of data within fractions of a second and train the systems for later use. AI also makes it easier to incorporate new functions into technologies. In the case of new developments, such as e-scooters suddenly appearing more frequently in traffic, the systems can be retrained and adapted via over-the-air updates. Continental is continuously expanding its competencies in the field of AI. Strategic partnerships are increasing the speed of innovation in this area.

SC
 
  • Like
Reactions: 10 users

Taproot

Regular
 
  • Like
Reactions: 4 users

Taproot

Regular
  • Like
Reactions: 9 users

Taproot

Regular

Percepto’s holistic solution enables fully automated inspections and continuous surveillance of industrial sites around the globe. This is all facilitated by a visionary team based in the US, Australia and Israel, focused on pushing the boundaries of industrial automation.
 
  • Like
Reactions: 12 users

Zedjack33

Regular

Edison Awards.

Not saying there are any dots here. But the list and the tech are something amazing that BRN will be heading in the future.
 
  • Like
  • Fire
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.

Edison Awards.

Not saying there are any dots here. But the list and the tech are something amazing that BRN will be heading in the future.

Evening Zedjack33 ,

A lot of cool products there.
Quickly skimmed through the list and the one which really caught my eye was Apeel.

Apeel.

* Advanced hyperspectral imaging / cameras.

* Apeel recently acquired ImpactVision.

* NASA apparently use this technology to study moon rocks.

Could be worth one of the BRN super sleuths / tech-savvy members having a look at.

Regards,
Esq.
 
  • Like
Reactions: 13 users
Not cross-promoting, but AVA, an ASX-listed company based in WA that has a deal with Indian defence and is also connected to Australian defence through its CEO, has developed Aura IQ.
This is copied from their website:
FFT Aura IQ revolutionises conveyor health monitoring – using real-time data to enhance asset management, improve reliability and introduce exciting new predictive capabilities.

Conveyor systems play a crucial role in underpinning efficiency and ultimately profitability in bulk handling operations globally. Conveyor maintenance has traditionally been a real problem, with conventional methods of advanced conveyor failure detection often unreliable, subjective, time-consuming and labour intensive.
Now is this Akida?
 
  • Like
  • Fire
Reactions: 5 users