BRN Discussion Ongoing

Nice little article on Edge Impulse / BRN in IoT Worlds espousing the merits of both... most of which we know already.

Maybe posted previously?





IoT Worlds



Edge Impulse – Making Machine Learning Available for Embedded Devices


Edge Impulse is a software-as-a-service (SaaS) platform whose compiler turns TensorFlow Lite models into C++ programs. The platform works with existing tools and competes in a crowded field of edge-ML startups. This article explores Edge Impulse's capabilities and competitive positioning.

Edge Impulse is a software as a service platform

Edge Impulse is a software-as-a-service platform that makes machine learning available for embedded devices. Launched in mid-2019, the platform already boasts a growing list of enterprise customers including Oura, Polycom, and NASA. Its goal is to help customers deploy machine learning on their embedded devices and achieve high-impact results.

Edge Impulse also has a relationship with Arm. Co-founder Zach Shelby's previous startup, Sensinode, was acquired by Arm in 2013. Sensinode provided low-power mesh networking and Internet gateway systems. The two companies were able to work together on end-to-end solutions that covered network infrastructure and embedded devices. The acquisition gave Arm access to devices across a wide range of compute power.

Edge Impulse is available as a free SaaS platform for developers. It includes all of the steps needed to build a machine-learning model, from data collection to signal processing and deployment to the sensor. It is free to use for individual developers, but there is a paid version for enterprise customers. The SaaS platform is a powerful tool for embedded engineers looking to make machine-learning solutions for their applications.

Edge Impulse is the leading development platform for machine learning on edge devices. It simplifies the process of developing and testing ML models on edge devices by streamlining data collection and integration. It then validates the models against real-world data. And finally, it deploys optimized models to edge targets, unlocking massive value across every industry. With the platform, millions of developers and businesses can now build and deploy machine-learning applications on billions of devices.

Edge Impulse has received several awards for its EON Tuner, an algorithm that automatically selects the most suitable machine learning model for the edge. It also supports the BrainChip MetaTF platform, which helps developers quickly develop enterprise-grade ML algorithms. To learn more about Edge Impulse, check out the free hour-long webinar.

It uses a compiler that converts TensorFlow Lite models into human-readable C++ programs

Edge Impulse is a platform that uses a TensorFlow Lite compiler to build deep learning models for embedded devices. The resulting model can be deployed to almost any device, whether a smartphone, tablet, or PC. It is a cross-platform, open-source toolchain that makes it easy to train models and deploy them at the Edge. It works on Linux-based embedded devices and mobile devices.

Edge Impulse works with TensorFlow Lite, an open-source deep learning framework. It is designed for on-device machine learning inference, and it is lightweight and low-latency. Its architecture allows for efficient model conversion, and it uses a compiler that translates TensorFlow Lite models into human-readable C++ programs. This allows it to run on a wide range of hardware, including devices with low-power MCUs.
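To make that concrete, here is a minimal sketch of invoking an exported model from the generated C++ library, following the pattern of Edge Impulse's own standalone C++ example (the `EI_CLASSIFIER_*` constants and `run_classifier` come from the exported library; the zero-filled feature buffer is a placeholder for real sensor samples):

```cpp
// Minimal sketch: run an exported Edge Impulse model via the generated C++ SDK.
// The feature buffer is a placeholder; a real application fills it with one
// window of raw sensor samples before classifying.
#include <cstdio>
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE]; // raw input window

// Callback the SDK uses to pull slices of the input signal
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    if (res != EI_IMPULSE_OK) return 1;

    // Each label comes back with a confidence score
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.5f\n", result.classification[ix].label,
               result.classification[ix].value);
    }
    return 0;
}
```

Compiled against the exported library, this prints one confidence score per label, which is essentially all the generated C++ program does on an MCU as well.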

One example TinyML algorithm is designed to detect three different types of geometry. Edge Impulse implements it with its C++ SDK and TensorFlow support. It can be deployed on a custom PCB and can run in standby mode. In addition, TinyML models can be used to filter sensor data.

The Edge Impulse SDK provides a number of useful examples. For instance, the vacuum-recognition demo contains examples and data. This data can be downloaded separately from the GitHub repository. The data used for this demonstration is the COCO dataset.

The model is optimized for low latency, which is important when it is deployed at the edge. By reducing computational cost, it is possible to produce a model that uses less memory: optimization shrinks the model while preserving its accuracy. It also lets a model store its weights either in a compact quantised form or as full 32-bit floating-point values; an 8-bit quantised model is roughly a quarter the size of its float32 equivalent.

Edge Impulse also provides support for data forwarding. By leveraging UART connectivity, users can use the CLI to classify sensor data. The Edge Impulse studio also enables customization of data processing, learning, and optimization.
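For reference, the data forwarder expects the device simply to stream one sample per line over serial, with values separated by commas; the CLI detects the baud rate and sampling frequency on its own. A minimal Arduino-style sketch might look like the following (read_accel() is a stub standing in for a real IMU driver):

```cpp
// Streams three-axis samples in the comma-separated, one-line-per-sample
// format the edge-impulse-data-forwarder CLI ingests.
#include <Arduino.h>

static void read_accel(float *x, float *y, float *z) {
    // Stub: replace with your actual sensor driver call
    *x = 0.0f; *y = 0.0f; *z = 9.8f;
}

void setup() {
    Serial.begin(115200); // the forwarder auto-detects baud rate and frequency
}

void loop() {
    float x, y, z;
    read_accel(&x, &y, &z);
    Serial.print(x); Serial.print(',');
    Serial.print(y); Serial.print(',');
    Serial.println(z);
    delay(10); // ~100 Hz sample rate
}
```

On the host, running `edge-impulse-data-forwarder` picks up the serial stream and forwards the samples to the Studio project for labelling.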

Edge Impulse can also be used to build ML models from scratch. The platform has a range of built-in tools and libraries that make it easy to train ML models, and its CLI supports capturing data from serial ports, CSV files, and JSON files.

It integrates seamlessly with existing tools

With Edge Impulse, you can build AI applications using familiar and well-documented methods. The tools in this software suite can be combined to achieve a variety of goals, from detecting anomalies to analyzing signal patterns. They provide several different analysis methods, including signal flattening and analysis of repetitive motion.

The software also allows you to build custom models without coding. There are three basic building blocks you use to assemble a model. The first, the input block, specifies the type of data you want to feed the model; this can be images or time series.

Edge Impulse’s AI platform is available in a free and an enterprise version. The free version has some limitations, such as a single developer seat and a cloud storage limit of 4 GB. The enterprise version, which costs $149 per project, removes these restrictions and allows up to five users per project.

Edge Impulse enables the development of enterprise-grade ML algorithms that train on real sensor data. These models can be quantised and optimised. Then, they can be deployed on BrainChip Akida devices. Enterprise developers can also leverage the BrainChip MetaTF model deployment block to deploy neuromorphic models.

Edge Impulse is free and easy to use. It helps speed up data pre-processing and model building. It features a user-friendly UI that guides you through the process and allows you to customize your model. It also provides a TensorFlow Lite model library that supports all popular formats.

Edge Impulse’s AI technology is based on the BrainChip Akida processor, a breakthrough neural networking processor architecture that delivers high performance and ultra-low power, while still allowing for on-chip learning. It also enables you to visualize the results of your inference using any web browser.

It competes with startups

Edge Impulse is a startup that uses machine learning to build smarter embedded devices. The company launched in mid-2019, and has almost 30,000 developers using its platform. Its customers include NASA, Polycom, and Advantech. In a recent funding round, Edge Impulse raised $34 million from investors including Coatue, Momenta Ventures, and Acrew Capital.

The startup uses off-the-shelf machine learning frameworks such as TensorFlow to make its models as easy to use as possible. It also provides tools for domain experts to collect data, classify it, and predict the future. Those features are also available in the free tier of Edge Impulse. The company also offers a subscription option that allows customers to gain access to features like collaboration between multiple engineers, larger datasets, and model versioning.

Edge Impulse’s platform makes it easier to build smarter IoT applications. It supports sensor, audio, and computer vision applications. It can also help with asset tracking and health applications. In addition, it ingests 99 percent of critical sensor data, which improves the performance of its algorithms. This technology also enables developers to quickly and easily create new applications.

Edge Impulse has recently raised $34 million in Series B funding. This investment will allow the startup to expand its operations, marketing, and staff. The company also plans to double its annual recurring revenue and triple its market valuation by 2022. Its current investors include Coatue, Sequoia Capital, and Accel.

As a SaaS platform, the company offers developers a solution to implement TinyML in their enterprise environments. Its SaaS platform includes the entire set of steps that is necessary to build models: data collection, signal processing, and deployment to a sensor. It’s available for free to individual developers, as well as a paid service for enterprise customers.
Great dot join, first I’ve seen of this ❤️
 
  • Like
  • Love
Reactions: 7 users

Diogenese

Top 20
Have we discussed 'Innatera'?

To the untrained eye, they are trying to do what BrainChip is doing, although NONE of their patents has been granted. Our more knowledgeable shareholders can investigate.





Learning



Innatera uses "analog-mixed signal computing" and is designed for time series data (speech, vibration, etc).

What "analog-mixed signal computing" means is not clear from their patents:

WO2022073946A1 ADAPTATION OF SNNS THROUGH TRANSIENT SYNCHRONY



[00144] The SNN core may employ a mixed analog-digital computational platform, i.e. the spike trains incorporate analog information in the timing of the events, which are subsequently transformed back into an analog representation at the inputs of the synaptic matrix.
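(To unpack the general idea in that paragraph: in a latency code, the analog value lives in when a spike occurs. The toy C++ below round-trips an amplitude through a spike time; it is purely illustrative of timing-based coding in general, not Innatera's actual circuit.)

```cpp
// Toy latency (time-to-first-spike) coding: an analog amplitude in [0, 1] is
// encoded as a spike time within a fixed window, then decoded back to an
// analog value at the "synaptic input". Illustrative only.
#include <cstdio>

const double WINDOW_MS = 10.0; // encoding window

// Stronger inputs spike earlier: amplitude 1.0 -> t = 0 ms, 0.0 -> t = 10 ms
double encode_spike_time(double amplitude) {
    return (1.0 - amplitude) * WINDOW_MS;
}

// Invert the mapping to recover the analog value
double decode_amplitude(double spike_time_ms) {
    return 1.0 - spike_time_ms / WINDOW_MS;
}

int main() {
    for (double a : {0.2, 0.5, 0.9}) {
        double t = encode_spike_time(a);
        printf("amplitude %.2f -> spike at %.1f ms -> decoded %.2f\n",
               a, t, decode_amplitude(t));
    }
    return 0;
}
```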


https://www.eetimes.com/innatera-unveils-neuromorphic-ai-chip-to-accelerate-spiking-networks/

Most other companies working on spiking neural network algorithms and hardware (for example, Prophesee) are targeting images and video streams. Innatera has decided to focus on audio (sound and speech recognition), health (vital signs monitoring) and radar (for consumer/IoT use cases such as elderly person fall sensors which maintain privacy).

Marco Jacobs (Image: Innatera)
“These sensors have time series data, instead of images which are very parallel,” said Marco Jacobs, Innatera VP marketing and business development, in an interview with EE Times. “Our array is especially good at processing time series data… it’s a good technology fit. Also, from a market perspective, we see a lot of interesting applications in this area and not that many solutions that address it.”

(Not altogether sure they are up to speed on what Prophesee does? ... but then why should they be because their analog-mixed signal processor does not work with image data?)
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Learning

Learning to the Top 🕵‍♂️
Innatera uses "analog-mixed signal computing" and is designed for time series data (speech, vibration, etc).

What "analog-mixed signal computing" means is not clear from their patents:

WO2022073946A1 ADAPTATION OF SNNS THROUGH TRANSIENT SYNCHRONY

View attachment 22516

[00144] The SNN core may employ a mixed analog-digital computational platform, i.e. the spike trains incorporate analog information in the timing of the events, which are subsequently transformed back into an analog representation at the inputs of the synaptic matrix.


https://www.eetimes.com/innatera-unveils-neuromorphic-ai-chip-to-accelerate-spiking-networks/

Most other companies working on spiking neural network algorithms and hardware (for example, Prophesee) are targeting images and video streams. Innatera has decided to focus on audio (sound and speech recognition), health (vital signs monitoring) and radar (for consumer/IoT use cases such as elderly person fall sensors which maintain privacy).

Marco Jacobs Marco Jacobs (Image: Innatera)
“These sensors have time series data, instead of images which are very parallel,” said Marco Jacobs, Innatera VP marketing and business development, in an interview with EE Times. “Our array is especially good at processing time series data… it’s a good technology fit. Also, from a market perspective, we see a lot of interesting applications in this area and not that many solutions that address it.”

(Not altogether sure they are up to speed on what Prophesee does? ... but then why should they be because their analog-mixed signal processor does not work with image data?)
Thanks Dio.

Learning.
 
  • Like
  • Love
Reactions: 7 users

Proga

Regular
Hi @Proga

This is the recently read article link:

I have reproduced the following paragraph because it references the 😇 effect, and I invite others to recall Brainchip's statements of gratitude to Mercedes Benz for mentioning AKIDA technology:


“Constellation Research Inc. analyst Holger Mueller told SiliconANGLE that Qualcomm’s push shows us that chipmakers are no longer focused solely on servers, computers and smartphones, but also cars, which are evolving to become powerful compute platforms too. “The battle goes beyond the hardware, as car manufacturers want to see who can provide a complete platform for car operations, infotainment, self-driving, maintenance, connectivity and more,” Mueller said. “Today it’s Qualcomm’s turn to show what it can do, announcing a partnership with Red Hat and customer win with Mercedes-Benz.

When companies land a big win with a premium car manufacturer, that often has a halo effect within the rest of the industry, so it’s something that bodes well for the prospects of Qualcomm’s automotive platforms.”

My opinion only DYOR
FF

AKIDA BALLISTA
@Fact Finder the last 2 paragraphs intrigued me

"It’s notable that Qualcomm is pursuing a different approach to software-defined vehicles from Nvidia. Earlier this week, at GTC 2022, Nvidia announced a new Thor system-on-a-chip platform that will provide centralized compute for software-defined vehicles.

Whereas Nvidia is offering a single chip to power everything in one vehicle, Qualcomm is going with multiple customized chips for different in-vehicle applications, running within a central box. Qualcomm says this approach can help car manufacturers to scale more efficiently to meet the needs of individual vehicles, while keeping costs to a minimum."

Nvidia already have multiple customized chips for different in-vehicle applications but have decided to go with a single chip to power everything by developing Thor. MB are hedging their bets. But as we are only talking about one chip, it really doesn't matter. As I said earlier, and @Diogenese reiterated, Akida IP will be used at the sensor level and in many other areas throughout the car.

You have multiple domain controllers for specific systems, while microcontrollers are the all-in-one.
 
  • Like
  • Love
Reactions: 8 users

Proga

Regular
Old article but we're in it.
 
  • Like
  • Fire
  • Wow
Reactions: 10 users

Proga

Regular
Valeo just tweeted this

Looks like there was a bit of media around the Picturebeam Monolithic from Valeo ten or so months ago

Was before my BrainChip adventure started but were we “linked” to the tech at the time?

If so, is ten months long enough to add some Akida IP?

*Apologies if barking up the wrong tree entirely


Valeo has been developing their Lidar using Akida for longer than ten months, so there's a very good chance. Identifying other vehicles, cyclists and pedestrians on the side of the road means it is tied into their Lidar. Valeo is coming up with another feature (showing their wares) on why vehicle manufacturers should choose them.

It's a jungle out there.
 
Last edited:
  • Like
  • Fire
Reactions: 17 users

Proga

Regular
Old article but we're in it.
Maybe old, but it hammers home what @Fact Finder was trying to point out in regard to timelines. Prophesee and Sony have been working together for 3+ years, with Prophesee doing a lot of the groundwork for a few years before then, and they're only now bringing their application to market.

In early December 2019 we learned that Prophesee and Sony would give a joint presentation at the International Solid-State Circuits Conference in San Francisco in February 2020, about an “event based” image sensor. This was a surprise too, because Prophesee has been developing its own image sensors so far. Therefore such an alliance is a sign of great interest in the neuromorphic sensing topic. And now, we have learned that the team at Insightness, another “event-based” image sensor startup from Zurich, Switzerland, has silently joined Sony.
 
  • Like
  • Fire
  • Love
Reactions: 21 users
So, off the back of the comment from that previous article:

Edge Impulse’s AI technology is based on the BrainChip Akida processor, a breakthrough neural networking processor architecture that delivers high performance and ultra-low power, while still allowing for on-chip learning. It also enables you to visualize the results of your inference using any web browser.

This video by a Sony Solutions Engineer is a couple of weeks old and begs the question :unsure:

More so on the industrial sensor in particular....vibration anyone?





 
  • Like
  • Fire
  • Love
Reactions: 24 users

Proga

Regular
Supposed to be supercomputing but Travis from Dell talks more about AI. Gets interesting from the 6m mark.




 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 12 users
From a couple days ago, a quick presso by the co-founder of Arduino on FOMO using EI :unsure:

But first, a snip of one particular question (by a SH no doubt :) ) from the EI September webinar transcript:

Q&A from Constrained Object Detection with Edge Impulse​

tariq.ahmad
14 Sep 2022


[Screenshots of the webinar Q&A exchange]


 
  • Like
  • Fire
Reactions: 17 users

Proga

Regular
Just in case it was missed. Runs for just over an hour. I also posted it in the Edge Impulse thread so it's easy to find later.

 
  • Like
  • Fire
Reactions: 18 users
Seems like EI has a bit going on with various companies. Hopefully we'll be getting some exposure.


Alif, Bosch, Edge Impulse combine for AI sensors

New Products | November 15, 2022
By Peter Clarke

AI-capable microcontroller supplier Alif Semiconductor Inc. (Pleasanton, Calif.) is teaming up with Bosch Sensortec and Edge Impulse to produce motion-sensing reference designs.

Alif provides the microcontrollers, Bosch Sensortec the sensors and Edge Impulse the development platform in support of machine learning.

Alif claims the combination supports the design of precision motion-sensing products while increasing performance, lowering power consumption, and establishing security. The company did not say what benchmark it was comparing against.

Alif said it hopes to sell into applications such as game controllers, predictive maintenance, and the monitoring of industrial machinery and cargo.

Alif and partners are offering reference designs based on the E3 MCU from Alif’s Ensemble family, Bosch Sensortec’s BMI323 inertial measurement unit (IMU), and Edge Impulse’s development platform for machine learning.

Big-little like

The E3 MCU employs dual processing domains that accelerate machine learning. A high-efficiency processor is set to be always-on at minimum energy consumption while the high-performance processor sleeps until awoken by the first processor. It then executes heavier workloads rapidly and then returns to sleep.
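That is the classic big-little duty cycle. As a rough sketch of the control flow (all function names here are hypothetical stand-ins, not Alif's actual SDK):

```cpp
// Generic big-little duty cycle: a cheap always-on check gates a heavy
// workload, so the expensive path runs only when something interesting
// happens. All functions below are hypothetical stand-ins for
// vendor-specific calls.
#include <cstdlib>

bool motion_detected() { return std::rand() % 100 == 0; } // stand-in trigger
void wake_high_perf_core() {}     // hypothetical: power up the big core
void run_heavy_inference() {}     // hypothetical: full ML model on the big core
void sleep_high_perf_core() {}    // hypothetical: big core back to sleep
void sleep_until_next_sample() {} // hypothetical: low-power wait

int main() {
    while (true) { // embedded-style main loop, runs forever
        if (motion_detected()) {   // cheap check on the efficiency core
            wake_high_perf_core();
            run_heavy_inference(); // fast burst, then back to sleep
            sleep_high_perf_core();
        }
        sleep_until_next_sample();
    }
}
```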

In the reference design the BMI323 IMU feeds precision acceleration and angular rate data to the MCU. Edge Impulse’s platform is used to rapidly train an ML model to identify complex continuous gestures such as multiple characters, symbols, and anomalies in repeating complex movement patterns.

A machine learning model can be created in less than one hour and then deployed to the E3 MCU to detect and identify a complex multi-directional motion pattern and translate it to a symbol. While running, the time to complete the inference operation required to identify the gesture pattern is only 280 microseconds.
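For scale, 280 microseconds per inference works out to a theoretical ceiling of roughly 3,500 inferences per second (1 / 0.00028 s), which is the headroom behind the "significantly higher symbol rates" claim below.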

“The increase in inference speed directly translates to lower system-level power consumption and could also be used for this kind of solution to operate at significantly higher symbol rates. This opens the door to deploying smart sensors in ways that are not achievable today,” said Stefan Finkbeiner, CEO at Bosch Sensortec, in a statement issued by Alif Semiconductor.

Related links and articles:

www.alifsemi.com

www.bosch-sensortec.com

www.edgeimpulse.com
 
  • Like
  • Fire
Reactions: 24 users
The more I think, and the more I learn about our technology the more mind-blowing epiphanies I have about it.
 
  • Like
  • Love
  • Fire
Reactions: 16 users
D

Deleted member 118

Guest
  • Like
  • Fire
Reactions: 6 users
I've whipped myself into a Fomo frenzy.
 
  • Haha
  • Like
Reactions: 6 users
[GIF: Big Brother animation by ARTEfr]




[GIF: Artificial intelligence robot, Westworld HBO]
 
Last edited:
  • Like
Reactions: 6 users

TopCat

Regular
After coming across Cambridge Consultants and Prophesee working together in the cell therapy field, I looked up who Cambridge Consultants are and what they do. Has anyone else come across them before? They have their fingers in a lot of pies, including space, health, communication and defence. I'm really liking the look of them. The only strong link I have so far is the use of the Prophesee vision sensor.


Decision

True intelligence and machine+human collaboration require those machines to make meaningful decisions and to take the ‘right’ action. This means rapidly predicting the future and choosing the best option in an uncertain world.

From helping to coordinate national infrastructure, through the creation of autonomous platforms, to reinforcement learning agents that explain their actions, we improve machine performance through intelligent decision making.

Edge

Delivering transformational products and services means taking artificial intelligence out of the data centre and into the real world. Out here, constraints of size, cost, power and connectivity present new challenges that only the best can solve.

We redefine the possibilities of artificial intelligence at the Edge. From signal processing in billions of mobile phones, to AI in smart inhalers, we’ve generated billions of dollars of value for our clients, by creating and optimising world-leading silicon platforms.
This is interesting. Hopefully another breakthrough for the BrainChip/Prophesee team.


The low data density and automation improvements provided by Metavision event-based vision represent opportunities to scale and grow the industry, ultimately resulting in more affordable and precise patient care. This breakthrough technique solves a significant challenge by enabling for the first time real-time, automated contamination detection, part of the key to unlocking cell and gene therapy mass deployment.
 
  • Like
  • Fire
  • Love
Reactions: 12 users
The more I think, and the more I learn about our technology the more mind-blowing epiphanies I have about it.
And it just never ever stops happening. My interest commenced in 2016 and has never waned.

As I said, researching AKIDA technology is much more fun than Sudoku or crossword puzzles for warding off dementia.

@prnewy74 further to my earlier answer as data acquisition moves to the Edge or End Point you also start to see bandwidth issues arise.

Bandwidth does not just exist on the poles outside; in multi-storey skyscrapers it is inside the walls, and increasingly it comes with huge logistical and cost issues if your solution is to increase it by ripping out the old and adding new.

If every occupant adds a smart robot vacuum cleaner to their appliances, this starts to take up bandwidth on systems that never envisaged these types of technology.

Bandwidth is just like a city road system. It has a maximum capacity and it only requires one more car or device beyond capacity to jam the entire system.

AKIDA at the Edge or End Point, selecting only the relevant data and sending it as metadata, preserves or maximises existing bandwidth so that big tech can sell more smart fridges, washing machines, PCs, phones, fitbits etc etc.

In these existing systems you even have limitations on the amount of power that can be drawn over embedded electrical cabling to run all these new technologies, and here too AKIDA's ridiculously low power draw adds essential value.

There are more essential reasons for big tech to embrace AKIDA than to reject it.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 48 users

toasty

Regular
The reality is a share price back in the 60c range and not much in the way of receipts... yet. Puffery doesn't increase the value of my shares. Revenue does. "The burgers are better at Hungry Jack's"... yet nobody is taking legal action against them. Grow up.
Sounds like you may be underwater with the share price in the $0.60s? If so, that makes you a relatively recent shareholder. Patience is required. I, like many others, have been a BRN holder since 2016...
 
  • Like
  • Love
Reactions: 4 users