BRN Discussion Ongoing

TECH

Regular
Good morning/afternoon all,

Interesting podcast during the week. At times I personally found the spoken word hard to follow, but it's good to see Dr Tony getting involved with our podcast. Peter has a different personality, as in more reserved and unassuming, not that I know much about Dr Tony, having never actually conversed with him privately.

As I suggested a few weeks ago, space programs could well turn out to be a real big winner for us. In the harshness of the space environment, with Akida protected through rad-hardening, it's our brilliant low power that stands out: inference done on-chip in real time with low latency, processing only event-based data, and reducing the data flow between satellite and earth base stations.

The only negative I can honestly raise with regard to our investment in BrainChip has always been the same, that being time. As far as our company is concerned, the brilliant technology, hardworking staff, our engagements, our products, partnerships and so on are all extremely positive.

Thanks to all the posters during the week, many great articles, in fact, too many things for my little brain to absorb, but like many on our
generally friendly forum, I certainly appreciate the time involved researching things for the benefit of all, so thanks from Tech.

Have a good evening...cheers.
 
  • Like
  • Love
  • Fire
Reactions: 69 users
Was that graphic already commented on? OHB is not a household name … it might become one soon.


IMG_1844.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 21 users
If you think about it, this guy and the European Space Agency have access to all technology, new and old.
He has felt inclined to comment on a public social media post, which shows a level of excitement not only from Laurent but from the ESA itself (as he represents ESA), that BrainChip's Akida is far superior to anything else out there. Is he your brother?🤞


View attachment 59176
 
  • Like
  • Fire
Reactions: 4 users

Xray1

Regular
  • Fire
  • Like
Reactions: 6 users
Afternoon DingoBorat,

Yep, umbrella compartment... alternatively I'd keep my deluxe extra-large cigars there; I'm sure if one asked, that space could be converted to a humidor.

Once saw one of these wagons with a fitted compartment in the boot for chilled champagne and crystal flutes; highly impractical, but hey, if one is spending 600k upwards, who cares.

Regards,
Esq.
Totally practical for Melbourne Cup-type events, where you drive the car into the carpark, pop open the boot and dig in. The Oakbank races are another event where this is encouraged and enjoyed.
 
  • Like
  • Haha
Reactions: 2 users
Don't know if posted but a quick search didn't reveal anything.


When: April 25th, 2024
Where: Hyatt Regency Santa Clara
5101 Great America Parkway, Santa Clara, CA


Join D&R IP SoC Silicon Valley 24 !! A worldwide connected Event !!

D&R IP-SoC Silicon Valley 2024 Day is the unique worldwide Spring event fully dedicated to IP (Silicon Intellectual Property) and IP based Electronic Systems.
IP-SoC providers, the seed of innovation in Electronic Industry, are invited to highlight their latest products and services and share their vision about the next innovation steps in the Electronic Industry.
IP consumers can view at a glance the latest Technology trends and exciting Innovative IP/SoC products. Through a global view, Electronic systems leaders may identify disruptive innovation leading to new market segment growth.

Room A
Neuromorphic Processor IP for a New Generation of SoCs featuring Temporal Event-based Neural Networks (TENNs)

Steve Thorne
Vice President of Sales
BrainChip Inc.

About me
 
  • Like
  • Fire
  • Love
Reactions: 45 users

Iseki

Regular
If you think about it, this guy and the European Space Agency have access to all technology, new and old.
He has felt inclined to comment on a public social media post, which shows a level of excitement not only from Laurent but from the ESA itself (as he represents ESA), that BrainChip's Akida is far superior to anything else out there!


View attachment 59176
TBH they only have access to AKIDA1000 and 1500. If only someone could produce an AKIDA2, we would be ahead of the race.
 
  • Like
Reactions: 4 users

miaeffect

Oat latte lover
Non BRN


200w (10).gif

Intel... Intel... nice toaster
 
  • Haha
  • Wow
  • Like
Reactions: 9 users

wilzy123

Founding Member
in fact, too many things for my little brain to absorb

the forum deadbeats.. have that effect

baby-lick-window.gif
 
  • Haha
  • Like
Reactions: 5 users

CHIPS

Regular
Don't know if posted but a quick search didn't reveal anything.


When: April 25th, 2024
Where: Hyatt Regency Santa Clara
5101 Great America Parkway, Santa Clara, CA


Join D&R IP SoC Silicon Valley 24 !! A worldwide connected Event !!

D&R IP-SoC Silicon Valley 2024 Day is the unique worldwide Spring event fully dedicated to IP (Silicon Intellectual Property) and IP based Electronic Systems.
IP-SoC providers, the seed of innovation in Electronic Industry, are invited to highlight their latest products and services and share their vision about the next innovation steps in the Electronic Industry.
IP consumers can view at a glance the latest Technology trends and exciting Innovative IP/SoC products. Through a global view, Electronic systems leaders may identify disruptive innovation leading to new market segment growth.

Room A
Neuromorphic Processor IP for a New Generation of SoCs featuring Temporal Event-based Neural Networks (TENNs)

Steve Thorne
Vice President of Sales
BrainChip Inc.

About me

And here we also have Synopsys again ...

1710672043431.png



Remember this one?

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-415015

1710672580585.png
 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 27 users

Terroni2105

Founding Member
  • Like
  • Love
  • Fire
Reactions: 71 users
From a few days ago on Edge Impulse.


X-Ray Classification and Analysis - Brainchip Akida Neuromorphic Processor​

A computer vision project to perform image classification on x-ray results, using the Brainchip Akida Development Kit.

Created By: David Tischler
Public Project Link: https://studio.edgeimpulse.com/public/348027/latest
image


Introduction​

Over the past several years, as hardware has improved and machine learning models have become more efficient, many AI workloads have been transitioned from the cloud to the edge of the network, running locally on devices that were previously not able to perform inferencing tasks. Fast CPUs and GPUs, more memory, and better connectivity have helped, but a very large impact has come from dedicated AI accelerators that offer high-performance inference in small, low-power form factors.
The Brainchip Akida AKD1000 is one such example, with the ability to speed up sensor, audio, or vision tasks such as image classification or object detection significantly over standard CPU-based inferencing, while using only a few milliwatts of power. In this project we'll use the Brainchip Akida Developer Kit, which comes in a ready-to-use system consisting of an x86 or Arm-based platform, plus an Akida AKD1000 NPU on a small PCIe add-on card.
image


However, even this low-power system is more powerful than truly necessary, as the Akida NPU could be simply integrated directly into a single PCB containing a processor, memory, storage, and any necessary interfaces, eliminating the need for the PCIe add-on card. Even more integrated, the Akida IP can be licensed and embedded directly within an SoC, creating a single-chip solution capable of compute and AI acceleration, combined. But for ease of getting started, the Brainchip Akida Raspberry Pi Developer Kit is used here.

Improving Medical Processes​

Artificial Intelligence may never be able to fully replace a doctor, but it can certainly help supplement their work, speed up diagnostic processes, or offer data-driven analyses to assist with decision making. This project will explore the capability of the Akida processor to identify pneumonia in an x-ray image, along with some potential next steps and a description of how that could be leveraged in the real world.
We'll use the Akida Developer Kit, Edge Impulse, a curated dataset from Kaggle, and some basic utilities to evaluate performance. The standard Edge Impulse workflow will be used, which is well-documented here.

Dataset Collection​

The first step to consider for a machine learning project is the dataset. This could be collected yourself, as with most sensor projects, or you can use an existing dataset if there is one that meets your particular needs. In this case, as we are interested in evaluating the Akida for x-ray classification, we can use the Chest X-Ray Images (Pneumonia) dataset provided by Paul Mooney on Kaggle. This dataset consists of 5,863 images (x-rays) of patients who were diagnosed with pneumonia, as well as those who did not have pneumonia (i.e., "normal"). You can download the dataset, then unzip it, to find Test, Train, and Validation folders, subdivided into "pneumonia" and "normal" folders for each.
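Before uploading anything, it can be worth sanity-checking the class balance of the unzipped folders locally. A minimal Python sketch, assuming the archive was unzipped to a folder named chest_xray with the usual train/test/val and NORMAL/PNEUMONIA layout (adjust the path to your own setup):

from pathlib import Path

# Hypothetical location of the unzipped Kaggle archive
DATASET_ROOT = Path("chest_xray")

for split in ("train", "test", "val"):
    for label in ("NORMAL", "PNEUMONIA"):
        folder = DATASET_ROOT / split / label
        count = len(list(folder.glob("*.jpeg")))
        print(f"{split}/{label}: {count} images")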
Make a new project in Edge Impulse, click on Data Acquisition, and then upload the Test and Train folders for each Class, making sure you select "Automatically split between training and testing" and also provide the correct Label for each folder condition.
image


Once each folder is appropriately uploaded, your dataset should look something like this:
image


I've ended up with 4,646 images in my Training set, and a total of 1,163 images in my Test set, which will be held back and can be used later to test the model on unseen data.

Building a Model​

To begin the process of building a classification model, click on Impulse Design on the left, and set the image dimensions in the Image data block. I have chosen 640x480 as a starting point, though we could possibly go a bit higher if the model accuracy turns out to be too low once we begin testing. Next, add an Image Processing block, then a Classification - BrainChip Akida Learning block. Then click Save Impulse.
image


On the Impulse block detail page, you can likely change to Grayscale, as x-rays are black and white, so we can save a bit of processing time and memory by eliminating RGB color. Choose "Grayscale" from the Color depth drop-down menu, and then click Save Parameters. On the next page, click on Generate Features, and you will see a visual representation of your dataset features after the process completes.
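For intuition, the Grayscale colour depth and the 640x480 size chosen earlier correspond roughly to the OpenCV preprocessing sketched below; this is only an illustration of what the processing block does to each image, not code you need to run, and the file name is simply one of the validation images used later:

import cv2

img = cv2.imread("NORMAL2-IM-1427-0001.jpeg")   # loaded as 3-channel BGR by default
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # drop the colour channels
resized = cv2.resize(gray, (640, 480))          # width x height of the Image data block
print(resized.shape)                            # (480, 640), a single-channel image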
image


On the Classifier settings page, I've made a few changes to increase the accuracy of the model: bumping the number of epochs up to 200, reducing the learning rate to 0.0005, and reducing my validation set size to 5%. To speed up the training process, I've used GPU training, which is available for Enterprise users. You can request a free 14-day trial here if you'd like to increase your model sizes and reduce your build times.
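In plain Keras terms, those settings map onto roughly the snippet below. This is only a sketch of the hyperparameters (200 epochs, learning rate 0.0005, 5% validation split) with toy stand-in data and a toy model; the actual Edge Impulse training code additionally applies the Akida-specific quantisation steps, which are not shown here:

import numpy as np
import tensorflow as tf

# Toy stand-ins so the snippet runs on its own; in Edge Impulse the model and
# data come from the Impulse design and the uploaded dataset.
x_train = np.random.rand(64, 96, 96, 1).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, 64), 2)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(96, 96, 1)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),  # reduced learning rate
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=200, validation_split=0.05)  # 200 epochs, 5% validation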
Once the build is complete, you'll be presented with Validation accuracy and inference time information.
image


During the data upload step earlier, recall that we set aside 1,163 images that were not used for training the machine learning algorithm (this occurred automatically as a result of the "Automatically split between training and testing" checkbox). Now we can have our newly created model evaluate those 1,163 images by using the Model Testing feature. Click on Model Testing in the left navigation, and then click the "Classify all" button. A job will be started, with logs available on the right side of the screen, and once complete the model will iterate through the unseen Test images, perform an inference on each, and then render the results against the known value. This will give you a good indication of how well your model is working on new x-ray images. Here I received a Test classification accuracy of about 94.06%, so I will move forward with deploying the model to the board.

Deploying to the Developer Kit​

Now it's time to set up the Akida Developer Kit. There is an included SD Card, ready to use out-of-the-box with Brainchip's model zoo and sample applications. This makes it quick and easy to evaluate the Akida and begin using the device. But as we're going to be using Edge Impulse in this tutorial, I've instead flashed a new SD Card with Ubuntu 20.04.5 LTS 64-bit by using the Raspberry Pi Imager application. I also used the "Customize" feature of the application to add a username and password, as well as local WiFi credentials, though you could just as easily plug in an ethernet cable for connectivity. Once booted up and on the network, the Akida Developer Kit is similar to any other Raspberry Pi in how you can interact with it. You can attach a keyboard, mouse, and HDMI monitor, or in my case I simply accessed the device over SSH.
image


As this was a fresh installation of Ubuntu, we'll need to install both the Akida tooling and drivers, as well as the Edge Impulse tooling and examples. Those require some prerequisites, so the process actually begins by updating the system and then installing necessary packages. After that, the Edge Impulse CLI, Akida CLI, Edge Impulse Linux SDK, and Akida PCIe driver can be installed.
Here is the complete series of commands I used, in order:

1. sudo apt-get update && sudo apt-get upgrade
2. sudo reboot
3. sudo su
4. curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
5. apt-get install build-essential linux-headers-$(uname -r) git gcc g++ make nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev python3-opencv python3-pip
6. npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm
7. reboot
8. export PATH=/home/<your-username-here>/.local/bin:$PATH # (or add to `.profile` file to make permanent)
9. pip install --upgrade pip
10. pip install --upgrade akida
11. pip install edge_impulse_linux -i https://pypi.python.org/simple
12. pip install pyaudio
13. git clone https://github.com/edgeimpulse/linux-sdk-python
14. git clone https://github.com/Brainchip-Inc/akida_dw_edma
15. cd akida_dw_edma/
16. ./install.sh
17. akida devices
Note: Be sure to replace the username in Step 8 with your own username.
If successful, the akida devices command should return:

Available devices
PCIe/NSoC_v2
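The same check can also be done from Python with the akida package, which is additionally how a trained model gets mapped onto the PCIe card so that inference runs on the NPU rather than in software simulation; a minimal sketch (the .fbz file name is hypothetical):

import akida

# List the Akida hardware visible to the driver; the kit should report PCIe/NSoC_v2
devices = akida.devices()
print([d.version for d in devices])

# Load a trained Akida model and map it onto the first device
model = akida.Model("xray_classifier.fbz")   # hypothetical model file
model.map(devices[0])
model.summary()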

Inference Evaluation​

Now that the Akida Developer Kit is set up, we can run our model and evaluate the results. We'll use two distinct methods to test out the Akida performance in this tutorial, though other methods or scenarios could also exist. The first method we will use is the Edge Impulse Linux SDK, which includes a sample Python script that takes a model file and an image as inputs, runs inference on the given image, and then displays the output results on the command line.
The second method is to use a USB Webcam attached to the Akida Developer Kit, capture the live video feed, and inference what is seen through the camera. If the images are displayed on a monitor, the brightness of the screen, resolution, or light in the room could impact the overall accuracy or ability of the model to make predictions. This is likely a less ideal method for this use-case, but we'll document it as it could prove useful in other scenarios beyond x-ray classification.

Method 1 - Linux SDK Python Inferencing​

Earlier, when we unzipped the downloaded dataset from Kaggle, there were three folders inside it: Train, Test, and Val. We uploaded the Test and Train folders to Edge Impulse, but the Val folder was not uploaded. Instead, we can now place those Validation images on a USB drive and copy them over to the Akida Developer Box (or use the scp command, FTP, etc.) in order to evaluate how our model performs on the hardware.
In a terminal, we'll continue where we left off above. With the images copied over to a USB stick and then inserted into the Akida Developer Kit, the following series of commands will copy the images to the device, and use the example python from the Linux SDK to run inference:

1. sudo mount /dev/sda1 /tmp
2. mkdir /home/<your-username-here>/validation
3. cp /tmp/NORMAL/*.jpeg /home/<your-username-here>/validation && cp /tmp/PNEUMONIA/*.jpeg /home/<your-username-here>/validation
4. edge-impulse-linux-runner # login to Edge Impulse, select project, build, then exit running process with Control+C
5. cd linux-sdk-python/examples/image/
6. python3 classify-image.py ~/.ei-linux-runner/models/<your-project-number-here>/<your-version-number-here>/model.eim ~/validation/NORMAL2-IM-1427-0001.jpeg
Note: Once again, the username needs to be substituted with your own. The project number and version number can be obtained by simply ls'ing the models directory.
The edge-impulse-linux-runner command in Step 4 above is used to connect the Akida Developer Kit to Edge Impulse, log in with your credentials, select a project, and then download your model to the device. Once that is complete, inference will attempt to begin, but you can cancel the running process with Control+C; the model has already been downloaded, which is what we are interested in. Continuing on with Step 5 and Step 6 will run the inference and display the results, the time it took to process, and the power consumption. You can iterate through all of the images in the validation folder you created (which should contain some Normal and some Pneumonia images).
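For reference, the core of that classify-image.py example boils down to something like the sketch below, using the Linux SDK's ImageImpulseRunner; the two paths are placeholders following the same conventions as above, and this is a condensed illustration rather than the full script:

import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "/home/<your-username-here>/.ei-linux-runner/models/<project>/<version>/model.eim"
IMAGE_PATH = "/home/<your-username-here>/validation/NORMAL2-IM-1427-0001.jpeg"

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()                          # loads the .eim and reports the labels
    labels = model_info["model_parameters"]["labels"]

    img = cv2.imread(IMAGE_PATH)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)          # the SDK expects RGB input
    features, cropped = runner.get_features_from_image(img)

    result = runner.classify(features)                  # inference runs via the model.eim
    for label in labels:
        score = result["result"]["classification"][label]
        print(f"{label}: {score:.2f}")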
image


Method 2 - USB Webcam Inferencing​

As mentioned above, the second methodology we'll explore is live inference from an attached USB Webcam, though this does introduce a set of variables that may impact accuracy for our selected use-case of x-ray classification. Other use-cases may not have these variables, so we'll document the method as it could be helpful for other projects. In this situation, we'll open up those same Validation images on a separate laptop or PC, then point the webcam that is hooked up to the Akida Developer Kit at the monitor showing the x-ray image.
On the Akida Developer Kit, launch the application by entering edge-impulse-linux-runner on the command line.
Inference will start running continuously, printing out results to the console. An extra feature of the Linux Runner is that it also starts an HTTP service, which can be accessed at http://<IP-address-of-the-device>:4912 (the IP will be displayed in the text that is printed out as the application begins, or just run ip a to find it).
Then in a browser on the PC or laptop, open up that URL and you will see a view from the camera, and its inference results. You might need to arrange your windows or move the camera so that it is only seeing the x-ray, otherwise classification will not work.
However, as identified earlier, this method may not be as reliable for the x-ray classification use-case, due to lighting conditions of the room, brightness and contrast of the monitor, quality of the USB Webcam, resolution and size of the monitor, etc. It is worth exploring though, as many vision projects are excellent candidates for live inferencing with a camera and the Akida NPU.
image


Going Further​

At this point, we have demonstrated how to build a machine learning model and deploy it to the Brainchip Akida Developer Kit, and proven that inference is working successfully. But let's look a bit closer at the results we achieved and analyze our current situation.
Inference times were only 100 to 150 milliseconds. For a doctor evaluating a single patient x-ray, this is near instant, and anything within a minute or two would be acceptable on-site at a healthcare facility while diagnosing a patient, so the Akida is orders of magnitude faster than required. Alternatively, if attempting to classify a large dataset of tens of thousands or hundreds of thousands of images, such as a researcher or government entity may need to do, the Akida can again dramatically speed up the process: at roughly 150 milliseconds per image, 100,000 x-rays could be classified in a little over four hours on a single NPU.
Second, the Akida's power consumption is measured in milliwatts. Although the Akida Developer Kit is plugged into a steady power supply and a Raspberry Pi's consumption is measured in watts, keep in mind, as mentioned above, that the Akida processor could rather easily be integrated into a more compact, lower-power, single PCB alongside an application processor, lowering power consumption significantly. Even further, the Akida IP could be embedded directly into a processor, eliminating the need for the stand-alone co-processor completely and adding just that small uptick of a few milliwatts to a device's power consumption, while adding NPU acceleration for machine learning tasks.
With these factors in mind, it is entirely feasible to build x-ray classification capabilities directly into new generations of smart medical devices that can do real-time inference, to aid doctors in their decision making. It may even be possible to create small handheld, battery-powered classifiers, that simply accept a USB drive containing the images, which could be useful for remote clinics.
If you have ideas for other use-cases or product designs using the Brainchip Akida neuromorphic processor, be sure to reach out to us!
https://docs.edgeimpulse.com/experts/image-projects/brainchip-akida-industrial-inspection
 
  • Like
  • Love
  • Fire
Reactions: 99 users

charles2

Regular
From a few days ago on Edge Impulse.


X-Ray Classification and Analysis - Brainchip Akida Neuromorphic Processor​

I am a real MD. (Not just one seen on TV)

Reading this literally made me cry. The implications for medical care are truly mind-boggling.

Please, we need an emoji for 1000 WOWs, 1000 HEARTS and a 1000 FLAMES

I'd be hammering (clicking) the bar like a mouse in Pavlov's trials.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 100 users
  • Wow
  • Like
  • Fire
Reactions: 8 users

cosors

👀
Hi Esq.

Actually, Rolls-Royce Motor Cars Ltd, the present-day manufacturer of luxury automobiles with the iconic front grille and Spirit of Ecstasy bonnet mascot (https://www.press.rolls-roycemotorc...the-human-drama-behind-the-legend?language=en), and Rolls-Royce Holdings plc are two completely separate companies, despite their similar names and emblems.

In 1973, Rolls-Royce Motors (now defunct) was separated out of Rolls-Royce Ltd (which had been nationalised in 1971). But to make things even more complicated: today's company, Rolls-Royce Motor Cars Ltd, is not even a direct successor of that original automobile company associated with class, prestige and luxury, but a totally new company founded in 1998, which has been a wholly owned subsidiary of BMW AG for the past two decades!

By the way, the story of the Rolls-Royce marque is a reminder of how important IP due diligence is…

View attachment 59237

View attachment 59239

Here is a good overview of Rolls-Royce Motor Cars vs Rolls-Royce plc (and indeed, both could benefit from Akida… 😉). The reason the 2020 summary doesn’t mention that super-luxurious Spectre EV model in the video you shared is that it did not go into production until 2023.


ROLLS-ROYCE MOTOR CARS AND ROLLS-ROYCE PLC ARE TWO DIFFERENT COMPANIES!​


by DARREN LYNSDALE
SUPERCARS

THU, 06/11/2020 - 22:22
Rolls-Royce Motor Cars

ROLLS-ROYCE MOTOR CARS​

People seem to keep getting Rolls-Royce Motor Cars and Rolls-Royce plc confused, so here's a rundown of the differences and history!

ROLLS-ROYCE MOTOR CARS
Rolls-Royce Motor Cars is a wholly-owned subsidiary of the BMW Group. The company is the world’s leading luxury manufacturer based at The Home of Rolls-Royce at Goodwood, near Chichester, West Sussex, which comprises its global headquarters and Global Centre of Luxury Manufacturing Excellence – the only place in the world where Rolls-Royce motor cars are hand-crafted.


Production began on 1 January 2003 with the world’s pinnacle luxury product, Phantom. The range has since expanded to include Ghost, Wraith, Dawn, Cullinan and their Black Badge counterparts. An all-new Ghost is due to be launched later this year. The company has customers in more than 50 countries worldwide attended by a network of Rolls-Royce dealerships. Total sales in 2019 exceeded 5,000 cars. Over 2,000 people are employed at The Home of Rolls-Royce.

ROLLS-ROYCE PLC
Rolls-Royce plc is a leading industrial technology company with manufacturing facilities around the world employing some 52,000 people. Its head office is in London with main operations in Derby, UK; Bristol, UK; Indianapolis, US; Dahlewitz, Germany; Friedrichshafen, Germany; and Singapore. Originally founded in 1906, Rolls-Royce Ltd was nationalised in 1971, becoming Rolls-Royce 1971 Ltd., initially including the Motor Car Division. The Motor Car Division was floated as a separate company in 1973, and became Rolls-Royce Motors Holdings Ltd, which traded as Rolls-Royce Motors. Rolls-Royce 1971 Ltd. was privatised in 1987, becoming Rolls-Royce plc.

Rolls-Royce plc designs and manufactures engines for civil aerospace and military aircraft and ships. Its Power Systems business unit, based in Friedrichshafen, Germany, designs and manufactures engines for a range of land and marine applications including power generation.
Rolls-Royce plc has customers in more than 150 countries, comprising more than 400 airlines and leasing customers, 160 armed forces, 70 navies, and more than 5,000 power and nuclear customers. For the last 60 years it has also designed, supplied and supported the nuclear propulsion plant that provides power for all of the UK Royal Navy's nuclear submarines.

In January 2017, Rolls-Royce plc reached agreement with investigating authorities in the UK, US and Brazil relating to its activities in a number of overseas markets. On 20 May 2020, Rolls-Royce plc announced 9,000 job losses worldwide, predominantly in its Civil Aerospace business, in response to the medium-term reduction in demand for civil aerospace engines and services resulting from the Covid-19 pandemic.

At-a-glance summary:

Rolls-Royce Motor Cars
  • Established: 2003
  • Ownership: Wholly-owned subsidiary of BMW Group (Munich)
  • Products: The world’s pinnacle super-luxury motor cars
  • Head office: Goodwood, West Sussex
  • Manufacturing sites: Global Centre of Luxury Manufacturing Excellence, Goodwood, West Sussex
  • Employees: c. 2,000
  • CEO: Torsten Müller-Ötvös

Rolls-Royce plc
  • Established: 1987
  • Ownership: Rolls-Royce Holdings plc
  • Products: Engines and power systems for civil and military aircraft, ships, submarines and land-based applications
  • Head office: Kings Place, London
  • Manufacturing sites: Operations in more than 50 countries worldwide; major manufacturing operations in the UK and overseas include Ansty, Barnoldswick, Bristol, Derby, Hucknall and Inchinnan in the UK; Dahlewitz and Friedrichshafen in Germany; Indianapolis, USA; and Singapore
  • Employees: c. 52,000
  • CEO: Warren East


View attachment 59228

View attachment 59229
View attachment 59230
I think that Rolls-Royce plc is much more interesting for us, especially when I think about vibration analysis on site, engines (generators, marine engines etc.) and turbines. They are among the leaders.
 
Last edited:
  • Like
  • Fire
Reactions: 10 users

Tothemoon24

Top 20
IMG_8613.jpeg




We are thrilled to be partnering with Arm on the launch of their latest Automotive Enhanced IP portfolio. Tata Technologies bring real complementary skills - our 25 years of expertise in product engineering and digital services and solutions for the automotive industry puts us in pole position to address the needs of Software Defined Vehicles. As a strategic partner of Arm we are already developing solutions leveraging their latest Arm AE IP, and we expect this to deliver significant time-to-market benefits for the whole automotive industry; we are excited for the future of our partnership with Arm. Excited to collaborate with Dipti Vachani and her entire team - Warren Harris Shankara Narayanan Naveen Kalappa Jhenu Subramaniam VVSS Anjaneya Gupta Nachiket Paranjpe #softwaredefinedvehicles #automotive #sdv #partnershipannouncement
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Tothemoon24

Top 20
IMG_8614.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 21 users

IloveLamp

Top 20
1000014245.jpg
 
  • Like
  • Fire
  • Love
Reactions: 21 users