Stop feeding the troll!

You pick on people who make mistakes on here, 123. You pick and choose; obviously you're not consistent.
Is that how you run your life, a pick-and-choose sort of guy?
Hi Chips
I love stories so here is one for you.
When I employed secretary typists, it was sufficient to circle the mistake and give the work back to them. They understood the circle indicated a mistake. They then corrected it, the work came back to me, I signed it, and off it went.
I never once spent all day for a week berating the person over a typo. I never sacked anyone for a typo even if it was missed by someone.
I absolutely never, after ranting for a week over a typo, came back to it in subsequent weeks time and time again, making jokes at the person's expense.
I never found the typo so important that I stopped the analysis of a matter of significance in order to promote the typo's significance to all, making it the only thing I spoke about in the office, day after day, with every client who came through the door.
So what I would suggest is that those who do are either in need of help or have another agenda.
I would suggest that the appropriate action, even for a pedant, is to circle the typo and send it to Tony Dawe with a "please correct".
Then post, if they are so inclined, that they have detected the typo and notified the company.
You obviously do not agree and consider it should dominate the conversation for weeks on end.
If you were an employer, I think you would find yourself on the receiving end of a constructive dismissal case if you behaved like this with your employees.
The End.
My opinion; no further research or comment required.
Fact Finder
Pvdm spotted in Freo having lunch!
Great find, and don't apologise for mentioning it. Samsung seems to be such a hot lead. Let's hope that it's us in their 2024 roadmap. There are so many hints already.

Couple of dots for CES and 4 years in the making perhaps.... may have already been linked prior, so apologies in advance if that's the case.
https://www.biometricupdate.com/202...sing-capabilities-opens-developer-environment
BrainChip has demonstrated the capabilities of its latest class of neuromorphic processing IP and Device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California, the company announced.
“We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems,” said Louis DiNardo, CEO of BrainChip, in a prepared statement. “We believe that as a high-performance and ultra-low-power neural processor, Akida is ideally suited to be implemented in Edge and IoT applications.”
In a session titled “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” BrainChip rolled out a demo of how the Akida Neuromorphic System-on-Chip processes standard vision CNNs using industry-standard flows and distinguishes itself from traditional deep-learning accelerators through key design choices and a bio-inspired learning algorithm. As a result, Akida requires 40 to 60 percent fewer computations to process a CNN compared to a DLA.
https://news.samsung.com/global/sam...ble-expansive-kitchen-experiences-at-ces-2024
AI Features That Enable Food Ideas and Intelligence
To enhance the experience in the kitchen, the 2024 Bespoke 4-Door Flex™ Refrigerator with AI Family Hub™+ has been packed with a variety of innovative technologies. One impressive new feature is AI Vision Inside, which uses a smart internal camera that can recognize items being placed into and taken out of the refrigerator. It is also equipped with “Vision AI” technology, which can identify up to 33 different fresh food items based on a predefined set of training data comprising approximately one million food photographs. With the food list that is available and editable on the Family Hub™+ screen, users can also manually add expiration-date information for items they would like to keep track of, and the refrigerator sends out alerts through its 32” LCD screen before items reach that date.
This pretty much just added the same value (i.e. zero), but you used more words and encouraged people to take a nap halfway through reading because it was so rehashed and a great waste to read. Waiting for fresh material from you, buddy.
I'm pretty sure the secretaries Fact Finder is talking about worked in black-and-white offices..

Hello Fact Finder
I don't like stories; I prefer reality!
1. Nobody discussed berating anybody "all day for a week"! You were the one writing the most words about it.
2. Secretaries should not make typing mistakes, especially nowadays with automatic correction programs, and especially when going public. It always gives a bad impression. In this case, nobody did a second check!
3. It surprises me over and over again how much, and how often, you defend BrainChip, even over minor topics.
My end too.
Have a good weekend!
CHIPS
"based on a predefined set of training data comprising approximately one million food photographs"
What are your thoughts on the size of the data set @Diogenese ?
To me it seems too large to have anything to do with AKIDA, but how is a "zoo", or library, created for use by AKIDA?
It would take more training to identify, say, stiff celery from floppy celery..
Children please..
So you're saying the use of such a large data set doesn't exclude the use of AKIDA, but is it indicative of using AKIDA or not?

When creating a data set used for inference, a set of 'features' is extracted and used for training the model. Various features are identified for image models and tagged, such as shapes. One could, for example, train a model to recognize dress shirts, blouses, t-shirts, hoodies, etc.
As these features are fed into a neural network, the "weights" are calculated and become part of the model. The weights are stored in a binary format and can be quantized, a fancy word for doing some "magic math" so these numbers take up less space. We hear about 8-bit, 4-bit, etc. The fewer bits that can be used to represent a weight, the less memory it requires. This quantization comes at a cost, since you can lose some accuracy when doing the inferencing to recognize things.
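To make the "fewer bits, less memory, some accuracy lost" trade-off concrete, here is a minimal sketch of symmetric 8-bit quantization on a toy weight tensor. This is purely illustrative (random weights, the simplest possible scheme), not BrainChip's or any particular vendor's method:

```python
import numpy as np

# Toy float32 weight tensor, standing in for one layer of a trained model.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(256, 256)).astype(np.float32)

# Symmetric 8-bit quantization: map the float range onto int8 via one scale factor.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to measure the accuracy cost the post mentions.
deq = q.astype(np.float32) * scale
max_err = np.abs(weights - deq).max()

print(weights.nbytes)  # 262144 bytes at 32 bits per weight
print(q.nbytes)        # 65536 bytes at 8 bits -- exactly 4x smaller
print(max_err)         # worst-case rounding error, bounded by scale / 2
```

The 4x shrink is the whole point: storage drops linearly with bit width, while the error introduced per weight stays within half a quantization step.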
Microsoft's ResNet-50 model (the 50 represents the number of layers in the model, each set of layers having a set of weights) is trained on millions of images, yet the model itself can fit in around 100 megabytes of memory. Most cell phones today have gigabytes of storage. To put it more into perspective, most wireless routers today have between 128 and 512 MB of memory, and a Samsung smart refrigerator has about 2.5 GB of RAM and 8 GB of flash memory, much more space than the model requires.
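The "around 100 MB" figure checks out with back-of-envelope arithmetic: ResNet-50 has roughly 25.6 million parameters, and model size depends on parameter count times bytes per weight, not on how many training images were used. A quick sketch (the 25.6 M figure is the commonly cited approximation):

```python
# Back-of-envelope check of the ~100 MB figure for ResNet-50.
params = 25_600_000          # approx. ResNet-50 parameter count
fp32_bytes = params * 4      # 32-bit floats: 4 bytes per weight
int8_bytes = params * 1      # 8-bit quantized: 1 byte per weight

print(fp32_bytes / 1e6)      # ~102.4 MB -- matches "around 100 megabytes"
print(int8_bytes / 1e6)      # ~25.6 MB after 8-bit quantization

# Against the appliance figures in the post (~2.5 GB of RAM):
fridge_ram = 2.5e9
print(fp32_bytes / fridge_ram)  # the full-precision model uses only ~4% of it
```

Note that the million-image training set never appears in the calculation; training-set size shapes the weights' values, not their count.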
That being said, a model that is trained on over a million images to recognize most refrigerator contents can easily fit within the confines of an appliance or mobile device.
I like to think of a neural network model as a "snapshot" of a brain state. When you learn new images, you're not necessarily increasing the size or mass of your brain, but you're altering the connections between the neurons in your brain that can remember and recognize those images later. As you see more pictures of a cat to learn the shape of its face, nose, ears, tail, etc. that make up that cat, then the more likely you are to recognize another picture of a cat, even if it is a color you've never seen before.