Hi Db,"based on a predefined set of training data comprising approximately one million food photographs"
What are your thoughts on the size of the data set @Diogenese ?
To me it seems too large, to have anything to do with AKIDA, but how is a "zoo" or library, created for use by AKIDA?
It would take more training, to identify say stiff celery, from floppy celery..
Children please..
I'll keep my doodle entendres under wraps.
This Samsung portmanteau patent application covers all sorts of domestic appliances with NNs, cameras, AI models, model training, speech recognition, display screens, menu suggestions ...
They contemplate having software NNs or SoC NNs.
WO2023090725A1 DOMESTIC APPLIANCE HAVING INNER SPACE CAPABLE OF ACCOMMODATING TRAY AT VARIOUS HEIGHTS, AND METHOD FOR ACQUIRING IMAGE OF DOMESTIC APPLIANCE 20211118
As shown in FIG. 2, a home appliance 1000 according to an embodiment of the present disclosure may include a camera 1100 and a processor 1200. The processor 1200 controls the overall operation of the home appliance 1000. By executing programs stored in the memory 1800, the processor 1200 may control the camera 1100, the driving unit 1300, the sensor unit 1400, the communication interface 1500, the user interface 1600, the lighting 1700, and the memory 1800.
According to one embodiment of the present disclosure, the home appliance 1000 may be equipped with an artificial intelligence (AI) processor. The AI processor may be manufactured as a dedicated hardware chip for AI, or as part of an existing general-purpose processor (e.g. a CPU or application processor) or a dedicated graphics processor (e.g. a GPU), and mounted on the home appliance 1000.
According to an embodiment of the present disclosure, the processor 1200 may obtain, through the camera 1100, a first image including the tray 1001 inserted into the internal space of the home appliance 1000, and may use the first image to identify the height at which the tray 1001 is inserted in the inner space. The processor 1200 may also identify the height at which the tray 1001 is inserted based on information obtained from at least one of the depth sensor 1410, the weight sensor 1420, and the infrared sensor 1430. The operation of identifying the height at which the tray 1001 is inserted by the processor 1200 is described in detail later with reference to FIGS. 5 to 8.
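Just to make that concrete, here's a rough Python sketch of how a tray-height step like that might work from a depth-sensor reading. The shelf positions, cavity height and sensor mounting are my own assumptions for illustration, not anything from the patent:

```python
# Illustrative sketch only: hypothetical helper mapping a depth-sensor reading
# to one of several fixed tray heights (shelf positions are assumed values).

SHELF_HEIGHTS_MM = (60, 120, 180, 240)  # assumed shelf positions, not from the patent


def identify_tray_height(depth_reading_mm: float) -> int:
    """Return the shelf height (mm) closest to the measured tray position."""
    # The depth sensor is assumed to sit at the top of the cavity and report
    # the distance straight down to the tray surface.
    cavity_height_mm = 300  # assumed cavity height
    tray_height_mm = cavity_height_mm - depth_reading_mm
    return min(SHELF_HEIGHTS_MM, key=lambda h: abs(h - tray_height_mm))


if __name__ == "__main__":
    print(identify_tray_height(depth_reading_mm=175.0))  # -> 120
```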
According to an embodiment of the present disclosure, the processor 1200 may determine a setting value related to image capture of the interior space according to the height at which the tray 1001 is inserted and, based on the determined setting value, obtain a second image (hereinafter also referred to as a monitoring image) including the contents placed on the tray 1001. For example, the processor 1200 may determine the brightness value of the lighting in the interior space according to the height at which the tray 1001 is inserted, adjust the brightness of the lighting 1700 disposed in the interior space according to the determined brightness value, and control the camera 1100 to acquire the second image. In addition, the processor 1200 may determine the size of a cropped area according to the height at which the tray 1001 is inserted, and obtain the second image by cropping away part of the surrounding area of the first image based on the determined size of the cropped area. The processor 1200 may also obtain the second image by determining a distortion correction value of the camera 1100 according to the height at which the tray 1001 is inserted and applying the distortion correction value to the first image. The operation in which the processor 1200 acquires the second image (monitoring image) by applying a setting value according to the height at which the tray 1001 is inserted is described in detail later with reference to FIGS. 9 to 16.
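Again purely as an illustrative sketch, the per-height lookup of capture settings (lighting brightness, crop size, distortion correction) could look like this; the data class and every number in it are assumptions, not figures from the filing:

```python
# Illustrative sketch only: a simple lookup of capture settings per tray height.
# Values and field names are assumed for illustration, not taken from the patent.

from dataclasses import dataclass


@dataclass
class CaptureSettings:
    lighting_brightness: int   # 0-255 LED duty cycle, assumed scale
    crop_fraction: float       # fraction of the frame kept around the tray
    distortion_k1: float       # first radial distortion coefficient to apply


# Hypothetical per-height settings: the higher the tray sits (closer to the
# camera), the less light is needed and the tighter the crop.
SETTINGS_BY_HEIGHT_MM = {
    60:  CaptureSettings(lighting_brightness=220, crop_fraction=0.95, distortion_k1=-0.05),
    120: CaptureSettings(lighting_brightness=180, crop_fraction=0.85, distortion_k1=-0.10),
    180: CaptureSettings(lighting_brightness=140, crop_fraction=0.75, distortion_k1=-0.18),
    240: CaptureSettings(lighting_brightness=110, crop_fraction=0.65, distortion_k1=-0.28),
}


def settings_for(tray_height_mm: int) -> CaptureSettings:
    """Look up the capture settings for an identified tray height."""
    return SETTINGS_BY_HEIGHT_MM[tray_height_mm]


print(settings_for(120))  # CaptureSettings(lighting_brightness=180, ...)
```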
…
The input interface 1620 may include a voice recognition module. For example, the home appliance 1000 may receive a voice signal, which is an analog signal, through a microphone and convert the speech into computer-readable text using an Automatic Speech Recognition (ASR) model. The home appliance 1000 may obtain the user's utterance intention by interpreting the converted text using a natural language understanding (NLU) model. Here, the ASR model or the NLU model may be an artificial intelligence model. The artificial intelligence model may be processed by an AI processor designed with a hardware structure specialized for processing artificial intelligence models. AI models can be created through learning. Here, being created through learning means that a basic artificial intelligence model is trained on a plurality of training data by a learning algorithm, so that a predefined operation rule or artificial intelligence model set to perform a desired characteristic (or purpose) is created. An artificial intelligence model may be composed of a plurality of neural network layers. Each of the plurality of neural network layers has a plurality of weight values, and a neural network operation is performed between the operation result of the previous layer and the plurality of weight values.
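The two-stage voice pipeline they describe (ASR to text, then NLU to intent) could be sketched like this; the model interfaces and the example strings are hypothetical placeholders, not Samsung's API:

```python
# Illustrative sketch only: the ASR -> NLU flow described in the patent,
# with hypothetical model interfaces standing in for the real AI models.

from typing import Protocol


class ASRModel(Protocol):
    def transcribe(self, audio_samples: bytes) -> str: ...


class NLUModel(Protocol):
    def classify_intent(self, text: str) -> str: ...


def handle_utterance(audio_samples: bytes, asr: ASRModel, nlu: NLUModel) -> str:
    """Convert captured audio to text, then interpret the user's intent."""
    text = asr.transcribe(audio_samples)   # e.g. "bake the pizza at 200 degrees" (made-up example)
    intent = nlu.classify_intent(text)     # e.g. "start_bake" (made-up intent label)
    return intent
```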
…
Alternatively, the electronic device 200 may be implemented as an electronic device connected to a display device including a screen through a wired or wireless communication network. For example, the electronic device 200 may be implemented in the form of a media player, a set-top box, or an artificial intelligence (AI) speaker.
...
For example, a refrigerator may provide a service that recommends a menu suitable for stored ingredients. Meanwhile, in order for smart appliances to provide smart services based on object recognition, an object recognition rate needs to be improved.
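A toy sketch of the kind of menu recommendation they mention, where the recipe table and the all-ingredients-present rule are my own assumptions:

```python
# Illustrative sketch only: recommending menus from recognised ingredients.
# The recipe table and matching rule are assumptions for illustration.

RECIPES = {
    "omelette": {"egg", "butter"},
    "caprese salad": {"tomato", "mozzarella", "basil"},
    "stir fry": {"celery", "carrot", "soy sauce"},
}


def recommend(detected_ingredients: set[str]) -> list[str]:
    """Return recipes whose required ingredients are all present in the fridge."""
    return [name for name, needed in RECIPES.items()
            if needed <= detected_ingredients]


print(recommend({"egg", "butter", "tomato", "celery"}))  # -> ['omelette']
```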
…
In an embodiment, at least one neural network and/or a predefined operating rule or AI model may be stored in the memory 220. In an embodiment, a first neural network for obtaining multi-mood information from at least one of user context information and screen context information may be stored in the memory 220.
…
In an embodiment, the processor 210 may use artificial intelligence (AI) technology. AI technology may consist of machine learning (deep learning) and element technologies using machine learning, and may be implemented using algorithms. Here, an algorithm or a set of algorithms for implementing AI technology is called a neural network. The neural network may receive input data, perform calculations for analysis and classification, and output result data. For the neural network to accurately output result data corresponding to the input data, it is necessary to train it. Here, 'training' may mean teaching the neural network so that, when various data are input, it can discover or learn by itself a method of analyzing the input data, classifying the input data, and/or extracting the features needed to generate result data. Training a neural network means that an artificial intelligence model with the desired characteristics is created by applying a learning algorithm to a plurality of training data. In an embodiment, such learning may be performed in the electronic device 200 itself where the artificial intelligence runs, or through a separate server/system.
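And a minimal sketch of what "applying a learning algorithm to a plurality of training data" looks like in practice, fitting a single layer of weights to toy data with plain gradient descent (the data and hyperparameters are made up for illustration):

```python
# Illustrative sketch only: "training creates the model" on a toy problem,
# fitting one linear layer with plain gradient descent (NumPy).

import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 2 input features, 1 target value per sample.
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.5          # the "desired characteristic" to learn

# A single neural-network layer: one weight per input plus a bias.
weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for _ in range(200):
    pred = X @ weights + bias                # operation between inputs and weight values
    error = pred - y
    # Gradient-descent update: the learning algorithm adjusts the weights.
    weights -= learning_rate * (X.T @ error) / len(X)
    bias -= learning_rate * error.mean()

print(weights, bias)                         # approaches [2.0, -1.0] and 0.5
```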