Title:
SCALE AND METHOD FOR THE AUTOMATIC RECOGNITION OF A PRODUCT
Document Type and Number:
WIPO Patent Application WO/2021/048813
Kind Code:
A1
Abstract:
The present invention relates to a scale (1) for the automatic recognition of a product (P), comprising a weighing plane (2) intended to receive a product (P), detecting means (7) mounted above the weighing plane (2) and adapted to acquire at least one image (IMG(P)) of said product (P) when positioned on the weighing plane (2), and a processor (5) intended to identify the type of product (TP) to which the product (P) placed on the scale (1) belongs.

Inventors:
BONARDI MASSIMO (IT)
ZORZELLA EMIDIO (IT)
Application Number:
PCT/IB2020/058470
Publication Date:
March 18, 2021
Filing Date:
September 11, 2020
Assignee:
ANTARES VISION S P A (IT)
International Classes:
G06Q20/20; G07G1/00
Domestic Patent References:
WO2005084227A2 (2005-09-15)
Foreign References:
EP0685814A2 (1995-12-06)
US20190138845A1 (2019-05-09)
CN109932039A (2019-06-25)
US10127438B1 (2018-11-13)
Attorney, Agent or Firm:
TOGNIN, Mattia (IT)
Claims:
CLAIMS

1) Scale (1) for the automatic recognition of a product (P) comprising:

- a weighing plane (2) intended to receive a product (P),

- detecting means (7) mounted above the weighing plane (2) and adapted to acquire at least one image (IMG(P)) of said product (P) when positioned on the weighing plane (2), characterized by the fact that it comprises a processor (5) intended to:

- identify the type of product (TP) to which the product (P) placed on the scale (1) belongs.

2) Scale (1) according to the preceding claim, wherein said processor (5) is intended to:

- receive a signal indicating the weight (W(P)) of said product (P),

- identify the type of product (TP) to which the product (P) placed on the scale (1) belongs,

- calculate the price of said product (P) depending on the price/kg (€/kg), the weight and the type of identified product (P).

3) Scale (1) according to the preceding claim, comprising a labeling machine (3) for printing adhesive labels/receipts (4) indicating the price of the identified and weighed product (P).

4) Scale (1) according to claim 2 or 3, comprising a display (6) to show the price of said identified and weighed product (P).

5) Scale (1) according to any of the preceding claims, comprising a database (M) wherein a plurality of training images (IMG_TR(P)) identifying a predetermined type of products (TP) selected from a list of products (LP) on sale at the retail store are stored, said database (M) also comprising at least one value representative of the price/kg (€/kg) for each type of products (TP) on sale at the retail store.

6) Scale (1) according to the preceding claim, wherein the identification of the type of product (TP) is determined on the basis of the similarity between the acquired image (IMG(P)) and the training images (IMG_TR(P)).

7) Method for the automatic recognition of a product (P), comprising the phases a) placing at least one product (P) on a scale (1), b) acquiring at least one image (IMG(P)) of said product (P), c) identifying the type of product (TP) to which the product (P) placed on said scale (1) belongs.

8) Method according to the preceding claim, wherein said phase b) of acquiring is carried out by means of detecting means (7) mounted on board said scale (1).

9) Method according to the preceding claim, comprising the phases of: d) receiving a signal indicating the weight (W(P)) of said product (P), e) identifying the type of product (TP) to which the product (P) placed on the scale (1) belongs,

f) calculating the price of said product (P) depending on the price/kg (€/kg), the weight and the type of identified product (P).

10) Method according to the preceding claim, comprising the phases of: g) having a plurality of training images (IMG_TR(P)) identifying a predetermined type of product (TP) selected from a list of products (LP) on sale at the retail store, and wherein said phase e) of identifying is determined depending on the similarity between the acquired image (IMG(P)) and the training images (IMG_TR(P)).

11) Method for the generation of a learning model for the automatic recognition of a product (P), comprising the phases of:

(a) positioning a plurality of products (P) on a first detection area,

(b) acquiring a plurality of training images (IMG_TR(P)) of said plurality of products (P) according to different orientations, shapes and sizes,

(c) storing the plurality of training images (IMG_TR(P)) acquired in a database (M),

(d) determining a plurality of distinctive features of said training images (IMG_TR(P)), (e) associating training images (IMG_TR(P)) having similar distinctive features with the same type of product (TP) selected from a list of products (LP),

(f) acquiring at least one image (IMG(P)) of a product (P) positioned on a second detection area, (g) classifying the acquired image (IMG(P)) depending on the similarity with the training images (IMG_TR(P)) to identify to which type of product (TP) the product (P) positioned on the second detection area belongs.

12) Method according to the preceding claim, wherein said phases a), b), c), d) and e) are carried out in the factory by means of a processor (5) programmed to work in an automatic learning mode.

13) Method according to claim 11 or 12, wherein said phase (f) is carried out on a scale placed in a store.

14) Method according to any of claims 11 to 13, wherein said phase (f) comprises the phase of:

- receiving a signal indicating the weight (W(P)) of the product (P), and

- calculating the price of the product (P) depending on the price/kg (€/kg), the weight and the type of identified product (TP),

- printing the calculated price on an adhesive label/receipt (4) to be applied onto the product (P).

15) Method according to any of claims 11 to 14, wherein said first detection area and said second detection area are matching.

16) Method according to any of claims 11 to 15, wherein the last acquired image (IMG(P)) of the last weighed product (P) is stored in the database (M) as a new training image (IMG_TR(P)).

17) Method according to any of claims 11 to 16, wherein said phase e) comprises the phase of placing the product (P) inside a semi-transparent bag (S) before it is acquired.

18) Method according to any of claims 11 to 17, wherein said phase g) comprises the phase of classifying the similarity of the acquired image (IMG(P)) of the product (P) with respect to each type of product (TP).

Description:
SCALE AND METHOD FOR THE AUTOMATIC RECOGNITION OF A PRODUCT

Technical Field

The present invention relates to a scale and a method for the recognition of a product and, more particularly, to a method for the recognition of an edible product positioned on a scale for the subsequent weighing and pricing of products in a retail store. The present invention also relates to a method for the generation of an automatic learning model for the recognition of a product and, more particularly, to a method for the generation of a learning model to be installed on a scale in order to automatically recognize an edible product to be weighed and priced.

Background Art

In retail stores or department stores, such as e.g. fruit and vegetable departments in supermarkets, self-service weighing stations are usually available that allow a user to weigh the products and label them by selecting the type of product chosen on the scale. These weighing stations include a scale where the product is placed and weighed, a touch-screen display and a labeling machine to print an adhesive label that shows various information, such as e.g. the total price, the price per kilogram and/or the bar code of the product chosen, processed on the basis of the weight and of the product indicated on the display by the user.

Usually, after selecting one or more products from the respective shelf, the user places the product on the scale weighing pan and then enters the type of product chosen via the touch-screen (e.g., by pressing a push-button with the product reference number or an image of the product, or by tapping directly on the screen), thus starting the printing of the label, which is finally applied onto the product.

Such weighing and labeling practices in supermarkets have many drawbacks. First of all, it should be specified that the weighing/labeling operation may take several minutes, especially if the user is a non-expert customer. Indicating the type of product is in fact often difficult because, most of the time, the product number to be selected on the touch-screen is indicated only on the shelf where the product was taken. It may therefore happen that a user who does not remember the reference number of the chosen product must return to the shelf, which may be positioned very far from the weighing point, with a clear lengthening of the weighing times.

In addition, since the fruit and vegetable departments are usually frequented by many people, such a complex weighing system means that the scales are occupied for a long time by a single user with the creation of queues and consequent customer dissatisfaction.

Last but not least, economic losses are noticed daily by the operators because the weighing operation, and therefore the calculation of the final price of the product, can easily be falsified by dishonest users who select on the display a product cheaper than the one actually chosen.

The Applicant has realized that, in order to speed up the weighing operations and make them more reliable, it is necessary to devise a solution that does not require the user to carry out the product selection phase on the touch-screen independently. This way, a faster and more reliable weighing and labeling system can be obtained, free from potential price falsifications.

Description of the Invention

The Applicant has therefore devised a solution that provides to associate a camera with a scale in order to acquire an image of the product when placed on the scale itself and that, through processing, allows the automatic recognition of the type of product and the exact calculation of the final price in order to obtain a labeling operation which is more effective, repeatable and not affected by errors and/or human falsifications.

The present invention therefore relates, in its first aspect, to a scale according to claim 1 having structural and functional characteristics such as to satisfy the above mentioned requirements and to overcome at the same time the aforementioned drawbacks with reference to the prior art.

According to a further aspect, the present invention relates to a method for the automatic recognition of a product according to claim 7. Finally, according to a further aspect, the present invention relates to a method for the generation of a learning model for the automatic recognition of a product according to claim 11.

Brief Description of the Drawings

Further characteristics and advantages of the scale and of the methods made according to the present invention will result from the description below of a preferred embodiment, given as an indicative yet non-limiting example, with reference to the annexed figures, in which:

- Figure 1 represents a front view of the weighing scale according to the present invention,

- Figure 2 represents an example of a diagram indicating the processing of the information acquired by the scale in Figure 1 for the recognition of a product and for the automatic calculation of the price and subsequent labeling,

- Figures 3 and 4 represent schematic views of possible scale connections to local servers or the cloud.

Embodiments of the Invention

With reference to the example in Figure 1, a scale 1 for weighing products P according to the present invention is shown. In particular, the scale 1 comprises a weighing plane 2 where the product P is placed, a labeling machine 3 for printing adhesive labels/receipts 4 indicating information about the weighed product P (such as e.g. the total price, the price/kg, the type of product TP, the bar code, etc.) and a processor 5.

The scale 1 is preferably intended to be placed inside a retail store, such as e.g. a fruit and vegetable department of a supermarket.

With reference to the examples of the attached figures, the weighing plane 2 is provided with one or more load cells (not shown) configured to send to the processor 5 (e.g. a PC or a PIC) a signal indicating the weight W(P) of the product P. The processor 5 is intended to receive the signal W(P) and to process the price of the product P based on its weight and on the predefined price per kilogram. Preferably, after the calculation has been made, the information of the product P is shown on a display 6, preferably of the touch-screen type, associated with the scale 1.
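The pricing step described above (weight signal W(P) multiplied by a predefined price/kg) can be sketched as follows. This is a minimal illustration only; the function name and rounding behavior are assumptions, not part of the patented implementation.

```python
# Hypothetical sketch of the pricing step: the processor receives a weight
# signal W(P) from the load cells and multiplies it by the predefined
# price/kg looked up for the identified product type.

def compute_price(weight_kg: float, price_per_kg: float) -> float:
    """Return the total price, rounded to the cent."""
    return round(weight_kg * price_per_kg, 2)

# Example: 0.35 kg of carrots at an assumed 1.80 EUR/kg
print(compute_price(0.35, 1.80))  # 0.63
```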

Conveniently, the processor 5 is provided with appropriate proprietary software programs that first of all allow modifying/setting the predefined price/kg for each type TP of product P on sale at the supermarket. It is therefore possible to allow an operator of the supermarket to set, by means of the display 6, the price/kg directly on screen through the software.

Advantageously, the scale 1 according to the invention is provided with one or more detecting means 7 adapted to acquire at least one image IMG(P) of the product P to be weighed when the latter is positioned on the weighing plane 2. In this case, as can be seen in the example in Figure 1, the detecting means 7 can comprise a camera vision system (for example a camera of the color CMOS technology type), the camera being mounted on top of the weighing plane 2 of the scale 1 so as to frame a detection area substantially corresponding to the entire surface of the weighing plane 2 and therefore to be able to intercept a product P positioned in any portion of the weighing plane 2.

According to one embodiment, the processor 5 is provided with a database M for storing the images IMG(P) of the acquired products P. In the database M all types of products for sale in the retail store can also be stored and saved in a memory cell indicated as “list LP”.

The automatic recognition of a product P takes place after the latter has been positioned on the weighing plane 2. The scale 1 is in fact able to recognize to which type TP a product P belongs thanks to the processing performed by the processor 5. Assuming that the product P to be recognized, positioned on the weighing plane 2 of the scale 1, is a carrot, the camera 7 will acquire at least one input image IMG(P) of the product, which will be analyzed by the processor 5 to determine, through artificial intelligence by means of one or more neural networks, to which type TP the carrot belongs. The processor 5 and the camera 7 thus communicate to allow the automatic recognition of the products positioned on the scale in order to determine their type. If the result of the determination is positive, the processor 5 will definitively attribute the acquired image IMG(P) to the “carrot” product type, chosen from the list LP saved in the database M. Since a predefined price per kg is attributed and stored in the database M for each product P present in the supermarket, the processor 5 will finally combine the weight W(P) of the product P received from the scale 1 with the type of product P just identified to calculate the final price thereof. In particular, the processor 5 will multiply the weight W(P) of the product P by the price per kg stored in the database M and send the calculated final price to the printer 3, to print it on the receipt 4 to be applied to the product P and/or to show it on the display 6.

As shown in the examples in Figure 3 and 4, the processor 5 may be connected to one or more local servers 8 or to one or more cloud-based servers 8 in order to share the information and data processed between several scales 1 located in the same store or on the territory.

It is necessary to specify that the processor 5 is programmed to work in machine learning (so-called “deep learning”) mode by means of one or more training phases carried out in the factory, wherein a plurality of potentially possible events are simulated during the weighing of the products P by the users. The simulations substantially comprise the acquisition of several training images IMG_TR(P) of all types of products P on sale at the retail store, arranged in different positions, in order to associate them with a specific type TP of product. The type TP is selected from the list LP in the database M and is constantly updated as the training continues. The training can therefore provide for an initial phase of preparation of a training dataset that allows the creation of the photographic archive useful for the subsequent training of one or more neural networks.
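The dataset-preparation phase above can be sketched as a capture loop: every type TP from the list LP is photographed in several poses, and each resulting training image IMG_TR(P) is stored with its label. All names, poses and the dummy camera are illustrative assumptions.

```python
# Illustrative sketch of training-dataset preparation: one labeled image per
# product type and pose. A real system would store actual camera frames.

def build_training_dataset(list_lp, acquire_image,
                           orientations=("flat", "rotated", "overlapping")):
    """Return a list of (label, image) pairs, one per product type and pose."""
    dataset = []
    for product_type in list_lp:
        for pose in orientations:
            dataset.append((product_type, acquire_image(product_type, pose)))
    return dataset

# Dummy camera for the example: a real system would return pixel data.
fake_camera = lambda tp, pose: f"IMG_TR({tp},{pose})"
dataset = build_training_dataset(["carrot", "apple"], fake_camera)
print(len(dataset))  # 6
```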

Advantageously, the number of such training images is less than a predefined number, preferably less than 10000, and it is sufficient, as will be seen later, to allow a correct and complete training of the neural network.

In particular, the processor 5 is able to generate its own learning model through one or more neural networks associated therewith that allows automatically recognizing the products P and, further, increasing the learning level on the basis of the training carried out.

Simulations may also usefully comprise the acquisition of other image information such as, e.g., information about the amount of light in the image, product color, size, or a combination of two or more of these pieces of information.

During the training phase, assuming that the product P to be acquired is a carrot, the camera 7 will acquire at least one training image IMG_TR(P) of the product P positioned in a detection area, which will be analyzed by the processor 5 and stored in the database M. The same carrot will then be arranged in different positions and acquired again, to simulate possible conditions that can be traced back to reality when weighing the product P at the point of sale. Advantageously, during the training phase, the acquired products P may also be positioned on a detection area other than the weighing plane of the scale. As a consequence, the training can be carried out anywhere, even without the scale. Similarly, the training phase may involve the acquisition of additional training images IMG_TR(P) wherein several carrots are arranged in different positions, according to substantially random orientations, such as e.g. partly overlapping, rotated, etc., or chosen according to different shapes, colors and/or sizes.

It should be noted that the training phase is carried out for all types of products P on sale at the store and can be repeated at will depending on the type of approximation to be obtained during the simulation phase.

According to one embodiment, the method of the present invention may comprise an enrichment phase having a predefined augmentation algorithm that consists in making small random modifications to the attributes of each training image, such as e.g. changes of rotation, brightness, position, size, translations, cutouts, linear and non-linear distortions and other image-processing operations, in order to create and enlarge the training dataset. It follows that, from a relatively low number of initial training images, it is possible to create an extremely large dataset containing a plurality of modified images, able to represent almost all the distinctive features of the images that will then be analyzed. One of the preparation phases provides for the analysis of all the training images IMG_TR(P) of the photographic archive, created and/or modified, in order to annotate, for each digital image, an area of interest containing one or more distinctive features.
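The enrichment phase above can be sketched as a simple augmentation loop. Here each variant is described by metadata (rotation angle, brightness factor) rather than real pixel operations, since the patent does not prescribe a specific image library; the parameter ranges are assumptions for illustration.

```python
# Illustrative augmentation: n_variants modified copies per training image,
# each described by (image_id, rotation_degrees, brightness_factor).
import random

def augment(image_id, n_variants, rng):
    """Return n_variants randomly modified copies of one training image."""
    variants = []
    for _ in range(n_variants):
        rotation = rng.uniform(-15.0, 15.0)   # small random rotation, degrees
        brightness = rng.uniform(0.9, 1.1)    # +/-10 % brightness change
        variants.append((image_id, rotation, brightness))
    return variants

rng = random.Random(0)  # seeded for reproducibility
enlarged = [v for img in ["IMG_TR(carrot)", "IMG_TR(apple)"]
            for v in augment(img, 50, rng)]
print(len(enlarged))  # 100
```

This shows how a small initial archive grows into a much larger dataset, as the description claims.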

In one version, for each acquired training image IMG_TR(P) there is an annotation phase in which the information representative of the training images IMG_TR(P) is stored in a specific database, such as, e.g., the coordinates of the specific area of interest containing the distinctive feature for each analyzed training image IMG_TR(P). In this case, for each image a specific area (the so-called “bounding box”) contained in the digital image and comprising one or more distinctive features (the so-called “ground truth”) is annotated, and it is then classified whether, and which, distinctive feature is representative of a product P.
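The annotation described above can be represented as one record per training image: the bounding-box coordinates of the area of interest plus the distinctive feature it contains. The field names and coordinate convention are illustrative assumptions.

```python
# Illustrative annotation record for one training image IMG_TR(P).
from dataclasses import dataclass

@dataclass
class Annotation:
    image_id: str
    bbox: tuple          # (x, y, width, height) of the area of interest
    feature: str         # the distinctive feature ("ground truth")
    product_type: str    # type TP from the list LP

ann = Annotation("IMG_TR_0001", (120, 80, 200, 60),
                 "elongated orange shape", "carrot")
print(ann.product_type)  # carrot
```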

In one version, for each image a specific area is noted comprising only one distinctive feature.

In an additional version, it is possible to instruct the processor 5 so that it independently identifies one or more distinctive features (e.g., shape, color, size, coordinates, etc.) that serve to train the neural network in order to determine to which type TP a given product P belongs.

Subsequently to the preparation phase, the training of the neural network is provided, so that the latter is independently able to assign, for each attribute and/or for each determined distinctive feature, the predefined accuracy threshold based on the distribution of the analyses carried out during the preliminary phases of generation of the training digital images. This minimizes possible false positives/negatives while maintaining, at the same time, a high ability of the neural network to supply an immediate answer during the subsequent phases of execution of the method of the present invention, as will be seen in the following of the present description.

Advantageously, neural network training, aimed at learning the distinctive features as well as the attributes associated with the products, is carried out by means of predefined machine learning models such as, e.g., of the gradient descent type, or working on the basis of particular intrinsic characteristics of the images, such as the brightness of a pixel, the gray or color levels of all or part of the pixels of the captured image, etc.

The training is therefore intended to use the training dataset previously created in the preparation phase containing all the information and associations representative of the distinctive features identified, as well as the individual attributes, in the training images that are saved in a specific database to train the neural network.

Summing up, the method according to the invention provides for a training phase of the neural network on the basis of the training images obtained from the enrichment phase and on the basis of the distinctive features noted and classified during the previous preparation phase.

The generation of the learning model allows the processor 5, when acquiring new images IMG(P), to classify the similarity of the products P on the basis of the proximity of the distinctive features of each product P (through the images related thereto) to the distinctive features of the training images IMG_TR(P). When the training phase has come to its end, the processor 5 is ready to acquire the images of the products P in the store in order to classify them depending on the similarity they have with the previously acquired training images IMG_TR(P), assigning a score S to each one. A user will therefore position the product or products P to be weighed on the weighing plane 2, starting the automatic weighing from the display 6. It will be possible, at any stage of the weighing process, to circumvent the automatic recognition by manually entering the type of product TP by means of the display 6.

In the following of the present description and subsequent claims, the “score S” indicates the similarity of the acquired image IMG(P) of the product P with respect to one of the training images IMG_TR(P) acquired during the training phase.

When one of the values attributed to each image falls within a predefined threshold value, the processor 5 is therefore able to determine whether the product P positioned in the scale pan belongs to a certain type of product TP, in the example provided by definitively attributing the acquired image IMG(P) to the “carrot” product type chosen from the list LP. It is therefore possible to query the neural network to analyze the acquired input images IMG(P) of the product P in order to provide at output an identification result on the product type present on the scale. The result is generated on the basis of the distinctive features coming from the previously created training and test datasets. In actual fact, after the knowledge of the previously created training dataset has been grafted into the neural network during the training phase, the processor 5 and the network or networks associated therewith can operate in an autonomous manner. During the query phase, in fact, the network already contains all the necessary knowledge for its correct and autonomous operation.
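The score-and-threshold decision above can be sketched as follows: each candidate type TP receives a score S, and the type is attributed only when the best score clears a predefined threshold, otherwise the user falls back to manual selection on the display. The score values and the 0.8 threshold are illustrative assumptions.

```python
# Illustrative query phase: attribute the type TP with the highest score S,
# or return None (manual fallback) when no score clears the threshold.

def identify(scores: dict, threshold: float = 0.8):
    """Return the best-scoring type TP, or None below the threshold."""
    best_type, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_type if best_score >= threshold else None

print(identify({"carrot": 0.93, "apple": 0.41}))  # carrot
print(identify({"carrot": 0.55, "apple": 0.41}))  # None
```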

Since a predefined price per kg stored in the database M is assigned for each product P in the supermarket, the processor 5 will finally combine the weight W(P) of the product P received from the scale 1 with the type of product TP just identified to calculate the final price thereof. In particular, the processor 5 will multiply the weight W(P) of the product P by the price per kg stored in the database M and send the calculated final price to the printer 3 to print it on the receipt 4 to be applied to the product P and/or to show it on the display 6. Preferably, the receipt will show whether the user has used automatic recognition to weigh the product or has performed manual weighing.

It should be specified that in some retail stores the weighing of the product P is carried out after the product is placed in a plastic bag (usually semi-transparent). As will be seen later in the present description, the scale 1 according to the present invention allows the recognition of the products P and the calculation of the weight even when the products P are placed in bags. For this purpose, during the training phase, the simulations comprise the acquisition of several training images IMG_TR(P) of all types of products P for sale in the retail store already inside the bags, acquired with different orientations, numbers, colors, sizes, or a combination of the latter, in order to associate them with a specific type of product TP. The selected type TP according to the list LP (and possibly also the other distinctive features of the images such as, e.g., amount of light, product color, size, etc.) is constantly updated as the learning continues. This way, the type of product TP can be identified even when the latter is inside a semi-transparent bag, without the need to remove the products from the bags.

According to one embodiment, the processor 5 is configured to acquire the images IMG(P) and the users' habits, which can be used in a predictive way in order to update in real time the memory cell related to the storage of the training images IMG_TR(P), depending on the products P chosen by the users over time.

In a further version, the scale 1 can be connected to an open network, e.g. the internet, or a closed network, e.g. the supermarket intranet. This way, several scales 1 placed in the same store or in a predetermined territory and connected to the network can share the information processed by the processor 5 and saved in the database M, sending it to the server 8 in order to improve the recognition performance on the products P. In this configuration, the training images IMG_TR(P) saved in the remote server 8 may be updated by several scales located in the territory, even simultaneously. Basically, the automatic learning may continue during the weighing operations in the stores, where every last acquired image IMG(P) of a weighed product P is stored in the database M as a new training image IMG_TR(P).
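The continuous-learning behavior above can be sketched as a shared store to which several scales append each last acquired image as a new training image. This is an assumed minimal model of the described data flow, not the patented implementation.

```python
# Illustrative shared training store (database M on the server): each
# successful weighing appends the last acquired image as a training image.

class SharedTrainingStore:
    def __init__(self):
        self.training_images = []  # list of (image_id, product_type) pairs

    def add_weighing(self, image_id: str, product_type: str):
        """Store the last acquired image as a new training image IMG_TR(P)."""
        self.training_images.append((image_id, product_type))

store = SharedTrainingStore()
store.add_weighing("IMG_0042", "carrot")  # e.g. scale 1 in one store
store.add_weighing("IMG_0107", "apple")   # e.g. scale 2 in another store
print(len(store.training_images))  # 2
```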

Advantageously, each scale 1 may also comprise a geo-location device (not shown) in communication with the processor 5, so that the geographical position data can be used and saved in the database M to update the users' habits. Conveniently, the processing means 5 are intended to monitor and store all weighing and image acquisition operations of the products P in order to acquire information about the turnout, stay time, type of products chosen, statistical data related to the users' choices, etc. This information can also be shared in the network to generate a plurality of common performance data regarding the choices and weighing times of the users in multiple stores. In particular, the performance data can comprise, in addition to the statistical data related to the choices of the products P, also data on the distinctive features of the products P. Substantially, the detecting means 7 and the processor 5 interact with each other to determine which products P, e.g., have taken less time in weighing or have generated a particular score, so as to improve and speed up the calculation of the final price.

As can be verified from the present description, it has been ascertained that the described invention achieves the intended objects; in particular, it is underlined that, by means of the method for the recognition of a product, it is possible to identify, weigh, price and label a product in a retail store in an automatic and fast manner, without the user needing to indicate the type of product he/she is weighing.

In particular, thanks to the innovative training phase, the processor and the relevant neural networks associated therewith can be trained to carry out continuous automatic learning during the use of one or more scales placed in the stores. The effectiveness of product recognition thus improves over time, the determination of the type of product chosen is sped up, and high-performance product recognition and weighing operations are ensured. The type and amount of images acquired both during training and during weighing are potentially infinite and, obviously, a person skilled in the art, in order to meet contingent and specific needs, will be able to make many changes and variations to the method and to the scale as described above, all of them contained within the scope of protection of the invention, as defined by the following claims.