

Title:
BEVERAGE DISPENSER WITH ARTIFICIAL INTELLIGENCE
Document Type and Number:
WIPO Patent Application WO/2022/107062
Kind Code:
A1
Abstract:
The object of the present invention is a method and an associated system for dispensing a liquid inside a container (4) by a tap (1). The method is based on artificial intelligence algorithms and comprises the steps of: - "Container detection" to detect the presence of said container (4), - "Accurate container positioning" to verify the correct positioning of said container (4) in relation to said tap (1), - "Automatic container filling" to manage the start and interruption of said dispensing when a pre-set level of liquid is reached inside said container (4). Said steps of the method are chronologically performed in a concatenated sequence or implemented individually in an autonomous manner.

Inventors:
MANDOLINI LUIGI (IT)
Application Number:
PCT/IB2021/060746
Publication Date:
May 27, 2022
Filing Date:
November 19, 2021
Assignee:
BLUPURA S R L (IT)
International Classes:
B67D1/08; B67D1/12; G05B19/00
Domestic Patent References:
WO2016183631A1 (2016-11-24)
Foreign References:
US20100155415A1 (2010-06-24)
US5349993A (1994-09-27)
Attorney, Agent or Firm:
PREMRU, Rok (IT)
Claims:

CLAIMS

1. Method for the dispensing of a liquid inside a destination container (4) by a tap (1), characterised in that it comprises the steps of:

“Container detection” for detecting the presence of said container (4),

“Accurate container positioning” for verifying the correct centering of said container (4) in relation to said tap (1),

“Automatic container filling” for managing the start of said dispensing and its subsequent stop upon reaching a preset level of liquid inside said container (4),

each of said steps being carried out automatically by the CPU of an electronic board (5) by artificial intelligence algorithms on the basis of the classification activity of the scenes sensed by at least one sensor (2) with which said tap (1) is provided.

2. Method according to the previous claim, characterised in that said artificial intelligence algorithms comprise an initial training stage for data acquisition and incremental training stages for the learning and storage in said electronic board (5) of data representative of said sensed scenes, by carrying out Labelling activities on said data for the purpose of their correct classification for the execution of said steps.

3. Method according to the previous claim, characterised in that said data representative of the scenes sensed during the training stage comprise data relating to one or more of the following characteristics:

- type of said container (4),

- shape of said container (4),

- filling level of said container (4),

- position of said container (4) relative to said tap (1),

- installation environmental conditions of said tap (1).

4. Method according to any previous claim, characterised in that it further comprises a calibration step of the scene without said container (4), to be used as an initial reference for the comparison with the subsequent sensed scenes.

5. Method according to the previous claim, characterised in that said calibration step takes place during the "Container detection" step, at the start of said step, and it is repeated periodically.

6. Method according to any previous claim, characterised in that said "Container detection" step discriminates said sensed scene in two alternative classifications:

"busy base" if the presence of said container (4) is identified in the scene,

"empty base" if no container (4) recognised as such by the artificial intelligence algorithm is identified in the scene.

7. Method according to the previous claim, characterised in that said artificial intelligence algorithm for the execution of said "Container detection" step uses at least one video-type or audio-type sensor (2) or a combination of both, said artificial intelligence algorithm comprising one of the algorithms among:

“Machine learning”,

“Computer vision”,

“DSP - Digital Signal Processing”.

8. Method according to claim 7, characterised in that said artificial intelligence algorithm "Machine learning" uses at least one video-type sensor (2) and classifies the sensed scene in "busy base" if the presence of said container (4) positioned on a base (D.3) underlying said tap (1) is identified in said scene, said identification comprising the steps of:

- shooting said scene by said sensor (2),

- identifying in said scene the entry (4.1) of said container (4),

- carrying out a graphic processing of the sensed scene, by creating a mask to highlight said entry (4.1) of said container (4).

9. Method according to claim 7, characterised in that said artificial intelligence algorithm "Computer vision" uses at least one video-type sensor (2) and classifies the sensed scene in "busy base" if the presence of said container (4) positioned on a base (D.3) underlying said tap (1) is identified in said scene, said identification comprising the steps of:

- shooting said scene by said sensor (2),

- performing a differential calculus among the pixels of said scene and the pixels of the scene identified in the calibration step, which is representative of a scene classified in "empty base",

- in the image deriving from said differential calculus, identifying a blob corresponding to the shape of said container (4),

- identifying in said blob the entry (4.1) of said container (4).

10. Method according to any previous claim, characterised in that said "Accurate container positioning" step discriminates said sensed scene in two alternative classifications:

"centered container" if the correct positioning of said container (4), having its entry (4.1) in axis with the liquid dispensable by said tap (1), is identified in the scene,

"off-centered container" if the correct positioning of said container (4), which does not have its entry (4.1) in axis with the liquid dispensable by said tap (1), is not identified in the scene.

11. Method according to the previous claim, characterised in that said artificial intelligence algorithm for the execution of said "Accurate container positioning" step uses at least one video-type or audio-type sensor (2) or a combination of both, said artificial intelligence algorithm comprising one of the algorithms among:

“Machine learning”,

“Computer vision”,

“DSP - Digital Signal Processing”.

12. Method according to claim 11, characterised in that said artificial intelligence algorithm "Machine learning" uses at least one video-type sensor (2) and classifies the sensed scene in "centered container" if the correct location of said container (4) on the base (D.3) underlying said tap (1) is identified in said scene, said identification comprising the steps of:

- shooting said scene by said sensor (2),

- recognising by classification that said entry (4.1) of said container (4) is in axis with the vertical column of the liquid dispensable by said tap (1).

13. Method according to claim 11, characterised in that said artificial intelligence algorithm "Computer vision" uses at least one video-type sensor (2) and classifies the sensed scene in "centered container" if the correct location of said container (4) on the base (D.3) underlying said tap (1) is identified in said scene, said identification comprising the steps of:

- shooting said scene by said sensor (2),

- verifying that said entry (4.1) of said container (4) is in axis with the vertical column of the liquid dispensable by said tap (1), said vertical column being identified by a mask (1.1) obtained by digital processing techniques.

14. Method according to any previous claim, characterised in that said "Automatic container filling" step discriminates said sensed scene in two alternative classifications:

"container empty" if the failure to reach the pre-set liquid level inside said container (4) is identified in the scene,

"container full" if the reaching of the pre-set liquid level inside said container (4) is identified in the scene, said classification of the scene in "container empty" allowing the dispensing of the liquid by the tap (1) to be continued, said classification of the scene in "container full" stopping the dispensing of the liquid by the tap (1).

15. Method according to the previous claim, characterised in that said artificial intelligence algorithm for the execution of said "Automatic container filling" step uses at least one video-type or audio-type sensor (2) or a combination of both, said artificial intelligence algorithm comprising one of the algorithms among:

“Machine learning”,

“Computer vision”,

“DSP - Digital Signal Processing”.

16. Method according to claim 15, characterised in that said artificial intelligence algorithm "Machine learning" uses at least one video-type sensor (2) and classifies the sensed scene in "container full" if the reaching of the preset level of liquid inside said container (4), located in the correct position on the base (D.3) underlying said tap (1), is identified in said scene, said identification comprising the steps of:

- shooting said scene by said sensor (2),

- recognising by means of classification that the filling level inside said container (4) has reached said preset level of liquid.

17. Method according to claim 15, characterised in that said artificial intelligence algorithm "Computer vision" uses at least one video-type sensor (2) and classifies the sensed scene in "container full" if the reaching of the preset level of liquid inside said container (4), located in the correct position on the base (D.3) underlying said tap (1), is identified in said scene, said identification comprising the steps of:

- shooting said scene by said sensor (2),

- identifying, inside the blob corresponding to the shape of said container (4), an image representative of the rim,

- verifying, by differential calculations based on the difference in brightness of the pixels, that the filling level inside said container (4) has reached said preset level of liquid, by verifying the advance of said liquid relative to said rim.

18. Method according to any previous claim from 6 onwards, characterised in that during the dispensing of said liquid in said container (4), said video-type sensor (2) is used also for said "Container detection" step to discriminate the changes in said sensed scene from "busy base" to "empty base" in case said container (4) is removed from the base (D.3) underlying said tap (1), causing the consequent stop of liquid dispensing by the tap (1).
19. Method according to any previous claim from 10 onwards, characterised in that during dispensing of said liquid in said container (4), said video-type sensor (2) is used also for said "Accurate container positioning" step to discriminate the changes in said sensed scene from "centered container" to "off-centered container" in case said container (4) is moved from the correct location underlying said tap (1), causing the consequent stop of the dispensing of the liquid by the tap (1).

20. Method according to claim 15, characterised in that said artificial intelligence algorithm "Machine learning" uses at least one audio-type sensor (2) and classifies the sensed scene in "container full" if the reaching of the preset level of liquid inside said container (4), located in the correct position on the base (D.3) underlying said tap (1), is identified in said scene, said identification comprising the steps of:

- detecting the audio changes of said scene by said sensor (2),

- assessing the spectrometric dynamics of the audio features of said scene,

- recognising by classification the sound instant representative of the fact that the filling level inside said container (4) has reached said preset level of liquid, by the change of the sound representative of the liquid dispensing from a deeper to a higher pitch.
21. Method according to claim 15, characterised in that said artificial intelligence algorithm "DSP - Digital Signal Processing" uses at least one audio-type sensor (2) and classifies the sensed scene in "container full" if the reaching of the preset level of liquid inside said container (4), located in the correct position on the base (D.3) underlying said tap (1), is identified in said scene, said identification comprising the steps of:

- detecting the audio changes of said scene by said sensor (2),

- identifying, by audio processing techniques, the sound instant representative of the fact that the filling level inside said container (4) has reached said preset level of liquid, by the change of the sound representative of the liquid dispensing from a deeper to a higher pitch.

22. Method according to any previous claim from 6 onwards, characterised in that during dispensing of said liquid in said container (4), said audio-type sensor (2) is used also for said "Container detection" step to discriminate the changes in said sensed scene from "busy base" to "empty base" in case said container (4) is removed from the base (D.3) underlying said tap (1), causing the consequent stop of liquid dispensing by the tap (1).

23. Method according to any previous claim from 10 onwards, characterised in that during dispensing of said liquid in said container (4), said audio-type sensor (2) is used also for said "Accurate container positioning" step to discriminate the changes in said sensed scene from "centered container" to "off-centered container" in case said container (4) is moved from the correct location underlying said tap (1), causing the consequent stop of liquid dispensing by the tap (1).

24. Method according to any previous claim, characterised in that said steps of:

“Container detection”,

“Accurate container positioning”,

“Automatic container filling”,

are carried out in sequence one following the other.

25. Automatic dispensing system of a liquid inside a destination container (4) by a tap (1), comprising:

- a base (D.3) underlying said tap (1), whereon said container (4) is placed,

- activation means of the dispensing function of the liquid by said tap (1),

characterised in that it further comprises:

- at least one video-type or audio-type sensor (2) or a combination of both,

- an electronic board (5) for the storage and processing of data sensed and detected by said at least one sensor (2),

said system implementing a method adapted to the automatic execution by artificial intelligence algorithms of the sequential steps of:

“Container detection” for detecting the presence of said container (4),

“Accurate container positioning” for activating the dispensing of liquid inside said container (4) by said tap (1),

“Automatic container filling” for stopping said dispensing upon reaching a preset level of liquid inside said container (4),

according to claims 1 to 24.

26. System according to claim 25, characterised in that said activation means of the dispensing function of liquid by said tap (1) comprise at least one solenoid valve piloted by a relay.

27. System according to any claim from 25 onwards, characterised in that said at least one sensor (2) is located on the external body of said tap (1) or inside its dispensing nozzle or on the top wall opposite said base (D.3).

28. System according to any claim from 25 onwards, characterised in that said at least one sensor (2) is selectable among one or more of the following:

- video camera,

- audio sensor,

- “ToF - Time of Flight” sensor,

- laser sensor,

- infrared radiation (IR) sensor,

- high-frequency ultrasound "Narrow Beam" sensor.

29. Dispenser (D) for the dispensing of drinking water and/or other beverages, comprising:

- a containment and support structure (D.1),

- a drawing station comprising a compartment (D.2), made inside said structure (D.1), said compartment (D.2) comprising a base (D.3) adapted to host at least one container (4),

- at least one tap (1) adapted to dispense said drinking water and/or other beverages inside said at least one container (4),

characterised in that it comprises the system of claims 25 to 28, for the automatic execution of the method according to claims 1 to 24.

Description:
BEVERAGE DISPENSER WITH ARTIFICIAL INTELLIGENCE

DESCRIPTION

The present invention relates to a tap for dispensing liquids, operating with an artificial intelligence algorithm capable of autonomously carrying out a series of steps aimed at correctly dispensing the liquid inside a target container, until a preset filling level is reached.

The invention is therefore included in the field of taps, understood in the broad sense of any means for dispensing liquids for filling an underlying container, intended for use both in the private sector and in the public sector.

More specifically, the invention is particularly useful in the field of beverage dispensers (a term which, from now on, will be used to identify a generic "device for dispensing water and/or beverages"), suitable for use in domestic environments, workplaces, restaurants and public places and/or outdoors.

Such dispensers are described, for example, in the European patent EP3049364 of the applicant for the present patent application.

Typically, a dispenser is connected to the water mains and/or to storage tanks for the beverage to be dispensed, and is capable of supplying drinking water and/or beverages (after purification, where necessary, by prior filtration media and/or sterilizing units) at room temperature and/or chilled (by prior refrigerating media) and/or carbonated (by addition of carbon dioxide).

The dispenser is then provided with at least one dispensing station for accessing the beverage dispensing tap, under which to place the container to be filled, as well as with appropriate controls for starting the dispensing function, typically including at least one currency acceptance device (or suitable pre-loaded circuit board or micro-chip key) and at least one interface suitable for selecting the operating functions of the dispenser, such as the type of beverage dispensed and its quantity.

The system object of the present invention, consisting in the tap and in the relative artificial intelligence method implemented therein, simulates human intelligence to autonomously perform a sequence of steps aimed at filling the container up to a pre-set level of liquid.

At present, some systems aimed at controlling the filling of a container are already known, especially in the field of industrial techniques for bottling lines and dosing of fluids.

Generally, these systems consist of mechanical gravity fillers, associated with prior means of visual verification that monitor the level of the liquid dispensed into suitable containers: this is, therefore, an "after-the-fact" control methodology, essentially limited to an application in the industrial lines of bottling plants and functioning exclusively for transparent containers that allow these means of verification to ascertain the filling level from the outside.

With specific reference to the sector of liquid dispensing systems, the methodologies known to date in the state of the art can be divided into two categories:

- liquid filling systems on the basis of a previously determined quantity, e.g. dispensing a fixed quantity of 50 cl. or 100 cl. volume;

- liquid filling systems on the basis of an estimate of the volume of the container used, by means of observation of the shape of that container.

In the end, in both of the above categories, the filling systems decide the amount of liquid to be dispensed on the basis of pre-determined criteria, either on the basis of a fixed volume or on the basis of the external observation of the container placed under the tap: in practice, this results in the drawback that, if the container is already partially filled before the new dispensing, the latter would cause the liquid to spill.

The present invention aims to solve, at least in part, the problems of the above- mentioned prior technique, by proposing a system composed of a tap and of the relevant artificial intelligence method implemented therein, capable of operating with any type of container and in any starting condition of the initial filling level of said container.

Another aim of the invention is to propose a system comprising a tap whose operation allows an improved user experience by any user.

A further purpose of the invention is to provide means for facilitating and automating the liquid dispensing function of the tap, without incurring waste or spillage from the target container.

These and other purposes, which will become clear later, are achieved by a tap operating with an artificial intelligence algorithm capable of detecting the presence of a container and dispensing liquid to fill it to a pre-set level, in accordance with method claim 1 and apparatus claim 25.

Other purposes may also be achieved by the additional features of the dependent claims.

The features of the present invention will be better evidenced by the following description of a preferred embodiment in accordance with the patent claims and illustrated, by way of a non-limiting example only, in the attached drawings, wherein:

- fig. 1 is a side view of a dispenser equipped with the system according to the invention, comprising the tap on which the artificial intelligence method according to the invention is implemented;

- fig. 2.A and fig. 2.B show a top view of the system according to the invention, illustrating a recognition step of a container, respectively according to a real situation and according to the corresponding graphic elaboration carried out by the artificial intelligence algorithm by means of a video sensor;

- fig. 3 shows a top view of the system according to the invention, with reference to a recognition step of the absence of a container according to the artificial intelligence algorithm;

- fig. 4.A and fig. 4.B show a top view of the system according to the invention, representing a control step of the centering of the opening of the container in relation to the tap above, respectively according to an example of correct positioning and according to an example of incorrect positioning;

- figs. 5.A, 5.B, 5.C and 5.D show the sequence of operations of the system according to the invention, carried out in the step of managing the starting and stopping of the liquid dispensing inside the container by the tap;

- fig. 6 is a block diagram showing, by way of non-limiting example, a variant of the artificial intelligence algorithm in its various steps as implemented in the system according to the invention.

The characteristics of a preferred variant of the system comprising the tap and the related artificial intelligence method are now described, using the references contained in the figures. It should be noted that the aforesaid figures, although schematic, reproduce the elements of the invention according to proportions between their dimensions and spatial orientations which are compatible with a possible embodiment.

It is also specified that any dimensional and spatial terms (such as "lower", "upper", "internal", "external", "front", "rear" and the like) refer to the positions of the elements as represented in the attached figures, without any limiting intent in relation to the possible operating conditions.

It should be noted that, from now on, for the sake of clarity but without any limiting intent, reference will be made herein to a system comprising the tap and the relative artificial intelligence method associated therewith, in the variant in use in a beverage dispenser, reiterating however that such system is also applicable to any product in which a tap intended to dispense a liquid inside a target container is used.

With reference to fig. 1, D is therefore used to indicate a beverage dispenser, typically comprising a containment and support structure D.1 which, in the example shown in the figure, consists of a box-shaped element having a substantially parallelepipedal shape; however, it may take on the most varied architectural and dimensional configurations, in consideration of the destination environment and of the requirements inherent to the type of use.

Said structure D.1 defines an internal volume suitable for housing the typical constructive-functional components of a dispenser, such as, for example, hydraulic circuitry and tanks, refrigeration and gasification units, sanitization elements, which do not require in-depth discussion since they are part of the prior art in the sector.

For the purposes of the present invention, it is sufficient to say that said structure D.1 of said dispenser D also comprises at least one withdrawal station including a compartment D.2, having dimensions suitable to house at least one container 4 to be filled with a liquid dispensed by a tap 1 above it which, in the example shown in the figure, protrudes from the top wall of said compartment D.2: said tap 1 may, however, also be located in other walls of said compartment D.2, such as, for example, the rear wall or side walls, if any.

Said compartment D.2 is provided with a base D.3 for supporting the container 4, preferably provided with a grid 3 (or similar surface provided with openings) for draining the liquid that may leak from the container or drip from the tap 1.

In the example of the attached figures, said container 4 is represented as a bottle or a glass, but of course it may consist of any container having a shape and size such as to be placed in the compartment D.2.

In accordance with the present invention, the tap 1 comprises at least one sensor 2, advantageously located in the outer body of said tap 1 or directly inside its dispensing nozzle or in the top wall of said compartment D.2.

The features and operation of said at least one sensor 2 will be further clarified below, but it is anticipated herein that it may comprise a video sensor or an audio sensor or a combination thereof, the inputs of which are sent to an electronic circuit board 5 which processes them by means of its CPU for carrying out the various steps comprising the artificial intelligence method of the system according to the invention. As illustrated in the block diagram of fig. 6, the artificial intelligence method implemented in the tap 1 comprises the following steps:

- Step 1: calibration of the scene and detection of the container 4; from here on, this step 1 will be referred to as "Container detection";

- Step 2: control of the correct centring of container 4 below tap 1; from here on, this step 2 will be called "Accurate container positioning";

- Step 3: management of the start and stop of liquid dispensing inside the container 4 by the tap 1, up to a pre-set level, i.e. typically close to the rim of the container 4; henceforth this step 3 will be referred to as "Automatic container filling".
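The three steps above form a simple sequential pipeline. The sketch below is purely illustrative (the function names and scene representation are hypothetical, not the applicant's implementation): each stage gates the next, and only "Automatic container filling" ever starts or stops the liquid flow.

```python
# Hypothetical sketch of the three-step pipeline described above.
# A "scene" is modelled here as a plain dict; the classifier functions
# stand in for the artificial intelligence algorithms run on the board (5).

def container_detection(scene) -> bool:
    """Step 1: classify the scene as 'busy base' (True) or 'empty base'."""
    return scene.get("container_present", False)

def accurate_positioning(scene) -> bool:
    """Step 2: classify as 'centered container' (True) or 'off-centered'."""
    return scene.get("entry_on_axis", False)

def automatic_filling(scene, preset_level: float) -> str:
    """Step 3: 'container full' stops dispensing, 'container empty' continues."""
    return "container full" if scene["level"] >= preset_level else "container empty"

def dispense(scene, preset_level=0.95) -> str:
    """Run the three steps chronologically, as in the sequential variant."""
    if not container_detection(scene):
        return "empty base: no dispensing"
    if not accurate_positioning(scene):
        return "off-centered container: no dispensing"
    return automatic_filling(scene, preset_level)
```

As noted in the claims, the same steps may also be run individually; the chaining here only mirrors the sequential variant of fig. 6.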

As it happens in any artificial intelligence algorithm, for the purposes of the correct functioning of the above-listed steps, a preliminary training stage is necessary in which the system is trained to recognise and discriminate certain operating conditions of the same, storing the resulting data in the CPU of the electronic circuit board 5.

For this preparatory training stage, which can be understood as an initial configuration of the system, it is necessary to store in the electronic circuit board 5 a multitude of suitable data (together known as "dataset"), representative of the different operating conditions in which the constituent elements of said system may be found in real use by the user.

More specifically, in the training stage, the acquired data are processed to discriminate practical operating situations of the system, assuming the most diverse situations of actual operation of the three steps mentioned above, in relation, for example, to the type and shape of the container, its filling level, its position in relation to tap 1, the environmental conditions in which tap 1 is installed.

Besides the initial stage, training can also continue during normal operation of the system, in accordance with prior artificial intelligence algorithms which provide for progressive learning ("incremental training stages") by means of a sort of retro-propagation of the data, which allows the algorithm to adapt over time through training and the storage of additional data in the electronic circuit board 5, through input provided by a technician or directly by the user during normal use.

In other words, the artificial intelligence algorithm operates an adaptation at the moment in which new data arrives, intended as representative data of practical situations that were not initially simulated in the initial training stage: see the block diagram summarised in fig. 6, which shows that each step of the method is followed by a data acquisition step which, if different from similar data already stored in the electronic circuit board 5 during the initial training stage or as a result of subsequent learning ("incremental training stages"), increases the amount of data representative of the various possible operating conditions of the system.
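The incremental growth of the dataset described above can be sketched as a labelled store that only keeps a new sample when it differs from data already acquired. This is an illustrative simplification (the novelty test and class names are assumptions; the text does not specify the actual retro-propagation scheme):

```python
# Illustrative sketch of the incremental training stages: a labelled scene
# sample is stored only if it is not already represented in the dataset.

def is_novel(feature, stored, tol=0.1):
    """True if no stored sample lies within `tol` of the new feature value."""
    return all(abs(feature - s) > tol for s, _ in stored)

class IncrementalDataset:
    """Grows over time as new operating conditions are encountered."""

    def __init__(self):
        self.samples = []  # list of (feature, label) pairs

    def add(self, feature, label):
        """Store the labelled scene only if it adds new information."""
        if is_novel(feature, self.samples):
            self.samples.append((feature, label))
            return True
        return False  # already represented: nothing learned
```

In use, a scene close to one already stored is discarded, while a genuinely new operating condition enlarges the dataset, mirroring the data-acquisition blocks of fig. 6.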

By way of example but not limited thereto, below is a list of artificial intelligence algorithms which may be used in the system of the present invention:

- "Machine Learning", in accordance with the various methods by which said algorithm may be implemented: according to a preferred variant of the invention, the machine learning employed operates on a binary classifier model, by means of "Deep Neural Networks" of a convolutional and fully connected type;

- "Computer Vision", specifically in the case that the sensor 2 is a video sensor;

- "Digital Signal Processing", specifically in the case that sensor 2 is an audio sensor.
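To illustrate the binary classifier model mentioned for "Machine Learning", the toy sketch below trains a plain logistic regression on a single scalar scene feature. This deliberately replaces the convolutional deep neural networks named in the text with the simplest possible binary classifier, purely to show the "empty base" / "busy base" decision; the feature and dataset are invented for the example.

```python
import math

# Toy binary classifier for "empty base" (0) vs "busy base" (1),
# trained by stochastic gradient descent on one scalar feature
# (e.g. the fraction of pixels that changed relative to the calibration).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    """data: list of (feature, label) pairs with label in {0, 1}."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of the log-loss w.r.t. w
            b -= lr * (p - y)       # gradient of the log-loss w.r.t. b
    return w, b

def classify(w, b, x):
    return "busy base" if sigmoid(w * x + b) >= 0.5 else "empty base"

# Hypothetical labelled scenes: low feature -> empty base, high -> busy base.
dataset = [(0.05, 0), (0.1, 0), (0.15, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
w, b = train(dataset)
```

A real implementation would feed whole video frames or audio spectra through the convolutional and fully connected layers mentioned above, but the classification output has the same binary shape.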

As regards the type of sensor 2 implemented in the tap 1 or close to it, which represents the source of data acquisition to be sent to the electronic circuit board 5 for their processing for the execution of the steps of the method, it has already been anticipated that it may include at least one video sensor (typically a camera) or an audio sensor, or even a combination of them.

Alternatively, said at least one sensor 2 may include:

- a "ToF - Time of Flight" sensor, capable of estimating in real-time the distance between it and the elements being imaged, by calculating the time it takes for a light pulse to travel along the "sensor-element-sensor" path;

- a Laser sensor;

- an IR infrared sensor;

- a high-frequency ultrasound "Narrow Beam" sensor.
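The "Time of Flight" principle listed above reduces to a one-line computation: the distance is half the round-trip travel time multiplied by the speed of light. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance from sensor to element, given the time a light pulse
    takes to travel the "sensor-element-sensor" path (hence the /2)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after 2 nanoseconds corresponds to roughly 0.3 m.
d = tof_distance(2e-9)
```

Real ToF sensors measure this per pixel or per zone and report a depth map, but the underlying arithmetic is exactly this halved round trip.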

In the following, the three steps of the artificial intelligence method implemented in the tap 1 according to the invention are further detailed. According to a variant, shown in the block diagram in fig. 6, the three steps of the method are performed sequentially one after the other; however, the teachings of the present invention also apply to the individual steps, performed individually and independently of each other.

STEP 1: Container detection

In this step, the system ascertains whether a container 4 is present in the compartment D.2 of the dispenser D, i.e. whether its base D.3 (provided with a possible support grid 3) is occupied or not by a container 4 suitable for filling the liquid dispensed by the tap 1.

For the sake of simplicity, from now on reference will be made exclusively to the most common hypothesis in which said container 4 is actually placed on said base D.3 but, for the purposes of the invention, it is understood that said Step 1 fully achieves its operability also in the hypothesis in which said container 4 is not necessarily resting on said base D.3, since it may also be held in the hand of the user or placed on a suspended tray.

In Step 1 the artificial intelligence algorithm discriminates, therefore, between two operating conditions, using a binary classificatory model to categorise the processed scene into "empty base" or "busy base".

To this end, the artificial intelligence algorithm starts with a calibration step of the scene, to be used as a starting reference for subsequent surveys: such calibration is necessary to acquire an initial reference, representative of the starting conditions of the scene in which the system operates.

This calibration step should be repeated periodically, in addition to the first operation of the system, so that this initial reference can be modified and adapted to operating conditions that may have changed over time, for example due to different environmental situations and/or a different positioning of the dispenser D.

In the training stage, the system acquires a series of representative data of scenes in which the most varied types of containers suitable for filling with a liquid appear, through multiple continuous acquisitions, both at an initial stage and during the normal operation of the system (through input provided by a technician or directly by the user during normal use).

Therefore, in this training stage, by means of the prior Labelling technique, the information of belonging to the class "busy base" or "empty base" is associated with each acquired data item.
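The Labelling stage described above can be sketched as follows. This is a hypothetical minimal example, not the patented implementation: each acquired sample (here an invented two-value feature vector) is paired with its class label, and a simple nearest-centroid classifier is trained on the labelled set. Feature values and names are illustrative assumptions only.

```python
# Hypothetical sketch of the Labelling stage for Step 1: labelled samples
# train a minimal nearest-centroid binary classifier. All numbers are invented.

def centroid(samples):
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def train(labelled):
    """labelled: list of (features, label) pairs. Returns per-class centroids."""
    by_class = {}
    for features, label in labelled:
        by_class.setdefault(label, []).append(features)
    return {label: centroid(group) for label, group in by_class.items()}

def classify(model, features):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Labelled acquisitions (invented features: e.g. occupied-pixel ratio, mean depth).
training_data = [
    ([0.02, 0.95], "empty base"),
    ([0.05, 0.90], "empty base"),
    ([0.60, 0.40], "busy base"),
    ([0.70, 0.35], "busy base"),
]
model = train(training_data)
print(classify(model, [0.65, 0.38]))  # a new scene resembling the "busy base" samples
```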

With particular reference to the variant of fig. 6, if with this Step 1 the condition of "busy base" is ascertained, the method can continue with Step 2; otherwise, the condition of "empty base" is ascertained and the method stops, returning to the initial stage of data acquisition and possible training, intended as a progressive learning step ("incremental training stages") with retro-propagation of the data emerging from the binary classification just carried out.

Such Step 1 "Container detection" is carried out cyclically, thus allowing to interrupt the liquid dispensing during Step 3 "Automatic container filling", i.e. the latter is interrupted if, during its execution, the container 4 is removed from the base D.3. of the compartment D.2 of the dispenser D.

As said, Step 1 "Container detection" can be carried out with the use of a sensor 2 of a video type, of an audio type or with a combination of the same.

In the case of use of a video type sensor 2, the artificial intelligence algorithms that can be used may consist of:

• "Machine Learning": through the video sensor 2, the base D.3 of the compartment D.2 is observed; in the initial calibration step, this base D.3 is empty and the scene is classified, therefore, as "empty base", acting as an initial reference. This algorithm discriminates between "empty base" and "busy base" according to the presence or not, in the scene covered by the field of view of the sensor 2, of an object detectable as container 4.

As shown in fig. 2.A and fig. 2.B, the search performed by the sensor 2 is aimed at identifying a precise portion of the container 4, identifiable as the entry section 4.1 of the container 4 for the liquid dispensed by the tap 1.

In the example shown in the figure, this portion of the container 4 (henceforth referred to as "entry 4.1") corresponds to the circular area of the neck of a bottle, but obviously, it can take on the most varied forms depending on the type and conformation of the specific container 4.

Fig. 2.A shows the real scene picked up by the sensor 2, while fig. 2.B shows the corresponding graphic elaboration according to the artificial intelligence algorithm, obtained through prior image-analysis operations: more in detail, to discriminate the entry 4.1 of the container 4, a mask corresponding to the representative image of the real scene is created, then highlighting this portion of the container 4 (the black zone in fig. 2.B) so as to learn its location within the whole analyzed image.

If the algorithm identifies this entry 4.1 of the container 4, the scene is classified as "busy base", representative of the operating condition in which there is a container 4 on the base D.3 of compartment D.2 of the dispenser D.

Otherwise, no identification of the entry 4.1 of container 4 takes place and the scene is classified as "empty base", either:

- if no object is placed on the base D.3 of the compartment D.2 (obtaining, in substance, representative data equal to the initial calibration scene),

- or if an object that cannot be classified as a container 4 is inserted (e.g. the user's hand or any other object that is not a container 4 provided with an entry 4.1 suitable to receive the liquid dispensed by the tap 1),

- or if a container 4 is inserted but no entry 4.1 is identified in it (e.g. because said container 4 is mistakenly placed upside down or because said entry 4.1 is covered by a cap).

• "Computer Vision": also with this method the video sensor 2 takes as initial reference an empty scene, classifying it as "empty base" (see fig. 3) and operates a binary classification between "empty base" and "busy base" depending on the presence or not, in the scene covered by the field of view of the sensor 2, of an object detectable as container 4.

When an object is placed on the base D.3, the "Computer Vision" algorithm carries out a differential calculation between the pixels of the new scene taken and those of the initial calibration scene.

As per prior art, said differential calculation can be performed by subtraction of the chromatic pixel values or by calculations based on statistics, previously adjusting the differential detection thresholds or adapting them to the particular environmental conditions of the scene being processed.
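The differential calculation by subtraction of the pixel values, with an adjustable detection threshold, can be sketched as follows. This is an illustrative sketch (numpy assumed available), not the patented code: the threshold and minimum changed fraction are invented parameters.

```python
# Illustrative sketch of the differential calculation between the calibration
# scene and a new scene: pixels whose absolute difference exceeds an adjustable
# threshold are marked as changed; a sufficiently large changed region suggests
# an object on the base. Thresholds are assumptions, not from the patent.
import numpy as np

def changed_mask(calibration, scene, threshold=30):
    diff = np.abs(scene.astype(np.int16) - calibration.astype(np.int16))
    return diff > threshold  # boolean mask of changed pixels

def base_is_busy(calibration, scene, threshold=30, min_fraction=0.05):
    mask = changed_mask(calibration, scene, threshold)
    return mask.mean() > min_fraction

# Synthetic 8-bit greyscale frames: an empty base, then one with a bright object.
calib = np.full((40, 40), 50, dtype=np.uint8)
busy = calib.copy()
busy[10:30, 15:25] = 200  # the inserted object
print(base_is_busy(calib, busy), base_is_busy(calib, calib))
```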

From this differential calculation, the algorithm obtains a portion of the image that can be defined as a "blob", i.e. a shape corresponding to the object inserted on the base D.3: also in this case, like the "Machine Learning" algorithm, the aim is to identify a precise portion of the container 4 within this blob, which can be traced back to the entry 4.1 for the liquid dispensed by the tap 1.

The identification in the blob of said entry 4.1 is carried out by means of prior techniques of computer vision and digital processing of images, such as, for example, the Hough transform.
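A heavily simplified sketch of Hough-transform circle detection, of the kind usable to locate the circular entry 4.1, is shown below. For clarity the radius is assumed known and edge pixels vote directly for candidate centres; a production implementation would typically rely on a library routine that also searches over radii. The synthetic edge image and all parameters are illustrative assumptions.

```python
# Simplified Hough-transform sketch: edge pixels vote for circle centres at a
# known radius; the accumulator maximum gives the most likely centre.
import numpy as np

def hough_circle_center(edge_points, shape, radius):
    """Accumulate centre votes from edge pixel coordinates; return best (row, col)."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for y, x in edge_points:
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered vote accumulation
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge set: points on a circle of radius 8 centred at (20, 25).
angles = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
edges = [(int(round(20 + 8 * np.sin(a))), int(round(25 + 8 * np.cos(a))))
         for a in angles]
print(hough_circle_center(edges, (40, 50), radius=8))
```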

If the algorithm identifies this entry 4.1 of the container 4, the classification of the scene as "busy base" takes place; otherwise, the scene is classified as "empty base".

• In the case of the use of an audio sensor 2, the artificial intelligence algorithm uses said sensor 2 for the calibration step of the scene, classifying it as "empty base"; for the actual binary classification into "empty base" or "busy base", instead, it is necessary to wait for the data emerging in the dispensing condition in order to discern, in an effective manner, the presence or absence of a container 4 on the base D.3 of the dispenser D.

That is, in such Step 1 the audio sensor 2 is used by the algorithm exclusively to carry out the initial calibration of the scene, recording the audio in the conditions of normal operation of the system and in the absence of a container 4 on the base D.3 of the dispenser D or, in general, in the absence of dispensing into said container 4 (although present on said base D.3).

This calibration step takes place prior to the first operation of the system and is then repeated continuously over time, in order to adapt the representative data to the actual audio conditions of the place and context of installation of the dispenser D. Only if the dispensing of the liquid by the tap 1 is activated will the algorithm be able to understand whether or not a container 4 is inserted on the base D.3, as will become clear later on, in the paragraphs of the description dedicated to Step 3 "Automatic container filling".

STEP 2: Accurate container positioning

In this step, the system ascertains whether or not the container 4 inserted on the base D.3 (provided with a possible support grid 3) is centered in relation to the tap 1 above, i.e. whether the entry 4.1 of said container 4 is aligned with the liquid dispensable by said tap 1 or, in any case, in a position useful for filling with said liquid (dispensable, for example, by a tap 1 placed on a side wall and not on the top of the compartment D.2).

In Step 2, therefore, the artificial intelligence algorithm discriminates between two operating conditions, using a binary classification model to categorise the processed scene as "centered container" or "off-centered container".

Following the ascertainment of the presence of a container 4 (with consequent classification of the scene as "busy base"), with this Step 2 the algorithm verifies whether this container 4, as well as being present, is also placed on the base D.3 in the correct position, so that it can receive the liquid then dispensed by the tap 1.

In the training stage, the system acquires a series of data representative of scenes in which the most varied types of containers have their entry 4.1 centered or off-centered in relation to the axis of fall of the liquid from the tap 1, through multiple continuous acquisitions, both at an initial stage (i.e. prior to the first start-up of the system) and during the normal operation of the system (through input provided by a technician or directly by the user during normal use).

Therefore, in this training stage, by means of the prior Labelling technique, the information of belonging to the class "centered container" or "off-centered container" is associated with each acquired data item.

With particular reference to the variant of fig. 6, if with this Step 2 the condition of "centered container" is ascertained, the method can continue with Step 3 for the management of the liquid dispensing; otherwise the condition of "off-centered container" is ascertained and the method stops, returning to the initial stage of data acquisition and possible training, intended as a progressive learning stage ("incremental training stages") with retro-propagation of the data emerging from the binary classification just carried out.

Such Step 2 "Accurate container positioning" is carried out cyclically, thus allowing to interrupt the liquid dispensing during the Step 3 "Automatic container filling", i.e. the latter is interrupted if, during its execution, the container 4 is moved from the correct position centered on the base D.3. of the compartment D.2 of the dispenser D.

Also in such Step 2 "Accurate container positioning" the sensor 2 employed may be a video type, an audio type or both.

In the case of use of a video type sensor 2, the artificial intelligence algorithms that can be used may consist of:

• "Machine Learning": using criteria similar to those already described in Step 1 "Container detection", the video sensor 2 observes the scene and discriminates between "centered container" or "off-centered container" depending on whether or not the entry 4.1 of the container 4 is positioned on axis in relation to the vertical column of liquid dispensed by the tap 1.

• "Computer Vision": similarly to the method already seen in Step 1 "Container detection", the algorithm is able to discern between the two classifications "centered container" and "off-centered container" on the basis of computer vision techniques and digital processing of the images captured by the sensor 2.

In fig. 4.A and fig. 4.B two scenes are shown, respectively classifiable as "centered container" and as "off-centered container" depending on whether the mask 1.1, representing the vertical column of liquid that can be dispensed from the tap 1, falls in plan or not inside the entry 4.1 of the container 4, preliminarily obtained with the technique seen for Step 1.

That is, analogously to Step 1, this method envisages the identification of the precise portion of the container 4 within the blob, obtaining the representative image of said entry 4.1 in order to verify whether the mask 1.1 of the vertical column of liquid dispensable from the tap 1 is on axis with said entry 4.1.

If the algorithm ascertains this correct positioning, the scene is classified as "centered container" (fig. 4.A), representative of the operative condition in which on the base D.3 of the compartment D.2 of the dispenser D there is a container 4 whose entry 4.1 is on axis with the liquid dispensable from the tap 1.

Otherwise, the scene is classified as "off-centered container" (fig. 4.B), representative of the operative condition in which on the base D.3 of the compartment D.2 of the dispenser D there is a container 4 (as ascertained in Step 1 "Container detection"), but this does not have its entry 4.1 on axis with the liquid that should be subsequently dispensed from the tap 1.
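The plan-view check of Step 2 can be sketched geometrically as follows. This is a hedged illustration, not the patented logic: the mask 1.1 of the dispensing column is modelled as a circle of a given radius around the fall axis, and the scene counts as "centered container" when that circle lies inside the circle of the entry 4.1. All coordinates and radii are invented.

```python
# Illustrative sketch: is the dispensing-column circle contained in the entry circle?
import math

def classify_positioning(entry_center, entry_radius, axis_point, column_radius):
    """Return "centered container" if the liquid column falls inside the entry."""
    dx = axis_point[0] - entry_center[0]
    dy = axis_point[1] - entry_center[1]
    # The column circle is contained in the entry circle when the distance
    # between centres plus the column radius does not exceed the entry radius.
    if math.hypot(dx, dy) + column_radius <= entry_radius:
        return "centered container"
    return "off-centered container"

print(classify_positioning((100, 80), 25, (102, 78), 5))  # axis well inside the entry
print(classify_positioning((100, 80), 25, (130, 80), 5))  # axis outside the entry
```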

• In case of use of an audio sensor 2, similarly to what has been seen for Step 1 "Container detection", it is effectively used by the algorithm for the purposes of this Step 2 only in dispensing conditions, since only with dispensing in progress is the artificial intelligence algorithm able to understand whether the container 4 has lost its previous condition of "centered container".

As will be clarified in the paragraphs of the description dedicated to Step 3 "Automatic container filling", the audio sensor 2 is able to detect the misalignment of the entry 4.1 of the container 4 on the basis of the variation of the typical sound of the liquid dispensing inside the same, passing from a deep sound to an acute sound and, ultimately, to the absence of sound, representative of the condition of "off-centered container".

STEP 3: Automatic container filling

The management of the start and interruption of the liquid dispensing by the tap 1 is carried out in this Step 3 by the CPU of the electronic circuit board 5, which activates the means for starting the liquid dispensing by the tap 1, typically consisting of a solenoid valve piloted by a relay (not shown in the figure), according to the prior art.

In said Step 3, the system ascertains whether or not the container 4, present on the base D.3 of the dispenser D and having its entry 4.1 in a position centered in relation to the tap 1, has reached the pre-set filling level, i.e. whether the liquid dispensing by the tap 1 can be started or must be interrupted.

In Step 3, the artificial intelligence algorithm detects the filling condition of the container 4. This may be done using a binary classification model that categorises the processed scene as "container empty" or "container full": in this case, a partially filled container 4 is classified as "container empty".

According to a variant of the invention, several classes may also be defined, corresponding to different levels of filling of the container 4: in this case, the artificial intelligence algorithm uses a multi-class, non-binary classification model.
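The multi-class variant can be sketched as a mapping from an estimated fill fraction to several filling-level classes. The intermediate class names and the thresholds below are assumptions for illustration, not taken from the patent; only the boundary classes "container empty" and "container full" appear in the description.

```python
# Illustrative sketch of the multi-class variant: an estimated fill fraction
# is mapped to filling-level classes. Thresholds and intermediate class names
# are invented for illustration.

def fill_class(fraction: float) -> str:
    """Map an estimated fill fraction (0.0-1.0) to a filling-level class."""
    if fraction < 0.25:
        return "container empty"
    if fraction < 0.75:
        return "container half full"
    if fraction < 0.95:
        return "container almost full"
    return "container full"

print(fill_class(0.1), fill_class(0.5), fill_class(0.97))
```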

Therefore, following the ascertainment of the presence of a container 4 and of the correct positioning of its entry 4.1 on axis with the above-mentioned tap 1, with this Step 3 the algorithm verifies whether or not the dispensing of the liquid inside said container 4 has reached a certain limit, discriminating the scene between "container empty" and "container full" depending, respectively, on whether said limit has not been reached (in which case the dispensing of the liquid continues) or has been reached (in which case the dispensing is interrupted).

In the training stage, the system acquires a series of data representative of scenes in which the most varied types of containers suitable for being filled with a liquid, and placed in a central position in relation to the axis of dispensing of the tap 1, appear, through multiple continuous acquisitions, both at an initial stage and during the normal operation of the system, through input provided by a technician or directly by the user during normal use.

Therefore, in this training stage, by means of the prior Labelling technique, the corresponding information of belonging to the class "container empty" or "container full" is associated with each acquired data.

With particular reference to the variant of fig. 6, following the ascertainment of the presence of a container 4 in accordance with Step 1 (with consequent classification of the scene as "busy base") and the ascertainment of the correct positioning of its entry 4.1 in line with the tap 1 above (with relative classification of the scene as "centered container"), if the "container empty" condition is ascertained with this Step 3, the liquid dispensing starts or can continue. Otherwise, the condition of "container full" is ascertained and the liquid dispensing is interrupted, with the CPU of the electronic circuit board 5 deactivating the means for the liquid dispensing by the tap 1 and the algorithm returning to the initial stage of data acquisition and possible training, intended as a progressive learning stage ("incremental training stages") with retro-propagation of the data emerging from the binary classification just carried out.

Also in this Step 3 "Automatic container filling", a sensor 2 of a video type, of an audio type or a combination of the same can be used.

In the case of use of a video type sensor 2, the artificial intelligence algorithms that can be used may consist of:

• "Machine Learning": with criteria similar to those already described in Step 1 "Container detection" and Step 2 "Accurate container positioning", the CPU of the electronic circuit board 5 receives data representative of the scene captured by the video sensor 2 to discriminate between "container empty" or "container full". In the training stage, the system acquires a series of data, specifically multiple example videos, representative of a wide range of containers 4 with the entry 4.1 correctly centered in relation to the axis of liquid dispensing by the tap 1 and representative of different filling levels of such containers 4.

For each container 4, the algorithm defines the belonging of the acquired scene to one of the classes "container empty" or "container full": typically, this decision boundary coincides with a filling level near the rim of the container 4, beyond which it is appropriate to stop the liquid dispensing step, since otherwise the liquid would overflow out of said container 4.

As anticipated above, according to a variant of the invention, several classes may also be defined, corresponding to different filling levels; in this case, the artificial intelligence algorithm uses a multi-class classifier model.

Similarly to what happens in Steps 1 and 2, in order to limit the complexity of the algorithm, also in this Step 3 the identification can be limited to only the portion of the container 4 which represents the entry 4.1.

Practical tests of actual operation of the system have shown how this algorithm successfully discriminates between "container empty" and "container full", identifying the borderline level between the two classes, corresponding to the limit at which the liquid dispensing must be stopped.

Fig. 5.A to fig. 5.D illustrate a sequence of operation of said Step 3, starting from fig. 5.A, which shows the container 4 as correctly classified (i.e. in the "centered container" class) at the end of Step 2 "Accurate container positioning" and ready to receive the liquid from the tap 1 above thanks to the activation of the dispensing means by the electronic circuit board 5.

The algorithm identifies the scenes shown in figures 5.A, 5.B and 5.C as "container empty" and the dispensing of the liquid continues; on the contrary, in fig. 5.D this dispensing is interrupted, because it is representative of a scene in which the algorithm has identified the reaching of the pre-set filling level, classifying it as "container full".

It should be noted that fig. 5.A shows a physically empty container 4 as a starting condition, but its classification as "container empty" would have been analogous also in the case of partial filling, provided the level remained below the boundary level between the classes "container empty" and "container full".

Ultimately, as long as the algorithm ascertains that this pre-set filling level of the container 4 has not been reached, the scene is classified as "container empty" (fig. 5.B and fig. 5.C) and the tap dispenses the liquid.

Otherwise, the scene is classified as "container full" (fig. 5.D) and the liquid dispensing is stopped.

• "Computer Vision": similarly to the method already seen in Step 1 "Container detection" and in Step 2 "Accurate container positioning", the algorithm is able to discern between the two classifications "container empty" and "container full" on the basis of computer vision techniques and digital processing of the images captured by the sensor 2.

This method envisages the identification of the precise portion of the container 4 inside the blob, obtaining the representative image of the rim to verify the advancement of the level of the liquid inside said container 4, on the basis of the different brightness reflected by its internal walls.

By means of prior comparative techniques and differential calculations based on measurements of pixel brightness between one scene and the next, the algorithm is able to understand when the liquid reaches the pre-set maximum filling level.

When using an audio sensor 2, the algorithm exploits the characteristics of the sound emitted during the liquid dispensing stage inside a container 4, evaluating its spectrometric dynamics, starting from a deeper sound (representative of a low filling level) to a more acute sound (representative of the liquid approaching the maximum pre-set filling level).

Typically, the audio sensor 2 can work on a raw signal or on a signal processed with prior sound wave processing techniques, e.g. using the Fourier Transform in the frequency domain.
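The frequency-domain processing mentioned above can be sketched as follows. This is an illustrative sketch (numpy assumed): a Fourier transform of a short audio frame yields its dominant frequency, which rises as the container fills. The two synthetic tones stand in for the real microphone signal and their frequencies are invented.

```python
# Illustrative sketch: dominant frequency of an audio frame via the FFT.
# A deeper (lower-frequency) sound indicates a low filling level; a more
# acute (higher-frequency) sound indicates the liquid approaching the rim.
import numpy as np

def dominant_frequency(signal, sample_rate):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[spectrum.argmax()]

rate = 8000
t = np.arange(rate) / rate  # one second of audio
deep = np.sin(2 * np.pi * 200 * t)    # deep sound: low filling level
acute = np.sin(2 * np.pi * 1200 * t)  # acute sound: near the pre-set level
print(dominant_frequency(deep, rate), dominant_frequency(acute, rate))
```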

With an audio sensor 2, the artificial intelligence algorithms that can be used may consist of:

• "Machine Learning": during the training stage, the system acquires a series of data, specifically multiple example recordings of the sound signal emitted during liquid dispensing into a wide range of containers 4 (having the entry 4.1 correctly centered in relation to the axis of liquid dispensing by the tap 1).

For each container 4, the algorithm defines the belonging of the acquired scene to one of the classifications "container empty" or "container full"; specifically, this decision boundary coincides with the sound instant corresponding to the pre-set level of maximum filling of the container. Typically, said boundary coincides with a filling level close to the rim of the container 4, beyond which it is appropriate to stop the liquid dispensing stage, since otherwise the liquid would overflow out of said container 4.

According to a variant of the invention, several classes may also be defined, corresponding to different filling levels; in this case, the artificial intelligence algorithm uses a multi-class classifier model and is not strictly binary.

• "DSP - Digital Signal Processing": by means of this algorithm, and by means of prior processing techniques of the signal picked up (such as, for example, the Fourier Transform), the audio sensor 2 identifies raw audio signals, from which certain characteristics are extracted in order to discern the instant of the passage between a deep and an acute sound, so as to identify the signal representative of the reaching of the maximum filling level of the container 4 in order to classify the scene as "container full" and interrupt the liquid dispensing by the tap 1.

As anticipated, during this Step 3 "Automatic container filling" the audio sensor 2 can also be used to verify the possible variation of the classifications "busy base" and/or "centered container", ascertained respectively in the previously described Step 1 "Container detection" and Step 2 "Accurate container positioning": this is because only during liquid dispensing is the audio sensor able to discern such changes in the presence of the container 4 on the base D.3 of the dispenser D and/or its displacement from the correct centring position in relation to the tap 1 above.

In summary, the method for dispensing the liquid inside the container 4 by the tap 1 comprises the following three steps which, according to the variant of fig. 6, are performed in sequence one after the other, but which can also be performed individually and independently:

"Container detection" to detect the presence of said container 4,

"Accurate container positioning" to check the correct centring of said container 4 in relation to said tap 1,

"Automatic container filling" to manage the start of said liquid dispensing inside said container 4 and to interrupt it when a predetermined liquid level is reached, and each of said steps is performed autonomously and automatically by the system comprising said tap 1, by means of artificial intelligence algorithms operating on the basis of scene classification activities captured by at least one sensor 2.

From the description above, the advantages achievable with the present invention in relation to prior art solutions are clear, both from the aspect of functionality and practicality of use by a user.

In comparison with prior art systems for controlling the filling of a container, the system described herein has the advantage of operating irrespective of the initial filling state of the container: for the decision between continuing or not continuing the liquid dispensing, only the maximum filling limit is relevant, which triggers the classification as "container full" and the deactivation of the liquid dispensing means.

In other words, unlike currently known systems, the initial state of the container and its filling level is irrelevant, but only the point of arrival is taken into account, thus completely avoiding the risk of the liquid spilling out.

A further advantage resides in the fact that the system described herein operates with any type of container 4, while in the systems known up to now operation is ensured only in the presence of transparent containers, since the control means monitor the rise of the liquid level from outside the container: in the present invention, on the contrary, the container 4 can have opaque walls, since the sensor 2 carries out a control inside the same.

It is clear that numerous variants of the tap 1 with the artificial intelligence system described above will be apparent to the person skilled in the art, without going beyond the scope of innovation inherent in the inventive idea, just as it is clear that, in the practical implementation of the invention, the various components described above can be replaced by technically equivalent elements.

For example, in the attached drawings, a tap 1 provided with a single sensor 2 is schematically shown; but, as mentioned in the description, several sensors 2 can also be provided, indifferently of video and/or audio type.

Similarly, a dispenser D having a single tap 1 is shown, but the teachings of the present invention are also applicable to dispensers D having a compartment D.2 having more than one tap 1, intended to dispense liquid into a number of containers 4.