


Title:
IRIS RECOGNITION-BASED SYSTEM AND METHOD FOR THE AUTHENTICATION OF A LIVING BEING
Document Type and Number:
WIPO Patent Application WO/2023/135580
Kind Code:
A1
Abstract:
The present invention relates to an iris recognition-based system (1) for the authentication of a living being, comprising an image acquisition means (2); a first memory unit (3) comprising a biometric template of the irises of each living being (TBR) enrolled in the authentication system (1), and a unique identifier (ID) associated with each acquired biometric template (TBI); a second memory unit (4) comprising data of enabled functions (DFA) associated with each unique identifier (ID); and a processing unit (10) configured to process data of enabled functions. The present invention further relates to an iris recognition-based method for the authentication of a living being.

Inventors:
ENNAS GIORGIO (IT)
SARASSO STEFANO (IT)
Application Number:
PCT/IB2023/050368
Publication Date:
July 20, 2023
Filing Date:
January 16, 2023
Assignee:
BIOMEYE S.R.L. (IT)
International Classes:
G06Q20/40; G06F21/32; G06V10/14; G06V40/18; G06V40/40; G07C9/37; H04L9/32; H04L9/40
Foreign References:
US20100299530A12010-11-25
US20160117544A12016-04-28
US20200202153A12020-06-25
US20170323167A12017-11-09
US20160125178A12016-05-05
Attorney, Agent or Firm:
ROSSI, Ugo et al. (IT)
Claims:

CLAIMS

1. An iris recognition-based authentication system (1) for the authentication of a living being, the system comprising:

- an image acquisition means (2), configured to acquire a first image (IM1) and a second image (IM2) of the right and left eyes of the living being;

- a first memory unit (3) comprising:

o a biometric template of the irises of each living being (TBR) recorded in the authentication system (1); and

o a unique identifier (ID) associated with each biometric template (TBR);

- a second memory unit (4) comprising:

- data of enabled functions (DFA) associated with each unique identifier (ID);

- a processing unit (10) configured to process data of enabled functions, comprising:

• an input module (11) configured to receive, from said acquisition means (2), said first image (IM1) and said second image (IM2) of the right and left eyes of the living being, and identification data (ID2) of said image acquisition means (2);

• an image processing module (12) configured to perform a segmentation of the captured images and to generate an iris biometric template (TBI);

• an iris pattern comparison module (13) configured to perform a comparison of said iris biometric template (TBI) with said iris biometric templates of living beings (TBR) stored in said first memory unit (3);

• an authorisation module (14) configured to:

o extract the identifier (IDi) of the living being associated with the stored iris biometric template (TBR) from said first memory unit (3);

o search for data of enabled functions (DFA) associated with said identifier (IDi) in said second memory unit (4); and

o generate and send an authentication signal (S_AUT) on the basis of a confirmed correspondence (OK) between said identifier (IDi) and said data of enabled functions (DFA).

2. The authentication system (1) according to claim 1, wherein said image acquisition means (2) comprises at least two high-resolution multicameras, each configured to acquire an image (IM1, IM2) of an eye of the living being.

3. The authentication system (1) according to claim 2, wherein each high-resolution multicamera comprises:

- a high-resolution monochromatic microcamera; and

- an optical filter with passband in the near-infrared spectrum.

4. The authentication system (1) according to claim 2 or 3, wherein each high-resolution multicamera comprises:

- a plurality of LEDs, each of which configured to emit infrared light;

- a sensor of light in the near-infrared spectrum; and

- a sensor of light in the visible light spectrum.

5. The authentication system (1) according to one or more of claims 2 to 4, wherein each high-resolution multicamera comprises:

- a motorised autofocus module; and

- an autofocus microcamera configured to identify the degree of blurring of the captured image of the iris and to control said motorised autofocus module.

6. The authentication system (1) according to one or more of claims 2 to 5, wherein each high-resolution multicamera comprises at least one LED configured to emit light in the visible spectrum.

7. The authentication system (1) according to one or more of the preceding claims, comprising a sensor configured to measure the body temperature of a living being remotely.

8. The authentication system (1) according to one or more of the preceding claims, comprising a display device in data connection with a display module configured to display information and data of the authentication system (1).

9. The authentication system (1) according to one or more of the preceding claims, comprising a proximity detection sensor configured to detect the distance (DST) between the living being and the image acquisition means (2).

10. The iris authentication system (1) according to one or more of the preceding claims, wherein said processing unit (10) comprises a gesture recognition module configured to interpret a plurality of gestures of the living being and to control said display module of said display device.

11. The authentication system (1) according to one or more of the preceding claims, wherein said processing unit (10) comprises a module for an anti-spoof-liveness check on the living being through an analysis of the images (IM1, IM2) captured by said image acquisition means (2).

12. The authentication system (1) according to claim 11, wherein said module for an anti-spoof-liveness check on the living being comprises:

- the emission of a plurality of flashes, having a predetermined duration and frequency over time, by means of said LED configured to emit light in the visible spectrum and by means of said LED configured to emit light in the near-infrared (NIR) spectrum, and capable of reaching the ocular surface and fundus;

- the detection, during the flash emission time, of the images (IM1, IM2) captured by said image acquisition means (2); and

- a verification that the change in illumination of the pupil is consistent with the emitted flash pattern.

13. The authentication system (1) according to claim 11 or 12, wherein said module for an anti-spoof-liveness check on the living being comprises a means for counting the blinks and double blinks of each eye of the living being.

14. The authentication system (1) according to claim 13, wherein said module for an anti-spoof-liveness check on the living being comprises a verification of the simultaneousness and frequency of the blinks of the two eyes of the living being and of the compatibility thereof with respect to a reference pattern.

15. The authentication system (1) according to one or more of claims 12 to 14, wherein a light is emitted in the near-infrared spectrum.

16. The authentication system (1) according to one or more of the preceding claims, wherein said processing unit (10) comprises a pupillometer module configured to:

- extract an image of the pupil (IMP) from said image of the eye (IM1, IM2) of the living being;

- measure the size, position, shape and reactivity of the pupil to a light stimulus; and

- generate an indication of the neurological condition of the living being.

17. The authentication system (1) according to one or more of the preceding claims, wherein said enabled functions (DFA) comprise one or more of:

- a payment procedure for paying for a product or service;

- an authorisation of access to a given facility or environment;

- an activation and/or deactivation of devices;

- device customisations;

- functions of specific equipment.

18. The authentication system (1) according to one or more of the preceding claims, comprising a third memory unit (5), wherein for every unique identifier (ID) the following corresponding items are stored:

- e-wallet;

- account;

- capabilities.

19. The authentication system (1) according to claim 18, wherein said processing unit comprises a payment validation module configured to:

- receive a payment or a transaction to be carried out for the purchase of a product or a service by an authenticated living being;

- extract the e-wallet present in said third memory unit (5) by means of the identifier (ID);

- validate said transaction or payment on a blockchain;

- if the payment or the transaction is validated, update the blockchain with the payment or the transaction carried out, returning a positive outcome;

- if the payment or the transaction is not validated, return a negative outcome without updating the blockchain.

20. An iris recognition-based method of authentication of a living being, the method comprising the steps of:

- providing an image acquisition means (2);

- acquiring, by said image acquisition means (2), a first image (IM1) and a second image (IM2) of the right and left eyes of the living being;

- providing a first memory unit (3) comprising:

o a biometric template of the irises of each living being (TBR) enrolled in the authentication system (1); and

o a unique identifier (ID) associated with each biometric template (TBI);

- providing a second memory unit (4) comprising:

- data of enabled functions (DFA) associated with each unique identifier (ID);

• receiving, from said acquisition means (2), said first image (IM1) and said second image (IM2) of the right and left eyes of the living being, and identification data (ID2) of said image acquisition means (2);

• performing a segmentation of the captured images (IM1, IM2) and generating an iris biometric template (TBI);

• performing a comparison of said iris biometric template (TBI) with said iris biometric templates of living beings (TBR) stored in said first memory unit (3);

• extracting the identifier (IDi) of the living being associated with the iris biometric template (TBR) from said first memory unit (3);

• searching for the data of enabled functions (DFA) associated with said identifier (IDi) in said second memory unit (4); and

• generating and sending an authentication signal (S_AUT) on the basis of a confirmed correspondence (OK) of the comparison performed between said identifier (IDi) and said data of enabled functions (DFA).

21. The authentication method according to claim 20, comprising the steps of:

- acquiring, by said image acquisition means (2), a first image (IM1) and a second image (IM2) of the right and left eyes of the living being;

- performing a segmentation of the captured images (IM1, IM2) and generating an iris biometric template (TBI);

- associating said iris biometric template (TBI) with a unique identifier (ID);

- storing, in said first memory unit (3):

• said iris biometric template (TBR) of the enrolled living being (1); and

• said unique identifier (ID) associated with said acquired biometric template (TBI).

22. The authentication method according to claim 21, comprising the step of storing data of enabled functions (DFA) associated with each unique identifier (ID) in said second memory unit (4).

23. The method according to one or more of claims 20 to 22, wherein one or more steps are implemented by means of a computer.

Description:
IRIS RECOGNITION-BASED SYSTEM AND METHOD FOR THE AUTHENTICATION OF A LIVING BEING

Technical field

The present invention relates to a system and method for the authentication of a living being.

In a particular example, the present invention relates to an iris recognition-based system and method for the authentication of a living being.

Once the user has been authenticated, the present invention enables the completion of a payment procedure to pay for a product or service, authorisation of access to a given facility or environment, an activation and/or deactivation of mechanical or electronic devices and the like.

In particular, the present invention has application in the luxury tourism sector, by way of non-limiting example on cruise ships and in resorts.

Prior art

The luxury tourism sector is based on the exclusiveness of the experience and the enjoyment of exclusive and distinctive services. In particular, the experiences of interaction with high technology must be non-invasive, function without hitches and be easy to use in order to be perceived positively and generate emotions and well-being in the user.

In particular, in the electronic systems currently used for the recognition, identification and authentication of users and for the smart management of access and automation functions in the luxury hospitality and nautical sectors, the following problems and disadvantages are encountered:

- they are based on “physical objects” (for example keys, badges, smartphones) that have to be carried around and limit personal freedom, thus generating a non-optimal experience, with the possibility of losing them;

- in the majority of cases, physical objects are unhygienic because they are touch-based: the user has to touch and interact with surfaces, be they traditional handles, smartphones or badges. During the COVID period, sanitisation obligations and procedures are particularly structured, careful and stringent; this is already an extra problem to be managed in the luxury hospitality sector, and touch-based interaction amplifies it further;

- personal physical objects pose high management costs, considering the cost of the physical media that must be distributed to users: consider, for example, a cruise ship or a resort with thousands of users; even with a more limited number of users it is necessary to budget for the equipment, as well as for loss and obsolescence. Anything a user must carry around can be lost and is subject to wear: keys and cards are placed on wet surfaces, fall into the water (sea or swimming pool), are rubbed, scraped, come into contact with other objects, are demagnetised, and are liable to deteriorate and become obsolete. Touch-based technologies are subject to more rapid ageing;

- another issue is “plastic pollution”, i.e. the pollution due to the plastics used by plastic-based systems. Sustainability is increasingly felt and perceived and demands particular attention; our seas are full of microplastics, and responsible choices are necessary in this regard;

- the sets of functions featured by the electronic systems used for user identification and for the smart management of access and automation functions are poorly integrated, and this constitutes a limiting factor. There is one instrument for identifying oneself and entering a room, another for managing the settings of the devices present in the room (television, other screens, music, lighting), another for starting the engine and managing the settings of the ship or its services, and so on. This redundancy of systems, each working differently, each requiring its own settings and its own record of users, again degrades the experience.

Some authentication systems of the known type implement biometric technologies based on facial recognition.

The use of facial technologies in the time of COVID is a problem, mainly because of masks that partially conceal the face.

Object of the invention

The object of the present invention is to provide an iris recognition-based system and method for the authentication of a living being that avoids the drawbacks of the prior art.

A further object of the present invention is to provide a system and method for the authentication of a user that makes it possible to access the rooms of a facility, activate/deactivate specific equipment, and manage transactions and payments without the need to have to carry around physical media or to have to enter access codes.

Another object of the present invention is to provide a system and method for the authentication of a user that has high security and robustness against possible fraud attempts.

A further object of the present invention is to provide a system and method for the authentication of a user which enables the step of registering (or enrolling) a user to be carried out in a rapid, touchless manner.

Another object of the present invention is to provide a multifunction iris recognition-based system and method for the automated management of a plurality of user functions.

A specific object of the present invention is to provide an iris recognition-based system and method for the authentication of a living being that is safe, secure, and efficient.

A further specific object of the present invention is to provide an iris recognition-based system and method for the authentication of a living being that is easy to implement and manage.

In a first aspect of the invention, the aforesaid objects are achieved by an iris recognition-based system for the authentication of a living being according to what is described in claim 1.

Advantageous aspects are described in the dependent claims 2 to 19.

In a second aspect of the invention, the aforesaid objects are achieved by an iris recognition-based method for the authentication of a user according to what is described in claim 20.

Advantageous aspects are described in the dependent claims 21 to 22.

The invention achieves, in general, the following technical effects:

- it integrates, within a single system, different functions that are normally performed by separate devices;

- it enables an authentication without any need to provide the users to be authenticated with physical objects they then always have to carry around with them;

- it enables an authentication of a completely touchless type, without any need for the user to have to touch displays, handles, alphanumeric keypads, keys and/or badges;

- it achieves a robust, secure and efficient authentication;

- it achieves an authentication that is easy to implement and manage;

- it enables real-time updating of data related to access, occupation of a facility or completion of a payment;

- moreover, it enables a real-time automated authentication of any user present in a resort and/or on a cruise ship;

- it enables a non-invasive verification of the neurological condition of a user to be authenticated.

The aforesaid technical effects/advantages and other technical effects/advantages of the invention will emerge in greater detail from the description, provided below, of an example embodiment, given by way of non-limiting illustration with reference to the appended drawings.

Brief description of the drawings

Figure 1 is a block diagram of the system of the invention;

Figure 2 schematically shows the steps of generating a biometric template of the user;

Figure 3 shows the procedure for enrolling a user in the system;

Figure 4 shows a detail of the step of comparing and extracting the identifier of a user;

Figure 5 schematically shows a payment procedure;

Figure 6 shows a detail of figure 5;

Figure 7 schematically shows an access control procedure.

Detailed description of preferred embodiments of the invention

The present invention describes an iris recognition-based authentication system 1 for the authentication of a living being comprising an image acquisition means 2, configured to acquire a first image IM1 and a second image IM2 of the right and left eyes of the living being, in data connection with a processing unit 10 configured to process data of enabled functions DFA.

Preferably, the enabled functions DFA comprise one or more of: a payment procedure for paying for a product or a service (for example, closed-circuit payments, such as the bill for a hotel room), an authorisation of access to a given facility or environment, an activation and/or deactivation of devices, customisations of the different devices (for example, controlling the temperature of a room, the lights, the shades, the music), and functions of specific equipment (for example, starting the engine and machinery, IoT devices).

Preferably, the system 1 comprises a first memory unit 3 comprising a biometric template of the two irises of each living being TBR enrolled in the authentication system 1 , and a unique identifier ID associated with each biometric template TBR recorded in the system.

The system 1 also comprises a second memory unit 4 comprising data of enabled functions DFA associated with each unique identifier ID.

Preferably, the image acquisition means 2, and the first and second memory units 3, 4 are in data connection with said processing unit 10 by means of a telematic network 30 (for example, Ethernet, fibre optic, intranet, daisy chain, internet, Bluetooth, WiFi, or the like).

According to the invention, the system 1 comprises a processing unit 10 configured to process data of enabled functions.

In general, it should be noted that, in the present context and in the claims below, the processing unit 10 is presented as divided into distinct functional modules (memory modules or operating modules) for the sole purpose of describing the functions thereof clearly and completely.

In general, it should be noted that the processing unit 10 can consist of a single electronic device, suitably programmed to perform the functions described, and the various modules can correspond to hardware entities and/or routine software belonging to the programmed device.

Alternatively, or additionally, such functions can be performed by a plurality of electronic devices over which the aforesaid functional modules can be distributed.

The processing unit 10 can further rely on one or more processors for the execution of the instructions contained in the memory modules.

Moreover, the aforesaid functional modules can be distributed over different local or remote computers based on the architecture of the network in which they reside.

With particular reference to figure 1 , the processing unit 10 comprises an input module 11 configured to receive, from said acquisition means 2, a first image IM1 and a second image IM2 of the right and left eyes of the living being to be authenticated.

In other words, the first and second acquired images each comprise the zone of each eye of a user to be authenticated.

The input module 11 is further configured to receive a unique identifier ID2 of each of said image acquisition means 2 (based on which it can identify, for example, the position and type of the image acquisition means 2).

The processing unit 10 comprises an image processing module 12 configured to perform a segmentation of the captured images and to generate a biometric template of the two irises TBI of the individual to be authenticated.

Advantageously, the biometric template TBI also includes the parameters detected by the liveness verification and anti-spoof check (step 16 in figure 2).

The blocks of the various steps leading to the generation of the biometric template TBI are schematically illustrated in figure 2. In particular, the module 12 comprises a step of pre-processing the acquired images of the eyes (comprising a segmentation step and a normalization step), a step of processing and enhancement 15 of the images, and a step of extracting and encoding 17 the biometric template TBI.
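As a purely illustrative sketch of the pipeline just described, the rubber-sheet normalization and binary encoding steps could look as follows. This is a minimal stand-in, not the patented implementation: the segmentation output (pupil centre and pupil/iris radii) is assumed already available, and the one-bit-per-sample encoding is a drastic simplification of real iris codes.

```python
import math

def normalize_iris(image, cx, cy, r_pupil, r_iris, n_radial=8, n_angular=32):
    """Daugman-style 'rubber sheet' normalization: sample the annulus
    between the pupil and iris boundaries onto a fixed polar grid,
    so templates of different eyes become directly comparable."""
    strip = []
    for i in range(n_radial):
        # radius of the i-th sampling ring, between pupil and iris edge
        r = r_pupil + (r_iris - r_pupil) * (i + 0.5) / n_radial
        row = []
        for j in range(n_angular):
            theta = 2 * math.pi * j / n_angular
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            row.append(image[y][x])
        strip.append(row)
    return strip

def encode_template(strip):
    """Encode each normalized sample as one bit: 1 if above the strip mean.
    (Real systems use richer encodings, e.g. Gabor-phase bits.)"""
    flat = [v for row in strip for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

# Usage on a synthetic 64x64 grayscale "eye" image:
img = [[(x + y) % 256 for x in range(64)] for y in range(64)]
template = encode_template(normalize_iris(img, 32, 32, 5, 20))
```

The fixed polar grid is the key design point: however the pupil dilates, the same iris texture lands in the same template positions.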

Optionally, a step of liveness detection and anti-spoof check 16 on the user’s iris (or anti-spoof-liveness check module) can be included, as described below. In other words, for the spoofing check, i.e. to verify that the iris presented in front of the image acquisition means is real, a series of flashes is emitted, whereas for the liveness check a blink analysis is used.

The anti-spoof-liveness check verifies whether what is presented is structurally and physically an eye of a living being (e.g. human and/or animal).

The processing unit 10 comprises an iris pattern comparison module 13 configured to perform a comparison of said iris biometric template TBI, detected by the image acquisition means 2, with a plurality of iris biometric templates of living beings TBR recorded in said first memory unit 3. In this manner, it is possible to verify whether the template TBI detected by the acquisition means 2 belongs to a user previously enrolled in the system.

The processing unit 10 comprises an authorisation module 14 configured to extract the identifier IDi of the living being associated with the recorded iris biometric template TBR from the first memory unit 3, search for data of enabled functions DFA associated with said identifier IDi in the second memory unit 4, and, based on the result of said comparison, generate and send an authentication confirmation signal S_AUT on the basis of a confirmed correspondence OK of the comparison performed by said comparison module 13 (with the respective extraction of the unique identifier ID associated with the recorded biometric template TBR) and said data of enabled functions DFA.

In other words, there are at least three databases for managing anonymisation and privacy by design: the correspondence between IDa1 and the biometric template TBR is stored in the first database 3, the correspondence between IDa2 and the user data in a second database, and a third database contains the IDa1/IDa2 correspondences.
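The comparison-then-authorisation flow across the separate databases can be sketched as follows. The dictionary names, the normalised Hamming-distance matcher and the 0.3 threshold are illustrative assumptions, not values taken from the patent; the point shown is that no enabled function is reachable unless the whole chain of correspondences is complete.

```python
def authorize(probe_template, db_templates, db_links, db_functions,
              match_threshold=0.3):
    """Match a probe template against enrolled templates (db_templates:
    IDa1 -> bit list), follow the link table (db_links: IDa1 -> IDa2)
    and return the enabled functions stored under IDa2 (db_functions:
    IDa2 -> list).  Returns (authenticated, enabled_functions)."""
    def hamming(a, b):
        # fraction of differing bits between two equal-length templates
        return sum(x != y for x, y in zip(a, b)) / len(a)

    for ida1, tbr in db_templates.items():
        if hamming(probe_template, tbr) <= match_threshold:
            ida2 = db_links.get(ida1)
            if ida2 is None:
                break  # correspondence incomplete: nothing can be reconstructed
            return True, db_functions.get(ida2, [])
    return False, []

# Usage: one enrolled user whose probe matches exactly.
tbr = [0, 1] * 64
ok, funcs = authorize(list(tbr), {"A1": tbr}, {"A1": "A2"},
                      {"A2": ["open_door", "pay"]})
```

Keeping the template store, the user-data store and the link table separate means a breach of any single database yields no usable identity.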

If, based on the comparison of the comparison module 13, it emerges that the living being is not enrolled in the system, the authorisation module 14 will not allow access to any enabled function DFA.

In this manner it will not be possible to reconstruct the data unless the entire correspondence is complete and correct. In other words, if the correspondence is not OK, the user will not be authenticated and will not be able to access a given facility, payment or transaction, etc.

If the user is authenticated, he or she will be able to exploit the functions for which he or she is enabled by the system. These functions are stored in the database 4 of figure 1. For example, he or she will be able to access some areas or facilities, make a payment, start a device, control the temperature or lighting of a room, etc.

Preferably, the image acquisition means 2 comprises one or more high-resolution multicameras, each configured to acquire an image IM1, IM2 of an eye or the face of the living being to be authenticated.

Preferably, each high-resolution multicamera comprises a high-resolution monochromatic microcamera, an optical filter with passband in the near-infrared spectrum (i.e. 700-900 nm) and an optical filter in the visible spectrum. In this manner, parasitic light reflections are reduced.

Preferably, each high-resolution multicamera comprises a plurality of LEDs, each configured to emit infrared and visible light, a sensor of light in the near-infrared spectrum and a sensor of light in the visible light spectrum.

Each high-resolution multicamera comprises a motorised autofocus module and an autofocus microcamera configured to identify the degree of blurring of the captured image of the iris and to control said motorised autofocus module.

In addition to motorised lenses, it is also possible to envisage the use of an optical assembly without moving parts (i.e. with fixed parts), in order to enable a more rapid, more accurate response of the system, quieter operation, less wear on the various elements (and hence a longer life) and greater reliability. The system can use one or more lenses.

The system 1 is advantageously a multicamera system. Each camera is equipped with one or more infrared (IR) emitters arranged, for example, in a circular ring. The brightness of the IR emitters is varied based on a reading of the ambient lighting by integrated exposure meters. In this manner, one obtains uniform lighting, thus improving and speeding up the process of acquisition of the two irises of the individual.
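The exposure-meter-driven adjustment of the IR emitters can be illustrated with a minimal proportional rule. The target illumination value and the linear LED model below are assumptions made for the sketch, not parameters stated in the patent.

```python
def ir_led_level(ambient_lux, target_lux=400.0, max_level=255):
    """Choose an IR-emitter drive level (0..max_level) so that ambient
    light plus LED contribution approaches a target illumination.
    Assumes the LED contributes linearly with drive level."""
    deficit = max(0.0, target_lux - ambient_lux)        # missing light
    level = round(deficit / target_lux * max_level)     # proportional drive
    return min(max_level, max(0, level))

# Usage: dark scene -> full drive; bright scene -> emitters off.
dark = ir_led_level(0)      # full power
bright = ir_led_level(500)  # ambient already exceeds target
```

A real controller would close the loop against the exposure meters frame by frame; the proportional rule is just the simplest version of that idea.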

In other words, the system 1 also includes exposure meters for determining the ambient lighting.

Each high-resolution multicamera can also comprise an LED configured to emit light in the visible spectrum (400-700 nm) and/or an LED configured to emit light in the NIR spectrum (700-900 nm).

Advantageously, the authentication system 1 comprises a sensor for sensing the body temperature of a living being, the sensor being configured to measure the body temperature remotely.

Optionally, the authentication system comprises a display device in data connection with a display module configured to display information and data of the system 1 .

Preferably, the authentication system 1 comprises a proximity detection sensor configured to detect the distance DST between the living being and the image acquisition means 2.

The device is capable of detecting an individual both from close up (about 50 cm) and from far away (up to 3 metres). For the detection of the distance, the system 1 is capable of operating both at short and at long distances through an analysis of the digital image performed by the image processing and autofocus tracking system (which is capable of “following” the individual’s movement in order to maintain a constant, continuous focus).

In one embodiment of the present invention, the processing unit 10 of the authentication system 1 comprises a gesture recognition module configured to interpret a plurality of gestures of the living being and to control said display module of said display device.

For example, the gestures interpreted by said module comprise at least movements of the hands, gazes, arms, etc.

Optionally, the processing unit 10 comprises a module 15 for an anti-spoof-liveness check on the living being through an analysis of the images IM1, IM2 captured by said image acquisition means 2.

The module for an anti-spoof-liveness check on the living being comprises: the emission of a plurality of flashes, having a predetermined duration and frequency over time, by means of said LED configured to emit light in the visible spectrum and said LED configured to emit light in the near-infrared (NIR) spectrum, capable of reaching the ocular fundus; the detection, during the flash emission time, of the images IM1, IM2 captured by said image acquisition means 2; and a verification that the change in illumination of the pupil is consistent with the emitted flash pattern.

The emitted flash pattern varies dynamically and randomly over time.

For example, the module 15 for an anti-spoof-liveness check on the living being comprises a means for counting the blinks and double blinks of each eye of the living being. In particular, in addition to counting the blinks and double blinks, the system also considers the signal associated with them (i.e. how the eyelid opens, how long it stays open, when it closes and how it closes). The whole detected signal is compared with a reference pattern and the system evaluates the deviation therefrom.

The module 15 for an anti-spoof-liveness check on the living being comprises a verification of the simultaneousness and frequency of the blinks of the two eyes of the living being and of their compatibility with a reference pattern.

The anti-spoof-liveness check is carried out with an implementation by means of a neural network or a support vector machine-based supervised learning algorithm for identifying and counting blinks and double blinks.
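The patent specifies a neural network or SVM for this analysis; as a much simpler stand-in, the blink counting and the two-eye simultaneousness check can be sketched with plain thresholds on an eye-openness signal. The openness threshold and frame tolerance below are assumed values.

```python
def count_blinks(openness, threshold=0.2):
    """Count blinks in an eye-openness signal (1.0 = fully open,
    0.0 = closed): a blink is a transition from open to below-threshold."""
    blinks, was_open = 0, True
    for v in openness:
        if was_open and v < threshold:
            blinks += 1
            was_open = False
        elif v >= threshold:
            was_open = True
    return blinks

def blinks_simultaneous(left, right, threshold=0.2, tolerance=1):
    """Liveness heuristic: both eyes must blink the same number of times
    and their closed frames must coincide within a small frame tolerance."""
    if count_blinks(left, threshold) != count_blinks(right, threshold):
        return False
    closed_l = [i for i, v in enumerate(left) if v < threshold]
    closed_r = [i for i, v in enumerate(right) if v < threshold]
    if len(closed_l) != len(closed_r):
        return False
    return all(abs(a - b) <= tolerance for a, b in zip(closed_l, closed_r))

# Usage: one blink in each eye, one frame apart -> accepted as simultaneous.
ok = blinks_simultaneous([1, 1, 0.1, 1, 1], [1, 0.1, 1, 1, 1])
```

A learned classifier replaces the thresholds with a decision boundary trained on real blink signals, but the features it consumes are of this kind.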

Preferably, during the anti-spoof check, a light is emitted in the near-infrared spectrum for a more accurate verification.

Advantageously, the processing unit 10 comprises a pupillometer module configured to verify the neurological condition of the living being. The pupillometer module carries out the steps of extracting an image of the pupil IMP from said image of the eye IM1, IM2 of the living being, measuring the size, position, shape and reactivity of the pupil to a light stimulus in the visible spectrum, and generating an indication of the neurological condition of the living being.
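The pupillometer steps can be sketched as a before/after constriction measurement. The "normal" constriction range used here is an assumed placeholder for illustration only, not a clinical value from the patent.

```python
def pupil_reactivity(diam_before, diam_after):
    """Relative constriction of the pupil after a visible-light stimulus:
    (diameter before - diameter after) / diameter before."""
    return (diam_before - diam_after) / diam_before

def neurological_indication(diam_before, diam_after,
                            normal_range=(0.15, 0.45)):
    """Illustrative check: the response is flagged 'normal' only if the
    constriction falls within an assumed physiological range."""
    r = pupil_reactivity(diam_before, diam_after)
    lo, hi = normal_range
    return "normal" if lo <= r <= hi else "abnormal"

# Usage: a 6.0 mm pupil constricting to 4.5 mm -> 25% constriction.
verdict = neurological_indication(6.0, 4.5)
```

The same measurement, repeated over time, is what allows the system to gate functions (e.g. vehicle start) on the user's reactivity.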

The access functions can also be enabled on the basis of neurological conditions monitored by means of “biomarkers” of the users’ pupils (e.g. by measuring the point of eye fixation, the direction of the gaze, the motion of the eyes relative to the position of the head, the optokinetic response, and pupil dilation), in both animal and human environments. For example, a driver will be enabled to drive a vehicle if, in addition to being authenticated, the individual is reactive (i.e. possesses reflexes and conditions compatible with driving the vehicle), based on a comparison with the parameters detected through the blink and anti-spoof system and processed through neural networks. Another possible application comprises access to a spa or a swimming pool, or to an office, warehouse or workplace, allowed only if the system detects reflexes within the norm (e.g. the individual is neither tired nor drunk). In particular, the system sends a light stimulus and detects the pupil contraction.

Preferably, the authentication system 1 comprises a third memory unit 5, illustrated in figure 5, wherein the corresponding e-wallet, account and capabilities are stored for every unique identifier ID of the users. “E-wallet” means a tool (or an online service) which, like a wallet, “contains” documents and information on the account, room, payment cards, identity documents and loyalty cards, and enables payments to be made. Instead of cash from a physical wallet, with an e-wallet a payment is made, also in cryptocurrencies, through the amounts deposited on cards or current accounts linked to the digital wallet.

Preferably, the processing unit comprises a payment validation module configured to receive a payment or a transaction to be carried out for the purchase of a product or a service by an authenticated living being, extract the e-wallet present in said third memory unit 5 by means of the user identifier ID, and validate the transaction or payment on a blockchain.

If the payment or the transaction is validated, the blockchain will be updated with the payment made or the transaction carried out and return a positive outcome. Otherwise, if the payment or the transaction is not validated, a negative outcome will be returned, without the blockchain being updated.
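The validation flow above can be sketched as follows. This is an illustrative stand-in: the wallet lookup, the funds check, and the list standing in for the blockchain are all assumptions, since the description does not specify the validation criterion or the chain interface.

```python
def validate_payment(user_id, amount, wallets, blockchain):
    """Sketch of the payment validation module: extract the e-wallet for
    the authenticated user's ID, validate the transaction, and update the
    blockchain stand-in (a plain list here) only on a positive outcome."""
    wallet = wallets.get(user_id)
    if wallet is None or wallet["balance"] < amount:
        return False                 # negative outcome, chain not updated
    wallet["balance"] -= amount
    blockchain.append({"user": user_id, "amount": amount})
    return True                      # positive outcome, chain updated
```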

In a second aspect, the present invention relates to an iris recognition-based method of authentication of a living being, comprising the steps of:

- providing an image acquisition means 2;

- acquiring, by said image acquisition means 2, a first image IM1 and a second image IM2 of the right and left eyes of the living being;

- providing a first memory unit 3 comprising:

o a biometric template of the irises of each living being TBR enrolled in the authentication system 1; and

o a unique identifier ID associated with each biometric template TBI;

- providing a second memory unit 4 comprising:

o data of enabled functions DFA associated with each unique identifier ID;

• receiving, from said acquisition means 2, said first image IM1 and said second image IM2 of the right and left eyes of the living being, and identification data ID2 of said image acquisition means 2;

• performing a segmentation of the captured images IM1, IM2 and generating an iris biometric template TBI;

• performing a comparison of said iris biometric template TBI with said iris biometric templates of living beings TBR stored in said first memory unit 3;

• extracting the identifier IDi of the living being associated with the iris biometric template TBR from said first memory unit 3;

• searching for the data of enabled functions DFA associated with said identifier IDi in said second memory unit 4; and

• generating and sending an authentication signal S_AUT on the basis of a confirmed correspondence OK of the comparison performed between said iris biometric template TBI, detected in real time, and said iris biometric templates of living beings TBR stored in said first memory unit 3 and said data of enabled functions DFA.
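The claimed method steps can be sketched end to end as follows. Template matching is reduced here to exact equality purely for illustration; the actual system compares embedding vectors, as described later in the text.

```python
def authenticate(tbi, templates, dfa_by_id):
    """Minimal sketch of the method: compare the live template TBI with
    the stored templates TBR (first memory unit), extract the matching
    identifier IDi, look up its enabled functions DFA (second memory
    unit), and return the authentication signal S_AUT."""
    idi = next((uid for uid, tbr in templates.items() if tbr == tbi), None)
    if idi is None:
        return {"S_AUT": False}      # no confirmed correspondence OK
    return {"S_AUT": True, "IDi": idi, "DFA": dfa_by_id.get(idi, [])}
```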

Preferably, the method comprises the steps of acquiring, by said image acquisition means 2, a first image IM1 and a second image IM2 of the right and left eyes of the living being, performing a segmentation of the captured images IM1, IM2 and generating an iris biometric template TBI, and associating the iris biometric template TBI with a unique identifier ID.

Advantageously, the present invention comprises a step of enrolling a user in the authentication system 1, comprising the step of storing, in the first memory unit 3, the iris biometric template TBR of the enrolled living being and the unique identifier ID associated with said biometric template TBI.

During or after the step of enrolling a user in the system 1 , there is a step of storing data of enabled functions DFA associated with each unique identifier ID in the second memory unit 4.

According to the invention, one or more steps of the method are implemented by means of a computer.

The above-described system and method implement algorithms based on convolutional neural networks in order to render the computation efficient and fast. The system implements anonymisation algorithms for the management of the biometric template TBI to ensure security during the storage and uploading of information to the cloud (according to a “privacy by design” principle). Advantageously, for the purpose of implementing the anonymisation and privacy by design techniques, the storage of the biometric templates TBI is in fact divided into three distinct databases.

The system also envisages the use of a cloud platform. In particular, the stored data related to users can be consulted, searched for and managed through a cloud platform (online and/or on-premise), which is an integral part of the solution.

The user has an account on the platform whereby he or she can give his or her consent to the use, erasure, limitation, updating, correction, portability, consultation, and management of the data.

A non-limiting example of the authentication of an individual is provided below. When the individual appears in front of the image acquisition device 2, one or two pictures are taken of the eye area of the individual to be authenticated. A biometric template TBI is subsequently generated. The biometric template TBI is sent to the comparison module, which can return one of the following outcomes:

A) The template TBI presented is not contained in the system;

B) The template TBI presented is contained in the system (which implies that the user has already been enrolled in the system previously).

In case B), the unique identifier ID of the system for this template (which is associated one-to-one with a user) is returned. Let us call it, for example, ID_1.

The system contains in a memory an associative map which maps the correspondence “unique identifier of the system - associated function”. Every user identifier can contain several functions (entries) associated with it and every physical device has its own identifier and a map/list of functions it can enable. For example, let us consider the two user identifiers ID_1 and ID_2 and the following three enabled functions: IN_SPA, ROOM_ENTER_202, ROOM_ENTER_234.

We have two physical hardware devices, one installed in room 202 (DEV202) and one in room 234 (DEV234). In this case, the map is as follows:

ID_1 <-> IN_SPA

ID_1 <-> ROOM_ENTER_202

ID_2 <-> ROOM_ENTER_234

DEV202 <-> ROOM_ENTER_202

DEV234 <-> ROOM_ENTER_234

In a scenario of this type, the user identifier ID_1 can enter room 202 and can also use the spa, whilst the identifier ID_2 can enter room 234 but is not authorised to access the spa or room 202.

As it is known what every physical device can enable (e.g. the hardware device installed in room 202 will have, among the activatable functions, ROOM_ENTER_202 but not ROOM_ENTER_234 or IN_SPA), we can consider the following simulation scenarios:

Scenario A):

• Biometric template > comparison module > identifier ID_1;

• The user is in front of room 234, so the identifier of the device (DEV234) is also sent to the system for verification;

• Extraction of the associated functions from the map for ID_1 and DEV234:

ID_1 <-> IN_SPA

ID_1 <-> ROOM_ENTER_202

DEV234 <-> ROOM_ENTER_234

• No correspondence is found within the set of activatable functions intersecting those of the device DEV234 and of ID_1 (empty set);

• The authentication signal is not generated (or it is generated as user not OK).

Scenario B):

• Biometric template > comparison module > identifier ID_1;

• The user is in front of room 202, so the identifier of the device (DEV202) is also sent to the system for verification;

• Extraction of the associated functions from the map for ID_1 and DEV202:

ID_1 <-> IN_SPA

ID_1 <-> ROOM_ENTER_202

DEV202 <-> ROOM_ENTER_202;

• A correspondence (ROOM_ENTER_202) is found within the set of activatable functions intersecting those of ID_1 and DEV202;

• The signal of confirmed authentication is generated (user OK).
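The associative map and both scenarios reduce to a set intersection between the user's enabled functions and the device's activatable functions. A minimal sketch, using the identifiers from the example:

```python
# Associative map from the example: user identifiers and device
# identifiers, each mapped to their associated functions.
USER_FUNCTIONS = {"ID_1": {"IN_SPA", "ROOM_ENTER_202"},
                  "ID_2": {"ROOM_ENTER_234"}}
DEVICE_FUNCTIONS = {"DEV202": {"ROOM_ENTER_202"},
                    "DEV234": {"ROOM_ENTER_234"}}

def authorise(user_id, device_id):
    """Return the intersection of activatable functions.
    A non-empty set means user OK; an empty set means user not OK."""
    return (USER_FUNCTIONS.get(user_id, set())
            & DEVICE_FUNCTIONS.get(device_id, set()))
```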

In the case of a device that enables multiple functions (e.g. payment and access), these are treated by the system as if they were distinct devices (thus with separate identifiers: 1 real device -> 2 virtual devices). The present authentication system enables both recognition of the user and authorisation and validation of payments. In other words, payment validation falls among the enabled functions DFA.

In the present description, living being is understood to mean a human being and/or an animal.

The iris recognition pipeline comprises the two steps described in further detail below.

The first step consists in automatic cropping of the iris (i.e. an automatic cropping of the portion of the image containing the iris). This is achieved with a convolutional neural network composed of the following layers (a layer being a set of nodes of the neural network):

CONV

MAXPOOL

CONV

CONV

MAXPOOL

CONV

CONV

MAXPOOL

CONV

MAXPOOL

CONV

MAXPOOL

CONV

MAXPOOL

DROPOUT

FC(4)

The output is a vector of four numbers (cx, cy, rx, ry) which represent the coordinates of the centre of the iris and its horizontal and vertical dimensions. These are used to crop the corresponding rectangle, which is the input of the second step: a neural network with the following structure takes the crop as input and produces an embedding vector:

CONV

MAXPOOL

CONV

MAXPOOL

CONV

MAXPOOL

CONV

MAXPOOL

GLOB_AVG_POOL

DROPOUT

FC(48)

The embedding vector produced has a dimension of 48.

Description of the layers shown above:

- CONV: convolution followed by normalisation (batchnorm) and a non-linear activation (ReLU). It applies a series of convolution operations to the input; the result is normalised and processed by the non-linear activation.

- MAXPOOL: Max pooling. It reduces the size of the input by dividing it into blocks and taking the maximum value of each.

- DROPOUT: used only during training to reduce the risk of overfitting and increase the generalisation capacity. It sets to 0 the input filters with a certain probability (for example, 0.2). The probability parameter is variable.

- FC: fully connected. It is the base layer of neural networks, which connects all the inputs with all the outputs by means of weights. In the first network there are 4 outputs (cx, cy, rx, ry); in the second there are 48 (the embedding).

- GLOB_AVG_POOL: global average pooling. For every input channel only one number is calculated, namely the average in that channel.
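Two of the layer types described above can be illustrated in pure Python on a single channel. This is a didactic sketch of the operations, not the actual network implementation, which would use a deep learning framework.

```python
def max_pool(matrix, k=2):
    """MAXPOOL as described: divide the input into k x k blocks and keep
    the maximum value of each block (single channel, dimensions assumed
    divisible by k)."""
    return [[max(matrix[i + di][j + dj] for di in range(k) for dj in range(k))
             for j in range(0, len(matrix[0]), k)]
            for i in range(0, len(matrix), k)]

def global_avg_pool(channels):
    """GLOB_AVG_POOL as described: one number per input channel, namely
    the average of all the values in that channel."""
    return [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in channels]
```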

The system of the present invention enables real-time detection and recognition of an iris of a living being on images at a high frame rate, by adopting an approach based on convolutional neural networks that process the image to extract the desired information. More specifically, two convolutional neural networks are used. The first network takes the image acquired by a camera as input and produces, as output, the coordinates (centre and radius) of the iris, if present.

The second network takes the image of the iris, cropped using the coordinates derived from the first network, as input and applies a series of convolutional filters to produce a numerical vector (embedding).

This vector is a representation of the iris which can be numerically compared with others, so that similar irises produce vectors that are near in a Euclidean space.
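The numerical comparison described above can be sketched with a Euclidean distance; the decision threshold used here is an illustrative assumption, since in practice it would be tuned on an evaluation set.

```python
import math

def embedding_distance(e1, e2):
    """Euclidean distance between two iris embedding vectors
    (48-dimensional in the described system)."""
    return math.dist(e1, e2)

def is_same_iris(e1, e2, threshold=0.6):
    """Similar irises produce nearby vectors in Euclidean space;
    accept the match when the distance falls below the threshold."""
    return embedding_distance(e1, e2) < threshold
```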

The method for the detection and recognition of the iris comprises the following sequence of steps:

Downscaling -> first neural network -> segmentation -> upscaling -> second neural network

With reference to the above-described steps, the first neural network works on a reduced (downscaled) version of the original image and detects the segmentation mask. The segmented mask is then brought back to the original proportions (upscaling). The original image is “filtered” on the basis of this mask, considering only the “useful” pixels of the original image, and the second neural network is applied to them.

In a learning step, the model is trained using a dataset that contains numerous irises of different individuals. Every image is accompanied by the coordinates of the iris and the identity of the individual. In order to choose the best model for producing the embedding vector, training is carried out using various filters and performance is evaluated on an evaluation subset.

Since the model is capable of working on images of people in movement, the system is integrated with an autofocus tracking algorithm. The autofocus tracking algorithm makes it possible to detect wrong coordinates (outliers) in the iris detection and discard the coordinates that deviate from the previous ones beyond a certain predetermined threshold.
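The outlier rejection described above can be sketched as follows; the pixel threshold is an illustrative assumption in place of the "certain predetermined threshold" of the text.

```python
import math

def filter_outliers(coords, threshold=20.0):
    """Discard iris centre coordinates that jump more than `threshold`
    pixels from the last accepted detection, as the autofocus tracking
    algorithm does with wrong detections (outliers)."""
    accepted = []
    for c in coords:
        if not accepted or math.dist(c, accepted[-1]) <= threshold:
            accepted.append(c)
    return accepted
```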

In order to enable a more accurate and more rapid acquisition of the images IM1, IM2, also with individuals in movement, there is provided an adaptive autofocus tracking device comprising:

1. One or more integrated LIDAR (laser imaging detection and ranging, a remote sensing technique that allows the distance of an object or surface to be measured using pulsed laser light) sensors configured to perform real-time measurements on an individual and determine his or her distance;

2. Based on the measured distance, a coarse value for the setup of the optics is determined by means of a lookup table which maps the distance to the optimal value of the optics, and a new frame is acquired by the LIDAR sensor. After the lens has been adjusted according to the value derived from the lookup table, another n frames (an odd number) are acquired, with the optics adjusted to slightly smaller and slightly larger estimated distances. The distances are estimated by an artificial intelligence system based on an estimation of the speed at which the individual approaches, derived from the readings of the LIDAR sensors: in practical terms, the system takes the readings of the LIDAR sensors as input and provides, as output, the changes in distance needed to derive the preceding and subsequent estimated coarse values, in addition to the number of acquisitions. The number of frames acquired at smaller distances is equivalent to the number of frames acquired at larger distances; the coarse value is in the middle;

3. The following algorithm is applied to determine the sharpness value of the images: a JPEG compression, set to a predefined quality level, is performed and the numerical value of the size of the resulting file is derived. The compressed image itself is not used; the only item of interest is the size in bytes, which is proportional to the sharpness of the image: since sharper images contain more detail, this is reflected in an increase in the size of the file. Advantageously, a graphics accelerator is used for the compression. A different compression function can also be used, provided that there is a correlation between the level of detail of the image and the size in bytes.

4. These numbers serve to estimate a function, derived by interpolating the points: on the x-axis are the distances and on the y-axis the sizes in bytes.

5. From the derived function, the distance whose value on the y-axis is the maximum is estimated. This becomes the new “coarse” value for adjusting the lens. When the values in the neighbourhood of the set coarse value (for example the immediately preceding and subsequent ones) are similar, the system considers the image sharp and can “follow” the individual’s movements. At this point the settings are based on the artificial intelligence system, which estimates the distances. It is important to highlight that the system sets the focus with respect to the portion of interest (containing the iris), not the entire image. To this end the AI system works in synergy with the segmentation model to determine the portion of the image of interest in real time.
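Steps 3 to 5 above can be sketched as follows. The text explicitly allows any compression function whose output size correlates with image detail, so zlib stands in here for the hardware-accelerated JPEG encoder; the peak search is simplified to taking the maximum of the sampled points rather than interpolating.

```python
import zlib

def sharpness_score(pixels):
    """Compressed size in bytes as a proxy for sharpness: images with
    more detail compress to larger files (step 3 of the description)."""
    return len(zlib.compress(bytes(pixels)))

def best_focus_distance(samples):
    """Given (distance, score) pairs measured around the coarse value,
    return the distance with the maximum score. This becomes the new
    coarse value for adjusting the lens (simplified stand-in for the
    interpolation of steps 4 and 5)."""
    return max(samples, key=lambda s: s[1])[0]
```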

To ensure that the sensors are synchronised, both with one another and with the optical system, the acquisition of frames is controlled by means of a trigger pulse provided by the system (simultaneous synchronised acquisition).

The system makes it possible to interface a number of devices among one another in space and to synchronise them, also in a wireless mode, for recognition in natural (human, animal) environments.

Advantageously, the system of the present invention can comprise remote eye tracking to measure the point of eye fixation, the direction of the gaze, or the motion of the eyes relative to the position of the head, and other correlated measurements (optokinetic response, pupil dilation) in both an animal and human environment. In particular, the eye tracking module comprises the following steps:

1) from a third camera (the facial recognition one), the module estimates the pose of the individual to be authenticated;

2) from the two image recognition systems, the module extracts the position of each eye (iris and pupil);

3) the system estimates: the eye position, the iris position, and the pupil position. Based on these parameters, the eye tracking module processes and derives the position of the centre of the pupil and the points of the eye and iris;

4) the autofocus tracking module derives the distance of the individual from the image acquisition means;

5) 3D mathematical triangulation operations are applied to estimate the fixation point.

Every time a connection (or a data exchange) occurs between A and B, a new encryption key is generated. This key is generated by a dedicated algorithm from a parent key and an OTP, and varies periodically. The key generated is used in the communication and is never reused. If it is compromised, only the data encrypted with it are potentially at risk, and past and future communications remain protected. This offers greater security compared to encrypting data with a single key.
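The per-connection key derivation can be sketched as follows. The description does not name the dedicated algorithm, so HMAC-SHA256 is an illustrative choice: deterministic for a given parent key and OTP, yet yielding a fresh key for every new OTP.

```python
import hashlib
import hmac

def session_key(parent_key: bytes, otp: bytes) -> bytes:
    """Derive a fresh, never-reused session key from the parent key and
    a one-time value, so that compromising one key exposes only the data
    encrypted with it."""
    return hmac.new(parent_key, otp, hashlib.sha256).digest()
```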

Among the possible determinable parameters, the system is also capable of detecting and measuring the following parameters for the various types of stimulus and in a bilateral manner:

• the maximum diameter of the pupil before constriction by means of a light pulse;

• the minimum diameter at the peak of constriction;

• the percentage of variation of the pupil diameter;

• the latency of constriction, that is, the wait time for the start of constriction in response to the light stimulus;

• the average speed of constriction of the pupil diameter measured in millimetres per second;

• the maximum speed of constriction of the pupil diameter measured in millimetres per second;

• the speed of dilation, i.e. the average speed of dilation of the pupil when, after it has reached the peak of constriction, the pupil tends to recover and dilate again to return to the initial diameter in rest conditions, measured in millimetres per second;

• the recovery time, i.e. the time it takes to reach a given percentage of the basal pupil diameter after the peak of constriction;

• the inclination of the head and asymmetry of the direction of vision, vertical and horizontal, with the values of the relative lateral difference between the left and right eyes.
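Several of the listed parameters can be derived from a time series of pupil diameters. The sketch below computes a few of them under illustrative assumptions: uniform sampling at `fps` frames per second, latency taken as the time to the first measurable decrease after the stimulus, and average constriction speed measured between onset and peak.

```python
def pupil_metrics(diameters, stimulus_frame, fps=60):
    """Derive some of the listed parameters from pupil diameters in mm:
    maximum diameter before constriction, minimum at peak of
    constriction, percentage of variation, latency of constriction, and
    average constriction speed in mm per second."""
    baseline = diameters[stimulus_frame]
    peak_frame = min(range(stimulus_frame, len(diameters)),
                     key=lambda i: diameters[i])
    peak = diameters[peak_frame]
    # First frame after the stimulus where the diameter starts decreasing.
    onset = next(i for i in range(stimulus_frame, len(diameters))
                 if diameters[i] < baseline)
    return {
        "max_diameter_mm": baseline,
        "min_diameter_mm": peak,
        "variation_pct": 100.0 * (baseline - peak) / baseline,
        "latency_s": (onset - stimulus_frame) / fps,
        "avg_constriction_mm_s": ((baseline - peak) / ((peak_frame - onset) / fps)
                                  if peak_frame > onset else 0.0),
    }
```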