Title:
FACIAL AUTHENTICATION SYSTEM
Document Type and Number:
WIPO Patent Application WO/2022/051300
Kind Code:
A1
Abstract:
An authentication system accesses an image of a face of a user. The face of the user is partially covered by a facial mask. The authentication system detects an area on the facial mask and generates a first identification of the user based on the area on the facial mask. The authentication system also detects an exposed area uncovered by the facial mask on the face of the user and generates a second identification of the user based on the exposed area. The authentication system compares the first identification of the user with the second identification of the user, and authenticates the user based on the comparison.

Inventors:
LEARMONTH DARREN (US)
Application Number:
PCT/US2021/048531
Publication Date:
March 10, 2022
Filing Date:
August 31, 2021
Assignee:
NORTEK SECURITY & CONTROL LLC (US)
International Classes:
G06K9/00
Foreign References:
US20070122005A1, 2007-05-31
US20070036398A1, 2007-02-15
US10984225B1, 2021-04-20
Other References:
AQEEL ANWAR; ARIJIT RAYCHOWDHURY: "Masked Face Recognition for Secure Authentication", arXiv.org, Cornell University Library, 25 August 2020 (2020-08-25), XP081748161
WALID HARIRI: "Efficient Masked Face Recognition Method during the COVID-19 Pandemic", arXiv.org, Cornell University Library, 7 May 2021 (2021-05-07), XP081959209
METZ, RACHEL: "Think your mask makes you invisible to facial recognition? Not so fast, AI companies say", CNN Business, 12 August 2020 (2020-08-12), XP055911319, retrieved from the Internet [retrieved on 2022-04-11]
Attorney, Agent or Firm:
ARORA, Suneel et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method comprising: accessing an image of a face of a user, the face being partially covered by a facial mask; detecting an area on the facial mask; generating, at a computer, a first identification of the user based on the area on the facial mask; detecting an exposed area uncovered by the facial mask on the face of the user; generating, at the computer, a second identification of the user based on the exposed area; comparing the first identification of the user with the second identification of the user; and authenticating, at the computer, the user based on the comparison.

2. The computer-implemented method of claim 1, wherein the area on the facial mask includes a user signature.

3. The computer-implemented method of claim 2, wherein generating the first identification of the user further comprises: accessing a user signature library, the user signature library comprising a library of user identifiers and corresponding signature images; comparing the user signature with the user signature library; and determining the first identification of the user based on comparing the written signature with the user signature library.

4. The computer-implemented method of claim 2, wherein the user signature includes ink that is not visible to a human eye, wherein detecting the area on the facial mask further comprises: illuminating the facial mask with a light source operating at a light spectrum that renders the ink visible to a camera; and capturing an image of the written signature with the camera, wherein generating the first identification of the user is based on the image of the user signature.

5. The computer-implemented method of claim 1, further comprising: illuminating the facial mask with a light operating at a light spectrum that renders a content of the area visible to a camera; and capturing an image of the content of the area with the camera, wherein generating the first identification of the user is based on the image of the content.

6. The computer-implemented method of claim 1, wherein the area on the facial mask includes a visual element, wherein generating the first identification of the user further comprises: accessing a user identification library, the user identification library comprising a library of user identifiers and corresponding visual elements; comparing the visual element with the user identification library; and determining the first identification of the user based on comparing the visual element with the user identification library.

7. The computer-implemented method of claim 1, wherein the area on the facial mask includes a first visual element visible to a camera, and a second visual element visible to the camera only when exposed to a light source operating at a non-human visible light spectrum, wherein generating the first identification of the user further comprises: accessing a user identification library, the user identification library comprising a library of user identifiers and corresponding visual elements; comparing the first visual element and the second visual element with the user identification library; and determining the first identification of the user based on comparing the first visual element and the second visual element with the user identification library.

8. The computer-implemented method of claim 1, wherein detecting the exposed area uncovered by the facial mask on the face of the user further comprises: determining biometrics data based on the exposed area, wherein generating the second identification of the user based on the exposed area further comprises: comparing the determined biometrics data with biometrics data from a biometrics library, the biometrics library comprising a library of biometrics data and corresponding user identifiers; and determining the second identification of the user based on comparing the determined biometrics data with the biometrics library.

9. The computer-implemented method of claim 1, further comprising: determining that the first identification of the user and the second identification indicate the same user; and in response to the first and second identification being the same, validating an identity of the user.

10. The computer-implemented method of claim 1, further comprising: determining that the first identification of the user is different from the second identification of the user; and in response to the first identification being different from the second identification, detecting a second exposed area uncovered by the facial mask on the face of the user; generating, at the computer, a third identification of the user based on the second exposed area; determining that the first identification and the third identification indicate the same user; and in response to the first and third identification being the same, validating an identity of the user.

11. A computing apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the apparatus to: access an image of a face of a user, the face being partially covered by a facial mask; detect an area on the facial mask; generate, at a computer, a first identification of the user based on the area on the facial mask; detect an exposed area uncovered by the facial mask on the face of the user; generate, at the computer, a second identification of the user based on the exposed area; compare the first identification of the user with the second identification of the user; and authenticate, at the computer, the user based on the comparison.

12. The computing apparatus of claim 11, wherein the area on the facial mask includes a user signature.

13. The computing apparatus of claim 12, wherein generating the first identification of the user further comprises: access a user signature library, the user signature library comprising a library of user identifiers and corresponding signature images; compare the user signature with the user signature library; and determine the first identification of the user based on comparing the written signature with the user signature library.

14. The computing apparatus of claim 12, wherein the user signature includes ink that is not visible to a human eye, wherein detecting the area on the facial mask further comprises: illuminate the facial mask with a light source operating at a light spectrum that renders the ink visible to a camera; and capture an image of the written signature with the camera, wherein generating the first identification of the user is based on the image of the user signature.

15. The computing apparatus of claim 11, wherein the instructions further configure the apparatus to: illuminate the facial mask with a light operating at a light spectrum that renders a content of the area visible to a camera; and capture an image of the content of the area with the camera, wherein generating the first identification of the user is based on the image of the content.

16. The computing apparatus of claim 11, wherein the area on the facial mask includes a visual element, wherein generating the first identification of the user further comprises: access a user identification library, the user identification library comprising a library of user identifiers and corresponding visual elements; compare the visual element with the user identification library; and determine the first identification of the user based on comparing the visual element with the user identification library.

17. The computing apparatus of claim 11, wherein the area on the facial mask includes a first visual element visible to a camera, and a second visual element visible to the camera only when exposed to a light source operating at a non-human visible light spectrum, wherein generating the first identification of the user further comprises: access a user identification library, the user identification library comprising a library of user identifiers and corresponding visual elements; compare the first visual element and the second visual element with the user identification library; and determine the first identification of the user based on comparing the first visual element and the second visual element with the user identification library.

18. The computing apparatus of claim 11, wherein detecting the exposed area uncovered by the facial mask on the face of the user further comprises: determine biometrics data based on the exposed area, wherein generating the second identification of the user based on the exposed area further comprises: compare the determined biometrics data with biometrics data from a biometrics library, the biometrics library comprising a library of biometrics data and corresponding user identifiers; and determine the second identification of the user based on comparing the determined biometrics data with the biometrics library.

19. The computing apparatus of claim 11, wherein the instructions further configure the apparatus to: determine that the first identification of the user and the second identification indicate the same user; and in response to the first and second identification being the same, validate an identity of the user.

20. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to: access an image of a face of a user, the face being partially covered by a facial mask; detect an area on the facial mask; generate, at a computer, a first identification of the user based on the area on the facial mask; detect an exposed area uncovered by the facial mask on the face of the user; generate, at the computer, a second identification of the user based on the exposed area; compare the first identification of the user with the second identification of the user; and authenticate, at the computer, the user based on the comparison.

Description:
FACIAL AUTHENTICATION SYSTEM

CLAIM OF PRIORITY

[0001] This application claims the benefit of priority to U.S. Application Serial No. 17/009,280, filed on September 1, 2020, which is hereby incorporated by reference herein in its entirety.

TECHNICAL FIELD

[0002] The present application generally relates to the field of authentication systems, and in particular, relates to methods and systems for user authentication using facial recognition.

BACKGROUND

[0003] Traditional facial recognition software typically relies on capturing a substantial portion of a face of a person. As such, when the person covers a portion of their face with a face mask, the facial recognition software may not operate properly. Other types of biometric authentication software rely on a limited uncovered portion of the face, such as the eyes. However, in such a situation, a user who wears glasses will need to remove their glasses and move their eyes closer to a camera. In other situations, the face mask may cover different parts of the face, making it difficult for the biometric authentication software to operate properly.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

[0004] To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.

[0005] FIG. 1 is a diagrammatic representation of a networked environment in which the present disclosure may be deployed, in accordance with some example embodiments.

[0006] FIG. 2 illustrates an example operation of the authentication system in accordance with one example embodiment.

[0007] FIG. 3 illustrates another example operation of the authentication system in accordance with one example embodiment.

[0008] FIG. 4 illustrates an authentication system in accordance with one example embodiment.

[0009] FIG. 5 is a flow diagram illustrating a method for authenticating a user in accordance with one example embodiment.

[0010] FIG. 6 is a flow diagram illustrating a method for validating a user in accordance with one example embodiment.

[0011] FIG. 7 illustrates a routine in accordance with one embodiment.

[0012] FIG. 8 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, according to an example embodiment.

DETAILED DESCRIPTION

[0013] Example methods and systems are directed to authenticating a user based on a partially covered face. Examples merely typify possible variations. Unless explicitly stated otherwise, components and functions are optional and may be combined or subdivided, and operations may vary in sequence or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of example embodiments. It will be evident to one skilled in the art, however, that the present subject matter may be practiced without these specific details.

[0014] A camera of a computing device captures an image of a face of a user for authentication. For example, the user is attempting to gain physical entry into a physical location or to access application features of a computer application. The user is wearing a facial mask that partially obstructs a portion of the face of the user. For example, a portion of the mouth and nose of the user is partially obstructed by the facial mask. An authentication system processes the image to authenticate the user by identifying a first portion of the image and a second portion of the image.

[0015] The first portion of the image includes an image of the facial mask. For example, the image of the facial mask includes a written signature displayed on the facial mask. In another example, the image of the facial mask includes graphical content (e.g., a QR code, a geometric pattern, a unique image) that is uniquely associated with a user identifier. In another example, the graphical content on the facial mask can only be viewed when illuminated with a light source from a non-human visible light spectrum. The authentication system accesses a signature library that maps users with their corresponding signatures. The authentication system then determines a first identification of the user based on the signature on the facial mask and the signature library.
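
By way of illustration only, the following Python sketch shows one way the first-identification lookup could be realized, treating the detected mask content (a signature image, QR payload, or other graphical element) as an opaque byte string and fingerprinting it for lookup. The SIGNATURE_LIBRARY layout and the fingerprint() helper are assumptions made for this sketch, not part of the disclosure.

import hashlib
from typing import Optional

# Signature library: maps a fingerprint of enrolled mask content
# (signature image bytes, QR payload, etc.) to a user identifier.
SIGNATURE_LIBRARY = {}  # populated at enrollment, e.g., {"9f2c...": "user-0042"}

def fingerprint(mask_content: bytes) -> str:
    # Reduce the detected mask content to a stable, comparable key.
    return hashlib.sha256(mask_content).hexdigest()

def first_identification(mask_content: bytes) -> Optional[str]:
    # Return the user identifier associated with the content detected
    # on the facial mask, or None when the content is not enrolled.
    return SIGNATURE_LIBRARY.get(fingerprint(mask_content))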

[0016] The second portion of the image includes an image of the exposed areas of the face of the user. For example, the exposed area includes the eyes of the user. The authentication system performs biometric measurements (e.g., relative distance and location between features of the eyes or eyebrows) on the exposed area to determine biometrics data. The authentication system determines a second identification of the user based on the biometrics data.
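
A minimal sketch of the biometric measurement step follows, assuming facial landmarks for the exposed eye and eyebrow region have already been located by a landmark detector (not shown). The landmark names and the chosen distance ratios are illustrative assumptions.

import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def _distance(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def biometrics_from_landmarks(landmarks: Dict[str, Point]) -> List[float]:
    # Relative distances between eye and eyebrow features, normalized by
    # the inter-ocular distance so the vector is scale-invariant.
    iod = _distance(landmarks["left_eye"], landmarks["right_eye"])
    return [
        _distance(landmarks["left_eye"], landmarks["left_brow"]) / iod,
        _distance(landmarks["right_eye"], landmarks["right_brow"]) / iod,
        _distance(landmarks["left_brow"], landmarks["right_brow"]) / iod,
    ]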

[0017] The authentication system compares the first identification of the user with the second identification of the user to authenticate the user. For example, if the first and second identification of the user are the same, the user is validated and the user is allowed access. If the first and second identification of the user are different, the system may deny access or may request the user to take off his/her facial mask, or to present another exposed area of the face of the user.
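
The comparison step could be reduced to the following sketch, assuming each identification path returns either a user identifier or None. The AuthResult values and the choice to retry rather than deny on a mismatch are illustrative policy assumptions.

from enum import Enum
from typing import Optional

class AuthResult(Enum):
    GRANTED = "granted"
    DENIED = "denied"
    RETRY = "retry"  # e.g., ask the user to remove the mask or expose another area

def authenticate(first_id: Optional[str], second_id: Optional[str]) -> AuthResult:
    # Both identifications must resolve and must indicate the same user.
    if first_id is None or second_id is None:
        return AuthResult.RETRY
    if first_id == second_id:
        return AuthResult.GRANTED
    return AuthResult.RETRY  # a deployment could return DENIED here instead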

[0018] In one example embodiment, the present application describes an authentication system based on a partially exposed face of a user. The authentication system accesses an image of a face of a user. The face of the user is partially covered by a facial mask. The authentication system detects an area on the facial mask and generates a first identification of the user based on the area on the facial mask. The authentication system also detects an exposed area uncovered by the facial mask on the face of the user and generates a second identification of the user based on the exposed area. The authentication system compares the first identification of the user with the second identification of the user, and authenticates the user based on the comparison.

[0019] FIG. 1 is a diagrammatic representation of a network environment 100 in which some example embodiments of the present disclosure may be implemented or deployed. One or more application servers 104 provide server-side functionality via a network 102 to a networked user device (in the form of a client device 106 of the user 128) connected to a camera 130. A web client 110 (e.g., a browser) and a programmatic client 108 (e.g., an “app”) are hosted and execute on the client device 106. The client device 106 can communicate with the application servers 104 via the network 102 or via other wireless or wired means.

[0020] The camera 130 includes a camera that operates within a light spectrum visible to the human eye. In another example, the camera operates outside the human-visible light spectrum. In one example, the camera 130 is configured to capture an image of a face of a user 132.

[0021] An Application Program Interface (API) server 118 and a web server 120 provide respective programmatic and web interfaces to application servers 104. A specific application server 116 hosts an authentication system 122 that operates with the application server 116.

[0022] In one example embodiment, the authentication system 122 receives a video/image from the camera 130. The authentication system 122 identifies two portions of the image: a first portion that includes the facial mask, and a second portion that includes an exposed area of the face of the user 132. The authentication system 122 determines a first user identification based on the first portion and a second user identification based on the second portion. The first and second user identifications are compared to authenticate the user 132.

[0023] The operations performed by the authentication system 122 may also be performed by or distributed to another server, such as a third-party server 112. For example, the first or second user identification may be determined at the third-party server 112.

[0024] In another example embodiment, the camera 130 includes a processor and a memory. The memory of the camera 130 stores the authentication system 122. The processor of the camera 130 is configured to perform operations/computations of the algorithms of the authentication system 122 described further below with respect to FIG. 4. As such, in one embodiment, the camera 130 can be a standalone device that is capable of authenticating the user 132 without having to connect with the application servers 104 to identify the first or second user identification.

[0025] In another example embodiment, the computation of the algorithms described in authentication system 122 can be distributed across multiple devices. For example, the portion of the computation that determines the first user identification can be performed locally at the camera 130 or the client device 106. The portion of the computation that determines the second user identification can be performed at the application server 116 or at the third-party server 112. In yet another example, the portion of the computation that determines the first user identification can be performed at the third-party application 114 and the portion that determines the second user identification can be performed at the application server 116.
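
One possible split of the computation is sketched below. The endpoint URL, the response format, and the particular assignment of stages to devices are hypothetical; any stage could equally run on the camera 130, the client device 106, the application server 116, or the third-party server 112.

from typing import Callable, Dict, Optional
import requests

def first_identification_local(image: bytes) -> Optional[str]:
    # Runs locally, e.g., on the camera 130 or the client device 106.
    ...  # mask detection and signature-library lookup would go here
    return None

def second_identification_remote(image: bytes) -> Optional[str]:
    # Forwarded to a server, e.g., application server 116 or third-party server 112.
    resp = requests.post("https://auth.example.com/identify", data=image, timeout=5)
    resp.raise_for_status()
    return resp.json().get("user_id")

PIPELINE: Dict[str, Callable[[bytes], Optional[str]]] = {
    "first_identification": first_identification_local,
    "second_identification": second_identification_remote,
}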

[0026] The web client 110 communicates with the authentication system 122 via the web interface supported by the web server 120. Similarly, the programmatic client 108 communicates with the authentication system 122 via the programmatic interface provided by the Application Program Interface (API) server 118. The third-party application 114 may, for example, be another application to support the authentication system 122 or mine the data from the authentication system 122. For example, the third-party application 114 may access image/video data from the camera 130. The application server 116 is shown to be communicatively coupled to database servers 124 that facilitate access to an information storage repository or databases 126 (e.g., user identification library, user biometrics library, user signature library). In an example embodiment, the databases 126 include storage devices that store information to be published and/or processed by the authentication system 122.

[0027] FIG. 2 illustrates an example operation of the authentication system in accordance with one example embodiment. The authentication system 122 is connected (directly or indirectly via the client device 106) to the camera 130. The camera 130 captures an image of the face of the user 132. The face of the user is partially covered by a facial mask 202. A signature 204 is displayed on the facial mask 202. The user 132 may have signed or written the signature 204 on the facial mask 202. In another example, the facial mask 202 includes a graphic element such as a QR code, a bar code, a graphical design, or an image.
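
As an illustration of the QR-code variant mentioned above, the following sketch uses OpenCV's QRCodeDetector to read a code printed on the facial mask 202. The file path and the assumption that the QR payload is itself the enrolled token are hypothetical.

import cv2
from typing import Optional

def token_from_mask_qr(image_path: str) -> Optional[str]:
    # Load a frame captured by the camera 130 (path is illustrative).
    image = cv2.imread(image_path)
    if image is None:
        return None
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(image)
    # detectAndDecode returns an empty string when no QR code is found.
    return data or None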

[0028] The authentication system 122 identifies a first portion (e.g., facial detection area 206) of the image and a second portion (e.g., signature detection area 208) of the image. The first portion includes the exposed areas of the face of the user 132. For example, the first portion may include an image of the eyes, hair, eyebrows, and ears of the user. In other words, these exposed areas are not blocked by the facial mask 202.

[0029] The second portion includes the area including a front portion of the facial mask 202. The front portion includes the portion that covers the mouth of the user 132. In another example embodiment, the second portion includes an image of the facial mask 202 and a portion of the string 210 that retains the facial mask 202 to the face of the user. The portion of the string 210 may include a visually distinguishable pattern (e.g., a bar code). In another example, each string includes a portion of a bar code.

[0030] FIG. 3 illustrates another example operation of the authentication system in accordance with one example embodiment. The authentication system 122 is directly or indirectly connected to a light source (e.g., UV light 302) that is directed to the face of the user 132. The light source may generate a light from a non-human visible spectrum to trigger a display of the signature 204.

[0031] FIG. 4 illustrates an authentication system in accordance with one example embodiment. The authentication system 122 comprises a signature area detection module 402, a partial facial area detection module 404, a signature validation module 406, a partial facial area validation module 408, a user validation module 410, a signature library 412, and a biometrics library 414.
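
The module arrangement of FIG. 4 could be wired together roughly as follows. The method names (detect, identify, validate) are assumptions made for illustration, since the disclosure does not fix the interfaces between modules.

from dataclasses import dataclass
from typing import Any

@dataclass
class AuthenticationSystem:
    signature_area_detection: Any        # finds the mask / signature detection area 208
    partial_facial_area_detection: Any   # finds the exposed facial detection area 206
    signature_validation: Any            # mask content -> first user identification
    partial_facial_area_validation: Any  # biometrics data -> second user identification
    user_validation: Any                 # compares the two identifications

    def authenticate(self, image: Any) -> bool:
        mask_region = self.signature_area_detection.detect(image)
        face_region = self.partial_facial_area_detection.detect(image)
        first_id = self.signature_validation.identify(mask_region)
        second_id = self.partial_facial_area_validation.identify(face_region)
        return self.user_validation.validate(first_id, second_id)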

[0032] The signature area detection module 402 detects the region of the image that includes the facial mask 202 (e.g., signature detection area 208). In one example, the signature area detection module 402 identifies a region in the image that includes the facial mask 202 using an object recognition algorithm. Once the signature area detection module 402 identifies the facial mask 202, the signature area detection module 402 identifies graphical content on the surface of the facial mask 202, such as a signature, user-written content, a QR code, or an image.
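
The disclosure leaves the object-recognition algorithm open; as a purely illustrative stand-in, the sketch below locates a face with OpenCV's stock Haar cascade and takes the lower half of the face box as the candidate signature detection area.

import cv2

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def candidate_mask_region(image):
    # Returns a cropped image of the likely mask region, or None.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # The lower half of the face box is roughly where a facial mask sits.
    return image[y + h // 2 : y + h, x : x + w]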

[0033] The signature validation module 406 compares the graphical content from the facial mask 202 with a signature library 412 to identify a first user identification. For example, the signature validation module 406 compares the signature on the facial mask 202 with signatures from the signature library 412 to retrieve a first user identification corresponding to the signature on the facial mask 202. In other example embodiments, the signature library 412 includes a graphical content library that maps graphical elements (e.g., QR code) to users.

[0034] In one example embodiment, a user can record his/her signature in the signature library 412 by providing an image of his/her signature (e.g., signature signed on the facial mask 202) to the signature library 412. The signature library 412 associates the provided signature with the user. In other examples, the user may provide other types of visual content (e.g., hand drawn patterns, QR code, bar code, or any uniquely identifiable graphic content or element).
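
Enrollment, as described above, could be as simple as the following sketch. The JSON file store and the hash-based fingerprint (matching the earlier first-identification sketch) are assumptions for illustration.

import hashlib
import json
import pathlib

LIBRARY_PATH = pathlib.Path("signature_library.json")  # hypothetical on-disk store

def enroll_signature(user_id: str, mask_content: bytes) -> None:
    # Associate the provided mask content (signature image, QR payload,
    # hand-drawn pattern, etc.) with the user in the signature library 412.
    library = json.loads(LIBRARY_PATH.read_text()) if LIBRARY_PATH.exists() else {}
    library[hashlib.sha256(mask_content).hexdigest()] = user_id
    LIBRARY_PATH.write_text(json.dumps(library, indent=2))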

[0035] The partial facial area detection module 404 detects the region of the image that includes exposed areas of the face of the user 132 (e.g., facial detection area 206). In one example, the partial facial area detection module 404 identifies a region in the image that includes exposed areas of the face of the user 132. The partial facial area validation module 408 determines biometrics data based on the exposed areas of the face of the user 132. In another example, the partial facial area detection module 404 determines biometrics data based on the exposed areas of the face of the user 132. The partial facial area validation module 408 compares the biometrics data of the user 132 with the biometrics library 414 to retrieve a second user identification corresponding to the biometrics data of the user 132.
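
The second-identification lookup could then be a nearest-neighbor search over the biometrics library 414, sketched below. Modeling the library as a mapping from user identifier to feature vector, and the 0.1 acceptance threshold, are illustrative assumptions.

import math
from typing import Dict, List, Optional

def second_identification(
    measured: List[float],
    biometrics_library: Dict[str, List[float]],
    threshold: float = 0.1,
) -> Optional[str]:
    # Return the enrolled user whose feature vector is closest to the
    # measured biometrics, provided the distance is within the threshold.
    best_user, best_dist = None, float("inf")
    for user_id, enrolled in biometrics_library.items():
        dist = math.dist(measured, enrolled)
        if dist < best_dist:
            best_user, best_dist = user_id, dist
    return best_user if best_dist <= threshold else None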

[0036] The user validation module 410 compares the first user identification with the second user identification to validate an identity of the user 132. For example, if the first user identification and the second user identification are the same, the identity of the user 132 is authenticated and the authentication system 122 communicates the validation to another application to process access. If the first user identification and the second user identification are different, the identity of the user 132 cannot be verified and validated. The authentication system 122 may communicate the failed authentication to another application to deny access.

[0037] FIG. 5 is a flow diagram illustrating a method for authenticating a user in accordance with one example embodiment. Operations in the method 500 may be performed by the authentication system 122, using components (e.g., modules, engines) described above with respect to FIG. 4. Accordingly, the method 500 is described by way of example with reference to the authentication system 122. However, it shall be appreciated that at least some of the operations of the method 500 may be deployed on various other hardware configurations or be performed by similar components residing elsewhere. For example, some of the operations may be performed at the client device 106.

[0038] At block 502, the camera 130 captures an image of the face of the user wearing a facial mask 202. At block 504, the signature area detection module 402 identifies a face mask area and detects a signature in the face mask area. At block 506, the partial facial area detection module 404 identifies an exposed face area and determines biometrics data based on the exposed face area. At block 508, the user validation module 410 authenticates a user based on the signature validation and biometrics data validation.

[0039] FIG. 6 is a flow diagram illustrating a method 600 for validating a user in accordance with one example embodiment. Operations in the method 600 may be performed by the authentication system 122, using components (e.g., modules, engines) described above with respect to FIG. 4. Accordingly, the method 600 is described by way of example with reference to the authentication system 122. However, it shall be appreciated that at least some of the operations of the method 600 may be deployed on various other hardware configurations or be performed by similar components residing elsewhere. For example, some of the operations may be performed at the client device 106.

[0040] At block 602, the signature area detection module 402 detects a signature on the facial mask 202. At block 604, the signature validation module 406 determines a first user identification based on the signature. In one example, the signature validation module 406 determines the first user identification based on a combination of the content displayed on the facial mask 202 and graphical patterns on the string 210 of the facial mask 202.

[0041] At block 606, the partial facial area detection module 404 determines biometrics data based on the exposed facial area. At block 608, the partial facial area validation module 408 determines a second user identification based on the biometrics data. At block 610, the user validation module 410 compares the first user identification with the second user identification. At block 612, the user validation module 410 validates a user authentication based on the comparison. In one example, the user validation module 410 detects that the first user identification does not match the second user identification, and requests the partial facial area detection module 404 or the partial facial area validation module 408 to compute other biometrics data based on another exposed area of the face of the user 132 to generate a third user identification. For example, the partial facial area detection module 404 may calculate biometrics data based on eyebrow locations instead of eye locations. In another example, the user validation module 410 detects that the first user identification does not match the second user identification, and requests that the user further expose additional areas of his/her face to recompute the second user identification. In another example, the user validation module 410 detects that the first user identification does not match the second user identification, and requests that the user remove the facial mask 202.

[0042] FIG. 7 is a flow diagram illustrating a routine 700. In block 702, routine 700 accesses an image of a face of a user, the face being partially covered by a facial mask. In block 704, routine 700 detects an area on the facial mask. In block 706, routine 700 generates, at a computer, a first identification of the user based on the area on the facial mask. In block 708, routine 700 detects an exposed area uncovered by the facial mask on the face of the user. In block 710, routine 700 generates, at the computer, a second identification of the user based on the exposed area. In block 712, routine 700 compares the first identification of the user with the second identification of the user. In block 714, routine 700 authenticates, at the computer, the user based on the comparison.
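
The retry behaviour of FIG. 6 (and of claim 10) reduces to the sketch below, in which a mismatch triggers a third identification computed from a second exposed area (for example, eyebrow locations instead of eye locations). The callable interface is an assumption for illustration.

from typing import Callable, Optional

def validate_with_fallback(
    first_id: Optional[str],
    second_id: Optional[str],
    third_identification: Callable[[], Optional[str]],
) -> bool:
    # Accept immediately when the two identifications indicate the same user.
    if first_id is not None and first_id == second_id:
        return True
    # Otherwise recompute biometrics from another exposed area of the face
    # and compare the resulting third identification with the first.
    third_id = third_identification()
    return first_id is not None and first_id == third_id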

[0043] FIG. 8 is a diagrammatic representation of the machine 800 within which instructions 808 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 800 to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions 808 may cause the machine 800 to execute any one or more of the methods described herein. The instructions 808 transform the general, non-programmed machine 800 into a particular machine 800 programmed to carry out the described and illustrated functions in the manner described. The machine 800 may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 800 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 800 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 808, sequentially or otherwise, that specify actions to be taken by the machine 800. Further, while only a single machine 800 is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions 808 to perform any one or more of the methodologies discussed herein.

[0044] The machine 800 may include processors 802, memory 804, and I/O components 842, which may be configured to communicate with each other via a bus 844. In an example embodiment, the processors 802 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 806 and a processor 810 that execute the instructions 808. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Although FIG. 8 shows multiple processors 802, the machine 800 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.

[0045] The memory 804 includes a main memory 812, a static memory 814, and a storage unit 816, each accessible to the processors 802 via the bus 844. The main memory 812, the static memory 814, and the storage unit 816 store the instructions 808 embodying any one or more of the methodologies or functions described herein. The instructions 808 may also reside, completely or partially, within the main memory 812, within the static memory 814, within machine-readable medium 818 within the storage unit 816, within at least one of the processors 802 (e.g., within the processor’s cache memory), or any suitable combination thereof, during execution thereof by the machine 800.

[0046] The I/O components 842 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 842 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 842 may include many other components that are not shown in FIG. 8. In various example embodiments, the I/O components 842 may include output components 828 and input components 830. The output components 828 may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components 830 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.

[0047] In further example embodiments, the I/O components 842 may include biometric components 832, motion components 834, environmental components 836, or position components 838, among a wide array of other components. For example, the biometric components 832 include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 834 include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components 836 include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 838 include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.

[0048] Communication may be implemented using a wide variety of technologies. The I/O components 842 further include communication components 840 operable to couple the machine 800 to a network 820 or devices 822 via a coupling 824 and a coupling 826, respectively. For example, the communication components 840 may include a network interface component or another suitable device to interface with the network 820. In further examples, the communication components 840 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 822 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).

[0049] Moreover, the communication components 840 may detect identifiers or include components operable to detect identifiers. For example, the communication components 840 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 840, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.

[0050] The various memories (e.g., memory 804, main memory 812, static memory 814, and/or memory of the processors 802) and/or storage unit 816 may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 808), when executed by processors 802, cause various operations to implement the disclosed embodiments.

[0051] The instructions 808 may be transmitted or received over the network 820, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components 840) and using any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 808 may be transmitted or received using a transmission medium via the coupling 826 (e.g., a peer-to-peer coupling) to the devices 822.

[0052] Although an embodiment has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.

[0053] Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.

[0054] The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

[0055] Examples

[0056] Example 1 is a computer-implemented method comprising: accessing an image of a face of a user, the face being partially covered by a facial mask; detecting an area on the facial mask; generating, at a computer, a first identification of the user based on the area on the facial mask; detecting an exposed area uncovered by the facial mask on the face of the user; generating, at the computer, a second identification of the user based on the exposed area; comparing the first identification of the user with the second identification of the user; and authenticating, at the computer, the user based on the comparison.

[0057] Example 2 includes example 1, wherein the area on the facial mask includes a user signature.

[0058] Example 3 includes example 2, wherein generating the first identification of the user further comprises: accessing a user signature library, the user signature library comprising a library of user identifiers and corresponding signature images; comparing the user signature with the user signature library; and determining the first identification of the user based on comparing the written signature with the user signature library.

[0059] Example 4 includes example 2, wherein the user signature includes ink that is not visible to a human eye, wherein detecting the area on the facial mask further comprises: illuminating the facial mask with a light source operating at a light spectrum that renders the ink visible to a camera; and capturing an image of the written signature with the camera, wherein generating the first identification of the user is based on the image of the user signature.

[0060] Example 5 includes example 1, further comprising: illuminating the facial mask with a light operating at a light spectrum that renders a content of the area visible to a camera; and capturing an image of the content of the area with the camera, wherein generating the first identification of the user is based on the image of the content.

[0061] Example 6 includes example 1, wherein the area on the facial mask includes a visual element, wherein generating the first identification of the user further comprises: accessing a user identification library, the user identification library comprising a library of user identifiers and corresponding visual elements; comparing the visual element with the user identification library; and determining the first identification of the user based on comparing the visual element with the user identification library.

[0062] Example 7 includes example 1, wherein the area on the facial mask includes a first visual element visible to a camera, and a second visual element visible to the camera only when exposed to a light source operating at a non-human visible light spectrum, wherein generating the first identification of the user further comprises: accessing a user identification library, the user identification library comprising a library of user identifiers and corresponding visual elements; comparing the first visual element and the second visual element with the user identification library; and determining the first identification of the user based on comparing the first visual element and the second visual element with the user identification library.

[0063] Example 8 includes example 1, wherein detecting the exposed area uncovered by the facial mask on the face of the user further comprises: determining biometrics data based on the exposed area, wherein generating the second identification of the user based on the exposed area further comprises: comparing the determined biometrics data with biometrics data from a biometrics library, the biometric library comprising a library of biometrics data and corresponding user identifiers; and determining the second identification of the user based on comparing the determined biometrics data with the biometrics library.

[0064] Example 9 includes example 1, further comprising: determining that the first identification of the user and the second identification indicate the same user; and in response to the first and second identification being the same, validating an identity of the user.

[0065] Example 10 includes example 1, further comprising: determining that the first identification of the user is different from the second identification of the user; and in response to the first identification being different from the second identification, detecting a second exposed area uncovered by the facial mask on the face of the user; generating, at the computer, a third identification of the user based on the second exposed area; determining that the first identification and the third identification indicate the same user; and in response to the first and third identification being the same, validating an identity of the user.