

Title:
USING AN ENROLLED FINGERPRINT IMAGE FOR VARIOUS FINGERPRINT SENSORS
Document Type and Number:
WIPO Patent Application WO/2022/035415
Kind Code:
A1
Abstract:
This disclosure describes apparatuses, methods, and techniques that enable using an enrolled fingerprint image (118) of a verified user for one or more fingerprint sensors (114) that are embedded in or on a computing device (100). The verified user may capture the enrolled image (118) using a sensor (112) that differs from the fingerprint sensor (116) a user utilizes to request access to the computing device (100). The described enrolled image (118) enables the verified user to set up fingerprint authentication in a shorter amount of time and offers good biometric security and a good user experience with each fingerprint sensor (114) of the computing device (100).

Inventors:
SAMMOURA FIRAS (US)
BUSSAT JEAN-MARIE (US)
Application Number:
PCT/US2020/045596
Publication Date:
February 17, 2022
Filing Date:
August 10, 2020
Assignee:
GOOGLE LLC (US)
International Classes:
G06K9/00
Other References:
QIJUN ZHAO ET AL: "3D to 2D fingerprints: Unrolling and distortion correction", BIOMETRICS (IJCB), 2011 INTERNATIONAL JOINT CONFERENCE ON, IEEE, 11 October 2011 (2011-10-11), pages 1 - 8, XP032081625, ISBN: 978-1-4577-1358-3, DOI: 10.1109/IJCB.2011.6117585
TODERICI GEORGE ET AL: "UHDB11 Database for 3D-2D Face Recognition", 28 October 2013, ICIAP: INTERNATIONAL CONFERENCE ON IMAGE ANALYSIS AND PROCESSING, 17TH INTERNATIONAL CONFERENCE, NAPLES, ITALY, SEPTEMBER 9-13, 2013. PROCEEDINGS; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER, BERLIN, HEIDELBERG, PAGE(S) 73 - 86, ISBN: 978-3-642-17318-9, XP047192512
KAKADIARIS IOANNIS A ET AL: "3D-2D face recognition with pose and illumination normalization", COMPUTER VISION AND IMAGE UNDERSTANDING, ACADEMIC PRESS, US, vol. 154, 21 May 2016 (2016-05-21), pages 137 - 151, XP029831510, ISSN: 1077-3142, DOI: 10.1016/J.CVIU.2016.04.012
DAS ABHIJIT ET AL: "Recent Advances in Biometric Technology for Mobile Devices", 2018 IEEE 9TH INTERNATIONAL CONFERENCE ON BIOMETRICS THEORY, APPLICATIONS AND SYSTEMS (BTAS), IEEE, 22 October 2018 (2018-10-22), pages 1 - 11, XP033541783, DOI: 10.1109/BTAS.2018.8698587
Attorney, Agent or Firm:
COLBY, Michael K. (US)
Claims:

What is claimed:

1. A computer-implemented method comprising:
capturing, by a first sensor (112), a first image (118) of a verified user’s skin, the first image (118) of the verified user’s skin including first biometric data (602-630) of the verified user;
creating an enrolled template (118), responsive to the capturing of the first image (118) at the first sensor (112), the enrolled template (118) having or being derived from the first biometric data (602-630) of the verified user;
capturing a second image (120) at a second sensor (116), the second sensor (116) of a different type than the first sensor (112), the second sensor (116) configured to capture the second image (120) of a user’s skin, the second image (120) of the user’s skin including second biometric data (602-630) of the user;
comparing the second biometric data (602-630) of the second image to the enrolled template (118);
responsive to the comparing, authenticating the user as being the verified user; and
responsive to the authenticating, enabling access to a computing device (100), application (102), function, or peripheral thereof.

2. The computer-implemented method of claim 1, wherein the first sensor includes a three-dimensional, 3D, camera, having red-green-blue, RGB, light-emitting elements, RGB channels, cyan-magenta-yellow-key, CMYK, channels, hue-saturation-brightness, HSV, channels, or a combination thereof, the 3D camera enabling the capturing of the first image without a touch of the verified user’s skin to the first sensor.

3. The computer-implemented method of claims 1 or 2, wherein the creating of the enrolled template utilizes one of the channels of the 3D camera.

4. The computer-implemented method of any of claims 1 to 3, wherein the creating of the enrolled template includes down-sampling the one of the channels of the 3D camera.

5. The computer-implemented method of any of claims 1 to 4, wherein the creating of the enrolled template includes applying histogram equalization, cropping, resizing, filtering frequencies, performing color inversion, or combinations thereof, to the first image.

6. The computer-implemented method of any of claims 1 to 5, wherein the second sensor is a capacitive image sensor, an ultrasonic image sensor, or an optical under-display fingerprint sensor.

7. The computer-implemented method of any of claims 1 to 6, wherein: the first image captured by the first sensor is of a first domain, the first domain expressing a relationship among different intensities in pixels of the first image; and the second image captured by the second sensor is of a second domain, the second domain expressing a relationship among different intensities in pixels of the second image.

8. The computer-implemented method of claim 7, wherein the creating of the enrolled template includes using a machine-learned model to match the first domain of the first image to the second domain of the second image.

9. The computer-implemented method of claims 7 or 8, wherein the enrolled template is similar to or of a similar quality to the second domain of the second image.

10. The computer-implemented method of any of claims 1 to 9, wherein the enrolled template includes vector-based templates, and wherein the comparing of the second biometric data of the second image to the enrolled template compares a vector conversion of the second image to the vector-based templates.

11. The computer-implemented method of any of claims 1 to 10, wherein the first biometric data of the verified user and the second biometric data of the user includes fingerprint data, the fingerprint data derived from a same fingertip, thumb, palm, or a plurality of fingertips.

12. The computer-implemented method of any of claims 1 to 11, wherein the enrolled template and the second image include multiple blocks or image frames, and wherein the comparing of the second biometric data of the second image to the enrolled template includes the comparing of the multiple blocks or image frames of the second image to the multiple blocks or image frames of the enrolled template.

13. The computer-implemented method of claim 12, wherein the multiple blocks or image frames are: overlapping; non-overlapping and apart, with a sliding distance of more than one pixel between the blocks; or adjacent, with a sliding distance of zero or one pixel between the blocks.

14. The computer-implemented method of claim 13, wherein the comparing includes comparing the multiple blocks or image frames of the second image to the multiple blocks or image frames of the enrolled template to determine a confidence level for each of the one or more blocks or image frames, and wherein the authenticating the user as being the verified user is performed responsive to a confidence threshold being met by the determined confidence level.

15. The computer-implemented method of any of claims 1 to 14, wherein: the verified user is established by a first-party, a trusted third-party, a personal identification number, PIN, a username, a password, a passcode, a serial number of the computing device, or a combination thereof; and the user is a person requesting access to the computing device, application, function, or peripheral thereof.

16. A computing device comprising: a first sensor; at least a second sensor; one or more processors; and one or more computer-readable media having instructions thereon that, responsive to execution by the one or more processors, perform the operations of the method of any of claims 1 to 15.


Description:
USING AN ENROLLED FINGERPRINT IMAGE FOR VARIOUS FINGERPRINT SENSORS

BACKGROUND

[0001] A computing device (e.g., smartphone) may include one or more fingerprint sensors that enable a verified user to safeguard their smartphone, application, function, or peripheral thereof, from being used by an unverified user. For example, the smartphone may include a front-facing fingerprint sensor, a rear-facing fingerprint sensor, a side-facing fingerprint sensor (e.g., embedded on top of or adjacent to a power button), or a combination thereof. The fingerprint sensors may utilize the same or different fingerprint sensor technologies. Also, due to space constraints and design considerations, the fingerprint sensors may be different sizes and/or may have different aspect ratios.

[0002] The smartphone often instructs the verified user to set up each fingerprint sensor separately to ensure that each fingerprint sensor provides good biometric security. Assume the smartphone includes a front-facing fingerprint sensor (e.g., 6 millimeters (mm) by 6 mm), a rear-facing fingerprint sensor (e.g., 5 mm by 5 mm), and a side-facing fingerprint sensor (e.g., 9.6 mm by 2.8 mm). Also, assume the smartphone utilizes a display screen to instruct the verified user on how to set up each fingerprint sensor. During the setup process (enrollment), the smartphone may instruct the verified user to tap their thumb six (6) to ten (10) times on top of the front-facing fingerprint sensor. Then, the smartphone may instruct the verified user to tap their thumb 11 to 20 times on top of the rear-facing fingerprint sensor. Finally, the smartphone may instruct the verified user to tap their thumb more than 30 times on top of the side-facing fingerprint sensor, due to its large aspect ratio (long and narrow). One reason the smartphone instructs the verified user to tap their thumb multiple times is to enable the smartphone to capture a full image of the verified user’s thumb using each fingerprint sensor. As such, the smartphone may also offer more detailed instructions, for example, instructing the verified user to tap their thumb on different locations of each fingerprint sensor. The verified user, however, may ignore or fail to follow such detailed and tedious instructions properly. As a result, the smartphone may complete the setup process of each fingerprint sensor without capturing a full image of the verified user’s thumb, resulting in poor biometric security (a high false acceptance rate) and/or improper denial of access to the verified user (a high false rejection rate).
Therefore, it is desirable to have a technological solution that enables the verified user to set up each fingerprint sensor of the smartphone with ease and for the smartphone to provide good biometric security and a good user experience.

SUMMARY

[0003] This disclosure describes apparatuses, methods, and techniques that enable using an enrolled fingerprint image of a verified user for one or more fingerprint sensors that are embedded in or on a computing device. The verified user may capture the enrolled image using a sensor that differs from the fingerprint sensor a user utilizes to request access to the computing device. The described enrolled image enables the verified user to set up fingerprint authentication in a shorter amount of time and offers good biometric security and a good user experience with each fingerprint sensor of the computing device.

[0004] Throughout this disclosure, the terms “user” and “verified user” may be used interchangeably, depending on the context of the description, linguistic choice, and other factors. In general, however, “a verified user” may refer to a person who owns and/or is authorized to access the computing device, application, function, or peripheral thereof. The verified user may be established by a first-party (e.g., manufacturer), a trusted third-party (e.g., a cellphone carrier), a personal identification number, PIN, a username, a password, a passcode, a serial number of the computing device, or a combination thereof. On the other hand, throughout this disclosure, “a user” may refer to a person who is requesting access to the computing device, and the computing device is yet to authenticate “the user” as “the verified user.”

[0005] In one aspect, a computer-implemented method includes capturing, by a first sensor, a first image of a verified user’s skin, the first image of the verified user’s skin including first biometric data of the verified user. Then, the method includes creating an enrolled template, responsive to the capturing of the first image at the first sensor, the enrolled template having or being derived from the first biometric data of the verified user. After the creation of the enrolled template, the method includes capturing a second image at a second sensor, the second sensor of a different type than the first sensor, the second sensor configured to capture the second image of a user’s skin, the second image of the user’s skin including second biometric data of the user. Then, the method includes comparing the second biometric data of the second image to the enrolled template. Responsive to the comparing, the method includes authenticating the user as being the verified user. Finally, responsive to the authenticating, the method includes enabling access to a computing device, application, function, or peripheral thereof.
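The enroll-then-verify flow described above can be sketched as follows. This is a minimal illustration rather than the disclosed implementation; the histogram features, the similarity measure, and the 0.9 threshold are hypothetical stand-ins for real fingerprint matching.

```python
# Illustrative sketch of the claimed flow: capture, template creation,
# comparison, authentication, and access. Feature extraction here is a
# stand-in (a coarse intensity histogram); a real system would extract
# minutiae or a learned embedding.

def extract_features(image):
    """Reduce an image (2D list of 0-255 intensities) to a small feature vector."""
    flat = [px for row in image for px in row]
    bins = [0] * 8
    for px in flat:
        bins[px // 32] += 1
    total = len(flat)
    return [b / total for b in bins]

def similarity(a, b):
    """Simple similarity in [0, 1]: 1 minus half the L1 distance of histograms."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / 2.0

def enroll(first_image):
    """Create the enrolled template from the first sensor's image."""
    return extract_features(first_image)

def authenticate(enrolled_template, second_image, threshold=0.9):
    """Compare the second sensor's image to the template; grant access on a match."""
    score = similarity(enrolled_template, extract_features(second_image))
    return score >= threshold

# Toy usage: the same finger produces an identical histogram.
finger = [[10 * (r + c) % 256 for c in range(8)] for r in range(8)]
template = enroll(finger)
assert authenticate(template, finger)  # same finger -> access granted
```

In practice the two sensors produce images in different domains, which is why the disclosure also discusses matching the first image's domain to the second (claims 7 and 8).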

[0006] In another aspect, a computing device comprises a first sensor, a second sensor, one or more processors, and one or more computer-readable media having instructions thereon that, responsive to execution by the one or more processors, perform the operations of the method described above. In yet other aspects, a system, software, or means performs the operations of the method described above.

[0007] The disclosure describes examples where a computing device (e.g., user device, smartphone) analyzes information (e.g., fingerprint images) associated with a user or a computing device. The computing device uses the information associated with the user after the computing device receives explicit permission from the user to collect, store, or analyze the information. For example, in situations discussed below in which a computing device authenticates a user based on fingerprints, the user will be provided with an opportunity to control whether programs or features of the computing device or a remote system can collect and make use of the fingerprint for a current or subsequent authentication procedure. Individual users, therefore, have control over what the computing device can or cannot do with fingerprint images and other information associated with the user. Information associated with the user (e.g., an enrolled image), if ever stored, is pre-treated in one or more ways so that personally identifiable information is removed before being transferred, stored, or otherwise used. For example, before the computing device stores an enrolled image (also referred to as an “enrolled template”), the computing device may encrypt the enrolled image. Pre-treating the data this way ensures the information cannot be traced back to the user, thereby removing any personally identifiable information that may otherwise be inferable from the enrolled image. Thus, the user has control over whether information about the user is collected and, if collected, how the computing system may use such information.
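The pre-treatment step above can be illustrated with a toy round-trip encryption of template bytes. This is an illustration only, not production cryptography: the SHA-256 counter-mode stream cipher and the device key below are assumptions, and a real device would use a hardware-backed keystore with an authenticated cipher such as AES-GCM.

```python
# Illustration only: encrypting the enrolled template before it is stored.
# The toy stream cipher (SHA-256 in counter mode) is NOT production
# cryptography; it only demonstrates that the stored bytes differ from the
# template and that the device key recovers them.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a key-dependent byte stream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

device_key = b"device-bound secret"      # hypothetical hardware-backed key
enrolled_template = bytes(range(32))     # stand-in for template bytes
stored = encrypt(device_key, enrolled_template)
assert stored != enrolled_template                       # ciphertext differs
assert encrypt(device_key, stored) == enrolled_template  # round-trips
```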

[0008] This summary introduces simplified concepts for using an enrolled template for one or more fingerprint sensors, which are further described below in the Detailed Description and Drawings. For ease of description, the disclosure focuses on capturing an enrolled image of the fingerprint of the verified user’s thumb using a built-in three-dimensional (3D) camera of the smartphone to create the enrolled template. The techniques, however, are not limited to the use of visible light to capture the enrolled image. Also, the techniques are not limited to fingerprint data of the verified user’s thumb; the techniques also apply to other forms of biometric data, including biometric data derived from the verified user’s finger, a plurality of fingers, palm, and so forth. It will be understood that the term “fingerprint data” may be used to refer to biometric data derived from the verified user’s thumb, finger, a plurality of fingers, palm, and so forth, and is not limited to only data derived from a finger. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The details of one or more aspects of computing devices with a fingerprint identification system that utilize an enrolled template for one or more fingerprint sensors are disclosed. The same numbers are used throughout the drawings to reference like features and components.

FIG. 1 illustrates an example computing device with a fingerprint identification system having one or more fingerprint sensors, used to authenticate a user utilizing an enrolled template of a verified user’s thumb, fingertip, or plurality of fingertips.

FIG. 2 illustrates an example logic-flow diagram for capturing an enrolled image and creating the enrolled template, which is used to authenticate verify image(s) captured by the one or more fingerprint sensors of the fingerprint identification system of the computing device.

FIG. 3 illustrates a method for authenticating the verify image of the fingerprint of the user against the enrolled image of the fingerprint of the verified user.

FIG. 4 illustrates examples of the fingerprint identification system of the computing device authenticating the user as being the verified user.

FIG. 5 illustrates a computer-implemented method for matching the enrolled image captured by a first sensor of the fingerprint identification system to the verify image captured by a second sensor of the fingerprint identification system.

FIG. 6 illustrates examples of patterns and minutiae used in fingerprint authentication.

DETAILED DESCRIPTION

Overview

[0010] This document describes apparatuses, methods, and techniques that enable a verified user to utilize an enrolled template for a computing device (e.g., a smartphone) with one or more fingerprint sensors. For example, the smartphone may use a fingerprint identification system with one or more fingerprint sensors embedded on the back (rear-facing), the front (front-facing), and/or the side (side-facing) of the smartphone. The smartphone captures a “verify image” using a fingerprint sensor and matches patterns and/or minutiae of the verify image to an enrolled image. As described herein, a “verify image” is a fingerprint image used for authentication. An “enrolled image” is an image that the smartphone captures during enrollment when the verified user first sets up the smartphone or an application. An enrolled image could also be updated during a re-verification process. Also, as described herein, an “enrolled template” can be a mathematical representation of the enrolled image. The enrolled template can be a vectorized representation of the enrolled image, which may take less memory space in the computing device. While beneficial in some respects, the use of a vectorized representation for an enrolled template is not required for matching a verify image to the enrolled template. The described apparatuses, methods, and techniques can perform image-to-image (rather than vector-to-vector) comparisons, or use other representations, to compare each verify image to the enrolled template.
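The two comparison styles mentioned above can be contrasted in a small sketch. The flattening-based vectorization and both similarity measures below are illustrative choices, not the representations the disclosure itself uses.

```python
# Sketch contrasting vector-to-vector and image-to-image comparison of a
# verify image against an enrolled template. Both measures return a score
# in [0, 1], with higher meaning more similar.
import math

def to_vector(image):
    """Vectorize a 2D image (list of rows of intensities) by flattening it."""
    return [px for row in image for px in row]

def cosine_similarity(u, v):
    """Vector-to-vector comparison of two flattened templates."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def image_to_image_score(img_a, img_b):
    """Direct pixel-wise comparison; no vectorized template required."""
    diffs = [abs(a - b) for ra, rb in zip(img_a, img_b) for a, b in zip(ra, rb)]
    return 1.0 - sum(diffs) / (255 * len(diffs))

enrolled = [[100, 120], [140, 160]]   # toy enrolled image
verify   = [[102, 118], [141, 159]]   # slightly different verify image
vec_score = cosine_similarity(to_vector(enrolled), to_vector(verify))
img_score = image_to_image_score(enrolled, verify)
assert vec_score > 0.99 and img_score > 0.98  # both styles report a close match
```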

[0011] In more detail, fingerprint sensors of the smartphone, whether rear-facing, front-facing, and/or side-facing, often limit the number of patterns and minutiae of the fingerprint image that are captured for authentication. For examples of patterns and minutiae, refer to FIG. 6. On the other hand, when the computing device uses a sensor capable of capturing a high-resolution image of the whole pad of a thumb (thumb), the whole pad of a finger (fingertip), pads of multiple fingers (multiple fingertips), and so forth, the captured fingerprint image includes a greater number of patterns and minutiae. There is a positive correlation between the number of patterns and minutiae used in fingerprint authentication and the success rate of the authentication. Note that in biometric security, the success rate is often characterized using a receiver operating characteristic (ROC) curve, which is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. More specifically, biometric security measurements may include a false acceptance rate (FAR), the proportion of times a fingerprint identification system grants access to an unauthorized person, and a false rejection rate (FRR), the proportion of times a fingerprint identification system fails to grant access to an authorized person. Qualitatively speaking, a fingerprint identification system with a high success rate has a low false acceptance rate and a low false rejection rate. With more detail (e.g., patterns and minutiae) in a large fingerprint image, it is possible to make a more-accurate identification, resulting in a lower false acceptance rate and a lower false rejection rate.
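The FAR and FRR definitions above reduce to simple proportions over match scores. The scores and threshold below are hypothetical; sweeping the threshold and recording the two rates at each value traces out the ROC curve.

```python
# Sketch of the FAR/FRR measurements described above, computed from
# hypothetical match scores at one discrimination threshold.

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor attempts accepted.
       FRR: fraction of genuine attempts rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Hypothetical similarity scores in [0, 1].
genuine  = [0.95, 0.91, 0.88, 0.97, 0.62]   # verified user's attempts
impostor = [0.30, 0.55, 0.72, 0.41, 0.12]   # other users' attempts

far, frr = far_frr(genuine, impostor, threshold=0.80)
print(far, frr)  # -> 0.0 0.2 at this threshold
```

Raising the threshold lowers FAR at the cost of FRR, and vice versa, which is exactly the trade-off the ROC curve visualizes.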

[0012] Some fingerprint sensors, however, often fail to capture enough detail to make an accurate identification. As a result, as described in the Background section, a current smartphone may request the verified user to tap their thumb numerous times to set up each fingerprint sensor separately. During enrollment, the current smartphone may only capture a partial enrolled image using each fingerprint sensor, resulting in poor biometric security and/or a poor user experience. This disclosure, however, describes a smartphone with a fingerprint identification system that uses a 3D camera to capture an enrolled image of the whole thumb, or a significant part of the thumb, of the verified user, and the fingerprint identification system uses the enrolled image to create the enrolled template. The smartphone uses the enrolled template to authenticate verify images of the user’s fingerprint captured by each fingerprint sensor, regardless of the type, size, and/or aspect ratio of the fingerprint sensor.

[0013] While features and concepts of the described apparatuses, methods, and techniques for fingerprint identification systems of user devices can be implemented in any number of different environments, systems, devices, and/or various configurations, aspects that enable the fingerprint identification system with one or more fingerprint sensors to capture a fingerprint (e.g., a verify image) and match it to an enrolled template are described in the context of the following example devices, systems, methods, and configurations.

Example Environments

[0014] FIG. 1 illustrates an example computing device 100 with a fingerprint identification system 110 having one or more fingerprint sensor(s) 114 that are used to authenticate a user utilizing an enrolled template 118 of a verified user’s thumb, fingertip, or plurality of fingertips. The computing device 100 may include additional or fewer components than what is illustrated in FIG. 1. FIG. 1 illustrates the computing device 100 as being a variety of example devices, including a smartphone 100-1, a tablet 100-2, a laptop 100-3, a desktop computer 100-4, a computing watch 100-5, computing eyeglasses 100-6, a gaming system or controller 100-7, a smart speaker system 100-8, and an appliance 100-9. The computing device 100 can also include other devices, for example, televisions, entertainment systems, audio systems, automobiles, unmanned vehicles (in-air, on the ground, or submersible “drones”), trackpads, drawing pads, netbooks, e-readers, home security systems, doorbells, refrigerators, and other devices with a fingerprint identification system.

[0015] The computing device 100 includes one or more application processors (illustrated as application processor 104) and one or more computer-readable storage media (CRM 106). The application processor 104 may include any combination of one or more controllers, microcontrollers, processors, microprocessors, hardware processors, hardware processing units, digital signal processors, graphics processors, graphics processing units, and the like. The application processor 104 processes computer-executable instructions (e.g., code) stored by the CRM 106. The CRM 106 may include any suitable memory media and storage media, for example, volatile memory (e.g., random-access memory (RAM)), non-volatile memory (e.g., Flash memory), optical media, magnetic media (e.g., disk or tape), and so forth. Also, the CRM 106 may store instructions, data (e.g., biometric data), and/or other information, and the CRM 106 excludes propagating signals.

[0016] The computing device 100 may also include an application 102. The application 102 may be software, an applet, a peripheral, or another entity that requires or favors authentication of a verified user. For example, the application 102 can be a secured component of the computing device 100 or an access entity to secure information accessible from the computing device 100. The application 102 can be an online banking application software or webpage that requires fingerprint identification before logging in to an account. Or, the application 102 may be part of an operating system (OS) that prevents access (generally) to the computing device 100 until the user’s fingerprint is authenticated as the verified user’s fingerprint. The verified user may execute the application 102 partially or wholly on the computing device 100 or in “the cloud” (e.g., on a remote device accessed through the Internet). For example, the application 102 may provide an interface to an online account using an internet browser and/or an application programming interface (API).

[0017] Further, the computing device 100 may also include one or more input/output ports (I/O ports, not illustrated) and/or at least one display screen 108. The display screen 108 may display graphical images and/or instructions provided by the computing device 100 and may aid a user in interacting with the computing device 100. The display screen 108 can be separated from the fingerprint identification system 110 (as illustrated in FIG. 1) or can be part of the fingerprint identification system 110 (not illustrated as such). For example, the display screen 108 can contain an under-display fingerprint sensor (UDFPS) that may enable user authentication.

[0018] The fingerprint identification system 110 of the computing device 100 includes a first sensor 112 and the fingerprint sensors 114. The first sensor 112 may include a 3D camera, having red-green-blue (RGB) light-emitting elements, RGB channels, cyan-magenta-yellow-key (CMYK) channels, hue-saturation-brightness (HSV) channels, or a combination thereof. The first sensor 112 may also include other types of cameras (e.g., infrared), radar sensors, inertial measurement units, movement sensors, temperature sensors, position sensors, proximity sensors, light sensors, infrared sensors, moisture sensors, pressure sensors, and the like. Thus, the first sensor 112 can capture, in high resolution, a fingerprint image, including depth information of the fingerprint. As such, the first sensor 112, in addition to capturing patterns and minutiae of the fingerprint in two dimensions (2D), can also capture the “flatness” of the patterns and minutiae. Further, to capture accurate dimensions of the full fingerprint and the patterns and minutiae of the fingerprint, the first sensor 112 can perform spatial calibration between the first sensor 112 and the thumb of the verified user. It is to be appreciated that the 3D camera can capture the enrolled image of the fingerprint without the prerequisite for the verified user to touch the first sensor 112, let alone the prerequisite for the verified user to touch the first sensor 112 in a certain and tedious manner that is required by the current solutions.
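Claims 3 and 4 describe creating the enrolled template from one channel of the 3D camera and down-sampling it. The sketch below rests on assumptions the disclosure does not specify: nested-list images and a 2x2 averaging filter as the down-sampling method.

```python
# Illustrative sketch of selecting one channel of the 3D camera's RGB
# output and down-sampling it when creating the enrolled template.

def extract_channel(rgb_image, channel):
    """Pick one channel (0=R, 1=G, 2=B) from an H x W image of (r, g, b) tuples."""
    return [[px[channel] for px in row] for row in rgb_image]

def downsample_2x(gray):
    """Average non-overlapping 2x2 blocks, halving each dimension."""
    h, w = len(gray), len(gray[0])
    return [[(gray[r][c] + gray[r][c + 1] +
              gray[r + 1][c] + gray[r + 1][c + 1]) // 4
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

rgb = [[(10, 20, 30), (12, 22, 32)],
       [(14, 24, 34), (16, 26, 36)]]
green = extract_channel(rgb, 1)
print(green)                 # [[20, 22], [24, 26]]
print(downsample_2x(green))  # [[23]]
```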

[0019] The fingerprint sensors 114 of the fingerprint identification system 110 may include at least a second sensor 116 used to capture a verify image 120 of the user. Although not illustrated in FIG. 1, the fingerprint sensors 114 may also include a third sensor, a fourth sensor, and so forth, that the computing device 100 may use to capture the verify image 120. For example, the second sensor 116 may be a front-facing fingerprint sensor, the third sensor (not illustrated) may be a rear-facing fingerprint sensor, and the fourth sensor (not illustrated) may be a side-facing fingerprint sensor. For ease of description, this disclosure focuses on the second sensor 116. The second sensor 116 can be any sensor able to capture a high-resolution image, for example, five hundred (500) dots per inch (DPI), seven hundred (700) DPI, one thousand (1000) DPI, and so forth. Depending on the imaging technology that the computing device 100 utilizes to capture the verify image 120, the second sensor 116 may be a capacitive image sensor, an ultrasonic image sensor, or a UDFPS. Regardless of the technology of the second sensor 116, the second sensor 116 utilizes a different technology than the first sensor 112. Thus, the first sensor 112, which is used to capture the enrolled image (the enrolled template 118) of the fingerprint of the verified user, is of a different type than the second sensor 116, which is used to capture the verify image 120 of the user.
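As a rough sanity check, the example sensor sizes from the Background and the resolutions listed above determine the pixel dimensions of a verify image; the pairing of each sensor with 500 DPI below is illustrative.

```python
# Back-of-the-envelope sketch: how sensor size (mm) and resolution (DPI)
# set the pixel dimensions of a captured verify image.

MM_PER_INCH = 25.4

def sensor_pixels(width_mm, height_mm, dpi):
    """Approximate pixel dimensions of a sensor at a given resolution."""
    return (round(width_mm / MM_PER_INCH * dpi),
            round(height_mm / MM_PER_INCH * dpi))

print(sensor_pixels(6.0, 6.0, 500))   # front-facing -> (118, 118)
print(sensor_pixels(5.0, 5.0, 500))   # rear-facing  -> (98, 98)
print(sensor_pixels(9.6, 2.8, 500))   # side-facing  -> (189, 55)
```

The side-facing sensor's long, narrow pixel grid makes concrete why its aspect ratio demands so many more enrollment taps under the conventional approach.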

[0020] Theoretically, the smartphone can be configured to enable the user to capture the verify image 120 by also using the first sensor 112. This disclosure, however, does not describe such usage of the computing device 100 because it may not be user friendly for fingerprint authentication. In such a scenario, the user may need to hold the smartphone in one hand and present the thumb of the other hand to the 3D camera every time the user wants to use the computing device 100, the application 102, a function, or peripheral thereof. It is more advantageous for the user to capture the verify image 120 using a fingerprint sensor that utilizes a user touch (the second sensor 116). That way, the user can utilize the fingerprint sensor one-handed, even when they cannot see the sensor (e.g., during nighttime). It is advantageous for the computing device 100 to authenticate the verify image 120 captured by the second sensor 116 against a high-quality enrolled image (enrolled template 118) captured by the first sensor 112.

Enrolled Template Creation

[0021] FIG. 2 illustrates an example logic-flow diagram for capturing an enrolled image 118-1 and creating the enrolled template 118 (not illustrated in FIG. 2), which is used to authenticate verify images 120 captured by one or more fingerprint sensors 114 of the fingerprint identification system 110 of the computing device 100. At stage 202, the verified user, using the first sensor 112, captures the enrolled image 118-1. Assume the first sensor 112 is a 3D camera, having RGB channels. The computing device 100 may use the display screen 108 to instruct the verified user to hold their thumb close to the 3D camera of the computing device 100. The computing device 100 may utilize various sensors (e.g., light sensors, accelerometers, proximity sensors) to apply an appropriate zoom automatically (e.g., 7x) and proper lighting (e.g., auto flash) to capture one or more high-resolution 3D enrolled images 118-1 of the verified user’s thumb. Note that the verified user often is not able to hold their thumb still in front of the 3D camera. For example, during stage 202 (enrollment), the verified user’s thumb may shift by a few millimeters or micrometers with respect to the 3D camera, as the verified user holds their thumb close to the 3D camera. The computing device 100, however, may use this perceived inadequacy in the behavior of the verified user to capture enrolled images that are overlapping, non-overlapping and apart (with a sliding distance of more than one pixel), or adjacent (with a sliding distance of zero or one pixel). Thus, the fingerprint identification system 110 can create an enrolled template 118 with a considerable amount of biometric data. FIG. 2 illustrates a grey enrolled image 118-1. The 3D camera, however, using the RGB channels, can capture the enrolled image 118-1 in color.
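The overlapping, adjacent, and apart capture cases above can be expressed as a sliding-window stride. The `extract_blocks` helper below is a hypothetical illustration of the geometry, not code from the disclosure.

```python
# Sketch of extracting blocks from an enrolled image with a configurable
# stride: overlapping (stride < block size), adjacent (stride == block
# size, i.e. zero pixels between blocks), or apart (stride > block size).

def extract_blocks(image, block, stride):
    """Return top-left coordinates of block x block windows taken every
    `stride` pixels across a 2D image (list of rows)."""
    h, w = len(image), len(image[0])
    return [(r, c)
            for r in range(0, h - block + 1, stride)
            for c in range(0, w - block + 1, stride)]

img = [[0] * 8 for _ in range(8)]
print(len(extract_blocks(img, block=4, stride=2)))  # overlapping: 9 blocks
print(len(extract_blocks(img, block=4, stride=4)))  # adjacent: 4 blocks
print(len(extract_blocks(img, block=2, stride=6)))  # apart: 4 blocks
```

The small shifts of the thumb during enrollment effectively vary the offsets of these windows, which is how the system accumulates the biometric data described above.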

[0022] At stage 204, the fingerprint identification system 110 processes the enrolled image 118-1 of FIG. 2 to create the enrolled template 118 of FIG. 1. Depending on the count and the types of fingerprint sensors 114, the fingerprint identification system 110 may use the enrolled image 118-1 to create numerous enrolled templates 118 that are suitable to be used with each fingerprint sensor 114. For example, the second sensor 116 of the fingerprint sensors 114 may be a capacitive sensor that captures the verify image 120, as is illustrated in FIG. 2. Thus, at stage 204, the fingerprint identification system 110 processes the enrolled image 118-1 that is captured by the 3D camera (the first sensor 112) to create the enrolled template 118 that is suitable to be used with the capacitive sensor (the second sensor 116), as is further discussed in FIG. 3. As another example, the fingerprint sensors 114 may include a third sensor, where the third sensor is a resistive sensor (not illustrated). Thus, at stage 204, the fingerprint identification system 110 may also process the enrolled image 118-1 that is captured by the 3D camera (the first sensor 112) to create an enrolled template 118 that is suitable to be used with the resistive sensor (the third sensor).

[0023] Although not always required, at stage 206, the fingerprint identification system 110 may divide the enrolled template 118, so it can be used with each fingerprint sensor 114 regardless of the size, type, and/or aspect ratio of the respective fingerprint sensor. For example, the fingerprint identification system 110 may have a fourth sensor (not illustrated). The fourth sensor may be another capacitive sensor of a different size or aspect ratio. Alternatively, the fourth sensor may be another resistive sensor or an under-display fingerprint sensor (UDFPS). Thus, the enrolled template 118 is capable of supporting authentication of any verify image 120 that contains biometric data (e.g., minutiae and patterns) regardless of the portion of the thumb that the user presents to the fingerprint sensor(s) 114 and irrespective of the type, size, and aspect ratio of the fingerprint sensors 114.

[0024] In some respects, the fingerprint identification system 110 may divide the enrolled template 118 into individual blocks of the fingerprint of the verified user. Similarly, the fingerprint identification system 110 may also divide the verify image 120 into individual blocks of the fingerprint of the user. A block can be an image area of N by N pixels (e.g., 31 by 31 pixels, 23 by 23 pixels). Also, the blocks can be overlapping, non-overlapping and apart (with a sliding distance of more than one pixel between the blocks), or adjacent (with a sliding distance of zero or one pixel between the blocks). The fingerprint identification system 110 may perform a vector conversion of the individual blocks of the enrolled template 118 and the verify image 120. As such, the enrolled template 118 can be a vector-based template that includes vector representations of the individual blocks. Alternatively, the fingerprint identification system 110 may divide the enrolled template 118 into individual image frames with different aspect ratios (e.g., 93 pixels by 31 pixels). Similarly, the individual frames can be overlapping, non-overlapping and apart, or adjacent.
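The block division and vector conversion described above can be sketched as follows. This Python/NumPy fragment is a minimal illustration, not the disclosure's implementation; it assumes that the stride parameter maps onto the disclosure's sliding-distance terminology as noted in the docstring.

```python
import numpy as np

def extract_blocks(image, block_size=31, stride=31):
    """Divide a fingerprint image into block_size x block_size blocks and
    flatten each block into a vector (the "vector conversion" step).

    Assumed mapping of stride onto the disclosure's sliding-distance terms:
      stride <  block_size            -> overlapping blocks
      stride in {block_size, +1}      -> adjacent (gap of zero or one pixel)
      stride >  block_size + 1        -> non-overlapping and apart
    """
    blocks, vectors = [], []
    rows, cols = image.shape
    for r in range(0, rows - block_size + 1, stride):
        for c in range(0, cols - block_size + 1, stride):
            block = image[r:r + block_size, c:c + block_size]
            blocks.append(block)
            vectors.append(block.ravel())  # vector representation of the block
    return blocks, vectors
```

The same helper can be applied to both the enrolled template and the verify image so that matching later operates on like-for-like blocks or vectors.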

Examples of Enrolled Image-Processing and Fingerprint Authentication

[0025] FIG. 3 illustrates a method for authenticating the verify image 120 of the fingerprint of the user against the enrolled image 118-1 of FIG. 2 of the fingerprint of the verified user. To use less computational power in creating the enrolled template 118, the fingerprint identification system 110 may use one of the RGB channels of the enrolled image 118-1 of FIG. 2, for example, an R-channel 118-2. Also, the fingerprint identification system 110 may downsample the R-channel 118-2 further to reduce the need for considerable computational power. Then, the fingerprint identification system 110 applies histogram equalization to the R-channel 118-2 to create an image 118-3 that resembles the quality of the verify image 120 captured by the second sensor 116. Histogram equalization is a contrast-adjustment method of using the image’s histogram to increase the quality of images without loss of any information (e.g., patterns and minutiae). Then, the fingerprint identification system 110 crops the image 118-3 and may again apply histogram equalization to the image 118-3 to create a fingerprint image 118-4. The fingerprint identification system 110 resizes (e.g., zooms), flips (up-down), applies a bandpass filter to remove high and low frequencies, may again apply histogram equalization, and performs color inversion to the image 118-4 to create an image 118-5. Note that the described image-processing techniques to transform the image 118-4 to the image 118-5 may be performed in a different order and/or may include fewer or more image-processing techniques than what is illustrated and noted in FIG. 3. After the fingerprint identification system 110 creates the image 118-5, the fingerprint identification system 110 crops the image 118-5 to create an image 118-6. If further image processing is not necessary, the fingerprint identification system 110 uses the image 118-6 to create the enrolled template 118.
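The processing chain above can be sketched in Python/NumPy. This is an illustrative simplification, not the disclosure's implementation (which mentions MATLAB® code generically): the cropping and resizing steps are omitted for brevity, the downsampling is a simple stride, and the bandpass filter uses an assumed circular FFT mask with placeholder cutoff frequencies.

```python
import numpy as np

def histogram_equalization(img):
    """Contrast adjustment: spread pixel intensities using the image's
    cumulative histogram (8-bit grayscale assumed)."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_norm = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf_norm[img.astype(np.uint8)].astype(np.uint8)

def bandpass_filter(img, low=2, high=40):
    """Remove very low and very high spatial frequencies with an FFT mask.
    The low/high radii are placeholder values, not from the disclosure."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    rows, cols = img.shape
    r, c = np.ogrid[:rows, :cols]
    dist = np.sqrt((r - rows // 2) ** 2 + (c - cols // 2) ** 2)
    mask = (dist >= low) & (dist <= high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def process_enrolled_image(rgb_image):
    """Sketch of the FIG. 3 chain: R channel -> downsample -> histogram
    equalization -> flip (up-down) -> bandpass -> color inversion."""
    r_channel = rgb_image[:, :, 0]       # use one of the RGB channels
    down = r_channel[::2, ::2]           # simple 2x downsampling
    eq = histogram_equalization(down)
    flipped = np.flipud(eq)              # flip (up-down)
    banded = bandpass_filter(flipped)
    # rescale the filtered result back to 0..255 before inversion
    banded = np.interp(banded, (banded.min(), banded.max()), (0, 255))
    return (255 - banded).astype(np.uint8)  # color inversion
```

As the paragraph notes, the order of these operations may vary and additional or fewer steps may be applied in practice.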
Finally, the fingerprint identification system 110 matches patterns and minutiae of the verify image 120 captured by the second sensor 116 against patterns and minutiae of the enrolled template 118 derived from the image captured by the first sensor 112, as is illustrated in FIG. 3. Example 1 of FIG. 3 illustrates the fingerprint identification system 110 authenticating the user as being the verified user.

[0026] The fingerprint identification system 110 may utilize a variety of image-processing techniques, software, and codes (e.g., MATLAB® code) to process the enrolled image 118-1 of FIG. 2, as described in FIG. 3. Alternatively and/or additionally, the fingerprint identification system 110 may utilize a machine-learned model to compare the enrolled image 118-1 of FIG. 2 to the verify image 120 of FIG. 2. The machine-learned model may be a standard neural-network-based model with corresponding layers required for processing input features like fixed-size vectors, directional maps, images, variable-length sequences, and so forth. The machine-learned model may be a support vector machine, a recurrent neural network, a convolutional neural network, a deconvolution neural network, a dense neural network, a generative adversarial network, heuristics, or a combination thereof. The machine-learned model may perform image-to-image, vector-to-vector, block-to-block transformations and comparisons, or a combination thereof. Inputs to the machine-learned model are verify images captured by the second sensor 116. Outputs of the machine-learned model are enrolled images captured by the first sensor 112. As illustrated and described in FIGs. 2, 3, 4, and 6, the machine-learned model may compare minutiae and patterns of the enrolled image to patterns and minutiae of the verify images.
For example, the machine-learned model may transform or convert an image of a first domain captured by the 3D camera (an enrolled image) to an image of a second domain, the second domain being the domain of the image captured by the capacitive sensor (a verify image).

[0027] In one aspect, the first and the second domains express a relationship among different intensities in pixels of an image. The fingerprint identification system 110 may use spatial domains, frequency domains, time-frequency domains, and so forth, to represent enrolled and verify images. Alternatively and/or additionally, the machine-learned model may aid image-processing techniques to adjust the various parameters described in FIG. 3. As such, the machine-learned model may change the amount of contrast adjustment in histogram equalization or may change the low and high frequencies of the bandpass filter, depending on the “quality” of the fingerprint. Assume the verified user has flatter-than-usual fingerprints. In that case, the machine-learned model may improve image processing of the enrolled image by increasing the contrast of histogram equalization.

[0028] Given the large computational power that machine learning can use to train a model, the model training can be performed on a cloud, server, or other capable computing device or system. Periodic model updates can be sent to the verified user’s smartphone, allowing the verified user’s smartphone to execute the machine-learned model even if the smartphone does not have the resources to update the model itself. Instead of, or in addition to, such remote training, some or all of the model training can be performed on the verified user’s smartphone.

[0029] FIG. 4 illustrates additional examples of the fingerprint identification system 110 of the computing device 100 authenticating the user as being the verified user. Specifically, in Example 2, the fingerprint identification system 110 matches a verify image 120-1 captured by the second sensor 116 against the enrolled template 118. In Example 3, the fingerprint identification system 110 matches a verify image 120-2 captured by the second sensor 116 against the enrolled template 118. In Example 4, the fingerprint identification system 110 matches a verify image 120-3 captured by the second sensor 116 against the enrolled template 118. Similar to the descriptions of FIGs. 2 and 3, the enrolled template 118 is a product of an enrolled image captured by the first sensor 112 (e.g., a 3D camera), and the verify images 120-1, 120-2, and 120-3 are captured by the second sensor 116 (e.g., a capacitive sensor).

Example Methods

[0030] FIG. 5 illustrates an example method 500 performed by the fingerprint identification system 110 of the computing device 100, which authenticates user inputs that enable a verified user to utilize an enrolled fingerprint image (e.g., enrolled image 118-1, enrolled template 118) for one or more fingerprint sensors 114 (e.g., second sensor 116). FIG. 5 is described in the context of FIG. 1, the fingerprint identification system 110, and the computing device 100. The operations performed in the example method 500 may be performed in a different order or with additional or fewer steps than what is shown in FIG. 5. Moreover, various ones of the listed steps may be performed at significantly different times (e.g., separated by days, months, or years).

[0031] At stage 502, the fingerprint identification system 110 captures, using a first sensor (e.g., first sensor 112), a first image of the verified user’s skin (e.g., enrolled image 118-1 of FIG. 2). The first image of the verified user’s skin includes first biometric data (e.g., patterns and minutiae of FIG. 6) of the fingerprint of the verified user. It is to be understood that the verified user may utilize stage 502 of the example method 500 after the verified user is already authenticated by a first-party (e.g., manufacturer), a trusted third-party (e.g., cellphone carrier), a PIN, a username, a password, a passcode, a serial number of the computing device 100, or a combination thereof. Assume the first sensor 112 is a 3D camera; the computing device 100, utilizing the display screen 108, may instruct the verified user to hold their thumb in front of the 3D camera. Then, the computing device 100 captures a full image (e.g., enrolled image 118-1) of the verified user’s thumb.

[0032] At stage 504, the fingerprint identification system 110 creates an enrolled template (e.g., enrolled template 118 of FIG. 3) of the full image of the verified user’s thumb, including patterns and minutiae. To create the enrolled template 118, the fingerprint identification system 110 can utilize any of the methods described in FIGs. 2, 3, and 4, which may include performing histogram equalization, cropping, resizing, flipping, filtering (e.g., applying a bandpass filter), performing color inversion, using the machine-learned model, or a combination thereof, on the first image (e.g., the enrolled image 118-1 of FIG. 2) to create the enrolled template (e.g., enrolled template 118 of FIG. 3). Further, the fingerprint identification system 110 may divide the enrolled template 118 to be used with each fingerprint sensor 114 (e.g., second sensor 116), as described in FIG. 2.

[0033] At stage 506, the fingerprint identification system 110 captures a second image (e.g., verify image 120 of FIG. 2) at a second sensor (e.g., second sensor 116). As described in FIGs. 1 to 4, the second sensor 116 (e.g., a capacitive sensor) is of a different type than the first sensor 112 (e.g., a 3D camera). The second sensor 116 is configured to capture the second image of a user’s skin, for example, the user’s thumb. The user’s thumb contains biometric data (e.g., patterns, minutiae) of the user who is yet to be authenticated as the verified user. Differently stated, the user utilizes stage 506 of the example method 500 at the time the user requests access to the computing device 100, the application 102, a function, or a peripheral thereof.

[0034] At stage 508, the fingerprint identification system 110 compares the second image (e.g., verify image 120 of FIG. 2) to the enrolled template (e.g., enrolled template 118 of FIG. 3). The computing device 100 may perform a block-by-block (block-based), a vector-to-vector (vector-based), or an image-to-image (image-based) comparison between the enrolled template 118 and the verify image 120. The fingerprint identification system 110 performs block-by-block, vector-to-vector, or image-to-image comparisons after performing any of the image-processing techniques, vector conversions, or matching techniques that are described in FIGs. 2, 3, and 4. Alternatively and/or additionally, the fingerprint identification system 110 of the computing device 100 may utilize the machine-learned model described in the description of FIG. 3 to compare the first image (e.g., enrolled image 118-1 of FIG. 2) to the second image (e.g., verify image 120 of FIG. 2) or the enrolled template (e.g., enrolled template 118 of FIG. 3) to the second image (e.g., the verify image 120 of FIG. 3).
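A vector-based comparison of the kind described in stage 508 can be sketched as follows. The disclosure does not fix a particular similarity measure; cosine similarity and the best-match-per-block averaging used here are illustrative assumptions.

```python
import numpy as np

def block_match_score(template_vectors, verify_vectors):
    """Vector-based comparison: for each block vector of the verify image,
    find its best cosine similarity against the enrolled template's block
    vectors, then average the best scores into one confidence in [0, 1]."""
    def normalize(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    template = np.array([normalize(v.astype(np.float64)) for v in template_vectors])
    best_scores = []
    for v in verify_vectors:
        v = normalize(v.astype(np.float64))
        best_scores.append(float(np.max(template @ v)))  # best-matching block
    return float(np.mean(best_scores))
```

Because the verify image may cover only a portion of the thumb, scoring each verify block against its best-matching template block lets a partial capture still produce a high confidence when the biometric data agrees.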

[0035] At stage 510, the fingerprint identification system 110 authenticates the user after performing block-by-block, vector-to-vector, or image-to-image comparisons. At this stage 510, the fingerprint identification system 110 may require that a confidence C meets a pre-determined threshold level. If the confidence C does not meet the pre-determined threshold level, the fingerprint identification system 110 of the computing device 100 issues a deny access 512 verdict and, possibly, a message on the display screen 108, stating that access is denied. On the other hand, if the confidence C meets the pre-determined threshold level, the fingerprint identification system 110 authenticates the user as being the verified user. Once the user is authenticated as being the verified user, the fingerprint identification system 110 issues a grant access 516 verdict, granting access to the computing device 100, the application 102, a function, or peripherals associated with the computing device 100.
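The stage 510 decision reduces to a threshold test on the confidence C. In this sketch, the averaging rule for combining per-block confidences and the 0.85 threshold are illustrative placeholders; the disclosure only requires that C meet a pre-determined threshold level.

```python
def authenticate(block_confidences, threshold=0.85):
    """Stage 510 decision sketch: combine per-block match confidences into
    one confidence C, then compare C against a pre-determined threshold.
    Both the averaging rule and the 0.85 value are assumed, not from the
    disclosure."""
    confidence = sum(block_confidences) / len(block_confidences)
    return "grant access" if confidence >= threshold else "deny access"
```

For example, a strongly matching set of blocks yields a grant access verdict, while a weak match yields deny access and, possibly, an on-screen message.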

Examples of Biometric Data

[0036] FIG. 6 illustrates examples of patterns (602, 604, and 606) and minutiae (610 through 630) used in matching fingerprints. The analysis of fingerprints for matching purposes compares patterns and/or minutiae of the fingerprints. The three major patterns of fingerprint ridges are an arch 602, a loop 604, and a whorl 606. The arch 602 is a fingerprint ridge that enters from one side of the finger, rises in the center forming an arc, and then exits the other side of the finger. The loop 604 is a fingerprint ridge that enters from one side of the finger, forms a curve, and then exits on that same side of the finger. The whorl 606 is a fingerprint ridge that is circular around a central point. The minutiae 610 through 630 are features of fingerprint ridges, for example, a ridge ending 610, a bifurcation 612, a short ridge 614, a dot 616, a bridge 618, a break 620, a spur 622, an island 624, a double bifurcation 626, a delta 628, a trifurcation 630, a lake or a ridge enclosure (not illustrated), a core (not illustrated), and so forth.

[0037] The following are additional examples of the described apparatuses, methods, and techniques that enable using an enrolled fingerprint image (enrolled image, enrolled template) of a verified user for one or more fingerprint sensors that are embedded in or on a computing device.

Example 1: A computer-implemented method comprising: capturing, by a first sensor, a first image of a verified user’s skin, the first image of the verified user’s skin including first biometric data of the verified user; creating an enrolled template, responsive to the capturing of the first image at the first sensor, the enrolled template having or being derived from the first biometric data of the verified user; capturing a second image at a second sensor, the second sensor of a different type than the first sensor, the second sensor configured to capture the second image of a user’s skin, the second image of the user’s skin including second biometric data of the user; comparing the second biometric data of the second image to the enrolled template; responsive to the comparing, authenticating the user as being the verified user; and responsive to the authenticating, enabling access to a computing device, application, function, or peripheral thereof.

Example 2: The computer-implemented method of Example 1, wherein the first sensor includes a three-dimensional, 3D, camera, having red-green-blue, RGB, light-emitting elements, RGB channels, cyan-magenta-yellow-key, CMYK, channels, hue-saturation-brightness, HSV, channels, or a combination thereof, the 3D camera enabling the capturing of the first image without a touch of the verified user’s skin to the first sensor.

Example 3: The computer-implemented method of Examples 1 or 2, wherein the creating of the enrolled template utilizes one of the channels of the 3D camera.

Example 4: The computer-implemented method of any of Examples 1 to 3, wherein the creating of the enrolled template includes down-sampling the one of the channels of the 3D camera.

Example 5: The computer-implemented method of any of Examples 1 to 4, wherein the creating of the enrolled template includes applying histogram equalization, cropping, resizing, filtering frequencies, performing color inversion, or combinations thereof, to the first image.

Example 6: The computer-implemented method of any of Examples 1 to 5, wherein the second sensor is a capacitive image sensor, an ultrasonic image sensor, or an optical under-display fingerprint sensor.

Example 7: The computer-implemented method of any of Examples 1 to 6, wherein: the first image captured by the first sensor is of a first domain, the first domain expressing a relationship among different intensities in pixels of the first image; and the second image captured by the second sensor is of a second domain, the second domain expressing a relationship among different intensities in pixels of the second image.

Example 8: The computer-implemented method of Example 7, wherein the creating of the enrolled template includes using a machine-learned model to match the first domain of the first image to the second domain of the second image.

Example 9: The computer-implemented method of Examples 7 or 8, wherein the enrolled template is similar to or of a similar quality to the second domain of the second image.

Example 10: The computer-implemented method of any of Examples 1 to 9, wherein the enrolled template includes vector-based templates, and wherein the comparing of the second biometric data of the second image to the enrolled template compares a vector conversion of the second image to the vector-based templates.

Example 11: The computer-implemented method of any of Examples 1 to 10, wherein the first biometric data of the verified user and the second biometric data of the user includes fingerprint data, the fingerprint data derived from a same fingertip, thumb, palm, or a plurality of fingertips.

Example 12: The computer-implemented method of any of Examples 1 to 11, wherein the enrolled template and the second image include multiple blocks or image frames, and wherein the comparing of the second biometric data of the second image to the enrolled template includes the comparing of the multiple blocks or image frames of the second image to the multiple blocks or image frames of the enrolled template.

Example 13: The computer-implemented method of Example 12, wherein the multiple blocks or image frames are: overlapping; non-overlapping and apart, with a sliding distance of more than one pixel between the blocks; or adjacent, with a sliding distance of zero or one pixel between the blocks.

Example 14: The computer-implemented method of Example 13, wherein the comparing includes comparing the multiple blocks or image frames of the second image to the multiple blocks or image frames of the enrolled template to determine a confidence level for each of the one or more blocks or image frames, and wherein the authenticating the user as being the verified user is performed responsive to a confidence threshold being met by the determined confidence level.

Example 15: The computer-implemented method of any of Examples 1 to 14, wherein: the verified user is established by a first-party, a trusted third-party, a personal identification number, PIN, a username, a password, a passcode, a serial number of the computing device, or a combination thereof; and the user is a person requesting access to the computing device, application, function, or peripheral thereof.

Example 16: A computing device comprising: a first sensor; at least a second sensor; one or more processors; and one or more computer-readable media having instructions thereon that, responsive to execution by the one or more processors, perform the operations of the method of any of Examples 1 to 15.

Conclusion

[0038] While various aspects of the disclosure are described in the foregoing description and illustrated in the drawings, it is to be understood that this disclosure is not limited thereto but may be variously embodied to practice within the scope of the following claims. From the foregoing description, it will be apparent that various changes may be made without departing from the spirit and scope of the disclosure as defined by the following claims.